sentences (list) | labels (list)
---|---
[
"Opinion role labeling (ORL) is a fine-grained opinion analysis task and aims to answer who expressed what kind of sentiment towards what? .",
"Due to the scarcity of labeled data, ORL remains challenging for data-driven methods.",
"In this work, we try to enhance neural ORL models with syntactic knowledge by comparing and integrating different representations.",
"We also propose dependency graph convolutional networks (DEPGCN) to encode parser information at different processing levels.",
"In order to compensate for parser inaccuracy and reduce error propagation, we introduce multi-task learning (MTL) to train the parser and the ORL model simultaneously.",
"We verify our methods on the benchmark MPQA corpus.",
"The experimental results show that syntactic information is highly valuable for ORL, and our final MTL model effectively boosts the F1 score by 9.29 over the syntax-agnostic baseline.",
"In addition, we find that the contributions from syntactic knowledge do not fully overlap with contextualized word representations (BERT).",
"Our best model achieves 4.34 higher F1 score than the current state-of-the-art.",
"Opinion and sentiment analysis has a wide range of real-world applications like social media monitoring (Bollen et al., 2011), stock market prediction (Nguyen et al., 2015), box office prediction (Yu et al., 2010), and general e-commerce applications (Kim et al., 2013; Hu et al., 2017; Cui et al., 2017).",
"In particular, fine-grained opinion analysis aims to identify users' opinions in a text, including opinion expressions, holders of the opinions, targets of the opinions, target-dependent attitude, and intensity of opinions (Marasovic and Frank, 2018), which is very important for understanding political stance, Corresponding author $ Cardoso says challenge facing Chavez is ...",
"customers' reviews, marketing trends, and other subjective information (Ravi and Ravi, 2015).",
"As a typical fine-grained opinion mining task, opinion role labeling (ORL) aims to identify different roles relevant to each opinion, i.e., who expressed what kind of sentiment towards what (Liu, 2012).",
"Due to the lack of large-scale labeled data, ORL remains a challenging task to tackle.",
"As a reference point, semantic role labeling (SRL) is very similar to ORL in the problem definition, but has 10 times more labeled data and thus achieves much higher performance than ORL (80 90 vs. 60 70 in F1 score).",
"Motivated by the correlations between the two tasks, SRL has been utilized to help the ORL task by many previous studies (Ruppenhofer et al., 2008; Marasovic and Frank, 2018; Zhang et al., 2019b).",
"However, when opinion expressions and arguments compose complicated syntactic structures, it is difficult to correctly recognize the opinion arguments even with shallow semantic representation like SRL (Marasovic and Frank, 2018).",
"To compensate for the limited scale of labeled data for data-driven approaches, linguistic knowledge like syntax provides structural information representing human understanding of the text.",
"Naturally, dependency relations between words ease the discovering of opinion roles.",
"Taking the example in Figure 1, the Target span is often incompletely recognized without syntactic dependency relations, missing either facing Chavez or chal-lenge.",
"For the similar SRL task, many previous works have proposed to incorporate syntax into the neural models (Marcheggiani and Titov, 2017; He et al., 2018; Xia et al., 2019a).",
"In contrast, few studies in the recent years explore this line of research for ORL.",
"There are two barriers to apply syntactic dependency parsing to NLP tasks, i.e., 1) inaccuracy of the parsing results, and 2) error propagation of the processing pipeline.",
"To overcome the first barrier, instead of employing the final discrete outputs (i.e., single 1-best dependency trees), we make use of the probability matrix of all dependency arcs (also can be viewed as an edge-weighted directed graph) before searching for the 1-best tree.",
"Such probabilistic representation of syntax provides more information while alleviating parsing errors.",
"For the second barrier, considering that the pipeline methods are notorious for the error propagation problem, we introduce multi-task learning (MTL) frameworks, which have been widely used in many NLP models when predictions at various processing levels are needed (Collobert and Weston, 2008; Ruder, 2017).",
"Apart from the syntactic information, contextualized word representations like BERT (Devlin et al., 2019) are widely used to compensate for the sparsity of task-specific training data.",
"They compress distributional semantics of words from large corpora, making the local context fluent and natural.",
"However, the long-distance dependencies between words are often ignored, which is ideally able to be captured by syntactic analysis.",
"In summary, based on previous studies in using syntax to improve various tasks, this work investigates whether syntax can enhance the neural ORL model.",
"Particularly, we try to answer the following three questions.",
"How to effectively integrate various syntactic information into the neural ORL model?",
"How to alleviate the propagation of errors brought by syntactic parsing?",
"Is syntactic knowledge already covered by the contextualized word representations like BERT?",
"Based on our experiments, we observe that 1) compared with single 1-best parse trees, encoding the edge-weighted graphs achieves better results, as the model is less sensitive to parsing errors while keeping richer structural information; 2) integrating various syntactic information, both explicit and implicit, boosts performance, and MTL framework can effectively alleviate the error propagation problem; and 3) contributions from syntactic information, especially from long-distance dependency relations, do not fully overlap with those from the contextualized word representations like BERT.",
"Our overall model delivers a new state-of-the-art result on the benchmark MPQA corpus, with 4.34 absolute improvement over the previous best result.",
"An opinion consists of several components, e.g., expressions, holders, and targets.",
"Some previous works focus on recognizing some components, whereas others try to recognize all components at the same time.",
"Yang and Cardie (2014) and Breck et al. (2007) work entirely on labeling of the opinion expressions.",
"Kim and Hovy (2006) and Johansson and Moschitti (2013) apply pipeline models to firstly predicting opinion expressions and then labeling holders and targets for each expression.",
"Joint models simultaneously identify all opinion components, predicting which role is related to which opinion (Choi et al., 2006; Yang and Cardie, 2013; Katiyar and Cardie, 2016).",
"In this work, we follow the opinion role labeling (ORL) task setting of Marasovic and Frank (2018) and Zhang et al. (2019b), and try to predict holders and targets for the given opinion expressions.",
"Previous works make use of SRL resources to address the issue of data scarcity for ORL, considering SRL is highly related to ORL and has a considerable amount of training data.",
"Inspired by the similarity between ORL and SRL in task definition, Kim and Hovy (2006) and Ruppenhofer et al. (2008) address ORL with a well-trained SRL model by treating opinion expressions as semantic predicates, and opinion roles as semantic roles.",
"Marasovic and Frank (2018) take SRL as an auxiliary task, and employ different MTL frameworks to learn the common grounds between ORL and SRL and distinguish task-specific knowledge.",
"Zhang et al. (2019b) extract neural features from a well-trained SRL model as SRL-aware word representations, and then feed them into the input layer of ORL, aiming to alleviate the error propagation problem.",
"Many previous works have shown that syntactic information is of great value for SRL and other NLP tasks (He et al., 2018; Zhang et al., 2019c; Strubell et al., 2018; Xia et al., 2019a; Miwa and Bansal, 2016; Zhang et al., 2019a).",
"Xia et al. (2019b) use the relative position between predicate words and other words in a dependency tree to represent syntactic information, while Roth and La-pata (2016) employ LSTM to obtain the embedding of a dependency path.",
"Tai et al. (2015) and Kipf and Welling (2016) propose TreeLSTM and graph convolution network (GCN) to encode the tree/graph-structural data respectively.",
"Both TreeLSTM and GCN are commonly used techniques to encode parse trees (Miwa and Bansal, 2016; Marcheggiani and Titov, 2017; Bastings et al., 2017).",
"Zhang et al. (2019a) and Xia et al. (2019a) extract the hidden states from the LSTM encoder of the parser model as syntax-aware word representations, and feed them to downstream tasks as extra inputs.",
"In contrast, few works have proved that syntactic knowledge is useful in the neural ORL models.",
"Yang and Cardie (2013) integrate the shortest path features from dependency trees into a traditional CRF-based ORL model.",
"To our best knowledge, this work is the first to investigate how to incorporate syntax into neural ORL models.",
"The ORL model aims to extract opinion-holder-target structures from text by identifying the segments of these opinion arguments.",
"The task can be modeled as a sequence labeling problem.",
"We adopt the { BMESO } encoding schema to assign a tag for each word (Zhang et al., 2019b).",
"Following Marasovic and Frank (2018) and Zhang et al. (2019b), we focus on recognizing the holders and the targets for the given opinion expression and exploit a deep BiLSTM-CRF-based model as our baseline.",
"The Figure",
"2-(a) shows the architecture of our ORL baseline model, which is composed of three key components, i.e., the input layer, the BiLSTM-based encoder, and the CRF-based decoder.",
"Given the input sentence S = w 1 , w 2 , ..., w n and the opinion expression segment E = w s , w s +1 , ..., w e (1 s e n ) , the input vector consists of the word embeddings and the expression-indicator embeddings as the following equation shows: x i = e wordw i e exp 0 / 1 (1) where e wordw i is the embedding of word w i , and the expression-indicator embedding is e exp 0 for non-expression words and e exp 1 for words inside the opinion expression (i.e., s i e ).",
"At the encoder layer, we apply three stacking layers of BiLSTM to fully encode the sentence and obtain the expression-specific representations at word level.",
"The CRF-based decoder at the output layer delivers the globally optimal sequence tags.",
"The Biaffine parser is the state-of-the-art dependency parser proposed by Dozat and Manning (2017), as shown in Figure",
"2-(b).",
"The parser contains a multi-layer BiLSTM layer for encoding the input sentence, followed by a biaffine transformation layer for computing the probabilities of all word pairs.",
"Then it searches for the highest-scoring and well-formed tree via the maximum spanning tree (MST) algorithm.",
"The three cascaded layers, i.e., the BiLSTM-based encoder, the biaffine scorer, and the MST decoder, represent syntactic information at different levels.",
"The encoder extracts the neural features from the input sentence and outputs hidden states (HDN), which can be regarded as implicit information.",
"The 1-best output parse tree, on the other hand, conveys explicit syntactic structures.",
"The biaffine scorer gives a probability matrix for all possible dependency arcs (also can be viewed as an edge-weighted directed graph), which represents richer explicit syntactic information than the 1-best parse tree.",
"Despite of recent advances in dependency parsing (Dozat and Manning, 2017), parsers still cannot output parse trees with high accuracy on out-of-domain or irregular data.",
"In this work, we exploit rich syntactic information contained in the edge-weighted graphs to mitigate the effects of parsing errors.",
"Specifically, we firstly employ graph convolutional networks (GCN) to encode the edge-weighted graphs, and then integrate them into different processing levels of ORL with implicit parser hidden states.",
"Finally, we employ novel MTL frameworks to alleviate the error propagation problem further.",
"In this subsection, we propose dependency graph convolutional networks (DEPGCN) to better encode the syntactic information from the edge-weighted graphs.",
"On the one hand, compared with explicit 1-best parse trees, edge-weighted graphs convey richer structural information by providing all latent syntactic structures, and avoid error propagation as well.",
"On the other hand, compared with the implicit hidden states of the parser encoder (Zhang et al., 2019a; Xia et al., 2019a), an edge-weighted graph, denoted as an attention matrix, explicitly captures the modification strength of word pairs.",
"The original GCN is designed for directly modeling graph-structured data (Kipf and Welling, 2016).",
"Although each node only receives information from its immediate neighbors through edges in one GCN layer, multi-layer GCN can propagate information more globally if there exist connected paths.",
"Formally, the output of node i at the l -th layer of GCN is computed by the following equation: h ( l ) i = F n (cid:88) j =1 A ij W ( l ) h ( l 1) j + b ( l ) (2) where A is the adjacency matrix of a graph with n nodes, W ( l ) and b ( l ) are the model parameters, F is an activation function.",
"h 0 i is the initial input vector.",
"As shown by Figure 2-(e), we apply DEPGCN to connect the parser model and the ORL model.",
"We first obtain the edge-weighted graph from the decoder of a well-trained biaffine parser as a data preprocessing step, and then feed the graph into our DEPGCN in the form of an adjacency matrix A 1 .",
"Then we feed the outputs of the ORL BiLSTM-based encoder as the initial inputs h 0 to the DEPGCN.",
"Finally, we feed the output of the DEPGCN to the CRF-based decoder, and update the ORL results under the guidance of the syntactic information.",
"Moreover, we introduce dense connections to the multi-layer DEPGCN for extracting more structural information (Huang et al., 2017; Guo et al., 2019).",
"Instead of only adding connections between adjacent layers, we use dense connections from each layer to all the subsequent layers.",
"Formally, the input of node i at the l -th layer is: x ( l ) i = h (0) i h (1) i h ( l 1) i (3) where h ( l ) i is the output of node i at the l -th layer.",
"We also make residual connections over DEPGCN to mitigate the vanishing gradient problem, which 1 Moreover, following Marcheggiani and Titov (2017), we also add a self-loop for each node in the graph, which means all diagonal elements of A are set to 1. means that the output dimension of each DEPGCN layer is decided by the layer number and the input dimension of the bottom DEPGCN.",
"Different from explicit 1-best parse trees or edge-weighted graphs, hidden states of the BiLSTM encoder of a dependency parser provide useful syntactic knowledge and are less sensitive to parsing errors.",
"Using such implicit syntactic representations has been demonstrated to be highly effective for downstream tasks (Zhang et al., 2019a; Xia et al., 2019a).",
"In this section, we describe how to integrate implicit syntactic information from parser hidden states and explicit syntactic information from the edge-weighted graph into the ORL model for better performance.",
"We first briefly describe the use of the dependency parser's hidden states, named as DEPHDN.",
"As shown by Figure",
"2-(d), we extract the outputs from the parser encoder and feed them into the BiLSTM-based encoder of ORL as extra inputs.",
"The hidden states of each parser BiLSTM layer are obtained as the syntactic representations, i.e., h ( l ) 1 , , h ( l ) n , where h ( l ) n is output of the l -th layer of the parser BiLSTM encoder at w n .",
"Then, we use the weighted-sum operation to get a single vector h syni as the final syntactic representation of word w i .",
"where L is the layer number of parser BiLSTM-based encoder; W , j and are model parameters; j is softmax-normalized weights for h j ; is used to scale the syntactic representations.",
"The syntactic representations h syni are concatenated with the original ORL input vectors, so that richer word representations are obtained.",
"Furthermore, in order to simultaneously benefit from the implicit and explicit syntactic information, as shown in Figure 2-(f), we simply extract the edge-weighted graph from the parser decoder and apply the DEPGCN approach over the ORL encoder to obtain syntax-enhanced representations.",
"The three approaches, depicted in Figure 2-(d-f) respectively, can work either in the pipeline way or in the MTL way.",
"Specifically, the pipeline way first trains the dependency parser and then fixes the parser components during training the ORL model.",
"In contrast, the MTL way trains both the parser and the ORL model at the same time.",
"In this subsection, we explore the MTL way to alleviate the error propagation problem further besides the DEPGCN approach.",
"As a baseline, Figure",
"2-(c) shows the most common MTL method, which shares a common encoder and uses multiple task-specific output layers, known as the hard-parameter-sharing MTL (Ruder, 2017; Marasovic and Frank, 2018).",
"However, this approach is not suitable for our scenario where the auxiliary parsing task has much more labeled data than the main ORL task, since the shared encoder is very likely to bias toward to parsing performance (Xia et al., 2019a).",
"Inspired by Xia et al. (2019a), we adopt the architectures of Figure 2-(d-f) to keep task model parameters separately, and train ORL and the parser simultaneously.",
"We update model parameters according to the combined loss of the ORL and the dependency parser during training: = ORL + Dep (5) where ORL and Dep is the loss of the ORL model and the parser respectively, and is a corpus weighting factor to control the loss contribution of the dependency data in each batch as discussed in Section 5.",
"Compared with the previous pipeline training process, the parameters of the parser are not pretrained and fixed, but updated by training objectives of both ORL and the parser.",
"This results in a ORL-preferred dependency parsing model.",
"Dataset.",
"We conduct experiments on MPQA version 2.0 corpus (Wiebe et al., 2005), which has been widely adopted as a benchmark dataset for opinion mining (Katiyar and Cardie, 2016; Marasovic and Frank, 2018; Zhang et al., 2019b).",
"In this work, we adopt the same data split (132/350 documents as dev/test data) and the same five-fold cross-validation (CV) data split on the test data as Zhang et al. (2019b) for a fair comparison.",
"Evaluation Metrics.",
"Unless specified, we use recall (R), precision (P) and their F1 measure value of exact match to evaluate the ORL performance, and the results are the average of the five-fold CV experiments.",
"Following Marasovic and Frank (2018) and Zhang et al. (2019b), we also include the binary and proportional overlap as additional evaluation metrics.",
"Dependency Parser.",
"Following the standard practice in the dependency parsing community, the original phrase-structure Penn Treebank data are converted into the Stanford dependencies using the Stanford Parser v3.3.0.",
"We use the converted dependency data to train our biaffine parser for obtaining the 1-best trees, the edge-weighted graphs, and the parser hidden states.",
"In addition, we use the Stanford POS tagger to obtain POS tags for the biaffine parser.",
"For other settings, we follow the work of Dozat and Manning (2017).",
"BERT.",
"We use BERT (Bidirectional Encoder Representations from Transformers) (Devlin et al., 2019) to obtain deep contextualized word representations as our extra inputs.",
"In particular, we use BERT-base (uncased) model and extract representations from the top-1 hidden layer.",
"Our experiments show that using the top-1 layer representations performs better than the more common use of aggregating top-4 hidden layers.",
"2 Parameters.",
"We follow the previous works of Zhang et al. (2019b) and Marasovic and Frank (2018) without much parameter tuning.",
"Specifi-cally, we use the pretrained 100-dimensional glove embeddings (Pennington et al., 2014).",
"The BiLSTM layer number is set to 3, and the hidden output size is 200.",
"We apply 0.33 dropout to word representation and the hidden states of the BiLSTM.",
"We choose Adam (Kingma and Ba, 2014) to optimize model parameters with a learning rate 10 3 .",
"The entire training instances are trained for 30 epochs with the batch size of 50, and the best-epoch model at the peak performance on the dev corpus is cho-sen.",
"For the MTL, we train the batches of ORL and parsing in turn since this interleaving training can obtain better performance in our experiments.",
"Besides, we use the corpus weighting trick to balance the gap in data sizes between the two tasks.",
"In this section, we first conduct experiments on the dev data to verify the effectiveness of our pro-2",
"pro-2 In fact, we also investigate another typical use of BERT, i.e., the fine-tuning method.",
"However, the ORL performance is much lower than the feature extraction method described above.",
"Besides, considering the training speed and flexibility in our proposed syntax-aware model, it is more flexible to adopt the feature extraction method, i.e., extracting BERT outputs as extra word representations (frozen during training).",
"posed approaches from several aspects: 1) how to effectively use explicit syntactic information; 2) usefulness of explicit vs. implicit syntax and their combination; 3) which MTL framework is most effective.",
"Then we present overall results on the test dataset, with and without BERT.",
"Finally, we conduct detailed analysis to gain more insights.",
"In order to know the best way to use explicit information from the dependency parser, we conduct comparative experiments by integrating the information of the explicit 1-best trees or the explicit edge-weighted graphs.",
"The second major row of Table 1 shows the results of integrating such explicit syntactic information on the dev data.",
"In particular, BASELINE uses no syntactic information, known as the syntax-agnostic method; DEPHEAD concatenates an extra embedding of the head word in the 1-best parse tree with the original input; TREELSTM applies the TreeLSTM to encode the 1-best tree structures; DEPGCN applies GCN to encode the edge-weighted graphs.",
"For DEPGCN-HARD , the 1-best tree is converted to a binary adjacency and is encoded by DEPGCN.",
"It is obvious that using explicit syntactic information is helpful for ORL.",
"All the syntax-aware models improve the performance by 0.88 2.26 F1 score.",
"The DEPHEAD approach is the most intuitive way to represent syntactic information by using head word embeddings, which serves as a simple syntax-aware baseline method.",
"The TREELSTM approach encodes 1-best tree recursively in a much more complex way, but achieves nearly the same performance with the DEPHEAD method.",
"We suspect the reason may be that the TREELSTM method is prone to parsing errors.",
"the 1-best tree, and achieves higher performance.",
"Compared with the TREELSTM approach, the DEPGCN-HARD approach is less sensitive to parsing errors, since a GCN layer only considers local adjacent structures and performs one-hop information propagation, whereas a TreeLSTM propagates information in either bottom-up or top-down order where earlier errors affect later computations a lot.",
"The best result of exploiting explicit information is obtained by the DEPGCN method, which is able to integrate richer structural information from edge-weighted graphs.",
"The bottom two major rows of Table 1 show the results on the dev data.",
"DEPHDN exploits implicit information of parser hidden states.",
"We can see that the implicit DEPHDN method outperforms the best explicit DEPGCN method by 2.17 F1 score, indicating the effectiveness of the integration of parser hidden states, which is consistent with previous studies on the SRL task (Xia et al., 2019a).",
"The advantage of using implicit hidden states is being able to greatly alleviate the error propagation from explicit parsing results.",
"We further simultaneously integrate explicit and implicit syntactic information into one model, which achieves the best performance of 62.58 F1 score, and outperforms the syntax-agnostic baseline and the DEPHDN method by 5.56 and 1.13 F1 scores, respectively.",
"This demonstrates that ORL can benefit from both explicit and implicit syntactic information.",
"In summary, we can conclude that encoding the edge-weighted graphs is more effective than the 1-best trees, and combining both explicit and implicit syntactic information brings higher performance than either.",
"approaches, we apply MTL frameworks to the abovementioned pipeline architectures.",
"Table 2 shows the results of the MTL settings with previously better-performing configurations on the dev dataset, together with a commonly used hard-parameter-sharing MTL for parsing and ORL.",
"M-BASELINE serves as an MTL baseline, which shares the encoder for the two tasks (Figure 2-c).",
"M-DEPGCN and M-DEPHDN respectively apply the DEPGCN and DEPHDN approaches under our MTL framework, and M-DEP GCN+D EPHDN combines them.",
"Firstly, although sharing the encoder of the parser and ORL already brings in more than 2 F1 score improvement compared with the syntax-agnostic baseline (BASELINE ), it is much inferior to other MTL approaches and the pipeline DEPHDN method (comparing Table 1).",
"This may be caused by the weakness of the encoder parameters for ORL, as discussed in Section 4 and Xia et al. (2019a).",
"Secondly, compared with the corresponding approaches under the pipeline architecture, all approaches under our MTL framework improve the performance by 2.45 4.24 F1 scores, which indicates that MTL is highly effective in alleviating the error propagation problem.",
"Finally, the combination of the explicit edge-weighted graphs and the implicit parser hidden states is still the most effective model under the MTL framework, outperforming the BASELINE in Table 1 by 8.01 F1 score.",
"In this section, we report the overall performance of our approaches compared with previous methods on the test data, as shown in Table 3.",
"In particular, we list our syntax-agnostic baseline (BASELINE in Table 1), others' works (Zhang et al. (2019b) and Marasovic and Frank (2018), using SRL for ORL), best non-MTL approaches based on our results on the dev data (DEPGCN for explicit syntactic information and DEPHDN for implicit syntactic information), and finally the MTL-based models.",
"The results of BASELINE with BERT and our best model with BERT are also listed to demonstrate the contributions from the contextualized word representations.",
"We can draw the following findings.",
"Compared with the DEPGCN and DEPHDN approaches (i.e., explicit or implicit only), the DEP GCN+D EPHDN approach achieves better performance on both Holder and Target recognition.",
"All of the MTL configurations boost the performance compared with their pipeline counterparts, as ORL-oriented parsing models are learned, and the error propagation problem is less severe.",
"Our best syntax-aware MTL model combined with BERT achieves the best performance, outperforming the baseline with BERT by more than 3 F1 score.",
"Compared with the previous state-of-the-art methods, we obtain 4.34 and 1.39 improvement of F1 scores with and without BERT, respectively.",
"Overall, our best model achieves 9.29 higher F1 score over the syntax-agnostic baseline.",
"In this section, we conduct analysis to better understand the contributions from the syntactic information and BERT.",
"In particular, we compute the exact F1 score according to different lengths of opinion arguments, as well as different distances between the arguments and their corresponding expressions.",
"Influence of Syntax.",
"Figure 3-(a-b) show the effects of syntax on predicting arguments of different span lengths and distances to their expressions, respectively.",
"We observe that 1) the performance of combining explicit and implicit syntactic information is always higher than either of them, while the DEPGCN and DEPHDN approaches compensate each other at different argument span lengths; and 2) MTL performs better than the best pipeline US and UK Criticise Mugabe 's Victory Gold Holder Target Base Target +BERT Holder Target +Syntax Holder Target Figure 4: An example of different ORL outputs for US and UK Criticise Mugabe 's Victory.",
"model consistently, which indicates that the usage of syntax is further enhanced as the error propagation is less severe.",
"Influence of BERT.",
"Figure 3-(c-d) show the similar graphs of the best syntax-aware model and BERT.",
"Firstly, both M-Comb and BERT bring substantial improvements over the syntax-agnostic baseline.",
"Secondly, despite that the syntactic information and BERT are similar in the overall performance, the syntactic information is more effective for arguments with longer spans and farther distances to the expressions, as the syntax helps to capture long-distance dependencies between words.",
"And lastly, the integration of syntax and BERT can further improve the performance, demonstrating that contributions from the two are complementary.",
"Case Study.",
"One case study is given in Figure 4. In this example, the gold holder US and UK is difficult to be identified by the baseline model.",
"Even with the help of BERT, which brings more contextual information, the model still only captures one of them, the closest holder UK.",
"Our syntax-aware model accurately predicts the holder due to the coordination structure being captured by the syntactic dependency information.",
"In this paper, we present a syntax-aware opinion role labeling approach based on dependency GCN and MTL.",
"We compare different representations of syntactic dependency information and propose dependency GCN to encode richer structural information from different processing levels of the parser.",
"The MTL framework further boosts the performance, and together with BERT, our best model achieves a new state-of-the-art result on the widely-used ORL benchmark MPQA corpus.",
"Overall, our syntax-aware model brings in about 9.29 improvement of exact F1 score compared with the syntax-agnostic baseline.",
"The authors would like to thank the anonymous reviewers for the helpful comments.",
"This work was supported by National Natural Science Foundation of China (Grant No. 61525205, 61876116) and a Project Funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions, and was also partially supported by Alibaba Group through Alibaba Innovative Research Program."
] |
[
"abstain",
"abstain",
"method",
"objective",
"method",
"result",
"result",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"other",
"other"
] |
[
"Abstract Meaning Representation parsing is a sentence-to-graph prediction task where target nodes are not explicitly aligned to sentence tokens.",
"However, since graph nodes are semantically based on one or more sentence tokens, implicit alignments can be derived.",
"Transition-based parsers operate over the sentence from left to right, capturing this inductive bias via alignments at the cost of limited expressiveness.",
"In this work, we propose a transition-based system that combines hard-attention over sentences with a target-side action pointer mechanism to decouple source tokens from node representations and address alignments.",
"We model the transitions as well as the pointer mechanism through straightforward modifications within a single Transformer architecture.",
"Parser state and graph structure information are efficiently encoded using attention heads.",
"We show that our action-pointer approach leads to increased expressiveness and attains large gains (+1.6 points) against the best transition-based AMR parser in very similar conditions.",
"While using no graph re-categorization, our single model yields the second best SMATCH score on AMR 2.0 (81.8), which is further improved to 83.4 with silver data and ensemble decoding.",
"Abstract Meaning Representation (AMR) (Ba-narescu et al., 2013) is a sentence level semantic formalism encoding who does what to whom in the form of a rooted directed acyclic graph.",
"Nodes represent concepts such as entities or predicates which are not explicitly aligned to words, and edges represent relations such as subject/object (see Figure 1).",
"AMR parsing, the task of generating the graph from a sentence, is nowadays tackled with sequence to sequence models parametrized with neural networks.",
"There are two broad categories of methods that are highly effective in recent years.",
"Transition-based approaches predict a sequence of boy want-01 go-02 city name York New ARG0 ARG0 ARG4 name op1 op2 ARG1 Figure 1: AMR graph expressing the meaning of the sentence The boy wants to go to New York .",
"actions given the sentence.",
"These actions generate the graph while processing tokens left-to-right through the sentence and store intermediate representations in memories such as stack and buffer (Wang et al., 2015; Damonte et al., 2016; Ballesteros and Al-Onaizan, 2017; Vilares and Gmez-Rodrguez, 2018; Naseem et al., 2019; Astudillo et al., 2020; Lee et al., 2020).",
"General graph-based approaches, on the other hand, directly predict nodes and edges in sequential order from graph traversals such as breath first search or depth first search (Zhang et al., 2019a,b; Cai and Lam, 2019, 2020).",
"While not modeling the local semantic correspondence between graph nodes and source tokens, the approaches achieve strong results without restrictions of transition-based approaches, but often require graph re-categorization, a form of graph normalization, for optimal performance.",
"The strong left-to-right constraint of transition-based parsers provides a form of inductive bias that fits AMR characteristics.",
"AMR nodes are very often normalized versions of sentence tokens and locality between words and nodes is frequently preserved.",
"The fact that transition-based systems for AMR have alignments as the core of their explanatory model also guarantees that they produce reliable alignments at decoding time, which are useful for applications utilizing AMR parses.",
"Despite these advantages, transition-based systems still suffer in situations when multiple nodes are best explained as aligned to one sentence token or none.",
"Furthermore, long distance edges in AMR, e.g. re-entrancies, require excessive use of SWAP or I offer a solution to the problem .",
"equivalent actions, leading to very long action sequences.",
"This in turn affects both a model's ability to learn and its decoding speed.",
"In this work, we propose the Action-Pointer Transition (APT) system which combines the advantages of both the transition-based approaches and more general graph-generation approaches.",
"We focus on predicting an action sequence that can build the graph from a source sentence.",
"The core idea is to put the target action sequence to a dual use as a mechanism for graph generation as well as the representation of the graph itself.",
"Inspired by recent progress in pointer-based parsers (Ma et al., 2018a; Fernndez-Gonzlez and Gmez-Rodrguez, 2020), we replace the stack and buffer by a cursor that moves from left to right and introduce a pointer network (Vinyals et al., 2015) as mechanism for edge creation.",
"Unlike previous works, we use the pointer mechanism on the target side, pointing to past node generation actions to create edges.",
"This eliminates the node generation and attachment restrictions of previous transition-based parsers.",
"It is also more natural for graph generation, essentially resembling the generation process in the graph-based approaches, but keeping the graph and source aligned.",
"We model both the action generation and the pointer prediction with a single Transformer model (Vaswani et al., 2017).",
"We relate target node and source token representations through masking of cross-attention mechanism, similar to Astudillo et al. (2020) but simply with monotonic action-source alignment driven by cursor positions, rather than stack and buffer contents.",
"Finally we also embed the AMR graph structural information in the target decoder by re-purposing edge-creating steps, and propose a novel step-wise incremental graph message passing method (Gilmer et al., 2017) enabled by the decoder self-attention mechanism.",
"Experiments on AMR 1.0, AMR 2.0, and AMR 3.0 benchmark datasets show the effectiveness of our APT system.",
"We outperform the best transition-based systems while using sensibly shorter action sequences, and achieve better performance than all previous approaches with similar size of training parameters.",
"Figure 2 shows a partially parsed example of a source sentence, a transition action sequence and the AMR graph for the proposed transitions.",
"Given a source sentence x = x 1 , x 2 , . . . , x S , our transition system works by scanning the sentence from left to right using a cursor c t { 1 , 2 , . . . , S } .",
"Cursor movement is controlled by three actions: SHIFT moves cursor one position to the right, such that c t +1 = c t + 1 .",
"MERGE merges tokens x c t and x c t +1 and SHIFT s.",
"Merged tokens act as a single token under the position of the last token merged.",
"COPY creates a node by copying the word under x c t .",
"Since AMR nodes are often lemmas or prop-bank frames, two versions of this action exist to copy the lemma of x c t or provide the first sense (frame 01 ) constructed from the lemma.",
"This covers a large portion of the total AMR nodes.",
"It also helps generalize for predictions of unseen nodes.",
"We use an external lemmatizer 1 for this action.",
"subgraph indexed by label LABEL .",
"Any future attachments can only be made to the root of the subgraph.",
"LA ( ID , LABEL ) creates an arc with LABEL from last generated node to a previous node at position ID .",
"Note that we can only point to past node generating actions in the action history.",
"Using the above actions, it is easy to derive an oracle action sequence given gold-graph information and initial word to node alignments.",
"For current cursor position, all the nodes aligned to it are generated using SUBGRAPH (), COPY or PRED () actions.",
"Each node prediction action is followed by edge creation actions.",
"Edges connecting to closer nodes are generated before the farther ones.",
"When multiple connected nodes are aligned to one token, they are traversed in pre-order for node generation.",
"A detailed description of oracle algorithm is given in Appendix B. The use of a cursor variable c t decouples node reference from source tokens, allowing to produce multiple nodes and edges (see Figure 3), even the entire AMR graph if necessary, from a single token.",
"This provides more expressiveness and flexibility than previous transition-based AMR parsers, while keeping a strong inductive bias.",
"The only restriction is that all inbound or outbound edges between current node and all previously produced nodes need to be generated before predicting a new node or shifting the cursor.",
"This does not limit the oracle coverage, however, for trained parsers, it leads to a small percentage of disconnected graphs in decoding.",
"Furthermore, nodes within the SUBGRAPH () action can not be reached for edge creation.",
"The use of SUBGRAPH () action, initially introduced in Ballesteros and Al-Onaizan (2017), is reduced in this work to cases where no such edges are expected, which is mainly the case for dates and named-entities.",
"Compared to previous oracles (Ballesteros and Al-Onaizan, 2017; Naseem et al., 2019; Astudillo et al., 2020), the action-pointer does not use a SWAP action.",
"It can establish an edge between the last predicted node and any previous node, since edges are created by pointing to decoder representations.",
"This oracle is expected to work with generic AMR aligners.",
"For this work, we use the alignments generation method of Astudillo et al. (2020), which generates many-to-many alignments.",
"It is a combination of Expectation Maximization based alignments of Pourdamghani et al. (2014) and rule base alignments of Flanigan et al. (2014).",
"Any remaining unaligned nodes are aligned based on their graph proximity to unaligned tokens.",
"For more details, we refer the reader to the works of Astudillo et al. (2020) and Naseem et al. (2019).",
"The backbone of our model is the encoder-decoder Transformer (Vaswani et al., 2017), combined with a pointer network (Vinyals et al., 2015).",
"The probability of an action sequence y = y 1 , y 2 , . . . , y T for input tokens x = x 1 , x 2 , . . . , x S is given in our model by P ( y | x ) = T (cid:89) t =1 P ( y t | y <t , x ) = T (cid:89) t =1 P ( a t | a <t , p <t , x ) P ( p t | a t , p <t , x ) (1) where at each time step t , we decompose the target action y t into the pointer-removed action and the pointer value with y t = ( a t , p t ) .",
"A dummy pointer p t = null is fixed for non-edge actions, so that P ( p t | a t , p <t , x ) = [ P ( p t | a <t , p <t , x )] ( a t ) where ( a t ) is an indicator variable set to 0 if a t is not an edge action and 1 otherwise.",
"Given a sequence to sequence Transformer model with N encoder layers and M decoder layers, each decoder layer is defined by d mt = FF m (CA m (SA m ( d m 1 t , d m 1 t ) , e N )) where FF m () , CA m () and SA m () are feedforward, multi-head cross-attention and multi-head self-attention components respectively 2 .",
"e N is the output of last encoder layer and d m 1 is the output of the previous decoder layer, with d 0 t initialized to be the embeddings of the action history y <t concatenated with a special start symbol.",
"The distribution over actions is given by P ( a t | a <t , p <t , x ) = softmax (cid:0) W d Mt (cid:1) a t where W are the output vocabulary embeddings, and the edge pointer distribution is given by P ( p t | a <t , p <t , x ) = softmax (cid:0) ( KM d M 1 t ) T QM d M 1 t (cid:1) p t where KM , QM are key and query matrices of 1 head of the last decoder self-attention layer SAM () .",
"The top layer self-attention is a natural choice for the pointer network, since it is likely to have high values for the nodes involved in the edge direction and label prediction.",
"Although the edge action and its pointing value are both output at the same step, the specialized pointer head is also part of the overall self-attention mechanism used to compute the model's hidden representations, thus making actions distribution aware of the pointer distribution.",
"Our transition system moves the cursor c t over the source from left to right during parsing, essentially maintaining a monotonic alignment between target actions and source tokens.",
"We encode the alignment c t with hard attentions in cross-attention heads CA m () with m = 1 M at every decoder layer.",
"We mask one head of the cross-attention to see only the aligned source token at c t , and augment it with another head masked to see only positions > c t .",
"This is similar to the hard attention in Peng et al. (2018) and parser state encoding in Astudillo et al. (2020).",
"2 Each of these are wrapped around with residual, dropout and layer normalization operations removed for simplicity.",
"is also internalized with the model during training so that the model can always focus on relevant action subsets when making predictions.",
"Incrementally generated graphs are usually modeled via graph neural networks (Li et al., 2018), where a node's representation is updated from the collection of it's neighboring nodes' representations by message passing (Gilmer et al., 2017).",
"However, this requires re-computation of all node representations every time the graph is modified, which is expensive, prohibiting its use in previous graph-based AMR parsing works (Cai and Lam, 2020).",
"To better utilize the intermediate topological graph information without losing the efficient parallelization of Transformer, we propose to use the edge creation actions as updated views of each node, that encode this node's neighboring subgraph.",
"This does not change the past computations and can be done by altering the hard masking of the self-attention heads of decoder layers SA m () .",
"By interpreting the decoder layers as implementing message passing vertically, we can fully encode graphs up to depth M .",
"Given a node generating action a t = v , it is followed by k 0 edge generating actions a t +1 , a t +2 , . . . , a t + k that connect the current node with previous nodes, pointed by p t +1 , p t +2 , . . . , p t + k positions on the target side.",
"This also defines k graph modifications, expanding the graph neighborhood on the current node.",
"Figure 4 shows an example for the sentence The boy wants to go , with node prediction actions at positions t = 2 , 4 , 8 , with k being 0, 1, 2, respectively.",
"We use the steps from t to t + k in the Transformer decoder to encode this expanding neighborhood.",
"In particular, we fix the decoder input as the current node action v for these steps, as illustrated in the input actions in Figure",
"4. At each intermediate step [ t, t + k ] , 2 decoder self-attention heads SA m () are restricted to only attend to the direct graph neighbors of the current node, represented by previous nodes at positions p t , p t +1 , , p as well as the current position .",
"This essentially builds sub-sequences of node representations with richer graph information step by step, and we use the last reference of the same node for pointing positions when generating new edges.",
"Moreover, when propagating this masking pattern along m layers, each node encodes its m -hop neighborhood information.",
"This defines a message passing procedure as shown in Figure 4, encoding the compositional relations between nodes.",
"Since the edges have directions indicated by LA and RA , we also encode the direction information by separating the two heads with each only considering one direction.",
"Our model is trained by maximizing the log likelihood of Equation (1).",
"The valid action space, action-source alignment c t , and the graph embedding mask at each step t are pre-calculated at training time.",
"For inference, we modify the beam search algorithm to jointly search for actions and edge pointers and combine them to find the action sequence that maximizes Equation (1).",
"We also consider hard constraints in the searching process such as valid output actions and valid target pointing values at different steps to ensure an AMR graph is recoverable.",
"For the structural information that is extracted from the parsing state such as c t and graph embedding masks, we compute them on the fly at each new step of decoding based on the current results, which are then used by the model for the next step decoding.",
"We detail our search algorithm in Appendix C. 5 Experimental Setup Data and Evaluation We test our approach on two widely used AMR parsing benchmark datasets: AMR 2.0 (LDC2017T10) and AMR 1.0 (LDC2014T12).",
"The AMR graphs are all human annotated.",
"The two datasets have 36521 and 10312 training AMRs, respectively, and share 1368 development AMRs and 1371 testing AMRs 3 .",
"We also report results on the latest AMR 3.0 (LDC2020T02) dataset, which is larger in size but has not been fully explored, with 55635 training AMRs and 1722 and 1898 AMRs for development and testing set.",
"Wiki links are removed in the preprocessing of data, and we run a wikification approach in post-processing to recover Wikipedia entries in the AMR graphs as in Naseem et al. (2019).",
"For evaluation, we use the SMATCH (F1) scores 4 (Cai and Knight, 2013) and further the fine-grained evaluation metrics (Damonte et al., 2016) to assess the model's AMR parsing performance.",
"Model Configuration Our base setup has 6 layers and 4 attention heads for both the Transformer encoder and decoder, with model size 256 and feedforward size 512.",
"We also compare with a small model with 3 layers in encoder and decoder but identical otherwise.",
"The pointer network is always tied with one target self-attention head of the top decoder layer.",
"We use the cross-attention of all decoder layers for action-source alignment.",
"For graph embedding, we use 2 heads of the bottom 3 layers for the base model and bottom 2 layers for the small model.",
"We use contextualized embeddings extracted from the pre-trained RoBERTa (Liu et al., 2019) large model for the source sentence, with average of all layer states and BPE tokens mapped to words by averaging as in (Lee et al., 2020).",
"The pre-trained embeddings are fixed.",
"For 3 Although there are annotation revisions from AMR 1.0 to AMR 2.0.",
"Implementation Details We use the Adam optimizer with 1 of 0.9 and 2 of 0.98 for training.",
"Each data batch has 3584 maximum number of tokens, and the learning rate schedule is the same as Vaswani et al. (2017), where we use the maximum learning rate of 5e 4 with 4000 warm-up steps.",
"We use a dropout rate of 0.3 and label smoothing rate of 0.01.",
"We train all the models for a maximum number of 120 epochs, and average the best 5 epoch checkpoints among the last 40 checkpoints based on the SMATCH scores on the development data with greedy decoding.",
"We use a default beam size of 10 for decoding.",
"We implement our model 5 with the FAIRSEQ toolkit (Ott et al., 2019).",
"All models are trained and tested on a single Nvidia Titan RTX GPU.",
"Training takes about 10 hours on AMR 2.0 and 3.5 hours on AMR 1.0.",
"Oracle Actions Table 1 compares the oracle data SMATCH and average action sequence length on the AMR 2.0 training set among recent transition systems.",
"Our approach yields much shorter action sequences due to the target-side pointing mechanism.",
"It has also the best coverage on training AMR graphs, due to the flexibility of our transitions that can capture the majority of graph components.",
"We chose not to tackle a number of small corner cases, such as disconnected subgraphs for a token, that account for the missing oracle performance.",
"Parsing Performance We compare our action-pointer transition/Transformer (APT) model with existing approaches in Table 2 6 .",
"We indicate the use of pre-trained BERT or RoBERTa embeddings 5 Available under https://github.com/IBM/ transition-amr-parser .",
"(from large models) with B or R , and graph re-categorization with G .",
"Graph re-categorization (Lyu and Titov, 2018; Zhang et al., 2019a; Cai and Lam, 2020; Bevilacqua et al., 2021) removes node senses and groups certain nodes together such as named entities in pre-processing.",
"It reverts these back in post-processing with the help of a name entity recognizer.",
"We report results over 3 runs for each model with different random seeds.",
"Given that we use fixed pre-trained embeddings, it becomes computationally cheap to build a partial ensemble Model Fixed Extra Features TrainedParam.",
"With the exception of the recent BART-based model Bevilacqua et al. (2021), we outperform all previously published approaches, both with our small and base models.",
"Our best single-model parsing scores are 81.8 on AMR 2.0 and 78.5 on AMR 1.0, which improves 1 .",
"6 points over the previous best model trained only with gold data.",
"Our small model only trails the base model by a small margin and we achieve high performance on small AMR 1.0 dataset, indicating that our approach ben-efits from having good inductive bias towards the problem so that the learning is efficient.",
"More remarkably, we even surpass the scores reported in Lee et al. (2020) combining various self-learning techniques and utilizing 85K extra sentences for self-annotation (silver data).",
"For the most recent AMR 3.0 dataset, we report our results for future reference.",
"Additionally, the partial ensemble decoding proves to be simple and effective in boosting the model performance, which consistently brings more than 1 point gain for AMR 1.0 and 2.0.",
"It should be noted that the ensemble decoding is only 20 % slower than a single model.",
"We thus use this ensemble to annotate the 85 K sentence set used in (Lee et al., 2020).",
"After removing parses with detached nodes we obtained 70K model-annotated silver data sentences.",
"Adding these for training regularly, we achieve our best score of 83.4 with ensemble on AMR 2.0.",
"Model Size In Table 3, we compare parameter sizes of recently published models alongside their parsing performances on AMR 2.0.",
"Similar to our approach, most models use large pre-trained models to extract contextualized embeddings as fixed features, with the exception of Xu et al. (2020), which is a seq-to-seq pre-training approach on large amount of data, and Bevilacqua et al. (2021), which directly fine-tunes a seq-to-seq BART large (Lewis et al., 2019) model.",
"7 Except the large BART model, our APT small (3 layers) has the least number of trained parameters yet already surpasses all the previous models.",
"This justifies our method is highly efficient in learning for AMR parsing.",
"Moreover, with the small parameter size, the partial ensemble is an appealing way to improve parsing quality with minor decoding overhead.",
"Although more performant, direct fine-tuning of pre-trained seq-to-seq models such as BART would require prohibitively large numbers to perform an ensemble.",
"Fine-grained Results Table 4 shows the fine-grained AMR 2.0 evaluation (Damonte et al., 2016) of APT and previous models with comparable trainable parameter sizes.",
"Our model achieves the best scores among all sub-tasks except negations and wikification, handled by post-processing on the best performing approach.",
"We obtain large improvement on edge related sub-tasks including SRL ( ARG arcs) and Reentrancies, proving the effectiveness of our target-side pointer mechanism.",
"Ablation of Model Components We evaluate the contribution of different components in our model in Table",
"5. The top part of the table shows effects of 2 major components that utilize parser state information and the graph structural information in the Transformer decoder.",
"The baseline model is a free Transformer model with pointers (row 1), which is greatly increased by including the monotonic action-source alignment via hard attention (row 2) on both AMR 1.0 and AMR 2.0 corpus, and combining it with the graph embedding (row 3) gives further improvements of 0.3 and 0.2 for AMR 1.0 and AMR 2.0.",
"This highlights that injecting hard encoded structural information in the Transformer decoder greatly helps our problem.",
"The bottom part of Table 5 evaluates the contribution of output space restriction for target and input pre-trained embeddings for source, respectively.",
"Removing the restriction for target output space i.e. the valid actions, hurts the model performance, as the model may not be able to learn the underlying rules that govern the target sequence restrictions.",
"Switching the RoBERTa large embeddings to RoBERTa base or BERT large also hurts the performance (although score drops are only 0 . 3 0 . 6 ), indicating that the contextual embeddings from large and better pre-trained models better equip the parser to capture semantic relations in the source sentence.",
"Effect of Oracle Setup As our model directly learns from the oracle actions, we study how the upstream transition system affects the model performance by varying transition setups in Table",
"6. We try three variations of the oracle.",
"In the first setup, we measure the impact of breaking down SUBGRAPH action into individual node generation and attachment actions.",
"We do this by using the SUBGRAPH for all cases of multi-node alignments.",
"This degrades the parser performance and oracle SMATCH considerably, dropping by absolute 1.1 points.",
"This is expected, since SUBGRAPH action makes internal nodes of the subgraph unattachable.",
"In the second setup, we vary the order of edge creation actions.",
"We reverse it so that the edges connecting farther nodes are built first.",
"Although this does not affect the oracle score, we observe that the model performance on this oracle drops by 0.3.",
"The reason might be that the easy close-range edge building actions become harder when pushed farther, also making easy decisions first is less prone to error propagation.",
"Finally, we also change the order in which the various nodes connected to a token are created.",
"Instead of generating the nodes from the root downwards, we perform a post-order traversal, where leaves are generated before parents.",
"This also does not affect oracle score, however it gave a minor gain in parser performance.",
"Effect of Beam Size Figure 5 shows performance for different beam sizes.",
"Ideally, if the model is more certain and accurate in making right predictions at different steps, the decoding performance should be less impacted by beam size.",
"The results show that performance improves with beam size, but the gains saturate at beam size",
"3. This indicates that a smaller beam size can be considered 2 4 6 8 10 beam size 81.0 81.2 81.4 81.6 81.8 S m a t c h ( % ) base model small model Figure 5: Effect of decoding beam size for SMATCH , with our best single models on AMR 2.0 test set.",
"With the exception of Astudillo et al. (2020), other works introducing stack and buffer information into sequence-to-sequence attention parsers (Liu and Zhang, 2017; Zhang et al., 2017; Buys and Blunsom, 2017), are based on RNNs and do not attain high performances.",
"Liu and Zhang (2017); Zhang et al. (2017) tackle dependency parsing and propose modified attention mechanisms while Buys and Blunsom (2017) predicts semantic graphs jointly with their alignments and compares stack-based with latent and fixed alignments.",
"Compared to the stack-Transformer (Astudillo et al., 2020), we propose the use of an action pointing mechanism to decouple word and node representation, remove the need for stack and buffer and model graph structure on the decoder side.",
"We show that these improvements yield superior performance while exploiting the same inductive biases with little train data or small models.",
"Vilares and Gmez-Rodrguez (2018) proposed an AMR-CONVINGTON system for unrestricted nonprojective AMR parsing, comparing the current word with all previous words for arc attachment as we propose.",
"However, their comparison is done with sequential actions whereas we use an efficient pointer mechanism to parallelize the process.",
"Regarding the use of pointer mechanisms for arc attachment, Ma et al. (2018b) proposed the stack-pointer network to build partial graph representations, and Fernndez-Gonzlez and Gmez-Rodrguez (2020) adopted pointers along with the left-to-right scan of the sentence, greatly improving the efficiency.",
"Compared with these works, we tackle a more general text-to-graph problem, where nodes are only loosely related to words, by utilizing the action-pointer mechanism.",
"Our method is also able to build up to depth M graph representations with M decoding layers.",
"While not explicitly stated, graph-based approaches (Zhang et al., 2019a; Cai and Lam, 2020) generate edges with a pointing mechanism, either with a deep biaffine classifier (Dozat and Manning, 2018) or with attention (Vaswani et al., 2017).",
"They also model inductive biases indirectly through graph re-categorization, detailed in Section 6.1, which requires a name entity recognition system at test time.",
"Re-categorization was proposed in Lyu and Titov (2018), which reformulated alignments as a differentiable permutation problem, interpretable as another form of inductive bias.",
"Finally, augmenting seq-to-seq models with graph structures has been explored in various NLP areas, including machine translation (Hashimoto and Tsuruoka, 2017; Moussallem et al., 2019), text classification (Lu et al., 2020), AMR to text generation (Zhu et al., 2019), etc.",
"Most of these works model graph structure in the encoder since the complete source sentence and graph are known.",
"We embed a dynamic graph in the Transformer decoder during parsing.",
"This is similar to broad graph generation approaches (Li et al., 2018) relying on graph neural networks (Li et al., 2019), but our approach is much more efficient as we do not require heavy re-computation of node representations.",
"We present an Action-Pointer mechanism that can naturally handle the generation of arbitrary graph constructs, including re-entrancies and multiple nodes per token.",
"Our structural modeling with incremental encoding of parser and graph states based on a single Transformer architecture proves to be highly effective, obtaining the best results on all AMR corpora among models with similar learnable parameter sizes.",
"An interesting future exploration is on combining our system with large pre-trained models such as BART, as directly fine-tuning on the latter shows great potential in boosting the performance (Bevilacqua et al., 2021).",
"Although we focus on AMR graphs in this work, our system can essentially be adopted to any task generating graphs from texts where copy mechanisms or hard-attention plays a central role."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"result",
"result",
"method"
] |
[
"Advances in variational inference enable pa-rameterisation of probabilistic models by deep neural networks.",
"This combines the statistical transparency of the probabilistic modelling framework with the representational power of deep learning.",
"Yet, due to a problem known as posterior collapse , it is difficult to estimate such models in the context of language modelling effectively.",
"We concentrate on one such model, the variational auto-encoder, which we argue is an important building block in hierarchical probabilistic models of language.",
"This paper contributes a sober view of the problem, a survey of techniques to address it, novel techniques, and extensions to the model.",
"To establish a ranking of techniques, we perform a systematic comparison using Bayesian optimisation and find that many techniques perform reasonably similar, given enough resources.",
"Still, a favourite can be named based on convenience.",
"We also make several empirical observations and recommendations of best practices that should help researchers interested in this exciting field.",
"Deep generative models (DGMs) are probabilistic latent variable models parameterised by neural networks (NNs).",
"Specifically, DGMs optimised with amortised variational inference and reparam-eterised gradient estimates (Kingma and Welling, 2014; Rezende et al., 2014), better known as variational auto-encoders (VAEs), have spurred much interest in various domains, including computer vision and natural language processing (NLP).",
"semantic parsing (Corro and Titov, 2018; Lyu and Titov, 2018), document modelling (Miao et al., 2016), summarisation (Miao and Blunsom, 2016), machine translation (Zhang et al., 2016; Schulz et al., 2018; Eikema and Aziz, 2019), language and vision (Pu et al., 2016; Wang et al., 2017), dialogue modelling (Wen et al., 2017; Serban et al., 2017), speech modelling (Fraccaro et al., 2016), and, of course, language modelling (Bowman et al., 2016; Goyal et al., 2017).",
"One problem remains common to the majority of these models, VAEs often learn to ignore the latent variables.",
"We investigate this problem, dubbed posterior collapse , in the context of language models (LMs).",
"In a deep generative LM (Bowman et al., 2016), sentences are generated conditioned on samples from a continuous latent space, an idea with various practical applications.",
"For example, one can constrain this latent space to promote generalisa-tions that are in line with linguistic knowledge and intuition (Xu and Durrett, 2018).",
"This also allows for greater flexibility in how the model is used, for example, to generate sentences that livein latent spacein a neighbourhood of a given observation (Bowman et al., 2016).",
"Despite this potential, VAEs that employ strong generators (e.g. recurrent NNs) tend to ignore the latent variable.",
"Figure 1 illustrates this point: neighbourhood in latent space does not correlate to patterns in data space, and the model behaves just like a standard LM.",
"Recently, many techniques have been proposed to address this problem ( 3 and 7) and they range from modifications to the objective to changes to the actual model.",
"Some of these techniques have only been tested under different conditions and under different evaluation criteria, and some of them have only been tested outside NLP.",
"This paper contributes: (1) a novel strategy based on constrained optimisation towards a pre-specified upper-bound on mutual information; (2) multimodal priors that by design promote increased mutual information between data and latent code; last and, arguably most importantly, (3) a systematic comparison in terms of resources dedicated to hyperparameter search and sensitivity to initial conditionsof strategies to counter posterior collapse, including some never tested for language models (e.g. InfoVAE, LagVAE, soft free-bits, and multimodal priors).",
"Density estimation for written text has a long history (Jelinek, 1980; Goodman, 2001), but in this work we concentrate on neural network models (Bengio et al., 2003), in particular, autoregressive ones (Mikolov et al., 2010).",
"Following common practice, we model sentences independently, each a sequence x = (cid:104) x 1 , . . . , x n (cid:105) of n = | x | tokens.",
"A language model (LM) prescribes the generation of a sentence as a sequence of categorical draws parameterised in context, i.e. P ( x | ) =",
"To condition on all of the available context, a fixed NN f ( ) maps from a prefix sequence (denoted x <i ) to the parameters of a categorical distribution over the vocabulary.",
"We estimate the parameters of the model by searching for a local optimum of the log-likelihood function L ( )= EX [log P ( x | )] via stochastic gradient-based optimisation (Rob-bins and Monro, 1951; Bottou and Cun, 2004), where the expectation is taken w.r.t. the true data distribution and approximated with samples x D from a data set of i.i.d. observations.",
"Throughout, we refer to this model as RNNLM alluding to a particular choice of f ( ; ) that employs a recurrent neural network (Mikolov et al., 2010).",
"Bowman et al. (2016) model observations as draws from the marginal of a DGM.",
"An NN maps from a latent sentence embedding z R d z to a distribution P ( x | z, ) over sentences, P ( x | ) = (cid:90) p ( z ) P ( x | z, )d z = (cid:90) N ( z | 0 , I ) | x | (cid:89) i =1 Cat( x i | f ( z, x <i ; ))d z , (2) where z follows a standard Gaussian prior.",
"1 Generation still happens one word at a time without Markov assumptions, but f ( ) now conditions on z in addition to the observed prefix.",
"The conditional P ( x | z, ) is commonly referred to as generator or decoder .",
"The quantity P ( x | ) is the marginal likelihood , essential for parameter estimation.",
"This model is trained to assign a high (marginal) probability to observations, much like standard LMs.",
"Unlike standard LMs, it employs a latent space which can accommodate a low-dimensional manifold where discrete sentences are mapped to, via posterior inference p ( z | x, ) , and from, via generation P ( x | z, ) .",
"This gives the model an explicit mechanism to exploit neighbourhood and smoothness in latent space to capture regularities in data space.",
"For example, it may group sentences according to latent factors (e.g. lexical choices, syntactic complexity, etc.).",
"It also gives users a mechanism to steer generation towards a specific purpose.",
"For example, one may be interested in generating sentences that are mapped from the neighbourhood of another in latent space.",
"To the extent this embedding space captures appreciable regularities, interest in this property is heightened.",
"Approximate inference Marginal inference for this model is intractable and calls for variational inference (VI; Jordan et al., 1999), whereby an auxiliary and independently parameterised model q ( z | x, ) approximates the true posterior p ( z | x, ) .",
"When this inference model is itself parameterised by a neural network, we have a case of amortised inference (Kingma and Welling, 2014; Rezende et al., 2014) and an instance of what is known as a VAE.",
"Bowman et al. (2016) approach posterior inference with a Gaussian model Z | , x N ( u , diag( s (cid:12) s )) [ u , s ] = g ( x ; ) (3) whose parameters, i.e. a location vector u RD and a scale vector s R D> 0 , are predicted by a neural network architecture g ( ; ) from an encoding of the complete observation x .",
"2 In this work, we use a bidirectional recurrent encoder.",
"Throughout the text we will refer to this model as SENVAE.",
"1 We use uppercase P ( ) for probability mass functions and lowercase p ( ) for probability density functions.",
"2 We use boldface for deterministic vectors and (cid:12) for elementwise multiplication.",
"(a) Greedy generation from prior samples (top) yields the same sentence every time, showing that the latent code is ignored.",
"Yet, ancestral sampling (bottom) produces good sentences, showing that the recurrent decoder learns about the structure of English sentences.",
"(b) Homotopy: ancestral samples mapped from points along a linear interpolation of two given sentences as represented in latent space.",
"The sentences do not seem to exhibit any coherent relation, showing that the model does not exploit neighbourhood in latent space to capture regularities in data space.",
"inference) by locally maximising a lower-bound on the log-likelihood function (ELBO) E ( , ) = EX (cid:2) E q ( z | x, ) [log P ( x | z, )] KL( q ( z | x, ) || p ( z )) (cid:3) .",
"(4) For as long as we can reparameterise samples from q ( z | x, ) using a fixed random source, automatic differentiation (Baydin et al., 2018) can be used to obtain unbiased gradient estimates of the ELBO (Kingma and Welling, 2014; Rezende et al., 2014).",
"In VI, we make inferences using an approximation q ( z | x, ) to the true posterior p ( z | x, ) and choose as to minimise the KL divergence EX [KL( q ( z | x, ) || p ( z | x, ))] .",
"The same principle yields a lower-bound on log-likelihood used to estimate jointly with , thus making the true posterior p ( z | x, ) a moving target.",
"If the estimated conditional P ( x | z, ) can be made independent of z , which in our case means relying exclusively on x <i to predict the distribution of X i , the true posterior will be independent of the data and equal to the prior.",
"3 Based on such observation, Chen et al. (2017) argue that information that can be modelled by the generator without using latent variables will be modelled that wayprecisely because when no information is encoded in the latent variable the true posterior equals the prior and it is then trivial to reduce EX [KL( q ( z | x, ) || p ( z | x, ))] to 0 .",
"This is typically diagnosed by noting that after training KL( q ( z | x, ) || p ( z )) 0 for most x : we say that the true posterior collapses to the prior .",
"Alemi et al. (2018) show that the rate , R = EX [KL( q ( z | x, ) || p ( z ))] , is an upperbound to I ( X ; Z | ) , the mutual information (MI) between X and Z .",
"Thus, if KL( q ( z | x, ) || p ( z )) is 3 This follows trivially from the definition of posterior: p ( z | x ) = p ( z ) P ( x | z ) P ( x ) X Z = p ( z ) P ( x ) P ( x ) = p ( z ) .",
"close to zero for most training instances, MI is either 0 or negligible.",
"They also show that the distortion , D = EX [ E q ( z | x, ) [log P ( x | z, )]] , relates to a lower-bound on MI (the lower-bound being H D , where H is the unknown data entropy).",
"A generator that makes no Markov assumptions, such as a recurrent LM, can potentially achieve X i Z | x <i , , and indeed many have noticed that VAEs whose observation models are parameterised by such strong generators (or strong decoders) tend to ignore the latent representation (Bowman et al., 2016; Higgins et al., 2017; Snderby et al., 2016; Zhao et al., 2018b).",
"For this reason, a strategy to prevent posterior collapse is to weaken the decoder (Yang et al., 2017; Semeniuta et al., 2017; Park et al., 2018).",
"In this work, we are interested in employing strong generators, thus we do not investigate weaker decoders.",
"Other strategies involve changes to the optimisation procedure and manipulations to the objective that target local optima of the ELBO with non-negligible MI.",
"Annealing Bowman et al. (2016) propose KL annealing, whereby the KL term in the ELBO is incorporated into the objective in gradual steps.",
"This way the optimiser can focus on reducing distortion early on in training, potentially by increasing MI.",
"They also propose to drop words from x <i at random to weaken the decoderintuitively the model would have to rely on z to compensate for missing history.",
"We experiment with a slight modification of word dropout whereby we slowly vary the dropout rate from 1 0 .",
"In a sense, we anneal from a weak to a strong generator.",
"Targeting rates Another idea is to target a pre-specified rate (Alemi et al., 2018).",
"Kingma et al. (2016) replace the KL term in the ELBO with max( r, KL( q ( z | x, ) || p ( z ))) , dubbed free bits (FB) because it allows encoding the first r nats of information for free.",
"As long as KL( q ( z | x, ) || p ( z )) < r , this does not optimise a proper ELBO (it misses the KL term), and the max introduces a discontinuity.",
"Chen et al. (2017) propose soft free bits (SFB), that instead multiplies the KL term in the ELBO with a weighing factor 0 < 1 that is dynamically adjusted based on the target rate r : is incremented (or reduced) by if R > r (or R < r ).",
"Note that this technique requires hyperparameters (i.e. , , ) besides r to be tuned in order to determine how is updated.",
"Change of objective We may also seek alternatives to the ELBO as an objective and relate them to quantities of interest such as MI.",
"A simple adaptation of the ELBO weighs its KL-term by a con-stant factor ( -VAE; Higgins et al., 2017).",
"Setting < 1 promotes increased MI.",
"Whilst being a useful counter to posterior collapse, low might lead to variational posteriors becoming point estimates.",
"InfoVAE (Zhao et al., 2018b) mitigates this with a term aimed at minimising the divergence from the aggregated posterior q ( z | ) = EX [ q ( z | x, )] to the prior.",
"Following Zhao et al. (2018b), we approximate this with an estimate of maximum mean discrepancy (MMD; Gretton et al., 2012) in our experiments.",
"Lagrangian VAE (LagVAE; Zhao et al., 2018a) casts VAE optimisation as a dual problem; it targets either maximisation or minimisation of (bounds on) I ( X ; Z | ) under constraints on the InfoVAE objective.",
"In MI-maximisation mode, LagVAE maximises a weighted lower-bound on MI, D , under two constraints, a maximum -ELBO and a maximum MMD, that prevent p ( z | x, ) from degenerating to a point mass.",
"Reasonable values for these constraints have to be found empirically.",
"We propose minimum desired rate (MDR), a technique to attain ELBO values at a pre-specified rate r that does not suffer from the gradient discontinuities of FB, and does not introduce the additional hyperparameters of SFB.",
"The idea is to optimise the ELBO subject to a minimum rate constraint r : max , E ( , ) , s.t. EX [KL( q ( z | x, ) || p ( z ))] > r .",
"Because constrained optimisation is generally intractable, we optimise the Lagrangian (Boyd and Vandenberghe, 2004) ( , , u ) =",
"where u R 0 is a positive Lagrangian multiplier.",
"We define the dual function ( u ) = max , ( , , u ) and solve the dual problem min u R 0 ( u ) .",
"Local minima of the resulting min-max objective can be found by performing stochastic gradient descent with respect to u and stochastic gradient ascent with respect to , .",
"It is insightful to compare MDR to the various techniques we surveyed in terms of the gradients involved in their optimisation.",
"The losses minimised by KL annealing, -VAE, and SFB have the form (cid:96) ( , ) = D + R , where 0 .",
"FB minimises the loss (cid:96) FB ( , ) = D + max( r, R ) , where r > 0 is the target rate.",
"Last, with respect to and , MDR minimises the loss (cid:96) MDR ( , ) = D + R + u ( r R ) , where u R 0 is the Lagrangian multiplier.",
"And with respect to u , MDR minimises ( u ) = D R u ( R r ) .",
"Let us inspect gradients with respect to the parameters of the VAE, namely, and .",
"FB's gradient , (cid:96) FB ( , ) = , D + (cid:40) 0 if R r , R otherwise (7a) is discontinuous, that is, there is a sudden jump' from zero to a (possibly) large gradient w.r.t. R when the rate dips above r .",
"KL annealing, -VAE, and SFB do not present such discontinuity , (cid:96) ( , ) = , D + , R , (7b) for scales the gradient w.r.t. R .",
"Hence, MDR is another form of KL weighing, albeit one that targets a specific rate.",
"Compared to -VAE, MDR has the advantage that is not fixed but estimated to meet the requirements on rate.",
"Compared to KL -annealing, MDR dispenses with a fixed schedule for updating , not only annealing schedules are fixed, they require multiple decisions (e.g. number of steps, linear or exponential increments) whose impact on the objective are not directly obvious.",
"Most similar then, seems SFB.",
"Like MDR, it flexibly updates by targeting a rate.",
"techniques become apparent when we observe how is updated.",
"In case of SFB: ( t +1) = ( t ) + (cid:40) if R > r if R < r (8a) where , and are hyperparameters.",
"In case of MDR (not taking optimiser-specific dynamics into account): u ( t +1) = u ( t ) ( u ) u = u ( t ) + ( R r ) (8b) where is a learning rate.",
"From this, we conclude that MDR is akin to SFB, but MDR's update rule is a direct consequence of Lagrangian relaxation and thus dispenses with the additional hyperparameters in SFB's handcrafted update rule.",
"4 5 Expressive Priors Suppose we employ a multimodal prior p ( z | ) , e.g. a mixture of Gaussians, and suppose we employ a unimodal posterior approximation, e.g. the typical diagonal Gaussian.",
"This creates a mismatch between the prior and the posterior approximation families that makes it impossible for KL( q ( z | x, ) || p ( z | )) to be precisely 0 .",
"For the aggregated posterior q ( z | ) to match the prior, the inference model would have toon averagecover all of the prior's modes.",
"Since the inference network is deterministic, it can only do so as a function of the conditioning input x , thus increasing I ( X ; Z | ) .",
"Admittedly, this conditioning might still only capture shallow features of x , and the generator may still choose to ignore the latent code, keeping I ( X ; Z | ) low, but the potential seems to justify an attempt.",
"This view builds upon Alemi et al. (2018)'s information-theoretic view which suggests that the prior regularises the inference model capping I ( X ; Z | ) .",
"Thus, we modify SENVAE to employ a more complex, ideally multimodal, parametric prior p ( z | ) and fit its parameters.",
"MoG Our first option is a uniform mixture of Gaussians (MoG), i.e. p ( z | ) = 1 CC (cid:88) c =1 N ( z | ( c ) , diag( ( c ) (cid:12) ( c ) )) (9) 4 Note that if we set = 1 , = 1 , and = ( R r ) at every step of SFB, we recover MDR.",
"where the Gaussian parameters are optimised along with other generative parameters.",
"Note that though we give this prior up to C modes, the optimiser might merge some of them (by learning approximately the same location and scale).",
"VampPrior Motivated by the fact that, for a fixed posterior approximation, the prior that optimises the ELBO equals EX [ q ( z | x, )] , Tomczak and Welling (2018) propose the VampPrior, a variational mixture of posteriors : p ( z | ) = 1 CC (cid:88) c =1 q ( z | v ( c ) , ) (10) where v ( c ) is a learned pseudo inputin their case a continuous vector.",
"Again the parameters of the prior, i.e. { v ( c ) } Cc =1 , are optimised in the ELBO.",
"In our case, the input to the inference network is a discrete sentence, which is incompatible with the design of the VampPrior.",
"Thus, we propose to bypass the inference network's embedding layer and estimate a sequence of word embeddings, which makes up a pseudo input.",
"That is, v ( c ) is a sequence (cid:104) v ( c ) 1 , . . . , v ( c ) l c (cid:105) where v ( c ) i has the dimensionality of our embeddings, and l c is the length of the sequence (fixed at the beginning of training).",
"Note, however, that for this prior to be multimodal, the inference model must already encode information in Z , thus there is some gambling in its design.",
"Our goal is to identify which techniques are effective in training VAEs for language modelling.",
"Our evaluation concentrates on intrinsic metrics: negative log-likelihood (NLL), perplexity per token (PPL), rate ( R ), distortion ( D ), the number of active units (AU; Burda et al., 2015)) 5 and gap in the accuracy of next word prediction (given gold prefixes) when decoding from a posterior sample versus decoding from a prior sample (Acc gap ).",
"For VAE models, NLL (and thus PPL) can only be estimated.",
"We use importance sampling (IS) P ( x | ) = (cid:90) p ( z, x | )d z IS = (cid:90) q ( z | x ) p ( z, x | ) q ( z | x ) d z MC 1 SS (cid:88) s =1 p ( z ( s ) , x | ) q ( z ( s ) | x ) where z ( s ) q ( z | x ) (11) 5 A latent unit (a single dimension of z ) is denoted active when its variance with respect to x is larger than 0.01.",
"with our trained approximate posterior as importance distribution (we use S = 1000 samples).",
"We first report on experiments using the English Penn Treebank (PTB; Marcus et al., 1993).",
"6 RNNLM The baseline RNNLM generator is a building block for all of our SEN VAEs, thus we validate its performance as a strong standalone generator.",
"We highlight that it outperforms an external baseline that employs a comparable number of parameters (Dyer et al., 2016) and that this performance boost is mostly due to tying embeddings with the output layer.",
"7 Appendix A.1 presents the complete architecture and a comparison.",
"Bayesian optimisation The techniques we compare are sensitive to one or more hyperparameters (see Table 1), which we tune using Bayesian optimisation (BO) towards minimising estimated NLL of the validation data.",
"For each technique, we ran 25 iterations of BO, each iteration encompassing training a model to full convergence.",
"This was suf-ficient for the hyperparameters of each technique to converge.",
"See Appendix A.2 for details.",
"On optimisation strategies First, we assess the effectiveness of techniques that aim at promoting local optima of SENVAE with better MI tradeoff.",
"As for the architecture, the approximate posterior q ( z | x, ) employs a bidirectional recurrent encoder, and the generator P ( x | z, ) is essentially our RNNLM initialised with a learned projection of z (complete specification in A.1).",
"We train with Adam (Kingma and Ba, 2014) with default parameters and a learning rate of 10 3 until convergence five times for each technique.",
"6 We report on Dyer et al. (2016)'s pre-processing, rather than Mikolov et al. (2010)'s.",
"Whereas our findings are quantitatively similar, qualitative analysis based on generations are less interesting with Mikolov's far too small vocabulary.",
"7 Stronger RNN-based models can be designed (Melis et al., 2018), but those use vastly more parameters.",
"the vanilla VAE (no special treatment) encodes no information in latent space ( R = 0 ).",
"Then note that all techniques converged to VAEs that attain better PPL than the RNNLM and vanilla VAE, and all but annealed word dropout did so at non-negligible rate.",
"Notably, the two most popular techniques, word dropout and KL annealing, perform sub-par to the other techniques.",
"8 The techniques that work well at non-negligible rates can be separated into two groups: one based on a change of objective (i.e., -VAE, InfoVAE and LagVAE), another based on targeting a specific rate (i.e., FB, SFB, and MDR).",
"InfoVAE, LagVAE and SFB all require tuning of multiple hyperparameters.",
"InfoVAE and LagVAE, in particular, showed poor performance without this careful tuning.",
"In the first group, consider LagVAE, for example.",
"Though Zhao et al. (2018a) argue that the magnitude of is not particularly important (in MI-maximisation mode, they fixed it to 1 ), we could not learn a useful SENVAE with LagVAE until we allowed BO to also estimate the magnitude of .",
"Once BO converges to the values in Table 1, the method does perform quite well.",
"Generally, it is hard to believe that hyperparameters transfer across data sets, thus it is fair to expect that this exercise will have to be repeated every time.",
"We argue that the rate hyperparameter common to the techniques in the second group is more interpretable and practical in most cases.",
"For example, it is easy to grid-search against a handful of values.",
"Hence, we further investigate FB and MDR by varying the target rate further (from 5 to 50 ).",
"SFB is left out, for MDR generalises SFB's handcrafted update rule.",
"We observe that FB and MDR attain essentially the same PPL across rates, 8 Though here we show annealed word dropout, to focus on techniques that do not weaken the generator, standard word dropout also converged to negligible rates.",
"though MDR attains the desired rate earlier on in training, especially for higher targets (where FB fails at reaching the specified rate).",
"Importantly, at the end of training, the validation rate is closer to the target for MDR.",
"Appendix B supports these claims.",
"Though Acc gap already suggests it, Figure 2 shows more visibly that MDR leads to output Categorical distributions that are more sensitive to the latent encoding.",
"We measure this sensitivity in terms of symmetrised KL between output distributions obtained from a posterior sample and output distributions obtained from a prior sample for the same time step given an observed prefix.",
"On expressive priors Second, we compare the impact of expressive priors.",
"This time, prior hyperparameters were selected via grid search and can be found in Appendix A.1.",
"All models are trained with a target rate of 5 using MDR, with settings otherwise the same as the previous experiment.",
"In Table 3 it can be seen that more expressive priors do not improve perplexity further, 9 though 9 Here we remark that best runs (based on validation performance) do show an advantage, which stresses the need to report multiple runs as we do.",
"they seem to encode more information in the latent variablenote the increased number of active units and the increased gap in accuracy.",
"One may wonder whether stronger priors allow us to target higher rates without hurting PPL.",
"This does not seem to be the case: as we increase rate to 50 , all models perform roughly the same, and beyond 20 performance degrades quickly.",
"10 The models did, however, show a further increase in active units (VampPrior) and accuracy gap (both priors).",
"Again, Appendix B contains plots supporting these claims.",
"Generated samples Figure 3 shows samples from a well-trained SENVAE, where we decode greedily from a prior samplethis way, all variability is due to the generator's reliance on the latent sample.",
"Recall that a vanilla VAE ignores z and thus greedy generation from a prior sample is essentially deterministic in that case (see Figure 1a).",
"Next to the samples we show the closest training instance, which we measure in terms of an edit distance (TER; Snover et al., 2006).",
"11 This near-est neighbour helps us assess whether the generator is producing novel text or simply reproducing something it memorised from training.",
"In Figure 4 we show a homotopy: here we decode greedily from points lying between a posterior sample conditioned on the first sentence and a posterior sample conditioned on the last sentence.",
"In contrast to the vanilla VAE (Figure 1b), neighbourhood in latent space is now used to capture some regularities in data space.",
"These samples add support to the quantitative evidence that our DGMs have been trained not to neglect the latent space.",
"In Appendix B we provide more samples.",
"Other datasets To address the generalisability of our claims to other, larger, datasets, we report results on the Yahoo and Yelp corpora (Yang et al., 2017) in Table 4.",
"We compare to the work of He et al. (2019), who proposed to mitigate posterior collapse with aggressive training of the inference network, optimising the inference network multiple steps for each step of the generative network.",
"12 .",
"We report on models trained with the standard prior as well as an MoG prior both op-10 We also remark that, without MDR, the MoG model attains validation rate of about 2 .",
"5 .",
"11 Thisdistancemetricvariesfrom 0 to 1 , where 1 indicates thesentenceiscompletelynoveland 0 indicatesthesentenceis essentiallycopiedfromthetrainingdata.",
"12 Toenabledirectcomparisonwereplicatedtheexperimental setup from (He et al., 2019) and built our methods into their codebase.",
"timised with MDR, and a model trained without optimisation techniques.",
"13 It can be seen that MDR compares favourably to other optimisation techniques reported in (He et al., 2019).",
"Whilst aggressive training of the inference network performs slightly better in terms of NLL and leads to more active units, it slows down training by a factor of 4.",
"The MoG prior improves results on Yahoo but not on Yelp.",
"This may indicate that a multimodal prior does offer useful extra capacity to the latent space, 14 at the cost of more instability in optimisation.",
"This confirms that targeting a pre-specified rate leads to VAEs that are not collapsed without hurting NLL.",
"13 WefocusonMoGsincethePTBexperimentsshowedthe VampPriortounderperformintermsofAU.",
"14 We tracked the average KL divergence between any two components of the prior and observed that the prior remained multimodal.",
"Recommendations We recommend targeting a specific rate via MDR instead of annealing (or word dropout).",
"Besides being simple to implement, it is fast and straightforward to use: pick a rate by plotting validation performance against a handful of values.",
"Stronger priors, on the other hand, while showing indicators of higher mutual information (e.g. AU and Acc gap ), seem less effective than MDR.",
"Use IS estimates of NLL, rather than single-sample ELBO estimates, for model selection, for the latter can be too loose of a bound and too heavily influenced by noisy estimates of KL.",
"15 Use many samples for a tight bound.",
"16 Inspect sentences greedily decoded from a prior (or posterior) sample as this shows whether the generator is at all sensitive to the latent code.",
"Retrieve nearest neighbours to spot copying behaviour.",
"In NLP, posterior collapse was probably first noticed by Bowman et al. (2016), who addressed it via word dropout and KL scaling.",
"Further investigation revealed that in the presence of strong generators, 15 This point seems obvious to many, but enough published papersreportexponentiatedlossordistortionpertoken,which, besidesunreliable,makecomparisonsacrosspapersdifficult.",
"16 Weuse 1000 samples.Comparedtoasinglesampleestimate, we have observed differences up to 5 perplexity points in noncollapsedmodels.",
"the ELBO itself becomes the culprit (Chen et al., 2017; Alemi et al., 2018), as it lacks a preference regarding MI.",
"Posterior collapse has also been ascribed to approximate inference (Kim et al., 2018; Dieng and Paisley, 2019).",
"Beyond the techniques compared and developed in this work, other solutions have been proposed, including modifications to the generator (Semeniuta et al., 2017; Yang et al., 2017; Park et al., 2018; Dieng et al., 2019), side losses based on weak generators (Zhao et al., 2017), factorised likelihoods (Ziegler and Rush, 2019; Ma et al., 2019), cyclical annealing (Liu et al., 2019) and changes to the ELBO (Tolstikhin et al., 2018; Goyal et al., 2017).",
"Exploiting a mismatch in correlation between the prior and the approximate posterior, and thus forcing a lower-bound on the rate, is the principle behind -VAEs (Razavi et al., 2019) and hyperspherical VAEs (Xu and Durrett, 2018).",
"The generative model of -VAEs has one latent variable per step of the sequence, i.e. z = (cid:104) z 1 , . . . , z | x | (cid:105) , making it quite different from that of the SEN VAEs considered here.",
"Their mean-field inference model is a product of independent Gaussians, one per step, but they construct a correlated Gaussian prior by making the prior distribution over the next step depend linearly on the previous step, i.e. Z i | z i 1 N ( z i 1 , ) with hyperparameters and .",
"Hyperspherical VAEs work on the unit hypersphere with a uniform prior and a nonuniform VonMises-Fisher posterior approximation (Davidson et al., 2018).",
"Note that, though in this paper we focused on Gaussian (and mixture of Gaussians, e.g. MoG and VampPrior) priors, MDR is applicable for whatever choice of prescribed prior.",
"Whether its benefits stack with the effects due to different priors remains an empirical question.",
"GECO (Rezende and Viola, 2018) casts VAE optimisation as a dual problem, and in that it is closely related to our MDR and the LagVAE.",
"GECO targets minimisation of EX [KL( q ( z | x, ) || p ( z ))] under constraints on distortion, whereas LagVAE targets either maximisation or minimisation of (bounds on) I ( X ; Z | ) under constraints on the InfoVAE objective.",
"Contrary to MDR, GECO focuses on latent space regularisation and offers no explicit mechanism to mitigate posterior collapse.",
"Recently Li et al. (2019) proposed to combine FB, KL scaling, and pre-training of the inference network's encoder on an auto-encoding objective.",
"Their techniques are complementary to ours in so far as their main findingthe mutual benefits of annealing, pre-training, and lower-bounding KLis perfectly compatible with ours (MDR and multimodal priors).",
"SENVAE is a deep generative model whose generative story is rather shallow, yet, due to its strong generator component, it is hard to make effective use of the extra knob it offers.",
"In this paper, we have introduced and compared techniques for effective estimation of such a model.",
"We show that many techniques in the literature perform reasonably similarly (i.e. FB, SFB, -VAE, InfoVAE), though they may require a considerable hyperparameter search (e.g. SFB and InfoVAE).",
"Amongst these, our proposed optimisation subject to a minimum rate constraint is simple enough to tune (as FB it only takes a pre-specified rate and unlike FB it does not suffer from gradient discontinuities), superior to annealing and word dropout, and require less resources than strategies based on multiple annealing schedules and/or aggressive optimisation of the inference model.",
"Other ways to lower-bound rate, such as by imposing a multimodal prior, though promising, still require a minimum desired rate.",
"The typical RNNLM is built upon an exact fac-torisation of the joint distribution, thus a well-trained architecture is hard to improve upon in terms of log-likelihood of gold-standard data.",
"Our interest in latent variable models stems from the desire to obtain generative stories that are less opaque than that of an RNNLM, for example, in that they may expose knobs that we can use to control generation and a hierarchy of steps that may award a degree of interpretability to the model.",
"The SENVAE is not that model, but it is a crucial building block in the pursue for hierarchical probabilistic models of language.",
"We hope this work, i.e. the organised review it contributes and the techniques it introduces, will pave the way to deeperin statistical hierarchy generative models of language.",
"This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 825299 (GoURMET)."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other"
] |
[
"Questions of fairness, robustness, and transparency are paramount to address before deploying NLP systems.",
"Central to these concerns is the question of reliability: Can NLP systems reliably treat different demographics fairly and function correctly in diverse and noisy environments?",
"To address this, we argue for the need for reliability testing and contextualize it among existing work on improving accountability.",
"We show how adversarial attacks can be reframed for this goal, via a framework for developing reliability tests.",
"We argue that reliability testing with an emphasis on interdisciplinary collaboration will enable rigorous and targeted testing, and aid in the enactment and enforcement of industry standards.",
"Rigorous testing is critical to ensuring a program works as intended (functionality) when used under real-world conditions (reliability).",
"Hence, it is troubling that while natural language technologies are becoming increasingly pervasive in our everyday lives, there is little assurance that these NLP systems will not fail catastrophically or amplify discrimination against minority demographics when exposed to input from outside the training distribution.",
"Recent examples include GPT-3 (Brown et al., 2020) agreeing with suggested suicide (Rousseau et al., 2020), the mistranslation of an innocuous social media post resulting in a minority's arrest (Hern, 2017), and biased grading algorithms that can negatively impact a minority student's future (Feathers, 2019).",
"Additionally, a lack of rigorous testing, coupled with machine learning's (ML) implicit assumption of identical training and testing distributions, may inadvertently result in systems that discriminate against minorities, who are often underrepresented in the training data.",
"This can take Correspondence to: [email protected] Figure 1: How DOCTOR can integrate with existing system development workflows.",
"the form of misrepresentation of or poorer performance for people with disabilities, specific gender, ethnic, age, or linguistic groups (Hovy and Spruit, 2016; Crawford, 2017; Hutchinson et al., 2020).",
"Amongst claims of NLP systems achieving human parity in challenging tasks such as question answering (Yu et al., 2018), machine translation (Has-san et al., 2018), and commonsense inference (De-vlin et al., 2019), research has demonstrated these systems' fragility to natural and adversarial noise (Goodfellow et al., 2015; Belinkov and Bisk, 2018) and out-of-distribution data (Fisch et al., 2019).",
"It is also still common practice to equate test-ing with measuring held-out accuracy, even as datasets are revealed to be harmfully biased (Wag-ner et al., 2015; Geva et al., 2019; Sap et al., 2019).",
"Many potential harms can be mitigated by detecting them early and preventing the offending model from being put into production.",
"Hence, in addition to being mindful of the biases in the NLP pipeline (Bender and Friedman, 2018; Mitchell et al., 2019; Waseem et al., 2021) and holding creators accountable via audits (Raji et al., 2020; Brundage et al., 2020), we argue for the need to evaluate an NLP system's reliability in diverse operating conditions.",
"Initial research on evaluating out-of-distribution generalization involved manually-designed challenge sets (Jia and Liang, 2017; Nie et al., 2020; Gardner et al., 2020), counterfactuals (Kaushik et al., 2019; Khashabi et al., 2020; Wu et al., 2021), biased sampling (Sgaard et al., 2021) or toolk-its for testing if a system has specific capabilities (Ribeiro et al., 2020) or robustness to distribution shifts (Goel et al., 2021).",
"However, most of these approaches inevitably overestimate a given system's worst-case performance since they do not mimic the NLP system's adversarial distribution 1 .",
"A promising technique for evaluating worst-case performance is the adversarial attack.",
"However, although some adversarial attacks explicitly focus on specific linguistic levels of analysis (Belinkov and Bisk, 2018; Iyyer et al., 2018; Tan et al., 2020; Eger and Benz, 2020), many often simply rely on word embeddings or language models for perturbation proposal (see 4).",
"While the latter may be useful to evaluate a system's robustness to malicious actors, they are less useful for dimension-specific testing (e.g., reliability when encountering grammatical variation).",
"This is because they often perturb the input across multiple dimensions at once, which may make the resulting adversaries unnatural.",
"Hence, in this paper targeted at NLP researchers, practitioners, and policymakers, we make the case for reliability testing and reformulate adversarial attacks as dimension-specific , worst-case tests that can be used to approximate real-world variation.",
"We contribute a reliability testing framework DOCTOR that translates safety and fairness concerns around NLP systems into quantitative tests.",
"We demonstrate how testing dimensions for DOCTOR can be drafted for a specific use case.",
"Finally, we discuss the policy implications, challenges, and directions for future research on reliability testing.",
"NLP system.",
"The entire text processing pipeline built to solve a specific task; taking raw text as input and producing predictions in the form of labels 1 The distribution of adversarial cases or failure profile.",
"(classification) or text (generation).",
"We exclude raw language models from the discussion since it is unclear how performance, and hence worst-case performance, should be evaluated.",
"We do include NLP systems that use language models internally (e.g., BERT-based classifiers (Devlin et al., 2019)).",
"Reliability.",
"Defined by IEEE (2017) as the degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.",
"We prefer this term over robustness 2 to challenge the NLP community's common framing of inputs from outside the training distribution as noisy.",
"The notion of reliability requires us to explicitly consider the specific, diverse environments (i.e., communities) a system will operate in.",
"This is crucial to reducing the NLP's negative impact on the underrepresented.",
"Dimension.",
"An axis along which variation can occur in the real world, similar to Plank (2016)'s variety space.",
"A taxonomy of possible dimensions can be found in Table 1 (Appendix).",
"Adversarial attack.",
"A method of perturbing the input to degrade a target model's accuracy (Good-fellow et al., 2015).",
"In computer vision, this is achieved by adding adversarial noise to the image, optimized to be maximally damaging to the model.",
"4 describes how this is done in the NLP context.",
"Actor.",
"Someone who has influence over",
"a) the design of an NLP system and its reliability testing regime;",
"b) whether the system is deployed; and",
"c) who it can interact with.",
"Within the context of our discussion, actors are likely to be regulators, experts, and stakeholder advocates.",
"Expert.",
"An actor who has specialized knowledge, such as ethicists, linguists, domain experts, social scientists, or NLP practitioners.",
"The accelerating interest in building NLP-based products that impact many lives has led to urgent questions of fairness, safety, and accountability (Hovy and Spruit, 2016; Bender et al., 2021),",
"2 The degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions (IEEE, 2017).",
"prompting research into algorithmic bias (Boluk-basi et al., 2016; Blodgett et al., 2020), explainability (Ribeiro et al., 2016; Danilevsky et al., 2020), robustness (Jia and Liang, 2017), etc.",
"Research is also emerging on best practices for productizing ML: from detailed dataset documentation (Bender and Friedman, 2018; Gebru et al., 2018), model documentation for highlighting important but often unreported details such as its training data, intended use, and caveats (Mitchell et al., 2019), and documentation best practices (Partnership on AI, 2019), to institutional mechanisms such as auditing (Raji et al., 2020) to enforce accountability and red-teaming (Brundage et al., 2020) to address developer blind spots, not to mention studies on the impact of organizational structures on responsible AI initiatives (Rakova et al., 2020).",
"Calls for increased accountability and transparency are gaining traction among governments (116th U.S. Congress, 2019; NIST, 2019; European Commission, 2020; Smith, 2020; California State Legislature, 2020; FDA, 2021) and customers increasingly cite ethical concerns as a reason for not engaging AI service providers (EIU, 2020).",
"While there has been significant discussion around best practices for dataset and model creation, work to ensure NLP systems are evaluated in a manner representative of their operational conditions has only just begun.",
"Initial work in constructing representative tests focuses on enabling development teams to easily evaluate their models' linguistic capabilities (Ribeiro et al., 2020) and accuracy on subpopulations and distribution shifts (Goel et al., 2021).",
"However, there is a clear need for a paradigm that allows experts and stakeholder advocates to collaboratively develop tests that are representative of the practical and ethical concerns of an NLP system's target demographic.",
"We argue that reliability testing , by reframing the concept of adversarial attacks, has the potential to fill this gap.",
"Despite the recent advances in neural architectures resulting in breakthrough performance on benchmark datasets, research into adversarial examples and out-of-distribution generalization has found ML systems to be particularly vulnerable to slight perturbations in the input (Goodfellow et al., 2015) and natural distribution shifts (Fisch et al., 2019).",
"While these perturbations are often chosen to maximize model failure, they highlight serious reliability issues for putting ML models into production since they show that these models could fail catastrophically in naturally noisy, diverse, real-world environments (Saria and Subbaswamy, 2019).",
"Additionally, bias can seep into the system at multiple stages of the NLP lifecycle (Shah et al., 2020), resulting in discrimination against minority groups (O'Neil, 2016).",
"The good news, however, is that rigorous testing can help to highlight potential issues before the systems are deployed.",
"The need for rigorous testing in NLP is reflected in ACL 2020 giving the Best Paper Award to CheckList (Ribeiro et al., 2020), which applied the idea of behavior testing from software engineering to testing NLP systems.",
"While invaluable as a first step towards the development of comprehensive testing methodology, the current implementation of CheckList may still overestimate the reliability of NLP systems since the individual test examples are largely manually constructed.",
"Importantly, with the complexity and scale of current models, humans cannot accurately determine a model's adversarial distribution (i.e., the examples that cause model failure).",
"Consequently, the test examples they construct are unlikely to be the worst-case examples for the model.",
"Automated assistance is needed.",
"Therefore, we propose to perform reliability testing , which can be thought of as one component of behavior testing.",
"We categorize reliability tests as average-case tests or the worst-case tests.",
"As their names suggest, average-case and worst-case tests estimate the expected and lower-bound performance, respectively, when the NLP system is exposed to the phenomena modeled by the tests.",
"Average-case tests are conceptually similar to Wu et al. (2021)'s counterfactuals, which is contemporaneous work, while worst-case tests are most similar to adversarial attacks (4).",
"Our approach parallels boundary value testing in software engineering: In boundary value testing, tests evaluate a program's ability to handle edge cases using test examples drawn from the extremes of the ranges the program is expected to handle.",
"Similarly, reliability testing aims to quantify the system's reliability under diverse and potentially extreme conditions.",
"This allows teams to perform better quality control of their NLP systems and introduce more nuance into discussions of why and when models fail (5).",
"Finally, we note that reliability testing and standards are established practices in engineering industries (e.g., aerospace (Nelson, 2003; Wilkinson et al., 2016)) and advocate for NL engineering to be at parity with these fields.",
"label-scarce world A proposed approach for testing robustness to natural and adverse distribution shifts is to construct test sets using data from different domains or writing styles (Miller et al., 2020; Hendrycks et al., 2020), or to use a human vs. model method of constructing challenge sets (Nie et al., 2020; Zhang et al., 2019b).",
"While they are the gold standard, such datasets are expensive to construct, 3 making it infeasible to manually create worst-case test examples for each NLP system being evaluated.",
"Consequently, these challenge sets necessarily overestimate each system's worst-case performance when the inference distribution differs from the training one.",
"Additionally, due to their crowdsourced nature, these challenge sets inevitably introduce distribution shifts across multiple dimensions at once, and even their own biases (Geva et al., 2019), unless explicitly controlled for.",
"Building individual challenge sets for each dimension would be prohibitively expensive due to combinatorial explosion, even before having to account for concept drift (Widmer and Kubat, 1996).",
"This coupling complicates efforts to design a nuanced and comprehensive testing regime.",
"Hence, simulating variation in a controlled manner via reliability tests can be a complementary method of evaluating the system's out-of-distribution generalization ability.",
"We first give a brief introduction to adversarial attacks in NLP before showing how they can be used for reliability testing.",
"We refer the reader to Zhang et al. (2020b) for a comprehensive survey.",
"Existing work on NLP adversarial attacks perturbs the input at various levels of linguistic analysis: phonology (Eger and Benz, 2020), orthography (Ebrahimi et al., 2018), morphology (Tan et al., 2020), lexicon (Alzantot et al., 2018; Jin et al., 2020), and syntax (Iyyer et al., 2018).",
"Early work did not place any constraints on the attacks and merely used the degradation to a tar-3 Dua et al. (2019) reports a cost of 60k USD for 96k questionanswer pairs.",
"get model's accuracy as the measure of success.",
"However, this often resulted in the semantics and expected prediction changing, leading to an overestimation of the attack's success.",
"Recent attacks aim to preserve the original input's semantics.",
"A popular approach has been to substitute words with their synonyms using word embeddings or a language model as a measure of semantic similarity (Alzantot et al., 2018; Ribeiro et al., 2018; Michel et al., 2019; Ren et al., 2019; Zhang et al., 2019a; Li et al., 2019; Jin et al., 2020; Garg and Ramakr-ishnan, 2020; Li et al., 2020a).",
"Focusing on maximally degrading model accuracy overlooks the key feature of adversarial attacks: the ability to find the worst-case example for a model from an arbitrary distribution.",
"Many recent attacks perturb the input across multiple dimensions at once, which may make the result unnatural.",
"By constraining our sample perturbations to a distribution modeling a specific dimension of interest, the performance on the generated adversaries is a valid lower bound performance for that dimension.",
"Said another way, adversarial attacks can be reframed as interpretable reliability tests if we constrain them to meaningful distributions.",
"This is the key element of our approach as detailed in Alg.",
"1. We specify either an average (Lines 57) or worse case test (Lines 810), but conditioned on the data distribution D that models a particular dimension of interest d .",
"The resultant reliability score gauges real-world performance and the worst-case variant returns the adversarial examples that cause worst-case performance.",
"When invariance to input variation is expected, y (cid:48) is equivalent to the source label y .",
"However, the key difference between an adversarial robustness mindset and a testing one is the lat-ter's emphasis on identifying ways in which natural phenomena or ethical concerns can be operational-ized as reliability tests.",
"This change in perspective opens up new avenues for interdisciplinary research that will allow researchers and practitioners to have more nuanced discussions about model reliability and can be used to design comprehensive reliability testing regimes.",
"We describe such a framework for interdisciplinary collaboration next.",
"We introduce and then describe our general framework, DOCTOR, for testing the reliability of NLP systems.",
"DOCTOR comprises six steps:",
"1. D efine reliability requirements",
"2. O perationalize dimensions as distributions",
"3. C onstruct tests",
"4. T est system and report results",
"5. O bserve deployed system's behavior",
"6. R efine reliability requirements and tests Defining reliability requirements.",
"Before any tests are constructed, experts and stakeholder advocates should work together to understand the demographics and values of the communities the NLP system will interact with (Friedman and Hendry, 2019) and the system's impact on their lives.",
"The latter is also known as algorithmic risk assessment (Ada Lovelace Institute and DataKind UK, 2021).",
"There are three critical questions to address: 1) Along what dimensions should the model be tested?",
"2) What metrics should be used to measure system performance?",
"3) What are acceptable performance thresholds for each dimension?",
"Question 1 can be further broken down into:",
"a) general linguistic phenomena, such as alternative spellings or code-mixing;",
"b) task-specific quirks, e.g., an essay grading system should not use text length to predict score;",
"c) sensitive attributes, such as gender, ethnicity, sexual orientation, age, or disability status.",
"This presents an opportunity for interdisciplinary expert collaboration: Linguists are best equipped to contribute to discussions around",
"(a), domain experts to",
"(b), and ethicists and social scientists to",
"(c).",
"However, we recognize that such collaboration may not be feasible for every NLP system being tested.",
"It is more realistic to expect ethicists to be involved when applying DOCTOR at the company and industry levels, and ethics-trained NLP practitioners to answer these questions within the development team.",
"We provide a taxonomy of potential dimensions in Table 1 (Appendix).",
"Since it is likely unfeasible to test every possible dimension, stakeholder advocates should be involved to ensure their values and interests are accurately represented and prioritized (Hagerty and Rubinov, 2019), while experts should ensure the dimensions identified can be feasibly tested.",
"A similar approach to that of community juries 4 may be taken.",
"We recommend using this question to evaluate the feasibility of operationalizing potential dimensions: What is the system's performance when exposed to variation along dimension d ?.",
"For example, rather than simply gender, a better-defined dimension would be gender pronouns.",
"With this understanding, experts and policymakers can then create a set of reliability requirements , comprising the testing dimensions, performance metric(s), and passing thresholds.",
"Next, we recommend using the same metrics for held-out, average-case, and worst-case performance for easy comparison.",
"These often vary from task to task and are still a subject of active research (Novikova et al., 2017; Reiter, 2018; Kryscinski et al., 2019), hence the question of the right metric to use is beyond the scope of this paper.",
"Finally, ethicists, in consultation with the other aforementioned experts and stakeholders, will determine acceptable thresholds for worst-case performance.",
"The system under test must perform above said thresholds when exposed to variation along those dimensions in order to pass.",
"For worst-case performance, we recommend reporting thresholds as relative differences ( ) between the average-case and worst-case performance.",
"These questions may help in applying this step and deciding if specific NLP solutions should even exist (Leins et al., 2020): Who will interact with the NLP system, in what context, and using which language varieties?",
"What are the distinguishing features of these varieties compared to those used for training?",
"4 docs.microsoft.com/en-us/azure/.../community-jury What is the (shortand long-term) impact on the community's most underrepresented members if the system performs more poorly for them?",
"We note that our framework is general enough to be applied at various levels of organization: within the development team, within the company (com-pliance team, internal auditor), and within the industry (self-regulation or independent regulator).",
"However, we expect the exact set of dimensions, metrics and acceptable thresholds defined in Step 1 to vary depending on the reliability concerns of the actors at each level.",
"For example, independent regulators will be most concerned with establishing minimum safety and fairness standards that all NLP systems used in their industries must meet, while compliance teams may wish to have stricter and more comprehensive standards for brand reasons.",
"Developers can use DOCTOR to meet the other two levels of requirements and understand their system's behaviour better with targeted testing.",
"Operationalizing dimensions.",
"While the abstractness of dimensions allows people who are not NLP practitioners to participate in drafting the set of reliability requirements, there is no way to test NLP systems using fuzzy concepts.",
"Therefore, every dimension the system is to be tested along must be operationalizable as a distribution from which perturbed examples can be sampled in order for NLP practitioners to realize them as tests.",
"Since average-case tests attempt to estimate a system's expected performance in its deployed environment, the availability of datasets that reflect real-world distributions is paramount to ensure that the tests themselves are unbiased.",
"This is less of an issue for worst-case tests; the tests only needs to know which perturbations that are possible, but not how frequently they occur in the real world.",
"Figuring out key dimensions for different classes of NLP tasks and exploring ways of operationalizing them as reliability tests are also promising directions for future research.",
"Such research would help NLP practitioners and policymakers define reliability requirements that can be feasibly implemented.",
"Constructing tests.",
"Next, averageand worst-case tests are constructed (Alg. 1).",
"Average-case tests can be data-driven and could take the form of manually curated datasets or model-based perturbation generation (e.g., PolyJuice (Wu et al., 2021)), while worst-case tests can be rule-based (e.g., Morpheus (Tan et al., 2020)) or model-based (e.g., BERT-Attack (Li et al., 2020a)).",
"We recommend constructing tests that do not require access to the NLP model's parameters (black-box assump-tion); this not only yields more system-agnostic tests, but also allows for (some) tests to be created independently from the system development team.",
"If the black-box assumption proves limiting, the community can establish a standard set of items an NLP system should export for testing purposes, e.g., network gradients if the system uses a neural model.",
"Regardless of assumption, keeping the reg-ulators' test implementations separate and hidden from the system developers is critical for stakeholders and regulators to trust the results.",
"This separation also reduces overfitting to the test suite.",
"Testing systems.",
"A possible model for test ownership is to have independently implemented tests at the three levels of organization described above (team, company, industry).",
"At the development team level, reliability tests can be used to diagnose weaknesses with the goal of improving the NLP system for a specific use case and set of target users.",
"Compared to unconstrained adversarial examples, contrasting worst-case examples that have been constrained along specific dimensions with non-worst-case examples will likely yield greater intuition into the model's inner workings.",
"Studying how modifications (to the architecture, training data and process) affect the system's reliability on each dimension will also give engineers insight into the factors affecting system reliability.",
"These tests should be executed and updated regularly during development, according to software engineering best practices such as Agile (Beck et al., 2001).",
"Red teams are company-internal teams tasked with finding security vulnerabilities in their developed software or systems.",
"Brundage et al. (2020) propose to apply the concept of red teaming to surface flaws in an AI system's safety and security.",
"In companies that maintain multiple NLP systems, we propose employing similar, specialized teams composed of NLP experts to build and maintain reliability tests that ensure their NLP systems adhere to company-level reliability standards.",
"These tests will likely be less task-/domain-specific than those developed by engineering teams due to their wider scope, while the reliability standards may be created and maintained by compliance teams or the red teams themselves.",
"Making these standards available for public scrutiny and ensuring their products meet them will enable companies to build trust with their users.",
"To ensure all NLP systems meet the company's reliability standards, these reliability tests should be executed as a part of regular internal audits (Raji et al., 2020), investigative audits after incidents, and before major releases (especially if it is the system's first release or if it received a major update).",
"They may also be regularly executed on randomly chosen production systems and trigger an alert upon failure.",
"At the independent regulator level, reliability tests would likely be carried out during product certification (e.g., ANSI/ISO certification) and external audits.",
"These industry-level reliability standards and tests may be developed in a similar manner to the company-level ones.",
"However, we expect them to be more general and less comprehensive than the latter, analogous to minimum safety standards such as IEC 60335-1 (IEC, 2020).",
"Naturally, high risk applications and NLP systems used in regulated industries should comply with more stringent requirements (European Commission, 2021).",
"Our proposed framework is also highly compatible with the use of model cards (Mitchell et al., 2019) for auditing and transparent reporting (Raji et al., 2020).",
"In addition to performance on task-related metrics, model cards surface information and assumptions about a machine learning system and training process that may not be readily available otherwise.",
"When a system has passed all tests and is ready to be deployed, its averageand worst-case performance on all tested dimensions can be included as an extra section on the accompanying model card.",
"In addition, the perturbed examples generated during testing and their labels ( x (cid:48) , y (cid:48) ) can be stored for audit purposes or examined to ensure that the tests are performing as expected.",
"Observing and Refining requirements.",
"It is crucial to regularly monitor the systems' impact post-launch and add, update, or re-prioritize dimensions and thresholds accordingly.",
"Monitoring large-scale deployments can be done via community juries, in which stakeholders who will be likely impacted (or their advocates) give feedback on their pain points and raise concerns about potential negative effects.",
"Smaller teams without the resources to organize community juries can set up avenues (e.g., online forms) for affected stakeholders to give feedback, raise concerns, and seek remediation.",
"We now illustrate how reliability concerns can be converted into concrete testing dimensions (Step 1) by considering the scenario of applying automated text scoring to short answers and essays from students in the multilingual population of Singapore.",
"We study a second scenario in Appendix A. Automated Text Scoring (ATS) systems are increasingly used to grade tests and essays (Markoff, 2013; Feathers, 2019).",
"While they can provide instant feedback and help teachers and test agencies cope with large loads, studies have shown that they often exhibit demographic and language biases, such as scoring Africanand Indian-American males lower on the GRE Argument task compared to human graders (Bridgeman et al., 2012; Ramineni and Williamson, 2018).",
"Since the results of some tests will affect the futures of the test takers (Salaky, 2018), the scoring algorithms used must be suffi-ciently reliable.",
"Hence, let us imagine that Singapore's education ministry has decided to create a standard set of reliability requirements that all ATS systems used in education must adhere to.",
"Linguistic landscape.",
"A mix of language varieties are used in Singapore: a prestige English variety, a colloquial English variety, three other official languages (Chinese, Malay, and Tamil), and a large number of other languages.",
"English is the lingua franca , with fluency in the prestige variety correlating with socioeconomic status (Vaish and Tan, 2008).",
"A significant portion of the population does not speak English at home.",
"Subjects other than languages are taught in English.",
"Stakeholder impact.",
"The key stakeholders affected by ATS systems would be students in schools and universities.",
"The consequences of lower scores could be life-altering for the student who is unable to enroll in the major of their choice.",
"At the population level, biases in an ATS system trained on normally sampled data would unfairly discriminate against already underrepresented groups.",
"Additionally, biases against dis-fluent or ungrammatical text when they are not the tested attributes would result in discrimination against students with a lower socioeconomic status or for whom English is a second language.",
"Finally, NLP systems have also been known to be overly sensitive to alternative spellings (Belinkov and Bisk, 2018).",
"When used to score subject tests, this could result in the ATS system unfairly penalizing dyslexic students (Coleman et al., 2009).",
"Since education is often credited with enabling social mobility, 5 unfair grading may perpetuate systemic discrimination and increase social inequality.",
"Dimension.",
"We can generally categorize written tests into those that test for content correctness (e.g., essay questions in a history test), and those that test for language skills (e.g., proper use of grammar).",
"While there are tests that simultaneously assess both aspects, modern ATS systems often grade them separately (Ke and Ng, 2019).",
"We treat each aspect as a separate test here.",
"When grading students on content correctness, we would expect the ATS system to ignore linguistic variation and sensitive attributes as long as they do not affect the answer's validity.",
"Hence, we would expect variation in these dimensions to have no effect on scores: answer length, language/vocabulary simplicity, alternative spellings/misspellings of non-keywords, grammatical variation, syntactic variation (especially those resembling transfer from a first language), and proxies for sensitive attributes.",
"On the other hand, the system should be able to differentiate proper answers from those aimed at gaming the test (Chin, 2020; Ding et al., 2020).",
"When grading students on language skills, however, we would expect ATS systems to be only sensitive to the relevant skill.",
"For example, when assessing grammar use, we would expect the system to be sensitive to grammatical errors (from the perspective of the language variety the student is expected to use), but not to the other dimensions mentioned above (e.g., misspellings).",
"Actors.",
"Relevant experts include teachers of the subjects where the ATS systems will be deployed, linguists, and computer scientists.",
"The stakeholders (students) may be represented by student unions (at the university level) or focus groups comprising a representative sample of the student population.",
"There is a mounting effort to increase accountability and transparency around the development and use of NLP systems to prevent them from amplifying societal biases.",
"DOCTOR is highly complementary to the model card approach increasingly adopted 6 to surface oft hidden details about NLP 5 www.encyclopedia.com/.../education-and-mobility 6 huggingface.co/models;github.com/ivylee/model-cards-and-datasheets; models: Developers simply need to list the tested dimensions, metrics, and score on each dimension in the model card.",
"Crucially, reliability tests can be used to highlight fairness issues in NLP systems by including sensitive attributes for the target population, but it is paramount these requirements reflect local concerns rather than any prescriptivist perspective (Sambasivan et al., 2021).",
"At the same time, the ability to conduct quantitative, targeted reliability testing along specifiable dimensions paves the way for reliability standards to be established, with varying levels of stringency and rigor for different use cases and industries.",
"We envision minimum safety and fairness standards being established for applications that are non-sensitive, not safety-critical, and used in unregulated industries, analogous to standards for household appliances.",
"Naturally, applications at greater risks (Li et al., 2020b) of causing harm upon failure should be held to stricter standards.",
"Policymakers are starting to propose and implement regulations to enforce transparency and accountability in the use of AI systems.",
"For example, the European Union's General Data Protection Regulation grants data subjects the right to obtain meaningful information about the logic involved in automated decision systems (EU, 2016).",
"The EU is developing AI-specific regulation (European Commission, 2020): e.g., requiring developers of high-risk AI systems to report their capabilities and limitations, ... [and] the conditions under which they can be expected to function as intended.",
"In the U.S., a proposed bill of the state of Washington will require public agencies to report any potential impacts of the automated decision system on civil rights and liberties and potential disparate impacts on marginalized communities before using automated decision systems (Washington State Legislature, 2021).",
"One may note that language in the proposed regulation is intentionally vague.",
"There are many ways to measure bias and fairness, depending on the type of model, context of use, and goal of the system.",
"Today, companies developing AI systems employ the definitions they believe most reasonable (or perhaps easiest to implement), but regulation will need to be more specific for there to be meaningful compliance.",
"DOCTOR's requirement to explicitly define specific dimensions instead of a vague notion of reliability will help policymakers in this blog.einstein.ai/model-cards-for-ai-model-transparency regard, and can inform the ongoing development of national (NIST, 2019) and international standards 7 .",
"While external algorithm audits are becoming popular, testing remains a challenge since companies wishing to protect their intellectual property may be resistant to sharing their code (Johnson, 2021), and implementing custom tests for each system is unscalable.",
"Our approach to reliability testing offers a potential solution to this conundrum by treating NLP systems as black boxes.",
"If reliability tests become a legal requirement, regulatory authorities will be able to mandate independently conducted reliability tests for transparency.",
"Such standards, combined with certification programs (e.g., IEEE's Ethics Certification Program for Autonomous and Intelligent Systems 8 ), will further incentivize the development of responsible NLP, as the companies purchasing NLP systems will insist on certified systems to protect them from both legal and brand risk.",
"To avoid confusion, we expect certification to occur for individual NLP systems (e.g., an end-to-end question answering system for customer enquiries), rather than for general purpose language models that will be further trained to perform some specific NLP task.",
"While concrete standards and certification programs that can serve this purpose do not yet exist, we believe that they eventually will and hope our paper will inform their development.",
"This multi-pronged approach can help to mitigate NLP's potential harms while increasing public trust in language technology.",
"While DOCTOR is a useful starting point to implement reliability testing for NLP systems, we observe key challenges to its widespread adoption.",
"First, identifying and prioritizing the dimensions that can attest a system's reliability and fairness.",
"The former is relatively straightforward and can be achieved via collaboration with experts (e.g., as part of the U.S. NIST's future AI standards (NIST, 2019)).",
"The latter, however, is a question of values and power (Noble, 2018; Mohamed et al., 2020; Leins et al., 2020), and should be addressed via a code of ethics and ensuring that all stakeholders are adequately represented at the decision table.",
"Second, our proposed method of reliability testing may suffer from similar issues plaguing automatic 7 ethicsstandards.org/p7000 8 standards.ieee.org/industry-connections/ecpais.html evaluation metrics for natural language generation (Novikova et al., 2017; Reiter, 2018; Kryscinski et al., 2019): due to the tests' synthetic nature they may not fully capture the nuances of reality.",
"For example, if a test's objective were to test an NLP system's reliability when interacting with African American English (AAE) speakers, would it be possible to guarantee (in practice) that all generated examples fall within the distribution of AAE texts?",
"Potential research directions would be to design adversary generation techniques that can offer such guarantees or incorporate human feedback (Nguyen et al., 2017; Kreutzer et al., 2018; Stiennon et al., 2020).",
"Once language technologies leave the lab and start impacting real lives, concerns around safety, fairness, and accountability cease to be thought experiments.",
"While it is clear that NLP can have a positive impact on our lives, from typing auto-completion to revitalizing endangered languages (Zhang et al., 2020a), it also has the potential to perpetuate harmful stereotypes (Bolukbasi et al., 2016; Sap et al., 2019), perform disproportionately poorly for underrepresented groups (Hern, 2017; Bridgeman et al., 2012), and even erase already marginalized communities (Bender et al., 2021).",
"Trust in our tools stems from an assurance that stakeholders will remain unharmed, even in the worst-case scenario.",
"In many mature industries, this takes the form of reliability standards.",
"However, for standards to be enacted and enforced, we must first operationalize reliability.",
"Hence, we argue for the need for reliability testing (especially worst-case testing) in NLP by contextualizing it among existing work on promoting accountability and improving generalization beyond the training distribution.",
"Next, we showed how adversarial attacks can be reframed as worst-case tests.",
"Finally, we proposed a possible paradigm, DOCTOR, for how reliability concerns can be realized as quantitative tests, and discussed how this framework can be used at different levels of organization or industry.",
"Samson is supported by Salesforce and Singapore's Economic Development Board under the Industrial Postgraduate Programme.",
"Araz is supported by the NUS Centre for Trusted Internet and Community through project CTIC-RP-20-02.",
"Much like how we expect to not be exposed to harmful electric shocks when using electrical appliances, we should expect some minimum levels of safety and fairness for the NLP systems we interact with in our everyday lives.",
"As mentioned in 1, 3, and 7, standards and regulations for AI systems are in the process of being developed for this purpose, especially for applications deemed high-risk, e.g., healthcare (European Commission, 2020).",
"Reliability testing, and our proposed framework, is one way to approach the problem of enacting enforceable standards and regulations.",
"However, the flip side of heavily regulating every single application of NLP is that it may slow down innovation.",
"Therefore, it is important that the level of regulation for a particular application is proportionate to its potential for harm (Daten Ethik Kommission, 2019).",
"Our framework can be adapted to different levels of risk by scaling down the implementation of some steps (e.g., the method and depth in which stakeholder consultation happens or the comprehensiveness of the set of testing dimensions) for low-risk applications.",
"Finally, it is important to ensure that any tests, standards, or regulations developed adequately represents the needs of the most vulnerable stakeholders, instead of constructing them in a prescriptivist manner (Hagerty and Rubinov, 2019).",
"Hence, DOCTOR places a strong emphasis on involving stakeholder advocates and analyzing the impact of an application of NLP on the target community."
] |
[
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Sequence tagging models for constituent parsing are faster, but less accurate than other types of parsers.",
"In this work, we address the following weaknesses of such constituent parsers:",
"(a) high error rates around closing brackets of long constituents,",
"(b) large label sets, leading to sparsity, and",
"(c) error propagation arising from greedy decoding.",
"To effectively close brackets, we train a model that learns to switch between tagging schemes.",
"To reduce sparsity, we decompose the label set and use multi-task learning to jointly learn to predict sublabels.",
"Finally, we mitigate issues from greedy decoding through auxiliary losses and sentence-level fine-tuning with policy gradient.",
"Combining these techniques, we clearly surpass the performance of sequence tagging constituent parsers on the English and Chinese Penn Treebanks, and reduce their parsing time even further.",
"On the SPMRL datasets, we observe even greater improvements across the board, including a new state of the art on Basque, Hebrew, Polish and Swedish.",
"1 1 Introduction Constituent parsing is a core task in natural language processing ( NLP ), with a wide set of applications.",
"Most competitive parsers are slow, however, to the extent that it is prohibitive of downstream applications in large-scale environments (Kummerfeld et al., 2012).",
"Previous efforts to obtain speed-ups have focused on creating more efficient versions of traditional shift-reduce (Sagae and Lavie, 2006; Zhang and Clark, 2009) or chart-based parsers (Collins, 1997; Charniak, 2000).",
"Zhu et al. (2013), for example, presented 1 After this paper was submitted, Kitaev and Klein (2018b) have improved our results using their previous self-attentive constituent parser (Kitaev and Klein, 2018a) and BERT representations (Devlin et al., 2018) as input to their system.",
"We will acknowledge these results in the Experiments section.",
"a fast shift-reduce parser with transitions learned by a SVM classifier.",
"Similarly, Hall et al. (2014) introduced a fast GPU implementation for Petrov and Klein (2007), and Shen et al. (2018) significantly improved the speed of the Stern et al. (2017) greedy top-down algorithm, by learning to predict a list of syntactic distances that determine the order in which the sentence should be split.",
"In an alternative line of work, some authors have proposed new parsing paradigms that aim to both reduce the complexity of existing parsers and improve their speed.",
"Vinyals et al. (2015) proposed a machine translation-inspired sequence-to-sequence approach to constituent parsing, where the input is the raw sentence, and the transla-tion' is a parenthesized version of its tree.",
"Gomez-Rodrguez and Vilares (2018) reduced constituent parsing to sequence tagging, where only n tagging actions need to be made, and obtained one of the fastest parsers to date.",
"However, the performance is well below the state of the art (Dyer et al., 2016; Stern et al., 2017; Kitaev and Klein, 2018a).",
"Contribution We first explore different factors that prevent sequence tagging constituent parsers from obtaining better results.",
"These include: high error rates when long constituents need to be closed, label sparsity, and error propagation arising from greedy inference.",
"We then present the technical contributions of the work.",
"To effectively close brackets of long constituents, we combine the relative-scale tagging scheme used by Gomez-Rodrguez and Vilares (2018) with a secondary top-down absolute-scale scheme.",
"This makes it possible to train a model that learns how to switch between two encodings, depending on which one is more suitable at each time step.",
"To reduce label sparsity, we recast the constituent-parsing-as-sequence-tagging problem as multi-task learning ( MTL ) (Caruana, 1997), to decompose a large label space and also obtain speed ups.",
"Finally, we mitigate error propagation using two strategies that come at no cost to inference efficiency: auxiliary tasks and policy gradient fine-tuning.",
"We briefly introduce preliminaries that we will build upon in the rest of this paper: encoding functions for constituent trees, sequence tagging, multi-task learning, and reinforcement learning.",
"Notation We use w = [ w 0 , w 1 , ..., w n ] to refer to a raw input sentence and bold style lower-cased and math style upper-cased characters to refer to vectors and matrices, respectively (e.g. x and W ).",
"Gomez-Rodrguez and Vilares (2018) define a linearization function of the form | w | : T | w | L ( | w | 1) to map a phrase structure tree with | w | words to a sequence of labels of length | w | 1 .",
"2 For each word w t , the function generates a label l t L of the form l t = ( n t , c t , u t ) , where: n t encodes the number of ancestors in common between between w t and w t +1 .",
"To reduce the number of possible values, n t is encoded as the relative variation in the number of common ancestors with respect to n t 1 .",
"c t encodes the lowest common ancestor between w t and w t +1 .",
"u t contains the unary branch for w t , if any.",
"Figure 1 explains the encoding with an example.",
"2 They (1) generate a dummy label for the last word and (2) pad sentences with a beginningand end-of-sentence tokens.",
"Sequence tagging is a structured prediction task that generates an output label for every input token.",
"Long short-term memory networks ( LSTM ) (Hochreiter and Schmidhuber, 1997) are a popular architecture for such tasks, often giving state-of-the-art performance (Reimers and Gurevych, 2017; Yang and Zhang, 2018).",
"Tagging with LSTM s In LSTM s, the prediction for the i th element is conditioned on the output of the previous steps.",
"Let LSTM ( x 1: n ) be a parametrized function of the network, where the input is a sequence of vectors x 1: n , its output is a sequence of hidden vectors h 1: n .",
"To obtain better contextualized hidden vectors, it is possible to instead use bidirectional LSTMS (Schuster and Paliwal, 1997).",
"First, a LSTM l processes the tokens from left-to-right and then an independent LSTM r processes them from right-to-left.",
"The i th final hidden vector is represented as the concatenation of both outputs, i.e. BILSTM ( x , i ) = LSTM l ( x [1: i ] ) LSTM r ( x [ | x | : i ] ) .",
"BILSTM s can be stacked in order to obtain richer representations.",
"To decode the final hidden vectors into discrete labels, a standard approach is to use a feed-forward network together with a softmax transformation, i.e. P ( y | h i ) = softmax ( W h i + b ) .",
"We will use the BILSTM -based model by Yang and Zhang (2018), for direct comparison against Gomez-Rodrguez and Vilares (2018), who use the same model.",
"As input, we will use word embeddings, PoS-tag embeddings and a second word embedding learned by a character-based LSTM layer.",
"The model is optimized minimizing the categorical cross-entropy loss, i.e. L = (cid:80) log ( P ( y | h i )) .",
"The architecture is shown in Figure",
"2. 2.3 Multi-task Learning Multi-task learning is used to solve multiple tasks using a single model architecture, with task-specific classifier functions from the outer-most representations (Caruana, 1997; Collobert and Weston, 2008).",
"The benefits are intuitive: sharing a common representation for different tasks acts as a generalization mechanism and allows to address them in a parallel fashion.",
"The hard-sharing strategy is the most basic MTL architecture, where the internal representation is fully shared across all tasks.",
"The approach has proven robust for a number of NLP tasks (Bingel and Sgaard, 2017) and comes with certain guarantees if a common, op-(1,S,NP) (1,VP,) (3,NP,) (-1,NP,) (1,PP,) (-2,S,) (-2,S,ADJP) I find your lack of faith disturbing .",
"timal representation exists (Baxter, 2000).",
"Dong et al. (2015) use it for their multilingual machine translation system, where the encoder is a shared gated recurrent neural network (Cho et al., 2014) and the decoder is language-specific.",
"Plank et al. (2016) also use a hard-sharing setup to improve the performance of BILSTM -based PoS taggers.",
"To do so, they rely on auxiliary tasks , i.e, tasks that are not of interest themselves, but that are co-learned in a MTL setup with the goal of improving the network's performance on the main task(s).",
"We will introduce auxiliary tasks for sequence tagging constituent parsing later on in this work.",
"AMTL architecture can also rely on partial sharing when the different tasks do not fully share the internal representations (Duong et al., 2015; Rei, 2017; Ruder et al., 2019) and recent work has also shown that hierarchical sharing (e.g. low-level task outputs used as input for higher-level ones) could be beneficial (Sgaard and Goldberg, 2016; Sanh et al., 2018).",
"Policy gradient ( PG ) methods are a class of reinforcement learning algorithms that directly learn a parametrized policy, by which an agent selects actions based on the gradient of a scalar performance measure with respect to the policy.",
"Compared to other reinforcement learning methods, PG is well-suited to NLP problems due to its appealing convergence properties and effectiveness in high-dimensional spaces (Sutton and Barto, 2018).",
"Previous work on constituent parsing has employed PG methods to mitigate the effect of exposure bias, finding that they function as a model-agnostic substitute for dynamic oracles (Fried and Klein, 2018).",
"Similarly, Le and Fokkens (2017) apply PG methods to Chen and Manning (2014)'s transition-based dependency parser to reduce error propagation.",
"In this work, we also employ PG to fine-tune models trained using supervised learning.",
"However, our setting (sequence tagging) has a considerably larger action space than a transition parser.",
"To deal with that, we will adopt a number of variance reduction and regularization techniques to make reinforcement learning stable.",
"We describe the methods introduced in this work, motivated by current limitations of existing sequence tagging models, which are first reviewed.",
"The source code can be found as a part of https: //github.com/aghie/tree2labels .",
"For brevity, we limit this analysis to the English Penn Treebank ( PTB ) (Marcus et al., 1993).",
"We reproduced the best setup by Gomez-Rodrguez and Vilares (2018), which we are using as baseline, and run the model on the development set.",
"We below show insights for the elements of the output tuple ( n t , c t , u t ) , where n t is the number of levels in common between w t and w t +1 , c t is the nonterminal symbol shared at that level, and u t is a leaf unary chain located at w t .",
"High error rate on closing brackets We first focus on predicting relative tree levels ( n t ).",
"See Figure 3 for F-scores over n t labels.",
"The sparsity on negative n t s is larger than for the positive ones, and we see that consequently, the performance is also significantly worse for negative n t values, and performance worsens with higher negative values.",
"This indicates that the current model cannot effectively identify the end of long constituents.",
"This is a known source of error for shift-reduce or chart-based parsers, but in the case of sequence tagging parsers, the problem seems particularly serious.",
"Sparsity The label space is large and sparse: the output labels are simply the possible values in the tuple ( n t , c t , u t ) .",
"An analysis over the PTB training set shows a total of 1423 labels, with 58% of them occurring 5 or less times.",
"These infrequent cases might be difficult to predict, even if some of the elements of the tuple are common.",
"Greedy decoding Greedy decoding is prone to issues such as error propagation.",
"This is a known source of error in transition-based dependency parsing (Qi and Manning, 2017); in contrast with graph-based parsing, in which parsing is reduced to global optimization over edge-factored scores (McDonald et al., 2005).",
"In the case of BILSTM -based sequence tagging parsers, for a given word w t , the output label as encoded by Gomez-Rodrguez and Vilares (2018) only reflects a relation between w t and w t +1 .",
"We hypothesize that even if the hidden vector representations are globally contextualized over the whole sequence, the intrinsic locality of the output label also turns into error propagation and consequently causes a drop in the performance.",
"These hypotheses will be tested in 4.",
"In particular, we will evaluate the impact of the different methods intended to perform structured inference ( 3.4).",
"Gomez-Rodrguez and Vilares (2018) encode the number of common ancestors n t , from the output tuple ( n t , c t , u t ) , as the variation with respect to n t 1 .",
"We propose instead to encode certain elements of a sentence using a secondary linearization function.",
"The aim is to generate a model that can dynamically switch between different tagging schemes at each time step t to select the one that represents the relation between w t and w t +1 in the most effective way.",
"On the one hand, the relative-scale encoding is effective to predict the beginning and the end of short constituents, i.e. when a short constituent must be predicted ( | n t | 2 ).",
"On the other hand, with a relative encoding scheme, the F-score was low for words where the corresponding n t has a large negative value (as showed in Figure 3).",
"This matches a case where a long constituent must be closed: w t is located at a deep level in the tree and will only (probably) share a few ancestors with w t +1 .",
"These configurations are encoded in a more sparse way by a relative scheme, as the n t value shows a large variability and it depends on the depth of the tree in the current time step.",
"We can obtain a compressed representation of these cases by using a top-down absolute scale instead, as any pair of words that share the same m top levels will be equally encoded.",
"The absolute scale becomes however sparse when predicting deep levels.",
"Figure 4 illustrates the strengths and weaknesses of both encodings with an example, and how a dynamically encoded tree helps reduce variability on n t values.",
"In our particular implementation, we will be using the following setup: | w | : T | w | L | w | 1 , the relative-scale encoding function, is used by default.",
"| w | : T | w | L (cid:48)| w | 1 is the secondary linearization function that maps words to labels according to a top-down absolute scale.",
"is used iff: (1) ( w [ t : t +1] ) = ( n (cid:48) t , c (cid:48) t , u (cid:48) t ) with n (cid:48) t 3 , i.e. w t and w t +1 share at most the three top levels, and (2) ( w [ t : t +1] ) = ( n t , c t , u t ) with n t 2 , i.e. w t is at least located two levels deeper in the tree than w t +1 .",
"3 a b c d e f g h i j k l m Relative: 2 1 1 1 -4 1 1 -2 1 1 1 -3 Absolute: 2 3 4 5 1 2 3 1 2 3 4 1 Dynamic: 2 r 1 r 1 r 1 r 1 a 1 r 1 r 1 a 1 r 1 r 1 r 1 a Figure 4: A synthetic constituent tree where n t is encoded using a relative scheme, a top-down absolute scale, and an ideal dynamic combination.",
"We showed that labels of the form ( n t , c t , u t ) L are sparse.",
"An intuitive approach is to decompose the label space into three smaller sub-spaces, such that n i N , c i C and u i U .",
"This reduces the output space from potentially | N | | C | | U | labels to just | N | + | C | + | U | .",
"We propose to learn this decomposed label space through a multitask learning setup, where each of the subspaces is considered a different task, namely task N , task C and task U .",
"The final loss is now computed as L = L n + L c + L u .",
"We relied on a hard-sharing architecture, as it has been proved to reduce the risk of overfitting the shared parameters (Baxter, 1997).",
"A natural issue that arises is that the prediction of labels from different label sub-spaces could be interdependent to a certain extent, and therefore a hierarchical sharing architecture could also be appropriate.",
"To test this, in preliminary experiments we considered variants of hierarchical sharing architectures.",
"We fed the output of the task U as input to task N and/or task C .",
"Similarly, we tested whether it was beneficial to feed the output of task N into task C , and viceversa.",
"However, all these results did not improve those of the hard-sharing model.",
"In this context, in addition to a generalization mechanism, the shared representation could be also acting as way to keep the model aware of the potential interdependencies that might exist between subtasks.",
"We propose two ways to mitigate error propagation arising from greedy decoding in constituent parsing as sequence tagging: auxiliary tasks and policy gradient fine-tuning.",
"Note that we want to optimize bracketing F-score and speed.",
"For this reason we do not explore approaches that come at a speed cost in testing time, such as beam-search or using conditional random fields (Lafferty et al., 2001) on top of our LSTM .",
"Auxiliary tasks Auxiliary tasks force the model to take into account patterns in the input space that can be useful to solve the main task(s), but that remain ignored due to a number of factors, such as the distribution of the output label space (Rei, 2017).",
"In a similar fashion, we use auxiliary tasks as a way to force the parser to pay attention to aspects beyond those needed for greedy decoding.",
"We propose and evaluate two separate strategies:",
"1. Predict partial labels n t + k that are k steps from the current time step t .",
"This way we can jointly optimize at each time step a prediction for the pairs ( w t , w t +1 ) , . . . , ( w t + k , w t + k +1 ) .",
"In particular, we will experiment both with previous and upcoming n k 's, setting | k | = 1 .",
"2. Predict the syntactic distances presented by Shen et al. (2018), which reflect the order a sentence must be split to obtain its constituent tree using a top-down parsing algorithm (Stern et al., 2017).",
"The algorithm was initially defined for binary trees, but its adaptation to n -ary trees is immediate: leaf nodes have a split priority of zero and the ancestors' priority is computed as the maximum priority of their children plus one.",
"In this work, we use this algorithm in a sequence tagging setup: the label assigned to each token corresponds to the syntactic distance of the lowest common ancestor with the next token.",
"This is illustrated in Figure",
"5. I find your lack of faith disturbing .",
"On the one hand, the encoding of the n t s by Gomez-Rodrguez and Vilares (2018) only needs to know about w t and w t +1 paths to generate the label for the time step t .",
"On the other hand, to compute the syntactic distance of a given non-terminal symbol, we need to compute the syntactic distances of its subtree, providing a more global, but also sparser context.",
"For training, the loss coming from the auxiliary task(s) is weighted by =0.1, i.e, the final loss is computed as L = L n + L c + L u + (cid:80) a L a .",
"Policy gradient fine-tuning Policy gradient training methods allow us to fine-tune our models with a tree-level objective, optimizing directly for bracketing F-score.",
"We start off with a converged supervised model as our initial policy.",
"The sequence labeling model can be seen as a functional approximation of the policy parametrized by , which at timestep t selects a label l t = ( n t , c t , u t ) 4 given the current state of the model's parameters, s t .",
"The agent's reward, R tree , is then derived from the bracketing F-score.",
"This can be seen as a variant of the REINFORCE algorithm (Williams, 1992) where the policy is updated by gradient ascent in the direction of: log ( l t | s t ; ) R tree (1) Baseline and Variance Reduction We use as baseline a copy of a pre-trained model where the parameters are frozen.",
"The reward used to scale the policy gradient can then be seen as an estimate of the advantage of an action l t in state s t over the baseline model.",
"This is equivalent to R tree B tree , where R tree is the bracketing F-score of a sequence sampled from the current policy and B tree is the the tree-level F-score of the sequence greedily predicted by the baseline.",
"To further reduce the variance, we standardize the gradient estimate using its running mean and standard deviation for all candidates seen in training so far.",
"In initial experiments without these augmentations, we observed that fine-tuning with vanilla PG often led to a deterioration in performance.",
"To encourage exploration away from the converged supervised model's policy, we add the entropy of the policy to the objective function (Williams and Peng, 1991).",
"Moreover, following Lillicrap et al. (2015), we optionally add noise sampled from a noise process N to the policy.",
"The gradient of our full fine-tuning objective function takes the following form: ( log ( l t | s t ; ) + N )( R tree B tree ) + H ( ( s t ; ) + N ) (2) where H is the entropy and controls the strength of the entropy regularization term.",
"Datasets We use the English Penn Treebank ( PTB ) (Marcus et al., 1993) and the Chinese Penn Treebank ( CTB ) (Xue et al., 2005).",
"For these, we use the same predicted PoS tags as Dyer et al. (2016).",
"We also provide detailed results on the SPMRL treebanks (Seddah et al., 2014), 5 a set of datasets for constituent parsing on morphologically rich languages.",
"For these, we use the predicted PoS tags provided together with the corpora.",
"To the best of our knowledge, we provide the first evaluation on the SPMRL datasets for sequence tagging constituent parsers.",
"Setup We use NCRF pp (Yang and Zhang, 2018), for direct comparison against Gomez-Rodrguez and Vilares (2018).",
"We adopt bracketing F-score instead of label accuracy for model selection and report this performance as our second baseline.",
"After 100 epochs, we select the model that fared best on the development set.",
"We use GloVe embeddings (Pennington et al., 2014) for our English models and zzgiga embeddings (Liu and Zhang, 2017) for the Chinese models, for a more homogeneous comparison against other parsers (Dyer et al., 2016; Liu and Zhang, 2017; Fernandez-Gonzalez and Gomez-Rodrguez, 2018).",
"ELMo (Peters et al., 2018) or BERT (Devlin et al., 2018) could be used to improve the precision, but in this paper we focus on keeping a good speed-accuracy tradeoff.",
"For SPMRL , no pretrained embeddings are used, following Kitaev and Klein (2018a).",
"As a side note, if we wanted to improve the performance on these languages we could rely on the CoNLL 2018 shared task pretrained word embeddings (Zeman et al., 2018) or even the multilingual BERT model 6 .",
"Our models are run on a single CPU 7 (and optionally on a consumer-grade GPU for further comparison) using a batch size of 128 for testing.",
"Additional hyperparameters can be found in Appendix A. 4.1 Results Table 1 contrasts the performance of our models against the baseline on the PTB development set.",
"5 Except for Arabic, for which we do not have the license.",
"6 https://github.com/google-research/ bert/blob/master/multilingual.md 7 Intel Core i7-7700 CPU 4.2 GHz Model F-score (+/-) Sents/s Gomez and Vilares (2018) 89.70 109 Our baseline 89.77 (+0.07) 111 + DE 90.22 (+0.52) 111 + MTL 90.38 (+0.68) 130 aux( n t +1 ) 90.41 (+0.71) 130 aux( n t 1 ) 90.57 (+0.87) 130 aux(distances) 90.55 (+0.85) 130 + PG 90.70 (+1.00) 130 Table 1: Results on the PTB dev set, compared against Gomez-Rodrguez and Vilares (2018).",
"To show that the model which employs dynamic encoding is better (+0.52) than the baseline when it comes to closing brackets from long constituents, we compare their F-scores in Figure",
"6. When we recast the constituent-parsing-as-sequence-tagging problem as multi-task learning, we obtain both a higher bracketing F-score (+0.68) and speed (1.17x faster).",
"Fusing strategies to mitigate issues from greedy decoding also leads to better models (up to +0.87 when adding an auxiliary task 8 and up to +1.00 if we also fine-tune with PG ).",
"Note that including auxiliary tasks and PG come at a time cost in training, but not in testing, which makes them suitable for fast parsing.",
"8 We observed that adding more than one auxiliary task did not translate into a clear improvement.",
"We therefore chose the auxiliary task that performed the best in the development set.",
"large treebanks, e.g. German, French or Korean, but causes some drops in the smaller ones, e.g. Swedish or Hebrew.",
"Overall, casting the problem as multitask learning and the strategies used to mitigate error propagation lead to improvements.",
"For the experiments on the test sets we select the models that summarize our contributions: the models with dynamic encoding and the multi-task setup, the models including the best auxiliary task, and the models fine-tuned with policy gradient.",
"Tables 3, 4 and 5 compare our parsers against the state of the art on the PTB , CTB and SPMRL test sets.",
"Gomez-Rodrguez and Vilares (2018) also run experiments without character embeddings, to improve speed without suffering from a big drop in performance.",
"For further comparison, we also include them as additional results (shadowed).",
"In a related line, Smith et al. (2018) show that for dependency parsing two out of three embeddings (word, postag and characters) can suffice.",
"The results across the board show that the dynamic encoding has a positive effect on 6 out of 10 treebanks.",
"Casting the constituent-parsing-as-sequence-labeling problem as MTL surpasses the baseline for all tested treebanks (and it leads to better parsing speeds too).",
"Finally, by mitigating issues from greedy decoding we further improve the performance of all models that include dynamic encodings and multi-task learning.",
"On the PTB , our models are both faster and more accurate than existing sequence tagging or sequence-to-sequence models, which already were among the fastest parsers (Gomez-Rodrguez and Vilares, 2018; Vinyals et al., 2015).",
"We also outperform other approaches that were not surpassed by the original sequence tagging models in terms of F-score (Zhu et al., 2013; Fernandez-Gonzalez and Martins, 2015).",
"On the CTB our techniques also have a positive effect.",
"The baseline parses 70 sents/s on the CTB , while the full model processes up to 120.",
"The speed up is expected to be larger than the one obtained for the PTB because the size of the label set for the baseline is bigger, and it is reduced in a greater proportion when the constituent-parsing-as-sequence-labeling problem is cast as MTL .",
"On the SPMRL corpora, we provide the first evaluation of sequence labeling constituent parsers, to verify if these perform well on mor-Model CTB Basque French German Hebrew Hungarian Korean Polish Swedish Our baseline 88.57 87.93 81.09 87.83 89.27 88.85 83.51 92.60 80.11 + DE 88.37 87.91 81.16 88.81 89.03 88.70 83.92 93.35 79.57 + MTL 88.57 89.41 81.70 88.52 92.72 89.73 84.10 93.81 82.83 aux( n t +1 ) 88.73 89.65 81.95 88.64 92.65 89.69 84.09 93.86 82.82 aux( n t 1 ) 88.48 89.47 81.77 88.58 92.53 89.71 84.13 93.87 82.74 aux(distances) 88.51 89.48 82.02 88.68 92.66 89.80 84.20 93.83 83.12 + PG 89.01 89.73 82.13 88.80 92.66 89.86 84.45 93.93 83.15 Table 2: Results on the CTB and SPMRL dev sets Model Sents/s Hardware F-score Vinyals et al. (2015) 120 Many CPU 88.30 Coavoux and Crabbe (2016) 168 1 CPU 88.60 Fernandez and Martins (2018) 41 1 CPU 90.20 Zhu et al. (2013) 90 1 CPU 90.40 Dyer et al. (2016) 17 1 CPU 91.20 Stern et al. (2017) 76 16 CPU 91.77 Shen et al. (2018) 111 1 GPU 91.80 Kitaev and Klein (2018a) 213 2 GPU 93.55 (single model) Kitaev and Klein (2018a) 71 2 GPU 95.13 (with ELM o) Kitaev and Klein (2018b) -95.77 (ensemble and BERT ) Gomez and Vilares (2018) 115 1 CPU 90.00 Our baseline 115 1 CPU 90.06 + DE 115 1 CPU 90.19 + MTL 132 1 CPU 90.36 + best aux 132 1 CPU 90.59 + PG 132 1 CPU 90.60 + PG 942 1 GPU 90.60 + PG (no char emb) 149 1 CPU 90.50 + PG (no char emb) 1267 1 GPU 90.50 Table 3: Comparison on the PTB test set.",
"phologically rich languages.",
"We then evaluated whether the proposed techniques can generalize on heterogeneous settings.",
"The tendency observed for the original tagging models by Gomez-Rodrguez and Vilares (2018) is similar to the one Model F-score Zhu et al. (2013) 83.2 Dyer et al. (2016) 84.6 Liu and Zhang (2017) 86.1 Shen et al. (2018) 86.5 Fern andez and G omez-Rodr guez (2018) 86.8 G omez and Vilares (2018) 84.1 Our baseline 83.90 + DE 83.98 + MTL 84.24 +best aux 85.01 + PG 85.61 + PG (no char emb) 83.93 Table 4: Comparison on the CTB test set for the PTB and CTB : they improve other fast parsers, e.g. Coavoux and Crabbe (2016), in 5 out of 8 treebanks and Fernandez-Gonzalez and Martins (2015) in 7 out of 8, but their performance is below more powerful models.",
"When incorporating the techniques presented in this work, we outperform the original sequence tagging models on all datasets.",
"We outperform the current best model for Basque, Hebrew and Polish (Kitaev and Klein, 2018a) and for Swedish (Bjorkelund et al., 2014), which corresponds to the four smallest treebanks among the SPMRL datasets.",
"This indicates that even if sequence tagging models are conceptually simple and fast, they can be very suitable when little training data is available.",
"This is also of special interest in terms of research for low-resource languages.",
"Again, casting the problem as MTL reduces the parsing time for all tested treebanks, as reflected in Table",
"6. Finally, for treebanks such as French, designing methods to handle multi-word expressions could lead to better results, getting closer to other parsers (Coavoux and Crabbe, 2017).",
"We have explored faster and more precise sequence tagging models for constituent parsing.",
"We proposed a multitask-learning architecture that employs dynamic encodings, auxiliary tasks, and policy gradient fine-tuning.",
"We performed experiments on the English and Chinese Penn Treebanks, and also on the SPMRL datasets.",
"Our models improve current sequence tagging parsers on all treebanks, both in terms of performance and speed.",
"We also report state-of-the-art results for the Basque, Hebrew, Polish, and Swedish datasets.",
"The methods presented in this work are specifi-cally designed for constituent parsing.",
"However, it seems natural to apply some of these to other NLP tagging tasks, e.g. using multi-task learning to predict sub-level morphological information for morphologically-rich part-of-speech tagging.",
"DV has received support from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01).",
"MA and AS are funded by a Google Focused Research Award."
] |
[
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"method",
"abstain",
"other",
"other"
] |
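The policy-gradient fine-tuning objective quoted in the row above (Eq. 2) can be made concrete with a short sketch. This is a minimal illustration in Python/PyTorch, assuming a sequence-labeling parser that exposes per-step label logits; `model`, `baseline_model`, `bracketing_f1`, and `RunningStats` are hypothetical placeholders rather than the paper's released code, and the optional exploration-noise term N is omitted for brevity.

```python
import torch

class RunningStats:
    """Running mean/std used to standardize the advantage, as described above."""
    def __init__(self, eps=1e-8):
        self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def standardize(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = (self.m2 / max(self.n - 1, 1)) ** 0.5
        return (x - self.mean) / (std + self.eps)

def pg_loss(model, baseline_model, sentence, gold_tree, bracketing_f1,
            stats, beta=0.01):
    """One REINFORCE step with a frozen-baseline advantage and an entropy bonus."""
    # Sample a label sequence l_t ~ pi(. | s_t; theta) from the current policy.
    dist = torch.distributions.Categorical(logits=model(sentence))
    sampled = dist.sample()

    # R_tree: bracketing F-score of the tree decoded from the sampled labels.
    r_tree = bracketing_f1(sampled, gold_tree)

    # B_tree: tree-level F-score of the frozen baseline's greedy prediction.
    with torch.no_grad():
        greedy = baseline_model(sentence).argmax(dim=-1)
    b_tree = bracketing_f1(greedy, gold_tree)

    # Standardized advantage (R_tree - B_tree) to reduce gradient variance.
    advantage = stats.standardize(r_tree - b_tree)

    # Maximize log pi * advantage + beta * H(pi); minimize the negative.
    log_prob = dist.log_prob(sampled).sum()
    entropy = dist.entropy().sum()
    return -(log_prob * advantage + beta * entropy)
```

Gradient ascent on Eq. (2) then corresponds to calling `pg_loss(...).backward()` and stepping an optimizer; the entropy weight `beta` plays the role of the regularization strength in the quoted objective.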
[
"Large-scale pretrained language models are surprisingly good at recalling factual knowledge presented in the training corpus (Petroni et al., 2019; Jiang et al., 2020b).",
"In this paper, we present preliminary studies on how factual knowledge is stored in pretrained Transformers by introducing the concept of knowledge neurons .",
"Specifically, we examine the fill-in-the-blank cloze task for BERT.",
"Given a relational fact, we propose a knowledge attribution method to identify the neurons that express the fact.",
"We find that the activation of such knowledge neurons is positively correlated to the expression of their corresponding facts.",
"In our case studies, we attempt to leverage knowledge neurons to edit (such as update, and erase) specific factual knowledge without fine-tuning.",
"Our results shed light on understanding the storage of knowledge within pretrained Transformers.",
"The code is available at https://github.com/ Hunter-DDM/knowledge-neurons .",
"Large-scale pretrained Transformers (Devlin et al., 2019; Liu et al., 2019; Dong et al., 2019; Clark et al., 2020; Bao et al., 2020) are usually learned with a language modeling objective on large-scale corpora, such as Wikipedia, where exists oceans of factual knowledge.",
"Pretrained language models naturally play as a free-text knowledge base by predicting texts (Bosselut et al., 2019).",
"Petroni et al. (2019) and Jiang et al. (2020b) probe factual knowledge stored in pretrained language models by fill-in-the-blank cloze queries.",
"The evaluation shows that pretrained Transformers have a strong ability to recall factual knowledge without any fine-tuning.",
"Roberts et al. (2020) use closed-book question answering to show that the larger a model is, the more knowledge it can store.",
"However, most previous work focuses on evaluating the overall accuracy of Contribution during internship at Microsoft Research.",
"text-form knowledge prediction.",
"In this paper, we attempt to look deeper into pretrained Transformers and investigate how factual knowledge is stored.",
"As shown in Figure 1, we propose a knowledge attribution method to identify the neurons that express a relational fact, where such neurons are named knowledge neurons .",
"Specifically, we view feed-forward network (i.e., two-layer percep-tron) modules in Transformer as key-value memories (Geva et al., 2020).",
"For the example in Figure 1, the hidden state is fed into the first linear layer and activates knowledge neurons; then, the second linear layer integrates the corresponding memory vectors.",
"The key-value-memory nature (Geva et al., 2020) inspires us to propose the knowledge attribution method, which identifies knowledge neurons in feed-forward networks by computing the contribution of each neuron to the knowledge prediction.",
"Extensive analysis shows that the activation of the identified knowledge neurons is positively correlated to the knowledge expression, which shows 8493 Feed-Forward Network FFN (key) Activation inner product weighted sum FFN (val) FFN Output Hidden State The capital of Ireland is [MASK] Self-Attention Layer Feed-Forward Network Dublin Knowledge Neurons Figure 2: Illustration of how an FFN module in a Transformer block works as a key-value memory.",
"the effectiveness of the proposed knowledge attribution method.",
"First, suppressing and amplifying knowledge neurons notably affects the expression of the corresponding knowledge.",
"Second, we find that knowledge neurons of a fact tend to be activated more by corresponding knowledge-expressing prompts.",
"Third, given the knowledge neurons of a fact, the top activating prompts retrieved from open-domain texts usually express the corresponding fact, while the bottom activating prompts do not express the correct relation.",
"In our case studies, we try to leverage knowledge neurons to explicitly edit factual knowledge in pretrained Transformers without any fine-tuning.",
"We present two preliminary studies: updating facts, and erasing relations.",
"After identifying the knowledge neurons, we perform a knowledge surgery for pretrained Transformers by directly modifying the corresponding parameters in feed-forward networks.",
"Such surgery shows promising results, keeping a moderate influence on other knowledge.",
"Our contributions are summarized as follows: We introduce the concept of knowledge neurons and propose a knowledge attribution method to identify the knowledge neurons that express specific factual knowledge in the fill-in-the-blank cloze task.",
"We conduct both qualitative and quantitative analysis to show that knowledge neurons are positively correlated to knowledge expression.",
"We present preliminary studies of leveraging knowledge neurons to edit factual knowledge in Transformers, even without any fine-tuning.",
"Transformer (Vaswani et al., 2017) is one of the most popular and effective NLP architectures.",
"A Transformer encoder is stacked with L identical blocks.",
"Each Transformer block mainly contains two modules: a self-attention module, and a feed-forward network (abbreviated as FFN) module.",
"Let X R n d denote the input matrix, two modules can be formulated as follows: Q h = XW Qh ,K h = XW Kh , V h = XW Vh , (1) Self-Att h ( X ) = softmax (cid:0) Q h K Th (cid:1) V h , (2) FFN( H ) = gelu ( HW 1 ) W 2 , (3) where W Qh , W Kh , W Vh , W 1 , W 2 are parameter matrices; Self-Att h ( X ) computes a single attention head; H , the hidden state, is given by projecting the concatenation of all heads; gelu denotes the GELU activation function (Hendrycks and Gimpel, 2016).",
"For simplicity, we omit the scaling factor in self-attention and the bias terms.",
"Connections Between Self-Attention and FFN Comparing Equation (2) and Equation (3), we notice that the formula of FFN( ) is quite similar to Self-Att( ) , except the activation function gelu in FFN and softmax in self-attention.",
"Thus, similar to the query-key-value mechanism in self-attention, it is reasonable to regard the input of the FFN as a query vector, and two linear layers of the FFN as keys and values, respectively.",
"Similar observations are also described in (Geva et al., 2020).",
"Similar to (Geva et al., 2020), we view FFNs in Transformer as key-value memories as illustrated in Figure 2.",
"We hypothesize that factual knowledge is stored in FFN memories and expressed by knowledge neurons .",
"In this section, we propose a knowledge attribution method and a refining strategy to identify these knowledge neurons.",
"We employ the fill-in-the-blank cloze task to assess whether a pretrained model knows a fact.",
"Following Petroni et al. (2019), each relational fact is in the form of a triplet (cid:104) h, r, t (cid:105) , where h is the head entity, t is the tail entity, and r is the relation between them.",
"Given a fact, pretrained models answer the cloze query x that expresses the fact but leaves the tail entity as a blank.",
"For example, given the fact (cid:104) Ireland, capital, Dublin (cid:105) , a possible query is The capital of Ireland is .",
"We also call the query a knowledge-expressing prompt .",
"Petroni et al. (2019) describe that a model knows a fact if it can predict the correct answer.",
"In this paper, rather than just examining the model outputs, we identify the specific knowledge neurons that express factual knowledge.",
"Inspired by Hao et al. (2021), we propose a knowledge attribution method based on integrated gradients (Sundararajan et al., 2017).",
"Our method can evaluate the contribution of each neuron to knowledge predictions.",
"In this paper, we examine FFN intermediate neurons for the masked token, where the answer is predicted.",
"Given an input prompt x , we first define the model output P x ( w ( l ) i ) as the probability of the correct answer predicted by a pretrained model: P x ( w ( l ) i ) = p ( y | x, w ( l ) i = w ( l ) i ) , (4) where y denotes the correct answer; w ( l ) i denotes the i -th intermediate neuron in the l -th FFN; w ( l ) i is a given constant that w ( l ) i is assigned to.",
"In order to calculate the attribution score of a neuron Attr( w ( l ) i ) , we gradually change w ( l ) i from 0 to its original value w ( l ) i calculated by the pretrained model, and meanwhile integrate the gradients: Attr( w ( l ) i ) = w ( l ) i (cid:90) 1 =0 P x ( w ( l ) i ) w ( l ) i d, (5) where P x ( w ( l ) i ) w ( l ) i calculates the gradient of the model output with regard to w ( l ) i .",
"Intuitively, as changes from 0 to 1 , by integrating the gradients, Attr( w ( l ) i ) accumulates the output probability change caused by the change of w ( l ) i .",
"If the neuron has a great influence on the expression of a fact, the gradient will be salient, which in turn has large integration values.",
"Therefore, the attribution score can measure the contribution of the neuron w ( l ) i to the factual expressions.",
"Directly calculating continuous integrals is intractable.",
"We instead use Riemann approximation Attr( w ( l ) i ) = w ( l ) i m (cid:80) mk =1 P x ( km w ( l ) i ) w ( l ) i , where m = 20 is the number of approximation steps.",
"With the attribution algorithm, we can identify a coarse set of knowledge neurons whose attribution scores are greater than a threshold t .",
"In order to identify knowledge neurons more accurately, we further propose a refining strategy.",
"Besides true-positive knowledge neurons that express factual knowledge, the coarse set of knowledge neurons may contain false-positive knowledge neurons that express other information (e.g., syntactic or lexical information).",
"The refining strategy aims to filter out these false-positive neurons.",
"For different prompts corresponding to the same fact, we hypothesize that they share the same set of true-positive knowledge neurons, since they express the same factual knowledge.",
"Meanwhile, we hypothesize that they do not share the false-positive knowledge neurons as long as the prompts are diverse enough.",
"Therefore, given multiple diverse prompts, we can refine the coarse set of knowledge neurons by retaining only neurons that are widely shared among these prompts.",
"Specifically, given a relational fact, the complete process to identify its knowledge neurons is described as follows: (1) produce n diverse prompts; (2) for each prompt, calculate the knowledge attribution scores of neurons; (3) for each prompt, retain the neurons with attribution scores greater than the attribution threshold t , obtaining the coarse set of knowledge neurons; (4) considering all the coarse sets together, retain the knowledge neurons shared by more than p % prompts.",
"We conduct experiments for BERT-base-cased (De-vlin et al., 2019), one of the most widely-used pretrained models.",
"It contains 12 Transformer blocks, where the hidden size is 768 and the FFN inner hidden size is 3,072.",
"Notice that our method is not limited to BERT and can be easily generalized to other pretrained models.",
"For each prompt, we set the attribution threshold t to 0 .",
"2 times the maximum attribution score.",
"For each relation, we initialize the refining threshold p % (Section 3.3) as 0 .",
"7 .",
"Then, we increase or decrease it by 0 .",
"05 at a time until the average number of knowledge neurons lies in [2, 5].",
"We run our experiments on NVIDIA Tesla V100 GPUs.",
"On average, it costs 13.3 seconds to identify knowledge neurons for a relational fact with 9 prompts.",
"We examine knowledge neurons through the fill-in-the-blank cloze task based on the PARAREL dataset (Elazar et al., 2021).",
"PARAREL is curated by experts, containing various prompt templates for 38 relations from the T-REx dataset (ElSahar et al., 2018).",
"We show some example templates in Table 1.",
"For each relational fact, we fill in the head entity in prompt templates and leave the tail entity as a blank to predict.",
"In order to guarantee the template diversity, we filter out relations with fewer than 4 prompt templates and finally keep 34 relations, where each relation has 8.63 different prompt templates on average.",
"These prompt templates produce 253,448 knowledge-expressing prompts in total for 27,738 relational facts.",
"Our baseline method takes the neuron activation value as the attribution score, i.e., Attr base ( w ( l ) i ) = w ( l ) i , which measures how sensitive a neuron is to the input.",
"After computing attribution scores, we follow the same pipeline to obtain the refined 1 2 3 4 5 6 7 8 9 10 11 12 Layer 40%30%20%10%0%10%20%30%40% P e r c e n t a g e Figure 3: Percentage of knowledge neurons identified by our method in each Transformer layer.",
"knowledge neurons.",
"For a fair comparison, we employ the same method to choose the hyper-parameters t and p % for the baseline to ensure the average number of knowledge neurons for each relation lies in [2 , 5] .",
"The method based on neuron activation is a reasonable baseline.",
"It is motivated by FFNs's analogy with the self-attention mechanism (as described in Section 2), because self-attention scores are usually used as a strong attribution baseline (Kovaleva et al., 2019; Voita et al., 2019; Hao et al., 2021).",
"Figure 3 presents the layer distribution of knowledge neurons identified by our knowledge attribution method.",
"We notice that most fact-related neurons are distributed in the topmost layers of pretrained Transformers.",
"The finding also agrees with Tenney et al. (2019) and Geva et al. (2020).",
"Table 2 shows statistics of knowledge neurons.",
"On average, we identify 4 .",
"13 knowledge neurons for each relational fact using our knowledge attribution method, and 3 .",
"96 using the baseline method.",
"Their same order of magnitude guarantees the fairness of the subsequent comparisons in the paper.",
"We also compute the knowledge neuron intersection of different relational facts.",
"Table 2 shows the average number of pair-wise knowledge neuron intersections.",
"For our proposed method, (1) fact pairs with the same relation ( intra-relation fact pairs ) share 1.23 knowledge neurons on average; (2) fact pairs with different relations ( inter-relation fact pairs ) share almost no knowledge neurons.",
"In contrast, for the baseline, (3) most identified neurons are shared by intra-relation fact pairs; (4) even a substantial portion of neurons are common for inter-relation fact pairs.",
"The difference in knowledge neuron intersections suggests that our method can identify more exclusive knowledge neurons.",
"We investigate how much knowledge neurons can affect knowledge expression in Figure 4 and Figure 5.",
"Given a relational fact, we manipulate its knowledge neurons in two ways: (1) suppressing knowledge neurons by setting their activations to 0; (2) amplifying knowledge neurons by doubling their activations.",
"Then, for each relation, we plot the average change ratio of the probability for the correct answer, corresponding to the manipulation.",
"For comparison, we also plot the results of manipulating baseline-identified knowledge neurons.",
"Figure 4 shows that suppressing knowledge neurons identified by our knowledge attribution method leads to a consistent decrease (29.03% on average) in the correct probability.",
"By contrast, for baseline-identified neurons, the suppressing operation has a negligible influence (1.47% decrease on average) on the correct probability.",
"Notably, for the relation P178 ( developer ), the correct probability abnormally increases by using the baseline.",
"As shown in Figure 5, we have similar observations for amplifying the knowledge neurons identified by our knowledge attribution.",
"We see a consistent increase (31.17% on average) in the correct probability.",
"By contrast, the baseline even decreases the average correct probability by 1.27%.",
"In summary, the knowledge neurons identified by our knowledge attribution method tend to notably affect knowledge expression.",
"Notice that the above assessment is affected by the distribution of knowledge neurons.",
"For example, if the knowledge neurons for a relation are distributed more widely, we need to manipulate more topk neurons for better control.",
"We use the above experiments as a proof of concept while leaving precise control for future work.",
"In order to study what prompts can activate knowledge neurons, we compare the average activation of knowledge neurons for different types of prompts.",
"BINGREL Dataset We build a new dataset BINGREL by crawling the Bing search engine to collect new prompts, for a more extensive comparison beyond the PARAREL dataset.",
"For each of the 27,738 facts in PARAREL , we crawl two types of texts: (1) up to ten texts containing both the head and the tail entities (210,217 texts crawled in total); (2) up to ten texts containing only the head entity without restricting tail entities (266,020 texts crawled in total).",
"Following the distant supervision assump-tion (Mintz et al., 2009), the first type of texts tends to express the whole relational fact, while the second type does not.",
"We mask tail entities for the first type of texts to obtain knowledge-expressing prompts ( T 1 ).",
"In order to conduct a controlled experiment, we mask random words for the second type of texts, forming a control group ( T 2 ).",
"Results As shown in Table 4, for our method, the identified knowledge neurons are more significantly activated by knowledge-expressing prompts ( T 1 = 0 . 485 ), compared with the control groups ( T 2 = 0 . 019 and T 3 = 0 . 018 ).",
"By contrast, for the baseline, the activation of identified neurons cannot distinguish three types of prompts.",
"In addition, since our comparison is based on the web-crawled BINGREL dataset, we validate the generalization of knowledge neurons to open-domain texts that are unseen in PARAREL .",
"Example Prompts In Table 3, we present example prompts that activate knowledge neurons the most and the least, respectively.",
"Given a fact, we first identify its knowledge neurons with our knowledge attribution method.",
"Then, we calculate the average activation of knowledge neurons for each crawled prompt that contains both the head and the tail entities in BINGREL .",
"Finally, we demonstrate two prompts with the highest average activation values and two with the lowest (denoted as top-2 and bottom-2 activating prompts, respectively).",
"As shown in Table 3, the top-2 activating prompts express exactly the corresponding relational fact.",
"In contrast, despite containing the same head and tail entities, the bottom-2 activating prompts do not express the correct relation.",
"For example, although the bottom-2 activating prompts for (cid:104) Ireland, capital, Dublin (cid:105) express 8498 Erased Relations Perplexity (Erased Relation) Perplexity (Other Relations) Before Erasing After Erasing Before Erasing After Erasing P19 ( place_of_birth ) 1450.0 2996.0 (+106.6%) 120.3 121.6 (+1.1%) P27 ( country_of_citizenship ) 28.0 38.3 (+36.7%) 143.6 149.5 (+4.2%) P106 ( occupation ) 2279.0 5202.0 (+128.2%) 120.1 125.3 (+4.3%) P937 ( work_location ) 58.0 140.0 (+141.2%) 138.0 151.9 (+10.1%) Table 5: Case studies of erasing relations.",
"information like Dublin is a city in Ireland, they do not reflect the capital relation.",
"The examples support again that knowledge neurons are activated by corresponding knowledge-expressing prompts.",
"We present two preliminary studies to demonstrate the potential applications of knowledge neurons.",
"We use the case studies as a proof of concept while leaving precise fact editing for future work.",
"By leveraging knowledge neurons in pretrained models, we try to update a learned relational fact from (cid:104) h, r, t (cid:105) to (cid:104) h, r, t (cid:48) (cid:105) .",
"Methods First, we identify the knowledge neurons of (cid:104) h, r, t (cid:105) .",
"Then, we retain the knowledge neurons that are shared by less than 10% of intrarelation facts, to reduce the influence on other facts with the same relation.",
"Finally, we directly modify the corresponding value slots in FFN (val) (i.e., the second linear layer of FFNs; see Figure 2): FFN (val) i = FFN (val) i 1 t + 2 t (cid:48) , where FFN (val)i denotes the value slot corresponding to the i -th knowledge neuron; t and t (cid:48) are the word embeddings of t and t (cid:48) , respectively; 1 and 2 are set to 1 and 8 in our experiments.",
"Setup We conduct experiments on PARAREL .",
"For each relation, we randomly sample ten facts learned by the pretrained model.",
"For each fact (cid:104) h, r, t (cid:105) , we randomly choose a different entity t (cid:48) with the same type as t (e.g., both t and t (cid:48) belong to city ), and then update t (cid:48) as the target entity.",
"We only manipulate about four top knowledge neurons as in Section 4.4.",
"For reference purposes, we also perform the same update process on the same number of random neurons.",
"Evaluation Metrics We report two metrics to evaluate the fact updating: (1) change rate, the ratio that the original prediction t is modified to another; (2) success rate, the ratio that t (cid:48) becomes the top prediction.",
"In addition, we measure the influence on other knowledge by the following two metrics: (1) intra-relation PPL, the increase of perplexity on the prompts with the same relation r ; (2) inter-relation PPL, the increase of perplexity on the prompts with different relations.",
"Results As shown in Table 6, the surgery of knowledge neurons achieves a nontrivial success rate for updating facts, while random neurons are insufficient.",
"Moreover, we find that such manipulation has little negative influence on other knowledge predictions.",
"It is promising that we can change very few (i.e., about four in the above experiments) neurons to affect certain facts in pretrained Transformers.",
"We can further improve the success rate by including more top knowledge neurons in the update process.",
"We explore how to leverage knowledge neurons to erase specific relations in pretrained Transformers.",
"Specifically, we take four relations in PARAREL as examples, i.e., place_of_birth , country_of_citizenship , occupation , work_location , that typically express sensitive personal information.",
"Methods Given a relation r , we first identify knowledge neurons for all relational facts with r .",
"Then, we retain 20 knowledge neurons that appear most frequently among these facts.",
"Finally, we set the value slots in FFN (val) (see Figure",
"2) corresponding to these knowledge neurons to 0 , i.e., zero vectors.",
"Results As shown in Table 5, we report model perplexity before and after knowledge erasing.",
"With the erasing operation, the perplexity of the removed knowledge increases as expected.",
"Moreover, the model perplexity of other relations remains similar.",
"We argue that knowledge neurons provide a promising way to erase undesired knowledge with minimal efforts.",
"Probing Knowledge in Pretrained Models Many pieces of previous work aim to measure knowledge stored in pretrained models.",
"Petroni et al. (2019) propose to retrieve knowledge in pretrained models (such as BERT) using cloze queries.",
"Their experiments show that BERT has a strong ability to recall factual knowledge without any fine-tuning.",
"Jiang et al. (2020b) improve the cloze queries with mining-based and paraphrasing-based methods.",
"Roberts et al. (2020) propose the closed-book question answering to measure how much knowledge a pretrained model has stored in its parameters.",
"Elazar et al. (2021) measure and improve the consistency of pretrained models with respect to factual knowledge prediction.",
"Rather than examining only the model outputs, we provide an open-the-black-box analysis for the knowledge neurons in pretrained Transformers.",
"Attribution Methods In order to open the black boxes of deep learning models, attribution methods aim to attribute the model output to input features using different measures.",
"The product of the gradients (of the output with respect to input features) and feature values is a reasonable baseline (Baehrens et al., 2010; Simonyan et al., 2014).",
"Besides, a set of attribution methods (Shrikumar et al., 2017; Binder et al., 2016; Zeiler and Fergus, 2014; Springenberg et al., 2015) back-propagate the final output to input features.",
"However, as stated by Sundararajan et al. (2017), none of these methods can simultaneously satisfy sensitivity and implementation invariance , two fundamental axioms.",
"Taking the axioms as guidance, Sundararajan et al. (2017) propose the integrated gradient method.",
"Our knowledge attribution method is built upon integrated gradients.",
"Analysis of Transformer As one of the most popular and effective NLP architectures, Transformer (Vaswani et al., 2017) has attracted extensive studies.",
"Most previous work focuses on the self-attention module (Voita et al., 2019; Clark et al., 2019; Vig and Belinkov, 2019; Hao et al., 2021).",
"Recently, Wu et al. (2019) and Dong et al. (2021) have pointed out that the feed-forward network module also matters to Transformer.",
"Geva et al. (2020) attempt to connect feed-forward networks with key-value memories by qualitative analysis.",
"In this paper, we identify and analyze knowledge neurons in feed-forward networks for given factual knowledge.",
"Moreover, we present how to leverage knowledge neurons to explicitly edit factual knowledge stored in pretrained Transformers.",
"We propose an attribution method to identify knowledge neurons that express factual knowledge in pretrained Transformers.",
"We find that suppressing or amplifying the activation of knowledge neurons can accordingly affect the strength of knowledge expression.",
"Moreover, quantitative and qualitative analysis on open-domain texts shows that knowledge neurons tend to be activated by the corresponding knowledge-expressing prompts.",
"In addition, we present two preliminary case studies that attempt to utilize knowledge neurons to update or erase knowledge in pretrained Transformers.",
"Despite the effectiveness of identifying knowledge neurons, our current studies still have limitations.",
"First, we examine knowledge neurons based on the fill-in-the-blank cloze task, while knowledge can be expressed in a more implicit way.",
"It is an open question whether Transformer can utilize stored knowledge in a generalized way, such as for reasoning.",
"The interactions between knowledge neurons also remain under explored.",
"Second, we focus on factual knowledge for ease of evaluation, even though our method is also applicable for other types of knowledge.",
"Third, we use the single-word blank in cloze queries for simplicity, which requires multi-word extensions (Jiang et al., 2020a).",
"Besides, an interesting future direction is to figure out how knowledge neurons work in multilingual pretrained Transformers (Conneau and Lample, 2019; Conneau et al., 2020; Chi et al., 2021).",
"Damai Dai, Zhifang Sui, and Baobao Chang are supported by the National Key Research and Development Program of China 2020AAA0106701 and NSFC project U19A2065."
] |
[
"abstain",
"method",
"abstain",
"objective",
"result",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"objective",
"result",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other"
] |
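The Riemann-approximated attribution score quoted in the row above (Eq. 5, with m = 20) can be sketched for BERT-base using the HuggingFace transformers API, where the layer-l FFN intermediate module is `model.bert.encoder.layer[l].intermediate`. The prompt, layer, and neuron index in the usage comment are illustrative assumptions, not values from the paper; treat this as a sketch, not the authors' released code.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tok = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased").eval()

def attribution(prompt, answer, layer, neuron, m=20):
    """Riemann approximation of Attr(w_i^(l)) for one FFN intermediate neuron."""
    enc = tok(prompt, return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero()[0].item()
    answer_id = tok.convert_tokens_to_ids(answer)
    ffn = model.bert.encoder.layer[layer].intermediate

    # 1) Record the neuron's original activation w_bar at the [MASK] position.
    acts = {}
    hook = ffn.register_forward_hook(
        lambda mod, inp, out: acts.update(w_bar=out[0, mask_pos, neuron].item()))
    with torch.no_grad():
        model(**enc)
    hook.remove()

    # 2) Scale the neuron to (k/m) * w_bar and accumulate d p(y|x) / d w.
    grad_sum = 0.0
    for k in range(1, m + 1):
        # Leaf scalar so we can differentiate the answer probability w.r.t. it.
        val = torch.tensor(k / m * acts["w_bar"], requires_grad=True)

        def overwrite(mod, inp, out, val=val):
            out = out.clone()
            out[0, mask_pos, neuron] = val  # neuron forced to alpha * w_bar
            return out

        hook = ffn.register_forward_hook(overwrite)
        prob = model(**enc).logits[0, mask_pos].softmax(-1)[answer_id]
        grad_sum += torch.autograd.grad(prob, val)[0].item()
        hook.remove()

    return acts["w_bar"] / m * grad_sum

# e.g. attribution("The capital of Ireland is [MASK].", "Dublin",
#                  layer=9, neuron=2048)   # layer/neuron chosen arbitrarily here
```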
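Similarly, the fact-updating surgery (FFN_i^(val) ← FFN_i^(val) - λ1·t + λ2·t′) and the relation-erasing operation quoted above reduce to in-place edits of FFN value slots. A minimal sketch, assuming the HuggingFace BERT layout, in which value slot i of layer l is column i of `model.bert.encoder.layer[l].output.dense.weight`, and assuming single-token tail entities; λ1 = 1 and λ2 = 8 follow the text, while `model` and `tok` are the objects from the previous sketch.

```python
import torch

def update_fact(model, tok, knowledge_neurons, old_tail, new_tail,
                lam1=1.0, lam2=8.0):
    """Shift the value slots of the given (layer, neuron) pairs from t to t'."""
    emb = model.bert.embeddings.word_embeddings.weight
    t_old = emb[tok.convert_tokens_to_ids(old_tail)].detach()
    t_new = emb[tok.convert_tokens_to_ids(new_tail)].detach()
    with torch.no_grad():
        for layer, neuron in knowledge_neurons:
            slot = model.bert.encoder.layer[layer].output.dense.weight[:, neuron]
            slot -= lam1 * t_old   # suppress the old tail entity
            slot += lam2 * t_new   # promote the new tail entity

def erase_relation(model, retained_neurons):
    """Zero the value slots of the neurons retained for a relation."""
    with torch.no_grad():
        for layer, neuron in retained_neurons:
            model.bert.encoder.layer[layer].output.dense.weight[:, neuron] = 0.0
```

Only a handful of slots are touched (about four per fact in the text), which is consistent with the reported moderate influence on other knowledge.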
[
"To effectively characterize the nature of paraphrase pairs without expert human annotation, we propose two new metrics: word position deviation (WPD) and lexical deviation (LD).",
"WPD measures the degree of structural alteration, while LD measures the difference in vocabulary used.",
"We apply these metrics to better understand the commonly-used MRPC dataset and study how it differs from PAWS, another paraphrase identification dataset.",
"We also perform a detailed study on MRPC and propose improvements to the dataset, showing that it improves generalizability of models trained on the dataset.",
"Lastly, we apply our metrics to filter the output of a paraphrase generation model and show how it can be used to generate specific forms of paraphrases for data augmentation or robustness testing of NLP models.",
"A robust understanding of semantic meaning, despite variances in sentence expression, is an integral part of natural language processing (NLP) tasks.",
"However, many existing NLP models exhibit shortcomings in understanding real-world variations in natural language.",
"These models are often overreliant on learned spurious correlations resulting in poor generalization (Sanchez et al., 2018; McCoy et al., 2019).",
"This problem is challenging to address since it is difficult to distinguish spurious correlations from useful features (Gardner et al., 2021).",
"One way of improving the performance and robustness of NLP model is to increase the size of the dataset (Hestness et al., 2017).",
"It is possible to do so in an efficient manner through data augmentation, or the process of generating new data out of existing examples, thus creating more training data or test cases (Feng et al., 2021; Chen et al., 2021).",
"This would also enhance the capability to detecting error in a wide range of NLP systems.",
"We can also condition language models to generate paraphrases of input sentences (Witteveen and Andrews, 2019) through the use of large language models such as GPT (Radford et al., 2019).",
"However, commonly used paraphrase datasets and paraphrase generation techniques that rely on such datasets suffer from several shortfalls, such as being noisy due to loose labelling in these datasets and lack of accurate, controllable generation.",
"In this paper, we make three key contributions to address this issue.",
"Firstly, we propose two new metrics for better understanding of paraphrase pairs: word position deviation and lexical deviation.",
"We show, with examples, how these metrics are more effective at quantitatively capturing the linguistic characteristics of paraphrase pair than existing methods such as ROGUE-L, SELF-BLEU and edit distance.",
"Secondly, we apply the proposed metrics to better understand the commonly used Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005) dataset.",
"We also study how MRPC differs from Paraphrase Adversaries from Word Scrambling (PAWS) (Zhang et al., 2019), another paraphrase identification dataset.",
"In the process, we perform a detailed study on MRPC and propose some revisions to the dataset.",
"We demonstrate that this improves the quality of paraphrase identification models trained on MRPC, with higher transferability to other paraphrase identification datasets.",
"Lastly, we demonstrate the applicability of our proposed metrics.",
"By applying our metrics to filter the output of a paraphrase generation model, we show how it can be used to generate specific forms of paraphrases, which can be used as training data for data augmentation purposes and to generate test cases for robustness testing of NLP models.",
"There have been several survey papers done to better understand the task of paraphrase identification",
"and generation.",
"A Survey of Paraphrasing and Textual Entailment Methods (Androutsopoulos and Malakasiotis, 2010) presented a comprehensive survey and review on the the aforementioned tasks.",
"In this paper, the authors helped to properly define the tasks and identified some methods and their associated challenges.",
"This was followed up by a more recent survey specifically on the task of paraphrase identification, A Survey on Paraphrase Recognition (Magnolini, 2014), where the focus of the survey was the performance of various statistical and non-deep learning approaches on paraphrase identification on the MRPC dataset.",
"Additionally, in On Paraphrase Identification Corpora (Rus et al., 2014), the authors performed a survey of various paraphrase datasets, also highlighting several issues with paraphrase datasets, including MRPC, and providing some recommendations for improving the curation of paraphrase datasets.",
"There have also been previous work on the task of better quantifying various characteristics of paraphrase pairs.",
"In Texygen: A Benchmarking Platform for Text Generation Models (Zhu et al., 2018), SELF-BLEU was proposed to measure the diversity in text generation.",
"However, it suffers from limitations inherent to BLEU-style metrics: it captures the differences in presence of n-grams, but not their sequence, and is thus mostly limited to capturing the differences of vocabulary, but not the overall structure of a sentence.",
"In Paraphrasing with Large Language Models (Witteveen and Andrews, 2019), ROUGE-L is used as a measurement of paraphrase diversity, where lower ROUGE-L scores correspond to greater diversity in paraphrasing generation.",
"However, ROUGE-L mainly measures degree of similarity in sub-sequences, but not the order in which the sub-sequences occur, and thus cannot accurately capture the possible structural differences present in paraphrase pairs.",
"In our paper, we take a deeper look at some of the issues related to MRPC, proposing some useful improvements.",
"We also build upon previous attempts to characterise paraphrases through the use of quantitative metrics, demonstrating how our proposed metrics can capture various different paraphrasing techniques better than previously proposed metrics.",
"To facilitate more precise discussions in our paper, we clearly define a paraphrase as follows:",
"Definition 1 (Paraphrase) .",
"A sentence is a paraphrase of another sentence if they are not identical but share the same semantic meaning.",
"Therefore, there are two distinct criteria in order to fulfill the definition of being a paraphrase pair:",
"1. The two sentences must have the same meaning: it is impossible to derive different information from a paraphrase of a sentence.",
"Where two sentences are not certain to have exactly the same meaning, a common interpretation of both sentences should be the same in order for it to be a reasonable paraphrase.",
"This also implies that both sentences in a paraphrase pair necessarily entail each other.",
"2. The two sentences must not be identical, for example having lexical differences (differ-ences in vocabulary) or structural differences (differences in word order, punctuation and syntax).",
"In A Survey of Paraphrasing and Textual Entailment Methods (Androutsopoulos and Malakasiotis, 2010), the following example is provided, which we shall discuss:",
"It is argued that sentence 3 is not a precise paraphrase of sentences 1 and 2 as it is not stated precisely in sentence 3 that the bridge was completed.",
"For the purposes of our discussion, we would consider sentence 3 a reasonable paraphrase as well as it is very likely that all three sentences would be interpreted in the same way, and thus share the same semantic meaning based on the most common interpretation of the sentences.",
"These examples illustrate that it is non-trivial to precisely define what is a paraphrase pair, as there is some variance (depending on subjective interpretation) on what would be a precise paraphrase.",
"This problem is observed to have caused issues due to the imprecise definitions used while creating paraphrase datasets, such as the MRPC dataset which is very widely used.",
"By adhering strictly to the definition of a paraphrase as detailed in this section, we hope to better facilitate discussion throughout the paper.",
"In this paper, we will utilize and compare two commonly used paraphrase datasets, MRPC and PAWS.",
"The Microsoft Research Paraphrase Corpus (MRPC) is a corpus consists of sentence pairs collected from web news articles (Dolan and Brockett, 2005).",
"This dataset is widely used as a benchmark for the paraphrase identification task.",
"It can be used directly or indirectly as part of the GLUE benchmark (Wang et al., 2019).",
"In particular, as part of the GLUE benchmark, the dataset has been used for training and evaluation in more than 50 research papers as can be determined from the GLUE leaderboard 1 .",
"It is also less commonly used as a paraphrase generation dataset, in works such as (Huang and Chang, 2021).",
"MRPC contains 4076 training and 1725 test examples.",
"The Paraphrase Adversaries from Word Scrambling (PAWS) is a dataset contains sentence pairs extracted from Wikipedia and the Quora Question Pairs (QQP) dataset (Zhang et al., 2019).",
"While it is less commonly used than MRPC, it is a high quality and larger dataset, and is used in a number of papers such as (Yu and Ettinger, 2021), (Tu et al., 2020) and (Chen et al., 2020) for the purpose of paraphrase identification.",
"PAWS contains 49,401 training, 8000 development and 8000 test examples.",
"Our objective is to comprehensively evaluate the diverse linguistic phenomena involved in paraphrasing, which can include techniques such as synonym substitution, negation, diathesis alternation, coordination changes and more.",
"We can broadly classify these techniques into the use of structural alternations and lexical alternations to achieve paraphrasing.",
"Thus, to better characterise a paraphrase pair, we propose two metrics: word position deviation and lexical deviation .",
"These two metrics are introduced so as to provide a quantitative understanding on what type of paraphrase it is along the two types 1 https://gluebenchmark.com/leaderboard of changes.",
"A key design consideration of these metrics is the need to be able to capture the extents of structural and lexical alterations in an efficient manner, without resorting to costly human annotation or large amounts of computation.",
"We will use these metrics to provide a good understanding of the characteristics of paraphrase pairs both at a individual (paraphrase pair) level and at an aggregate level over the whole dataset.",
"In addition, we apply these metrics to filter outputs from paraphrase generation systems to select for specific types of paraphrases.",
"In this section, we define some terms that will be used across various metrics computations.",
"Let s 1 and s 2 denote two sentences.",
"We will also refer to the pair of sentences ( s 1 , s 2 ) as a paraphrase pair.",
"Definition 4.1 (Set of common words) .",
"The set of common words C ( s 1 ,s",
"2) of a paraphrase pair is the set of words, in uncased lemmatized form, which occurs in both s 1 and s 2 .",
"Definition 4.2 (Set of all words) .",
"The set of all words A ( s 1 ,s",
"2) of a paraphrase pair is the complete set of words, in uncased lemmatized form, which occurs in either or both sentences s 1 and s 2 .",
"Thus, given two sentences: s 1 : Yesterday, Bob met Tom at the store.",
"s 2 : Tom met Bob yesterday while they were at the store.",
"C ( s 1 ,s",
"2) : { yesterday, bob, meet, tom, at, the, store } A ( s 1 ,s",
"2) : { yesterday, bob, meet, tom, at, the, store, while, they, be } We will also use the notation NC ( s 1 ,s",
"2) to refer to the size of set C ( s 1 ,s",
"2) and NA ( s 1 ,s",
"2) to refer to the size of set A ( s 1 ,s",
"2) .",
"We use NC and NA for short when it is obvious which statements s 1 and s 2 we are referring to.",
"For a word W and a sentence s , we denote by N s ( W ) the number of times that the word W appears in the sentence s .",
"We propose the word position deviation (WPD) of a paraphrase pair as a metric that effectively captures the degree of deviation in the structure of paraphrased sentences by looking at changes in word",
"positions.",
"WPD can be intuitively understood as the mean of how much words shift in position after a paraphrase.",
"We find that this proposed metric is effective in identifying the amount of structural alterations present in paraphrase pairs.",
"To properly define WPD, we first introduce the concept of normalized word position in a paraphrase pair.",
"Definition 4.3 (Normalized Word Position) .",
"Let s be a sentence and W be a word.",
"For 1 n N s ( W ) , the normalized word position s,n ( W ) of n -th appearance of W in s is its index divided by the index of the last word.",
"Thus, a normalized word position value ranges from the first word in the sentence having a value of 0.0 and last word having value of 1.0.",
"For example, if the second appearance of W has index a and the last word has index b in the sentence s , then s, 2 ( W ) = a/b .",
"In WPD, we consider the mean differences between the normalized word positions.",
"For any given word that is common in both sentences in a paraphrase pair ( s 1 , s 2 ) , we can calculate the relative position shift as the difference in normalized word position.",
"Definition 4.4 (Relative Position Shift) .",
"The relative position shift of a word W with respect to sentence s 1 in paraphrase pair ( s 1 , s 2 ) is denoted as s 1 ,s 2 ( W ) , only defined for words in C ( s 1 ,s",
"2) , and has the expression s 1 ,s 2 ( W ) = N s 1 ( W ) (cid:88) n =1 min 1 k N s 2 ( W ) | s 1 ,n ( W ) s 2 ,k ( W ) | N s 1 ( W ) .",
"(1) For each occurrence of W in s 1 , we calculate the smallest difference between its normalized word position and that of the occurrences of W in s 2 .",
"We then average these smallest differences over all occurrence of W in s 1 to get the relative position shift of W with respect to s 1 in paraphrase pair ( s 1 , s 2 ) .",
"In a simple case with only one occurrence of W in each sentence, this reduces to the distance between s 1 , 1 ( W ) and s 2 , 1 ( W ) , which is s 1 ,s 2 ( W ) = | s 1 , 1 ( W ) s 2 , 1 ( W ) | .",
"(2) To the concepts described above, a simple example is provided in Figure 1 below.",
"close to 1.0.",
"Conversely, if the word W is near the start of s 1 and near the start of the s 2 , s 1 ,s 2 ( W ) is close to 1.0.",
"In a generalised case where there can be multiple occurrences of W can be present in s 1 or s 2 , the mean distance between one occurrence and the nearest occurrence in the other sentence is considered.",
"However, such instances are much rarer.",
"We illustrate the handling of using a real example, showing how the word his occurs twice, resulting in a mean ( his ) of 0.263.",
"Definition 4.5 (Word Position Deviation) .",
"Let ( s 1 , s 2 ) be a paraphrase pair.",
"The WPD of ( s 1 , s 2 ) , denoted as pos ( s 1 , s 2) , is the mean of all the relative position shifts of all the words in the set C ( s 1 ,s",
"2) , namely, pos ( s 1 , s 2) = 1 NC (cid:88) W C max { s 1 ,s 2 ( W ) , s 2 ,s 1 ( W ) } .",
"(3) Below are additional examples of the WPD computation on paraphrases in the MRPC dataset.",
"To aid visualization of what the metric measures, the common words are underlined and coloured to aid comparison.",
"We propose lexical deviation (LD), a metric that effectively captures the degree of deviation in the vocabulary used between the sentences in a paraphrase pair.",
"We find that the proposed metric is effective in identifying and ranking paraphrase pairs 8595 from various datasets according to meaningful differences in their usage of lexical changes to perform paraphrasing.",
"Definition 4.6 (Lexical Deviation) .",
"Let ( s 1 , s 2 ) be a paraphrase pair.",
"The lexical deviation lex ( s 1 , s 2 ) for a paraphrase pair ( s 1 , s 2 ) is defined by lex ( s 1 , s 2 ) = 1 NCNA .",
"For a case where there is complete reuse of words (in other words, NC = NA ), the metric will compute to 0.0.",
"Likewise, in a case where there is no reuse of words, the metric computes to 1.0.",
"For the purpose of computing the total set of words and the set of common words, we consider words that are the same after lemmatization (ig-noring capitalization) to be the same word.",
"Therefore, we do not consider words that are of different forms (e.g. tense) and capitalization to be different words.",
"This allows our metric to more accurately capture the range of vocabulary used.",
"As word forms tend to vary when used as part of different sentence structures, we do not wish to capture that in this metric, which focuses on the diversity of vocabulary (using of different words), and not the grammatical usage of a word.",
"In addition, we consider changes in capitalization a trivial paraphrase, and hence do not consider it in this metric.",
"To demonstrate the applicability of our proposed metrics of WPD and LD, we compare them against other metrics with similar purposes: ROGUE-L (Lin, 2004), SELF-BLEU (Zhu et al., 2018) and DamerauLevenshtein edit distance (Levenshtein, 1965).",
"In the examples below, we show that with WPD and LD, we can effectively distinguish between different types of paraphrases that have similar scores via various other metrics.",
"two paraphrases can have very similar ROGUE-L scores of 0.76 and 0.75, where ROUGE-L primarily measures the degree of sub-string similarity (longest common sub-strings).",
"However, with WPD, we are able to additionally distinguish the degree in which the similar sub-strings have been shuffled in position, which is a structural alteration to the sentence.",
"In Example Pair 2 (Figure 4), we again show that two paraphrases can have very similar SELF-BLEU scores of 0.60 and 0.59, where SELF-BLEU primarily measures the degree n-gram overlap.",
"However, similar to Example Pair 1, in one of the paraphrases, the two \"halves\" of the sentence has been swapped in position, and this structural alteration is captured by the WPD score.",
"Lastly, in Example Pair 3 (Figure 5), we show that two paraphrases can have very similar DamerauLevenshtein edit distance, but feature two completely different types of paraphrasing method.",
"Using WPD, we are able to obtain an aggregate view of both the MRPC and PAWS datasets.",
"We see that both datasets feature similar distributions of structural paraphrasing, where the average amount of structural paraphrasing is fairly low and MRPC features more structural paraphrasing compared to 8596 PAWS.",
"A visualization is provided in Figure 6 below.",
"Hence, we would expect the MRPC dataset to be somewhat more diverse in structural paraphrases as compared to PAWS.",
"Using LD, we are able to obtain an aggregate view of both the MRPC and PAWS datasets to see that both datasets feature a very different distribution of lexical paraphrasing.",
"A visualization is provided in Figure 7 below.",
"MRPC features a large amount of lexical paraphrasing, in contrast to PAWS where lexical paraphrasing is almost absent.",
"Hence, we would expect the MRPC dataset to be substantially more diverse in having different examples of lexical paraphrases as compared to PAWS.",
"We investigated the source of high LD in MRPC and determined that the reason is due to large inconsistencies in entities, such as named entities and quantities, present in MRPC paraphrase pairs.",
"We can see that many of the examples at the high-end of lexical deviation are not reasonable paraphrases of each other as they contain extremely different information in each sentence.",
"When used as training data for paraphrase identification or generation tasks, this can introduce undesired behaviour into models.",
"For example, this can make paraphrase generation models more prone to \"hallucinating\" additional information in paraphrases, while paraphrase identification models are less able to detect such inconsistencies.",
"Hence, this motivates us to more closely inspect the quality and consistency of labels in the MRPC dataset, and then propose improvements.",
"Despite its wide usage as a benchmark for paraphrase identification, the labels in the MRPC dataset are not of a consistently high quality.",
"This is a result of the annotation process used to create the MRPC dataset.",
"The annotation process used for MRPC, as described in the paper Automatically constructing a corpus of sentential paraphrases (Dolan and Brock-ett, 2005), is as follows: a collection of news articles is collected from the web over a 2-year period, and candidates for paraphrase pairs are extracted using automated approaches, followed by human evaluation used to determine if two similar sentences are paraphrases.",
"However, the instructions given to the human annotators of the pairs were \"ill-defined\".",
"Compounding the issue is that several classes of named entities in the text were replaced by generic tags, introducing large amounts of ambiguity.",
"As a result, the annotators labelled sentences with very inconsistent entities as valid paraphrases, leading to a relatively large number of sentences inside that are not in fact reasonable paraphrases, despite being labelled as such.",
"Thus, models that perform well on MRPC may not able to correctly identify paraphrases in a precise manner.",
"We can show this in Section 5.2.2, where a state-of-the-art language model that performs well on MRPC has nearly random performance on PAWS, despite both being paraphrase identification datasets.",
"To illustrate this issue, we use an example of a sentence pair, labelled as a paraphrase, from the MRPC dataset:",
"1. The stock rose $2.11, or about 11 percent, to close Friday at $21.51 on the New York Stock Exchange.",
"2. PG&E Corp. shares jumped $1.63 or 8 percent to $21.03 on the New York Stock Exchange on Friday.",
"In this example, which is labelled as a paraphrase-pair, there are a total of 9 entities across the paraphrase pair, but only 2 (\"the New York 8597 Stock Exchange\" and \"Friday\") are present across the two.",
"In other words, there is a great inconsistency in the entities present between each of the paraphrase pairs.",
"In this case, this results in a large discrepancy in the information contained in each sentence, and thus the two sentences are not in fact paraphrases despite being labelled as such in MRPC.",
"In MRPC, there are a total of 3900 paraphrase pairs.",
"Of those, 3016 (77%) have at least 1 inconsistent entity.",
"Thus, this is a common issue in MRPC.",
"With the aim to improve the precision of sentence pairs labelled as paraphrases in MRPC, we proposed some amendments to MRPC, including the following specific objectives:",
"1. Automatically correcting the inconsistency in entities;",
"2. Rectifying the labels where automated correction is not possible.",
"Our process to achieve this has two main steps.",
"First, we search for inconsistent examples where the inconsistency is limited to singular instances of any type of quantity.",
"For example, one instance of a monetary value that differs between two sentences in a paraphrase pair.",
"Next, when a match is found, we proceed with a to correct the paraphrase.",
"In this specific scenario, as we know that both values share the same type, we can correct one of the values to be identical to the instance in the other sentence, making it a more precise paraphrase.",
"In order to avoid being overly zealous in this replacement, we inspect the most frequent replacements to ensure that no unintended replacements occur.",
"Of the 3016 inconsistent paraphrase pairs in MRPC, 476 (16%) can be corrected using our approach.",
"For the rest of the paraphrase pairs that we cannot correct, we label them as non-paraphrases.",
"After the corrections, 2064 (53%) out of the original 3900 paraphrase pairs are relabelled as non-paraphrases.",
"This also changes the ratio of paraphrase:non-paraphrase in MRPC from approximately 8:5 to approximately 4:8.",
"We term this revised version of MRPC as MRPC-R1.",
"To illustrate the corrections to text performed during the creation of MRPC-R1, a few examples are shown in the table below: Figure 9: Correcting some examples from MRPC 5.2.2 Evaluating Changes to MRPC In order to evaluate the differences in quality of the datasets, we compare the transferability of a model trained on MRPC and MRPC-R1 to the PAWS test set.",
"Our training setup is as follows: We used a state-of-the-art DeBERTa (He et al., 2021) pretrained langauage model and fine-tuned it on each of the following: MRPC training set, MRPC-R1 training set, and lastly for a baseline, the PAWS training set).",
"We performed the training using the HuggingFace Transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2019), learning rate of 1e-5, and the Adam optimizer (Kingma and Ba, 2015).",
"For MRPC and MRPC-R1, we use a batch size of 32, and for PAWS, which has a much larger training set, we use a batch size of 128.",
"We did not perform extensive hyper-parameter tuning.",
"We tested two variations of the DeBERTa model: DeBERTa-base (140M parameters) and DeBERTa-large (400M parameters) Each of the models are evaluated every 50 steps on the PAWS development set, and the best model checkpoint is evaluated against the PAWS test set.",
"We report the results below (median from 5 runs).",
"From our results, we can see that training on MRPC-R1 results in much better scores on the PAWS test set for both models.",
"Additionally, if we use the more powerful DeBERTa-large model, the model overfits more on MRPC training data.",
"Thus, DeBERTa-large scores lower than DeBERTa-base on the PAWS test set.",
"However, DeBERTa-large performs better than DeBERTa-base when trained on MRPC-R1, showing that more powerful 8598 models benefit more from MRPC-R1.",
"Thus, we can see that that MRPC-R1 has greater transferability to the PAWS test set.",
"These results demonstrate that we have increased the generalization ability of the trained model through the improving the consistency and quality of the labels in MRPC.",
"To demonstrate the applicability of our metrics to filter and thus control the output from a paraphrase generation model, we combine the paraphrase pairs from MRPC-R1 and PAWS to form a corpus to train a sequence-to-sequence T5 (Raf-fel et al., 2020) transformer language model to generate paraphrases.",
"We performed the training using the HuggingFace Transformers library and PyTorch, using the the pretrained T5-large model (770M parameters).",
"We performed training for a total of 10 epochs with a batch size of 16, learning rate of 1e-5, the Adam optimizer and did not perform extensive hyper-parameter tuning.",
"By using WPD and LD, we are able to effectively filter for specific types of paraphrases.",
"In the following example, we pass \" I keep a glass of water next to my bed when I sleep. \" as an input to be paraphrased by the model.",
"Some of the outputs are sampled and ranked below according to WPD, showing how WPD can be used to select paraphrases with varying extents of structural paraphrases, and the results can be seen in the table below: Generated Paraphrase WPDI keep a glass of water beside my bed when I sleep.",
"We can also do the same for LD, where we can see that the lower the the extent of word overlap between the original and paraphrase, the greater the LD value.",
"Words are marked with italics to visually indicate words that have changed from the source sentence.",
"The results can be seen in the table below: Generated Paraphrase LD When I sleep I keep a glass of water next to my bed.",
"Thus, we can use WPD, LD, or a combination of both to select specific types of paraphrases, therefore efficiently obtaining specific variations of data for data augmentation or robustness testing purposes.",
"To the best of our knowledge, we do not introduce any ethical concerns in this work.",
"Our work is based on the existing MRPC and PAWS datasets, which are sampled from online news articles as well as Wikipedia.",
"Hence we expect our findings to generalize well to other English datasets in the general domain.",
"Generalization of our work to domains where usage of language is markedly different (for example, in some forms of technical writing) is not certain.",
"When our proposed metrics are used in conjunction with other technology (such as large generative language models), it does not affect the existing ethical considerations of using those technology.",
"In our paper, we have proposed two new metrics to better understand paraphrase pairs: word position deviation (WPD) and lexical deviation (LD).",
"We have applied these metrics to better understand the MRPC and PAWS datasets, and also to filter the output of a paraphrase generation model to obtain specific forms of paraphrases.",
"However, our metrics still have some limitations, which can be address in future work.",
"Although we are able to measure the extent of structural and lexical alterations, we cannot determine the fine-grained type of alterations that is being made, for example, a specific form of structural alteration or word substitution.",
"We anticipate that improvements in this area would be valuable to improve our ability to effectively characterize various properties of paraphrases, leading to better data augmentation and robustness testing approaches that eventually resulting in better performing NLP systems."
] |
[
"objective",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"result"
] |
[
"Ruochen Zhang and Carsten Eickhoff Brown University",
"Abstract In the pursuit of natural language understanding, there has been a long standing interest in tracking state changes throughout narratives.",
"Impressive progress has been made in modeling the state of transaction-centric dialogues and procedural texts.",
"However, this problem has been less intensively studied in the realm of general discourse where ground truth descriptions of states may be loosely defined and state changes are less densely distributed over utterances.",
"This paper proposes to turn to simplified, fully observable systems that show some of these properties: Sports events.",
"We curated 2,263 soccer matches including timestamped natural language commentary accompanied by discrete events such as a team scoring goals, switching players or being penalized with cards.",
"We propose a new task formulation where, given paragraphs of commentary of a game at different timestamps, the system is asked to recognize the occurrence of in-game events.",
"This domain allows for rich descriptions of state while avoiding the complexities of many other real-world settings.",
"As an initial point of performance measurement, we include two baseline methods from the perspectives of sentence classification with temporal dependence and current state-of-the-art generative model, respectively, and demonstrate that even sophisticated existing methods struggle on the state tracking task when the definition of state broadens or non-event chatter becomes prevalent.",
"State tracking, the task of maintaining explicit representations of user requests and agent responses, has long been a key component of dialogue systems (Williams et al., 2013; Henderson et al., 2014a,b; Kim et al., 2016).",
"The same challenge arises during reading comprehension of procedural texts (recipes, how-to guides, etc.) where systems focus on predicting changes of object attributes at the entity-level (a car window may transition from foggy to clear) (Dalvi et al., 2018; Tandon et al., 2020).",
"However, both of these state tracking variants rely on transaction-based or turn-based data such as transactional dialogues or procedure descriptions that are information-dense.",
"Few works have studied state tracking tasks where state changes occur infrequently while a large proportion of messages are chatter.",
"As an alternative to altogether unrestricted state trackinga task that is daunting due to the complexity of even describing ground-truth states in a discrete mannerwe resort to a simpler and more self-contained setting: sports competitions.",
"Given the stream of natural language utterances with which a commentator describes the events in a real-world setting (here a sports competition), an ideal natural language understanding system would maintain and reason over a coherent and accurate representation of the match based on how the commentator described it.",
"This representation can, in turn, be used for downstream tasks such as inference or language generation.",
"Sports matches provide an ideal test bed for state tracking due to their self-contained, fully observable nature and their inherent interpretability in the form of the temporal evolution of scores.",
"However, existing sports-related commentary collections such as described by Aull and Brown (2013) and Merullo et al. (2019) do not provide such within-match temporal information.",
"To this end, we collect temporally-aligned commentaries and live scores of soccer matches along with other meta information from the website goal.com and compile the dataset SOCCER .",
"To the best of our knowledge, SOCCER is the first temporally-aligned collection of sports match commentary and state.",
"It contains over 2,200 matches from tournaments such as the UEFA Champions League or the UK Premier League between 2016 and 2020.",
"Across these matches, there are over Figure 1: An overview of the state tracking task in sports commentary.",
"135,000 individual comments and approximately 31,000 events.",
"A simplified example is shown in Figure 1.",
"To demonstrate the potential of state tracking for open-domain discourse, we use the proposed dataset to investigate to what degree state-of-the-art systems are able to track the progression of events described in the commentary.",
"This overview includes two model classes: classification models that treat match events as different class labels, and generative language models such as GPT-2 (Radford et al., 2019) that model context and events in a causal manner.",
"Our experiments show that both methods do not perform well on SOCCER and only slightly outperform distributional heuristics, leaving considerable room for improvement.",
"The novel contributions of this paper are threefold: (1) we propose a new task of tracking event occurrences via state changes, (2) we create SOCCER , a general discourse state tracking dataset that contains temporally-aligned human-composed commentary and in-game events, serving as the training and evaluation dataset for this task, and (3) we provide two intuitive baselines demonstrating the difficulty of this task and presenting exciting opportunities for future research.",
"Dialogue State Tracking (DST).",
"Current DST collections and benchmarks tend to rely on transaction-centric dialogues with predefined domain-specific ontologies and slot-value pairs.",
"Prominent examples include the DSTC2 (Hen-derson et al., 2014a) and MultiWOZ datasets (Budzianowski et al., 2018).",
"Consequently, previous work focuses on picklist-based approaches (Mrkic et al., 2017; Perez and Liu, 2017; Zhong et al., 2018; Ramadan et al., 2018; Gao et al., 2019) to formulate state tracking as a series of classification tasks over candidate-value lists.",
"A major difference between SOCCER and other DST datasets lies in its information density.",
"As dialogues in DST are usually short conversations with direct transactional objectives such as booking hotels or reserving restaurant tables, frequent state changes are required to be captured within limited turns of the conversation.",
"In sports commentary, on the contrary, in-game events occur at a comparatively low frequency and a considerable proportion of commentator utterances may not be related to any changes in the game state.",
"State Tracking in Procedural Text.",
"State tracking in procedural text understanding focuses on the task of tracking changes in entity attributes (Tandon et al., 2020).",
"A variety of procedural progresses have been proposed such as tracking entity presence and location in scientific processes (Dalvi et al., 2018), ingredients in cooking recipes (Bosselut et al., 2017), and character motivation and emotional reaction in simple stories (Rashkin et al., 2018).",
"Yet, similar to DST settings, these highly specific tasks depend on small fixed ontologies covering limited ranges of entities and states.",
"Another more recent dataset (Tandon et al., 2020) turns to an open-vocabulary setting when defining entity attributes.",
"But since the dataset is comprised of how-to guides from WikiHow.com, the task still sees a high density of state changes per natural language instruction.",
"Information Density The concept of Information Density has been mainly used in the Uniform Information Density (UID) theory (Jaeger, 2010) to measure the amount of information per unit comprising an utterance.",
"Levy and Jaeger (2007) demonstrated that speakers tend to maximize the uniformity of information via syntactic reduction.",
"The notion of information density in our paper, however, focuses on quantifying the frequency of event occurrences on the corpus level instead of understanding syntactic choices on the utterance level.",
"Sports Event Datasets and Tasks.",
"Commentary in the sports domain has been collected to study a variety of problems such as racial bias in football game reporting (Merullo et al., 2019) and gender construction in NBA/WNBA coverage (Aull and Brown, 2013).",
"However, these datasets do not provide any information on the temporal alignment between commentary and events.",
"Another similar dataset, BALLGAME (Keshet et al., 2011) is comprised of baseball commentary with annotated events and timestamps, but it contains less than 20 games and the annotation is unavailable online.",
"Some work focuses on sports-related inference of player performance metrics (Oved et al., 2019) or game outcomes (Velichkov et al., 2019) that predict full-time results based on signals from pre-game player interviews.",
"However, no in-game sequential contexts are provided in these datasets.",
"Most similar to our work, Bhagat (2018) collected in-game commentaries for soccer player analytics, but their approach is restricted by classical machine learning methods and ignores the effect of information sparsity within the dataset.",
"We collect time-stamped commentary with key events of 2,263 soccer matches in total.",
"The matches stem from four major soccer tournaments including the UEFA Champions League, UEFA Europa League, Premier League and Series A between 2016 and 2020.",
"SOCCER consists of over 135,000 time-stamped pieces of commentary and 31,000 within-match events.",
"This section describes our data collection and preparation process in detail.",
"Commentaries, events, team lineups, match dates and other meta-information are gathered from match-specific pages.",
"Out of a total of 9,028 matches covered on goal.com between 2014 and 2020, we retain only those 2,434 matches that list detailed event records and commentary.",
"Any matches missing either of the two information streams are discarded.",
"The retained matches belong to the four major tournaments mentioned above and all occurred starting 2016.",
"Figure 2 shows the frequency distribution of included and overall matches across the years in which they took place.",
"All commentaries are in English and available in text form, thus requiring no transcription.",
"Pieces of commentary come pre-segmented and aligned to match-internal timestamps so that in-game events and commentary with the same timestamps can be linked.",
"Comments whose temporal information is unavailable usually belong to the pre-game, intermission and post-game periods and are labeled as START, BREAK, END accordingly.",
"The total number of commentary paragraphs within a game is the same as the number of timestamps.",
"This number varies between matches as timestamps during which the commentator did not provide commen-Event Name Goal Assist Yellow Card Red Card Switch Team Home Guest Home Guest Home Guest Home Guest Home Guest Event # per team 3582 2799 2434 1871 3948 4320 163 197 6111 6117 Event total # 6381 4305 8268 360 12228 Player # per team 1001 924 882 774 1548 1613 145 183 2546 2575 Player total # 2915 1656 3161 328 5121 Table 1: Event type and player name distribution.",
"tary are omitted.",
"Finally, any templated sentences following the format team 1 score score team 2 are removed to avoid trivial leakage of the match state.",
"All annotation and filtering processes are done programmatically and no manual efforts are involved.",
"Events are classified into five types: goal , assist , yellow card , red card and switch .",
"We consider events as keys and the event-related players as the corresponding values.",
"For example, if player B from the home team assists in scoring a goal, player B will be the value of the event assist for the home team.",
"Hence, at each timestamp t , there are ten event-player pairs (five event types tracked for two teams).",
"From this representation, we construct a comprehensive game state incorporating all the event-player pairs for each team as well as a cumulative score at each timestamp (See Figure 3).",
"Special events such as penalty goals or own goals are not explicitly labeled, but can be derived from the evolution in cumulative score between neighboring timestamps.",
"After processing, 171 games were found to have ill-formed commentary or misaligned end-game match scores compared to the goal records in the key events.",
"These matches were eliminated from the original 2,434 games crawled with commentary, giving us a total of 2,263 games.",
"Finally, the collected data is partitioned into distinct training (70%), validation (15%) and test (15%) sets.",
"For each match m in the dataset M , there is a set of timestamps T m = { t } accurate to a minute.",
"As input, we are given a stream of commentaries C m = { c t } T m t =1 and c t represents the paragraph of commentary at time t .",
"The output will be a set of general match states S m = { s t } T m t =1 such that each s t reflects the state change in the comment c t at the same timestamp.",
"s t contains a set of events e ( t ) i,j , where i represents the event types ( i { goal , assist , yellow card , red card , switch } ) and j denotes the event actor ( j { home , guest } ).",
"Given the sparse distribution of s t , we propose two alternative variants of the variable to assess the difficulty of state tracking at different granularity levels of state resolution.",
"Team Level.",
"In this simplest notion of state, events are tracked at the team level.",
"In other words, e ( t ) i,j = { yes , no } .",
"Consider the event of the home team scoring a goal e ( t ) goal , home at time t as an example: given the commentary c t and other related meta-information, a model is tasked with determining the value of e ( t ) goal , home to be yes if the home team indeed scored a goal in a given minute, or no otherwise.",
"Player Level.",
"At this significantly increased level of resolution, all events are additionally associated with their player agents p P , where P denotes the collection of players.",
"Concretely, the variable e ( t ) i,j is mapped to either the related play-ers' names p or a none answer to each event at time t .",
"To facilitate this form of state, match meta-information includes lineups that associate present players with teams.",
"In the following, we provide descriptive statistics of the SOCCER dataset and include two model baselines for recognizing match events resulting in changes of states.",
"The SOCCER dataset covers 2,263 matches with 135,805 pieces of commentary and 31,542 in-game event records.",
"In all event records, each event type of each team appears approximately 3,154 times on average.",
"There are a total of 3,507 unique player names across all event types and an average 1,219 unique player names per event type per team.",
"A more detailed overview of the distribution of event types and player names can be seen in Table 1.",
"Common state tracking datasets either in dialogue systems or procedural texts are designed to capture frequent state changes in the text.",
"In Figure 4: Model architecture of the GRU classifier and GPT-2 based variant.",
"SOCCER , we study a more general setting where the corpus is much less information dense due to an abundance of non-event related chatter.",
"To quantify this difference, we define information density ( ID ) as: ID = Total # of state changes Total # of turns/steps/timestamps As shown in Table 2, our dataset has a considerably lower information density with more turns of information.",
"In SOCCER , the match state only gets updated every 5 timestamps, while in datasets such as MultiWOZ2.1 (Eric et al., 2019) and OpenPI (Tandon et al., 2020), there are between 1 and 4 state changes per turn or step on average.",
"SOCCER presents a new challenge to the state tracking community by introducing a more general corpus with an all-new state definition and a sparse information distribution.",
"These properties render it difficult to directly apply some existing models such as TRADE used in DST tasks and ProLocal (Dalvi et al., 2018) proposed for procedural texts.",
"Motivated by previous work on state tracking and based on the characteristics of the task, we use two baseline training and inference schemes:",
"1) a GRU (Cho et al., 2014) classifier with pre-trained BERT (Devlin et al., 2019) embeddings, and",
"2) a generative pre-trained GPT2 (Radford et al., 2019) variant.",
"assess the difficulty level of the SOCCER dataset.",
"Embeddings of the timestamped commentary c t are obtained from the pretrained weights of BERT (Devlin et al., 2019), that then get fed into a 1-layer GRU (Cho et al., 2014) network followed by two feed-forward layers.",
"We only tasked this model with team-level state tracking as the classification will be extremely difficult if each player name is treated as a distinct class.",
"We map the 10 event variables e ( t ) i,j as binary flags to a 10-bit scalar value in which each digit denotes the predicted value of a variable.",
"For example, if the 0th position corresponds to the variable e ( t ) goal , home , then the predicted value at that position denotes whether the home team scores a goal (See Figure 4).",
"Compared to converting the problem into ten binary classifi-cations, this allows us to directly model the joint occurrence of events.",
"GPT-2 Based Variant.",
"Recent approaches to state tracking (Kim et al., 2019; Hosseini-Asl et al., 2020; Tandon et al., 2020) have shown that generative models are competitive especially in open-vocabulary settings.",
"Inspired by simpleTOD (Hosseini-Asl et al., 2020) and the OpenPI baseline (Tandon et al., 2020), we cast the player-level state tracking task as a sequence generation problem, allowing us to leverage the capabilities of causal language models such as GPT-2 (Radford et al., 2019).",
"The training sequence consists of a concatenation of the commentary, event types and player names, allowing us to model the joint probability of the whole sequence.",
"Event names are preprocessed as tokens like goal_home to avoid being tokenized into sub-word units.",
"Commentary and event-player pairs are encapsulated in special tokens to help the model distinguish context from labels.",
"See Figure 4 for a schematic overview of the model training input.",
"In training, the model takes the concatenated Team Level Player Level Metrics Acc.",
"sequence as input to perform next token prediction task.",
"At inference time, greedy decoding is used to generate state predictions due to its superior performance compared to beam search and top-k sampling (Hosseini-Asl et al., 2020).",
"During preprocessing, we find that 98.1% of comments in the collection are shorter than 200 words, therefore any outliers with a length of more than 200 words are truncated at that point.",
"Then, the input text sequences are tokenized using byte-pair encoding (Sennrich et al., 2016) to avoid out-of-vocabulary words.",
"The sentence embeddings processed by the GRU classifier stem from the pretrained weights of HuggingFace's BERT model (Wolf et al., 2019).",
"The GPT-2 model (Radford et al., 2019) is also obtained from HuggingFace with pretrained weights, which are then fine-tuned on SOCCER 1 .",
"Accuracy, and recall for occurrences of all event-types are used to assess the performance of both models.",
"Due to the sparsity of event occurrences, recall is crucial to track the models' ability to extract events given the full set of types.",
"For convenience, we refer to event types with ground truth none answers as negative cases and positive cases otherwise.",
"Therefore, recall among event occurrences is referred to as positive recall in the tables.",
"More specifically, in Tables 3 and 5, accuracy and positive recall are measured on all labels (positive and negative combined).",
"In Table 4, the performance is reported on positive labels only, and detailed metrics including precision, recall and F1 scores are provided.",
"This section reports the results on the test set of SOCCER .",
"As a nave distributional baseline, we compute the ratio of negative cases in the test set to be 0.9766.",
"1 The SOCCER dataset as well as the code base used to collect it and run the experiments presented in the remainder of this paper are available here.",
"In Table 3, both models achieve an accuracy that is approximately equal to this majority class baseline due to the heavily imbalanced distribution of event positives and negatives.",
"While accuracy scores are very high, positive recall is much lower, indicating that many event occurrences are missed by the models.",
"When comparing the GPT-2 model's performance on both team level and player level event recognition 2 , we notice that player level recall is substantially worse than that on team-level.",
"This result suggests that complex state tracking involving broad ranges of possible slot values is a comparatively harder task that may require more sophisticated approaches.",
"In addition to these general results, we break down model performance of positive cases by event-type and provide additional metrics including precision, recall and F 1 scores (see Table 4).",
"When associating the scores with the event type distribution (see Table 1), we can observe that, generally, greater numbers of available data points result in better performance.",
"Take the event type goal as an example.",
"According to Table 1 there are about 800 more positive cases of the event e ( t ) goal , home than e ( t ) goal , guest .",
"A difference that is reflected in all the metrics in Table 4 for both models.",
"Another interesting point to note is the performance gap between the GRU classifier and GPT-2 model on the event type red card .",
"The red card event is extremely rare in SOCCER as illustrated in Table 1.",
"Though we observe the performance of both models on red card events to be comparably lower than those of the other events, the GRU classifier is able to capture more positive cases while no occurrences are detected by GPT-2.",
"In Section 5.1, we have shown that a key difference between SOCCER and other state tracking datasets lies in its low information density (See Table 2 for a detailed comparison).",
"It is conceivable that such differences in information density affect state tracking performance.",
"To eliminate confounding effects introduced via direct comparison to other datasets, this section explores the connection between event density across pieces 2 The GRU classifier is only used in team-level tasks since treating each player in the ontology as a distinct class to classify is very difficult.",
"Comment Sparsity 0% 20% 40% 60% 80% Metrics Acc.",
"Pos.",
"Recall Acc.",
"Pos.",
"Recall Acc.",
"Pos.",
"Recall Acc.",
"Pos.",
"Recall Acc.",
"Pos.",
"Recall Task Level Team Level GRU Classifier 0.89 0.44 0.90 0.41 0.92 0.35 0.94 0.30 0.97 0.31 GPT-2 Variant 0.88 0.49 0.90 0.49 0.93 0.47 0.95 0.41 0.98 0.44 Task Level Player Level GPT-2 Variant 0.83 0.06 0.87 0.06 0.90 0.04 0.94 0.04 0.98 0.02 Table 5: Model performance on team-level and player-level tasks with data of different information density.",
"of commentary and model performance.",
"We begin by discarding all but the truly event related comments in each match to obtain a subset containing 0% negative cases.",
"This subset contains 25,934 event related comments across all matches.",
"Then, by randomly replacing positive comments 3 with negative ones from the same match at a sparsity ratio r { 20% , 40% , 60% , 80% } , we keep the total number of comments at the same constant count of 25,934 and keep the temporal ordering of comments intact, while effectively reducing the level of information density.",
"Table 5 reports accuracy and positive recall for both methods and task levels when training and evaluating on non-overlapping splits of the newly constructed subsets.",
"Note that, despite our earlier discussion of information density, Table 5 reports a converse notion, sparsity.",
"In this setting, 0% corresponds to the highest and 80% the lowest information density.",
"Comparing accuracy at different event sparsity levels, we notice that scores increase as events become more sparsely distributed.",
"This effect stems from the fact that, when we are replacing event related comments with non-event chatter, chance agreement improves as the number of true negatives increases.",
"Positive recall of event occurrences, however, demonstrates an opposing trend, suggesting that the task of recognizing true state updates becomes more challenging the sparser the discourse domain is.",
"This assumption is further supported by the different degree of performance observed on SOCCER vs. existing collections such 3 Positive comments here refer to comments with event occurrences.",
"as MultiWOZ2.1 (Eric et al., 2019), where recall scores of many models range in the mid-fifty percent range.",
"In this paper, we introduce SOCCER , the first discourse state tracking collection in the sports commentary domain.",
"We propose two different levels of state granularity and provide two performance benchmarks for models ranging from GRU (Cho et al., 2014) for embedding temporal dependency to GPT-2 (Radford et al., 2019) for causal language modeling.",
"The dataset shows a much lower information density than many existing resources on state tracking, making it considerably more challenging.",
"We believe that, in conjunction with the wide vocabulary of player-level notions of state, this property makes SOCCER an exciting resource on which our community can advance discourse state tracking to a broader range of settings than have been studied previously.",
"This research is supported in part by the NSF (IIS-1956221).",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF or the U.S. Government.",
"We would like to thank Ellie Pavlick, Stephen Bach, Zejiang Shen and the anonymous reviewers for their constructive feedback and helpful discussion."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"Typed entailment graphs try to learn the entailment relations between predicates from text and model them as edges between predicate nodes.",
"The construction of entailment graphs usually suffers from severe sparsity and unreliability of distributional similarity.",
"We propose a two-stage method, Entailment Graph with Textual Entailment and Transitivity (EGT2).",
"EGT2 learns local entailment relations by recognizing possible textual entailment between template sentences formed by typed CCG-parsed predicates.",
"Based on the generated local graph, EGT2 then uses three novel soft transitivity constraints to consider the logical transitivity in entailment structures.",
"Experiments on benchmark datasets show that EGT2 can well model the transitivity in entailment graph to alleviate the sparsity issue, and lead to significant improvement over current state-of-the-art methods 1 .",
"Entailment, as an important relation in natural language processing (NLP), is critical to semantic understanding and natural language inference (NLI).",
"Entailment relation has been widely applied in different NLP tasks such as Question Answering (Pathak et al., 2021; Khot et al., 2018), Machine Translation (Pad et al., 2009) and Knowledge Graph Completion (Yoshikawa et al., 2019).",
"When coming across a question that \"Which medicine cures the infection?\" , one can recognize the information \"Griseofulvin is preferred for the infection,\" in the corpus and appropriately write down the answer with the knowledge that \"is preferred for\" entails \"cures\" when their arguments are medicines and diseases , although the surface form of predicate \"cures\" does not exactly appear in the corpus.",
"There are many ways to present one question, and it is impossible to handle them without understanding Corresponding author.",
"the entailment relations behind the predicates.",
"Previous works on analyzing entailment mainly focus on Recognizing Textual Entailment (RTE) between pairs of sentences, and many recent attempts have achieved quite promising performance in detecting entailment relations using transformer-based language models (He et al., 2020; Raffel et al., 2020; Schmitt and Schtze, 2021b).",
"By modeling typed predicates as nodes and entailment relations as directed edges, the Entailment Graph ( EG ) is a powerful and well-established form to represent the context-independent entailment relations between predicates and reflect the global features of entailment inference, such as paraphrasing and transitivity.",
"As EGs are able to help reasoning without additional context or resource, they can be seen as a special type of structural knowledge in natural language.",
"Figure 1 shows an excerpt entailment graph about two types of arguments, Medicine and Disease .",
"Generally speaking, an entailment graphs can be built based on a three-step process: extracting predicate pairs from a corpus, building local graphs with locally computed entailment scores, and modifying the graphs with global methods.",
"However, existing EG construction methods still face challenges in both local and global stages.",
"The Distributional Inclusion Hypothesis (DIH) about entailment assumes that given a predicate (rela-5899 tion) p , it can be replaced in any context by another predicate (relation) q if and only if p entails q (Geffet and Dagan, 2005).",
"Most local methods in previous works are guided by DIH, thus rely on the distributional co-occurrences from corpora, including named entities, entity pairs and context, as features to compute the local entailment scores.",
"Since different predicate pairs are processed independently, the locally built graphs suffer from severe data sparsity .",
"That is, there are many entailment relations missing (as edges) in the graphs if the predicate pairs do not co-occur in the corpus.",
"Furthermore, predictions from local models may not be coherent with each other, for example, a local model may output three predictions like, a entails b , b entails c and c entails a at the same time, which actually indicate possible errors among the local predictions.",
"To overcome the challenges faced by local models, different global approaches are used to take the interactions and dependencies between entailment relations into consideration.",
"The first discussed global dependency is the logical transitivity among different predicates, that is, predicate a entails predicate c if there is another predicate b making both \" a entails b \" and \" b entails c \" hold simultaneously.",
"Berant et al. (2011) uses the Integer Linear Programming (ILP) to ensure the transitivity constraints on the entailment graphs, which is , unfortunately, not scalable on large graphs with thousands of nodes.",
"Hosseini et al. (2018) models the structural similarity across graphs and paraphrasing relations within graphs to learn the global consistency, but does not gain further improvement due to the lack of high-quality local graphs and proper transitivity modeling.",
"In order to deal with the problems in the local and global stages, we propose a novel entailment graph learning approach, E ntailment G raph with T extual Entailment and T ransitivity ( EGT2 ).",
"EGT2 builds high-quality local entailment graphs by inputting predicates as sentences into a transformer-based language model fine-tuned on an RTE task to avoid the unreliability of distributional scores, and models the global transitivity on these scores through carefully designed soft constraint losses, which alleviate the data sparsity and are feasible on large-scale local graphs.",
"Our key insight is that the entailment relation a c correctly implied by the transitivity constraint is based on two conditions: (1) the appropriate constraint scalable on large graphs containing rich information, and (2) the reliability of local graphs offering the premise a b and b c , which is impractical for previous distributional approaches, but may be available for the models well-behaved on RTE tasks.",
"Specifically, the input sentences fed to transformer-based language models are formed without context, which makes our method accessible to those predicates not appearing in the corpus.",
"The transitivity implication is confined to entailment relations with high confidence, which improves the quality of implied edges and cuts down the computational overheads.",
"In a word, this paper makes the following contributions: we present a novel approach based on textual entailment to scoring predicate pairs on local entailment graphs, which is reliable without distributional features and valid for arbitrary predicate pairs.",
"we present three carefully designed global soft constraint loss functions to model the transitivity among entailment relations on large entailment graphs, thus alleviate the data sparsity issue of previous local approaches.",
"we evaluate our method on benchmark datasets, and show that our EGT2 significantly outperforms previous entailment graphs construction approaches.",
"The further analysis proves that our local and global approaches are both useful for learning entailment graphs.",
"Based on DIH, previous works extract feature vectors for typed predicates to compute the local distributional similarity.",
"The set of entity argument pair strings, like \"Griseofulvin-infection\" in the example of Section 1, are used as the features weighted by Pointwise Mutual Information (Be-rant et al., 2015; Hosseini et al., 2018).",
"Given the feature vectors for a predicate pair, different similarity scores, like cosine similarity, Lin (Lin, 1998), DIRT (Lin and Pantel, 2001), Weeds (Weeds and Weir, 2003) and Balanced Inclusion (Szpektor and Dagan, 2008), are calculated as the local similarities.",
"Hosseini et al. (2019) and Hosseini et al. (2021) use Markov Chain on an entity-predicate bipartite graph weighted by link prediction scores to calculate the transition probability between two predicates as the local score.",
"They rely on the link 5900 predication model to generate the features in fact.",
"Guillou et al. (2020) adds temporal information into entailment graphs by extracting entity pairs within a limited temporal window as predicate features.",
"McKenna et al. (2021) extends the graphs to include entailment relations between predicates with different numbers of arguments by splitting the features from argument pairs into independent entity slots, which impairs the representation ability of features when unary predicates are involved.",
"As mentioned in Section 1, entailment graphs are generally learned by imposing global constraints on the local entailment relations about extracted predicates.",
"The transitivity in entailment graphs is modeled by the Integer Linear Programming (ILP) in Berant et al. (2011), which selects a transitive sub-graph of a local weighted graph to maximize the summation over the weights of its edges.",
"Their work is limited to a few hundreds of predicates due to the computational complexity of ILP.",
"For better scalability, Berant et al. (2012) and Berant et al. (2015) make a strong FRG-assumption that if predicate a entails predicates b and c , b and c entail each other , and an approximation method, called Tree-Node-Fix (TNF).",
"Obviously, the assumption is too strong to be satisfied by real cases.",
"Since the hard constraints are difficult to work well on large-scale entailment graphs, Hosseini et al. (2018) propose two global soft constraints that maintain the similarity between paraphrasing predicates within typed graphs and between predicates with the same names in graphs with different argument types.",
"Their soft constraints are also used in Hosseini et al. (2019) and Hosseini et al. (2021).",
"The similarity between paraphrasing predicates, which ensures ( a c ) ( b c ) and ( c a ) ( c b ) when a b , implicitly takes the transitivity between paraphrasing predicates and third predicate into consideration.",
"But it ignores the transitivity in more common cases, and leads to a limited improvement on performance.",
"Meanwhile, the transformer-based Language Model (LM), although proved to be effective in RTE tasks (He et al., 2020; Raffel et al., 2020; Schmitt and Schtze, 2021b), has received less attention in entailment graph learning.",
"Schmitt and Schtze (2021a) uses pretrained LM on the Lexical Inference in Context (LIiC) task, which is closely related to entailment graph learning.",
"Hosseini et al. (2021) uses pretrained BERT to initialize the contextualized embeddings in their contextualized link prediction and entailment score calculation.",
"Higher scores are assigned to the entailed predicates in the context of their premises, which is one implicit expression form of DIH and different from our direct utilization of LM on textual entailment.",
"The goal of entailment graph learning is to extract predicates, learn the entailment relations and build entailment graphs from raw text corpora.",
"Following previous works (Hosseini et al., 2018, 2019), we use the binary relations from neo-Davisonian semantics as predicates, which is a type of first-order logic with event identifiers.",
"For instance, with the semantic parser (here, GraphParser (Reddy et al., 2014)), the sentence: \"Griseofulvin is preferred for the infection.\" can be transformed into the logical form e.prefer 2 ( e, Griseofulvin ) prefer for ( e, infection ) where e denotes an event.",
"By considering a relation for each pair of extracted arguments, this sentence refers to one predicate, p = (pre-fer.2,prefer.for.2,medicine,disease) 2 .",
"Likely, the sentence \"Griseofulvin cures the infection.\" contains q = (cure.1,cure.2,medicine,disease) .",
"Formally, a predicate with argument types t 1 and t 2 is represented as p = ( w p, 1 .i p, 1 , w p, 2 .i p, 2 , t 1 , t 2 ) .",
"The event-based predicate form is strong enough to describe most of the relations in real cases (Parsons, 1990).",
"With T as the set of types and P as the set of all typed predicates, V ( t 1 , t 2 ) contains typed predicates p with unordered argument types t 1 and t 2 , where p P and t 1 , t 2 T .",
"For predicate p = ( w p, 1 .i p, 1 , w p, 2 .i p, 2 , t 1 , t 2 ) , we denote that 1 ( p ) = t 1 , 2 ( p ) = t 2 and ( p ) = ( w p, 1 .i p, 1 , w p, 2 .i p, 2 ) .",
"In other words, V ( t 1 , t 2 ) = { p | ( 1 ( p ) = t 1 2 ( p ) = t 2 ) ( 1 ( p ) = t 2 2 ( p ) = t 1 ) } .",
"A typed entailment graph G ( t 1 , t 2 ) = < V ( t 1 , t 2 ) , E ( t 1 , t 2 ) > is composed of the nodes of typed predicates V ( t 1 , t 2 ) and the weighted edges E ( t 1 , t 2 ) .",
"The edges can be also represented as sparse score matrix W ( t 1 , t 2 ) [0 , 1] | V ( t 1 ,t 2 ) || V ( t 1 ,t 2 ) | , containing the entailment scores between predicates with type t 1 and t 2 .",
"As 2 The numbers after the predicate words are corresponding argument positions of entity \"Griseofulvin\" (second argument of prefer ) and \"infection\" (second argument of the preposition for ), and the later two items are the types of arguments.",
"the different argument types can naturally determine whether two predicates have the same order of arguments, the order of argument type is not important while t 1 = t 2 , and therefore we can ensure that G ( t 1 , t 2 ) = G ( t 2 , t 1 ) .",
"For those predicates p with 1 ( p ) = 2 ( p ) , the two argument types are labeled with orders, which allows the graph to contain the entailment relations with different argument orders, like (be.1,be.capital.of.2,location 1 ,location 2 ) (contain.1,contain.2,location 2 ,location 1 ) .",
"Inspired by the outstanding performance of pretrained and fine-tuned LMs on RTE task, which is closely related to the entailment graphs, EGT2 uses fine-tuned transformer-based LM to calculate the local entailment scores of typed predicated pairs.",
"In order to utilize the knowledge about entailment relations in pretrained and fine-tuned LM, EGT2 firstly transfers the predicate pair ( p, q ) into corresponding sentence pair ( S ( p ) , S ( q )) by sentence generator S , as the complicated predicates cannot be directly input into the LM.",
"For typed predicate p = ( w p, 1 .i p, 1 , w p, 2 .i p, 2 , t 1 , t 2 ) , the generator deduces the positions of arguments about the predicate based on i p, 1 and i p, 2 , generates the surface form of p based on w p, 1 and w p, 2 , and finally concatenates the surface form with capitalized types as its arguments.",
"Some generated examples are shown in Table 1, and the detailed algorithm of S is described in Appendix A. After generating sentence pair ( S ( p ) , S ( q )) for predicate pair ( p, q ) , EGT2 inputs ( S ( p ) , S ( q )) into a transformer-based LM to calculate the probability of the entailment relation p q as the local entailment score in G ( t 1 , t 2 ) .",
"In our experiments, the LM is implemented as DeBERTa (He et al., 2020).",
"Generally, an entailment-oriented LM will output three scores for a sentence pair, representing the probability of relationship entail , contradict and neutral respectively.",
"Formally, we denote the weighted matrix of local entailment graph with type t 1 and t 2 as W local , and the weight of the edge between p and q in W local is calculated as: W localp,q = P ( p q ) [0 , 1] , P ( p q ) = e LM ( entail | p,q ) (cid:80) r { entail,contradict,neutral } e LM ( r | p,q ) , (1) where LM ( r | p, q ) is the output score of corresponding relationship by the LM.",
"As the local entailment is based on the LM fine-tuned to perform textual entailment, the local graph can be built for any predicates in the parsed semantic form, or in any other forms by changing sentence generator S .",
"Existing approaches use global learning to find correct entailment relations which are missing or underestimated in local entailment graphs to overcome the data sparsity.",
"Following Hosseini et al. (2018), the evidence from existing local edges with high confidence is used by EGT2 to predict missing edges in the entailment graphs.",
"The transitivity in entailment relation inference implies a c while both a b and b c hold.",
"For instance, in the example of Figure 1, the entailment \"is preferred for\" \"is effective for\" is discovered because \"is preferred for\" \"cures\" and \"cures\" \"is effective for\" have been learned.",
"The key challenge to incorporate the transitivity constraint into weighted graphs is discreteness of logical rules.",
"Discreteness makes the rules impossible to be directly used in gradient-based learning methods without NP-hard complexity, as different predicate pairs are jointly involved in the calculation.",
"To unify the discrete logical rules with gradient-based learning, inspired by Li et al. (2019), EGT2 uses the logical constraints in the form of differentiable triangular norms (Gupta and Qi, 1991; Klement et al., 2013), or called t-norms, as the 5902 L 1 = log (cid:89) a,b,c V ( t 1 ,t 2) , Wa,b,Wb,c> 1 min(1 , W a,c W a,b W b,c ) = (cid:88) a,b,c V ( t 1 ,t 2 ) I 1 ( W a,b ) I 1 ( W b,c ) ReLU ( logW a,b + logW b,c logW a,c ) L 2 = (cid:88) a,b,c V ( t 1 ,t 2 ) I 1 ( W a,b ) I 1 ( W b,c ) I 0 ( W a,b W b,c W a,c ) logW a,c L 3 = (cid:88) a,b,c V ( t 1 ,t 2 ) I 1 ( W a,b ) I 1 ( W b,c ) I 0 ( W a,b W b,c W a,c ) W a,b W b,c logW a,c (2) soft constraints so that the gradient-based learning methods can be applied.",
"Different t-norm methods transfer the discrete rules into different continuous loss functions.",
"Traditional product t-norm maps P ( A B ) into P ( A ) P ( B ) , P ( A B ) into P ( A ) + P ( B ) P ( A ) P ( B ) , and P ( A B ) into min(1 , P ( B ) P ( A ) ) .",
"For the entailment relations, the probability of transitivity to be satisfied is: P [( a b b c ) ( a c )] = min(1 , W a,c W a,b W b,c ) , (3) where the probability of the entailment relation a b is represented by the local entailment scores W a,b .",
"To alleviate the noise from those edges assigned low confidence by local LM, EGT2 only takes the local edges whose scores are higher than 1 into account (as a b and b c ), where is a small hyper-parameter because the local probability scores tend to be close to 0 or 1 in practice.",
"Therefore, to maximize the probability of transitivity constraint satisfied over all predicates in the entailment graph G ( t 1 , t 2 ) , EGT2 tries to minimize the following minus-log-likelihood loss function L 1 in Eq.",
"2, where I y ( x ) = 1 if x > y , or 0 otherwise.",
"Another important t-norm, called the Gdel t-norm, maps P ( A B ) into 1 if P ( B ) P ( A ) or P ( B ) otherwise.",
"Therefore, the Gdel probability of transitivity to be satisfied is: P [( a b b c ) ( a c )] = (cid:26) W a,c W a,b W b,c > W a,c 1 otherwise , (4) and EGT2 similarly tries to minimize the loss function L 2 in Eq.",
"2. It should be noted that transitivity constraints will be disobeyed not only by the missing edges, but also by the spurious edges in the local graphs.",
"Therefore, we expect the soft constraints to take reducing the weights of premise edges into consideration.",
"L 1 achieves this by the loss item W a,b and W b,c , and we modify L 2 to L 3 in Eq.",
"2 so that the low confidence of W a,c will help to detect whether W a,b and W b,c are spurious.",
"Our t-norm soft constraints, although do not guarantee the obedience of transitivity, are effective approximations for the transitivity property.",
"Given the local entailment graph G ( t 1 , t 2 ) with weighted edges W local , in order to ensure that the global entailment graph W is not too far from W local , EGT2 finally minimizes the following loss function L to trade off the distance from local graphs and the soft transitivity constraint: L = (cid:88) a,b V ( W a,b W locala,b ) 2 + L i , i = 1 , 2 , 3 (5) where L i is the specified implementation of soft transitivity constraint in Eq.",
"2, and is a nonnegative hyper-parameter that controls the influence of two loss terms.",
"Following Hosseini et al. (2018) and Hosseini et al. (2019), we use the multiple-source NewsSpike corpus (Zhang and Weld, 2013), which contains 550K news articles, to extract binary relations as generated predicates in EGT2.",
"We make use of the triples released and filtered in Hosseini et al. (2019), which applies GraphParser (Reddy et al., 2014) based on Combinatorial Categorial Grammar (CCG) syntactic derivations to extracting binary relations between predicates and arguments.",
"The argument entities are linked to Freebase (Bollacker et al., 2008) and mapped to the first level of FIGER types (Ling and Weld, 2012) hierarchy.",
"The type 5903 of a predicate is determined by its two corresponding argument entities.",
"The triples are filtered by two rules to remove the noisy binary relations and arguments: (1) we only keep those argument-pairs appearing in at least 3 relations; (2) we only keep those relations with at least 3 different argument-pairs.",
"The number of relations in the corpus is reduced from 26M to 3.9M, covering 304K typed predicates in 355 typed entailment graphs.",
"Only those predicate pairs co-occurring with at least one same entity-pair (e.g., Griseofulvin-infection ) will be linked to calculate the local scores, and as a result, our local predicate pairs are identical with Hosseini et al. (2019).",
"As we focus on using global models to alleviate the sparsity of local edges, more potential methods to extracting denser local edges will be studied in our future research.",
"We use Levy/Holt Dataset (Levy and Dagan, 2016; Holt, 2018) and Berant Dataset (Berant et al., 2011) to evaluate the performance of entailment graph models.",
"In Levy's dataset, each example contains a pair of triples with the same entities but different predicates.",
"Some questions with one predicate were shown to the annotating workers, like \"Which medicine cures the infection?\" .",
"The label for each example are either True or False , indicating whether the first typed predicate entails the second one, by asking the workers whether the first predicates can answer the question with the second one.",
"For example, if \"Griseofulvin is preferred for the infection\" is a correct answer of the above question, the dataset labels \"is preferred for\" \"cures\" .",
"Holt (2018) re-annotates Levy's dataset and forms a new dataset with 18,407 examples (3,916 positive and 14,491 negative), referred as Levy/Holt Dataset.",
"The dataset is split into validation set (30%) and test set (70%) as Hosseini et al. (2018) in our experiments.",
"Berant et al. (2011) annotates all the entailment relations in their corpus, which generates 3,427 positive and 35,585 negative examples, referred as Berant Dataset.",
"Their entity types do not exactly match with the first level of FIGER types hierarchy, and therefore a simple hand-mapping by Hosseini et al. (2018) is used to unify the predicate types.",
"calculating the area under the curves (AUC) with changing the classification threshold of global entailment scores.",
"Hosseini et al. (2018) argues that the AUC of Precision-Recall Curve (PRC) for precisions in the range [0 . 5 , 1] , as predictions with higher precision than random are more important for the downstream applications.",
"Therefore, we report both the AUC of PRC for precisions in the range [0 . 5 , 1] and the traditional AUC of ROC, which is more widely used in evaluation of other tasks.",
"We compare our model with existing entailment graph construction methods (Berant et al., 2011; Hosseini et al., 2018, 2019, 2021) and the best local distributional method, Balanced Inclusion (Szpek-tor and Dagan, 2008) , referred as BInc.",
"We also include ablation variants of our EGT2, including local models with or without fine-tuning.",
"For local transformer-based LM, EGT2 uses DeBERTa (He et al., 2020) implemented by the Hugging Face transformers library (Wolf et al., 2019) 3 , which has been fine-tuned on MNLI (Williams et al., 2018) dataset.",
"In order to adapt it to the special type-oriented sentence pattern generated by S , we expand the validation set by extracting all of the predicates, generating sentence pairs by generator S for every two predicates, and checking whether they are labeled as paraphrase or entailment in the Paraphrase Database collection (PPDB) (Pavlick et al., 2015).",
"We split 80% of the generated corpus to fine-tune the DeBERTa with Cross-Entropy Loss, and the rest as the validation set of fine-tuning process.",
"The fine-tuning learning rate f = 10 5 , and the process is terminated while the F 1 score of entail on validation set does not increase in 10 epochs or training after 100 epochs.",
"For global soft transitivity constrains, we use SGD (Cun et al., 1998) to optimize the scores W in entailment graphs with loss function L in Eq.",
"5 for e = 5 epochs.",
"The SGD learning rate = 0 .",
"05 , the coefficient = 1 , and the confidence threshold = 0 .",
"02 .",
"The hyper-parameters are selected based on Levy/Holt validation dataset.",
"More implementation details are given in Appendix B. For testing, if one or both predicates of the example do not appear in the corresponding typed entailment graph, we handle the example as un-3 https://github.com/huggingface/transformers 5904 Table 2: Model performance on Levy/Holt Dataset and Berant Dataset.",
"typed one by resorting to its average score among all typed entailment graphs.",
"This setting is also used for all local and global methods in the experiments for fair comparison.",
"We summarize the model performances on both Levy/Holt and Berant datasets in Table 2. All global methods, including Hosseini et al. (2018), Hosseini et al. (2019) and EGT2, perform better than their corresponding local methods, which demonstrates the effect of global constraints in alleviating the data sparsity.",
"Although using the same extracted entailment relations with Hosseini et al. (2019), our EGT2-Local significantly outperforms previous local methods because of the high-quality entailment scores generated by reliable fine-tuned textual entailment LM.",
"On the whole, EGT2 with transitivity constraint L 3 outperforms all the other models on both Levy/Holt Dataset and Berant Dataset with AUC of PRC, while EGT2L 1 performs best with AUC of ROC.",
"All of three soft transitivity constraints boost the performance of local model on all evaluation metrics, which shows that making use of transitivity rule between entailment relations improves the local entailment graph.",
"EGT2L 1 or EGT2L 3 performs better than EGT2-L 2 , which indicates that involving the premises a b and b c into loss function is also important for using transitivity constraints.",
"methods and the Precision-Recall Point of Berant et al. (2011) on the two evaluation datasets are shown in Figure",
"2(a) and",
"2(b) respectively.",
"The local and global models of EGT2 consistently outperform previous state-of-the-art methods on all levels of precision and recall, which indicates the effect of our local model based on textual entailment and global soft constraints based on transitivity.",
"The EGT2-Local achieves slightly higher precision than global models in the range recall < 0 .",
"5 , but its precision drops quickly if we require higher recall and therefore leads to worse performance than global models.",
"The result indicates that global models with transitivity constraints gain significant improvement on recall with far less expense on precision than EGT2-Local.",
"As described in Section 4.4, a new corpus is generated for fine-tuning the local model.",
"We claim that the fine-tuning corpus helps to improve the performance of EGT2-Local by adapting it to the special sentence pattern by S , rather than offering additional data to fit the distribution of target datasets as traditional training datasets do.",
"To prove this, we also test a simple supervised method, labelled as Local-Sup, which fits a 2-layers feedforward neural network on the fine-tuning corpus with cosine similarity, Weed, Lin and BInc scores as features.",
"If the corpus acts as training dataset, the performance of Local-Sup should be obviously better than its unsupervised features.",
"As shown in Table 2, Local-Sup does not perform significantly better on Levy/Holt Dataset, and even worse on Berant Dataset than BInc, which is one of the inputting features of Local-Sup.",
"The result illustrates the difference between the fine-tuning corpus and the evaluation datasets, and shows that the corpus plays a role as pattern adapting corpus rather than training dataset.",
"In Section 1, we expect that the improvement of soft transitivity constraints is attributed to the alleviation of data sparsity in corpus.",
"To examine the sparsity before and after the applying of transitivity constraints, we count how many the positive and negative entailment relations in the Levy/Holt test set exactly appear in the local and global entailment graph respectively, and show the counting results in Table 3. All three soft transitivity constraints help to find more entailment relations than 5905 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Precision 0.2 0.4 0.6 0.8 1.0 R e c a ll",
"local entailment graph and therefore achieve better performance on the evaluation datasets.",
"Although EGT2L 2 finds the most entailment relations in the dataset in global stage, it finds more negative examples concurrently and thus performs worse than L 1 and L 3 as shown in Table 2. On the other hand, EGT2L 1 and EGT2L 3 obtain more proportions of positive examples by considering premise relations during the gradient calculation.",
"The low confidence of hypothesis relationship W a,c should be helpful to detect spurious premises W a,b and W b,c .",
"Therefore, EGT2L 3 slightly outperforms EGT2L 1 as the gradients of W a,b and W b,c in L 3 are related to the hypothesis relationship W a,c .",
"We have also applied the soft transitivity constraints on the local graph with BInc and Hosseini et al. (2019), but observed only slightly improvement of performance, as .",
"155 .",
"157 and .",
"167 .",
"170 for EGT2L 3 on PRC of Levy/Holt Dataset respectively.",
"Comparing it with the significant improvement based on EGT2-Local, we claim that the high-quality local entailment graphs are the basis of effective soft transitivity constraints.",
"Hosseini et al. (2018) have shown improvement of performance based on their local graphs.",
"However, due to the distinct distribution and scales of local scores, their constraints are computationally unavailable on our local graphs , partially due to the high overhead for cross-graph calculation.",
"Generally, the logical entailment should be directional which makes it different from paraphrase .",
"Although EGT2 significantly improves the performance on two datasets, it is unclear whether the improvement comes from the directional entailment cases, or only paraphrasing ones, as the local LM might be strong in recognizing paraphrases but weak in recognizing directional entailment (Cabezudo et al., 2020).",
"To examine how EGT2 works under directional cases, we eliminate those paraphrase predicate pairs a b with label l { T rue, F alse } from Levy/Holt test dataset, if the corresponding b a is also appearing and labelled as l in the test dataset.",
"The rest directional section of Levy/Holt Dataset contains 8,140 examples (753 positive and 7,387 negative).",
"We 5906 expect that this section should be more challenging as undirectional paraphrase becomes unavailable.",
"We report the model performance on the directional section of Levy/Hold Dataset in Table 4. We can see that previous baselines do not perform well on AUC of PRC, which indicate that it is difficult for them to reach precision > 0 .",
"5 .",
"Meanwhile, EGT2-Local and EGT2L 3 outperform all baselines on the directional section of Levy/Holt Dataset.",
"Unsurprisingly, all models' AUC scores on the directional section become lower compared on the original Levy/Holt Dataset, showing the challenges of directional entailment inference.",
"Two EGT2 variants maintain high performance, which proves that our local model can learn to capture directional predicate entailment better than distributional baselines, and the global soft constraint also helps to make directional entailment inference.",
"We randomly sample and analyze 100 false positive (FP) examples and 100 false negative (FN) examples from Levy/Holt test set according to predictions by EGT2L 3 .",
"We manually setup the decision threshold as 0.574 to make the precision level close to 0.76, which is the same as Berant et al. (2011).",
"The major error types are shown in Table 5. Although the global constraint is used, about half of FN errors are due to the data sparsity where the entailment relations are not found in the entailment graph.",
"When compared with the results in Hosseini et al. (2018), EGT2L 3 reduces the ratio of Sparsity in FN errors from 93% to 46% with stronger alleviation ability of data sparsity.",
"About a quarter of FN are caused by the Under-weighted Relations in the graph, where EGT2 finds the entailment relations but gives them scores lower than the threshold.",
"The rest of FN are related to Dataset Wrong Labels which happens when the predicates are indeed entailed by others but labelled as negative, or the predicate pairs are incomplete.",
"Most of FP errors are caused by the Spurious Correlation as these relations are too fraudulent for EGT2 to see through their spurious relationships and consequently given high scores.",
"A few FP errors are caused by Lemma-based Processing in LM inevitably, but the ratio still reduces from 12% in Hosseini et al. (2018) to 5%.",
"The result indicates that our fine-tuned LM can handle the predicates even with similar surface forms and contexts better than parsing-based distributional local features.",
"In this paper, we propose a novel typed entailment graph learning framework, EGT2, which uses language models fine-tuned on textual entailment tasks to calculate local entailment scores and applies soft transitivity constraints to learn global entailment graphs in gradient-based method.",
"The transitivity constraints are achieved by carefully designed loss functions, and effectively boost the quality of local entailment graphs.",
"By using the fine-tuned local LM and global soft constraints, EGT2 does not rely on distributional features, and can be easily applied to large-scale graphs.",
"Experiments on standard benchmark datasets show that EGT2 achieves significantly better performance than existing state-of-the-art entailment graph methods.",
"This work is supported in part by National Key R&D Program of China (No. 2020AAA0106600) and NSFC (62161160339).",
"We would like to thank the anonymous reviewers and action editors for their helpful comments and suggestions."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Sequence-to-sequence models for abstractive summarization have been studied extensively, yet the generated summaries commonly suffer from fabricated content, and are often found to be near-extractive.",
"We argue that, to address these issues, the summarizer should acquire semantic interpretation over input, e.g., via structured representation, to allow the generation of more informative summaries.",
"In this paper, we present ASGARD , a novel framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.",
"We propose the use of dual encoders a sequential document encoder and a graph-structured encoderto maintain the global context and local characteristics of entities, complementing each other.",
"We further design a reward based on a multiple choice cloze test to drive the model to better capture entity interactions.",
"Results show that our models produce significantly higher ROUGE scores than a variant without knowledge graph as input on both New York Times and CNN/Daily Mail datasets.",
"We also obtain better or comparable performance compared to systems that are fine-tuned from large pretrained language models.",
"Human judges further rate our model outputs as more informative and containing fewer unfaithful errors.",
"Abstractive summarization aims to produce concise and informative summaries with the goal of promoting efficient information consumption and knowledge acquisition (Luhn, 1958).",
"Significant progress has been made in this area by designing sequence-to-sequence-based neural models for single-document abstractive summarization (Gehrmann et al., 2018; Liu et al., 2018; Liu and Lapata, 2019).",
"However, due to the limitations of model structure and word prediction-based Input Article of New York Times: John M. Fabrizi , the mayor of Bridgeport, admitted on Tuesday that he had used cocaine and abused alcohol while in office.",
"Mr. Fabrizi , who was appointed mayor in 2003 after the former mayor, Joseph P. Ganim, went to prison on corruption charges, said he had sought help for his drug problem about 18 months ago and that he had not used drugs since.",
"About four months ago, he added, he stopped drinking alcohol .",
"drinking alcohol and sought help for his drug problem about 18 months ago.",
"learning objectives, these models frequently produce unfaithful content (Cao et al., 2018) and near-extractive summaries (See et al., 2017; Kryscinski et al., 2018).",
"These observations suggest that existing models lack semantic interpretation over the input, which is critical for summarization.",
"We argue that the generation of informative and succinct abstracts requires structured representation to facilitate the connection of relevant subjects, and the preservation of global context, e.g. entity interactions and topic flows.",
"Take Fig. 1 as an example.",
"Complex events related with the same entity may span multiple sentences, making it challenging for existing sequential models to capture.",
"A graph representation, on the contrary, produces a structured summary and highlights the proximity of relevant concepts.",
"To this end, we present ASGARD , a framework for Abstractive Summarization with Graph-Augmentation and semantic-driven RewarD.",
"1 Under the encoder-decoder framework, we enhance the regular document encoder with a separate graph-structured encoder to maintain the global context and local characteristics of entities by using the outputs from an open information extraction (OpenIE) system.",
"Specifically, we experiment with two graph variants, one mainly capturing entities' document-level interactions and the other reflecting such interactions within each paragraph plus topic shifts across paragraphs.",
"Both graphs can capture interactions among entities that are positioned far from one another in the document and significantly reduce redundancy, as shown in Fig. 1. The document encoder and the graph encoder then cooperate during abstract generation, wherein the model is trained to identify salient content by aligning graphs with human summaries.",
"Though structured representation has been studied before for summarization (Fer-nandes et al., 2019), to the best of our knowledge, we are the first to utilize graph neural networks to explicitly encode entity-centered information for abstractive summary generation.",
"Moreover, we propose a novel multi-choice cloze reward to drive the model to acquire semantic understanding over the input .",
"Concretely, we design cloze questions by removing pairwise entities that are connected with a predicate or co-occur in a human summary sentence, whereas prior work only considers single entities to construct questions (Eyal et al., 2019).",
"In tandem with our graph encoding of knowledge, the cloze reward further facilitates the acquisition of global entity interactions with reinforcement learning.",
"We carry out automatic and human evaluations on popular summarization datasets.",
"Models based on ASGARD yield significantly better ROUGE scores (Lin and Hovy, 2003) than a variant without access to the knowledge graph on two popular news summarization datasets, New York Times 1 Our code is available at https://github.com/luyang-huang96/GraphAugmentedSum.",
"corpus and CNN/Daily Mail dataset.",
"Moreover, ASGARD models attain performance better than or comparable to others that are fine-tuned from large pretrained language models, including BERT-Sum (Liu and Lapata, 2019), UniLM (Dong et al., 2019), and BART (Lewis et al., 2019).",
"Human judges further confirm that our models generate more informative summaries with less unfaithful errors than their counterparts without the graph encoder.",
"Importantly, we find that automatic evaluation metrics only weakly correlate with these errors, implying that new evaluation methods are needed to better gauge summary quality.",
"The rest of the paper is organized as follows.",
"We describe related work in the next section ( 2).",
"We then discuss the knowledge graph construction in 3 and formulate our graph-augmented summarization framework in 4.",
"In 5, we introduce reinforcement learning with cloze reward.",
"Experiments and results are presented in 6 and 7. Finally, we conclude in 8. 2 Related Work Graph-Augmented Summarization and Generation.",
"Graph structures have long been used for extractive summarization, such as in Textrank (Mi-halcea and Tarau, 2004) and Lexrank (Erkan and Radev, 2004).",
"For neural models, Tan et al. (2017) design graph-based attention to identify important sentences.",
"For generating abstractive summaries, Fernandes et al. (2019) enhance a sequence-based encoder with graph neural networks (GNNs) to consider token-level entity types, however, entity interactions are largely ignored.",
"On multi-document summarization, Fan et al. (2019) demonstrate the usefulness of encoding a linearized knowledge graph from OpenIE outputs.",
"In this work, we design a graph encoder, which improves upon Graph Attention Networks (GATs) (Velickovic et al., 2018), to capture the global context in a more effective manner.",
"Also related is the graph-to-sequence framework that has been adopted for text generation (Song et al., 2018).",
"Both Gated Graph Neural Networks (GGNNs) (Beck et al., 2018) and Graph Convolutional Networks (GCNs) (Damonte and Cohen, 2019) are shown to be effective in generating sentences from AMR graphs.",
"Since Graph Attention Networks can better handle sparse graphs, they are used by Koncel-Kedziorski et al. (2019) with a transformer model to create scientific paper abstracts from knowledge graphs.",
"Here we use graphs in addition to document encoder, both carrying complementary information for summarization.",
"Reinforcement Learning and QA Reward for Abstractive Summarization.",
"As pointed out by Ranzato et al. (2016), word-level maximum likelihood training brings the problem of exposure bias.",
"Recent work utilizes reinforcement learning to directly optimize the model to maximize the informativeness of summaries by using different forms of ROUGE scores (Paulus et al., 2018; Chen and Bansal, 2018; Sharma et al., 2019).",
"However, ROUGE does not always distinguish good summaries from bad ones (Novikova et al., 2017), and ignores entity interactions.",
"Since question answering (QA) has been used for summary evaluation (Narayan et al., 2018), and is shown to correlate with human judgment of summaries qualities (Eyal et al., 2019), QA-based rewards have been studied for summarization model training.",
"Arumae and Liu (2019) demonstrate that using fill-in-the-blank questions by removing entities or root words leads to improved content selection.",
"Scialom et al. (2019) consider a similar setup, but use both F1 score and QA system confidence as rewards in abstractive summarization.",
"Previous work, however, mainly focuses on single entities or words in human-written summaries, thereby losing contexts and relations.",
"Moreover, fill-in-the-blank questions by prior work give credits only when the answers exactly match the ground-truths, thus causing inaccuracies for rephrased answers and discouraging abstract content generation.",
"In contrast, we design a semantic-driven cloze reward by measuring how well a QA system can address multiple choice cloze questions which better encode entity interactions and handle paraphrased answers .",
"To construct a knowledge graph from an input document, we utilize Stanford CoreNLP (Manning et al., 2014) to first obtain outputs from corefer-ence resolution and open information extraction (OpenIE) models (Angeli et al., 2015).",
"Note that we do not conduct global entity linking across documents.",
"Next, we take the (cid:104) subject, predicate, object (cid:105) triples extracted by OpenIE and remove any triple whose argument (subject or object) has more than 10 words.",
"If two triples differ only by one argument, and the arguments overlap, we keep the longer triple.",
"We begin constructing the graph by treating subjects and objects as nodes connected by directed edges, with predicates as attributes.",
"We further collapse coreferential mentions of the same entity into one node.",
"With this, we can localize salient content related to each entity as well as make connections of spread-out entities through graph paths.",
"In this section, we describe our graph-augmented abstractive summarization framework, as displayed in Fig. 2. Our model takes as input a document, represented as a sequence of tokens x = { x k } , and a knowledge graph G consisting of nodes { v i } .",
"x and G are separately consumed by a document encoder and a graph encoder, as presented in 4.1.",
"Importantly, we present two types of graphs: DOCGRAPH , focusing on the global context, and SEGGRAPH , which additionally captures topic shift.",
"The summary decoder then generates an abstractive summary by attending to both the document and the graph ( 4.2).",
"In 4.3, we formulate a maximum likelihood training objective which leverages the detection of salient nodes in the graph.",
"output as token embeddings.",
"We then employ a single-layer bidirectional LSTM (BiLSTM) over token embeddings, producing encoder hidden states h k at time step k .",
"Graph Encoder.",
"Built on the graph constructed in 3, we create nodes for predicates as done in previous graph-to-sequence work (Beck et al., 2018) to reduce model parameters.",
"Directed, unlabeled edges are added from subject to predicate, and from predicate to object.",
"We further add reverse edges and self-loops to enhance the information flow, and this forms the graph G .",
"Node Initialization.",
"Each node often contains multiple mentions of an entity; we thus initialize node representation v i by using the average embedding of its tokens.",
"We leverage document encoder hidden states h k as the contextual representation of tokens.",
"Number of mentions in the node is added as an extra encoding to v i , to signify entity salience.",
"Contextualized Node Encoding.",
"Our graph encoder improves upon Graph Attention Networks (GATs) (Velickovic et al., 2018) by adding residual connections between layers as discussed in Koncel-Kedziorski et al. (2019).",
"Each node v i is represented by a weighted average of its neighbors: v i = v i + (cid:107) Nn =1 (cid:88) v j N ( v i ) ni,j W 0 ,n v j (1) ni,j = softmax(( W 1 ,n v i ) T ( W 2 ,n v j )) (2) where (cid:107) Nn =1 denotes the concatenation of N heads, each producing a vector of the same dimension as v i .",
"We use N = 4 in our experiments with two layers of GATs.",
"N ( v i ) denotes the neighbors of v i in graph G .",
"W are trainable parameters.",
"The graph encoder described above encodes document-level global context by merging entity mentions throughout the document and capturing their interactions with graph paths.",
"It is henceforth denoted as DOCGRAGH .",
"Encoder Extension to Capture Topic Shift (SEGGRAGH ).",
"Modeling topic transitions and recurrences enables the identification of notable content, thus benefiting summarization (Barzilay and Lee, 2004).",
"Since paragraphs naturally divide a document into different topic segments, we extend DocGragh by first encoding each paragraph as a subgraph G p (for the p -th paragraph) using the same graph encoder, and then connecting all subgraphs with a BiLSTM.",
"If two nodes in separate subgraphs refer to the same entity, they are initialized with the same embedding (as in the first oc-currence).",
"Concretely, we first apply max-pooling over all nodes in subgraph G p from the outputs of the final GAT layer; the max-pooling results are then used as inputs for a BiLSTM to produce the final subgraph representation h gp for G p .",
"Our summary decoder uses a single-layer unidirectional LSTM with a hidden state s t at step t ; it generates summary tokens recurrently by jointly attending to the input document and the graph.",
"Attending the Graph.",
"At each decoding step t , we compute a graph context vector c vt with the attention mechanism (Bahdanau et al., 2014): c vt = (cid:88) i a vi,t v i (3) a vi,t = softmax( u T 0 tanh( W 3 s t + W 4 v i )) (4) where u are also trainable parameters.",
"Attending the Document.",
"Similarly, the document context c t is computed over input tokens by additionally considering the graph context c vt : c t = (cid:88) k a k,t h k (5) a k,t = softmax( u T 1 tanh( W 5 s t + W 6 h k + W 7 c vt )) (6) Token Prediction.",
"Graph and document context vectors, treated as salient content summarized from both sources, are concatenated with the decoder hidden state s t to produce the vocabulary distribution P vocab : P vocab = softmax( W out [ s t | c t | c vt ]) (7) We use weight-sharing between the input embedding matrix and the matrix W out to allow reusing linguistic knowledge as proposed by Paulus et al. (2018).",
"We further add a copy mechanism similar to See et al. (2017), with copy probability as: P copy = ( W copy [ s t | c t | c vt | y t 1 ]) (8) where y t 1 denotes the embedding for the token predicted at step t 1 .",
"Modified Hierarchical Attention for SegGraph.",
"As mentioned in 4.1, SegGraph captures content salience by modeling topic shift across paragraphs.",
"We thus seek to leverage paragraph-level importance to redistribute the node attentions, e.g., giving more attentions to nodes in important paragraphs.",
"In particular, we utilize hierarchical attention (Hsu et al., 2018), where we first calculate attention a gt over subgraphs as done in Eq.",
"3 by replacing v i with subgraph representation h gp .",
"We then combine subgraph attentions a gt with the previously calculated attentions a vt for nodes in the subgraph using scalar multiplication and renormalization over all nodes in input.",
"This results in the new attention weights a vt , which are used to obtain graph context vector c vt as done in Eq.",
"3 for SegGraph.",
"Node Salience Labeling.",
"In addition to modeling local characteristics of nodes, we further enhance the model by adding an objective to label node salience, e.g., whether the entities in a node are mentioned in the reference summaries.",
"We introduce a soft mask layer over each node before it is passed into the graph encoder, to signify its salience.",
"This layer, serving as an information gate, predicts a real number m i in [0 , 1] for each node v i and multiplies to itself, i.e. m i v i .",
"For node v i , the mask is calculated as m i = sigmoid ( u 2 v i ) .",
"During training, the gold-standard mask m i for a node is set to 1 if it contains at least one content word in the reference summary; otherwise, 0 .",
"We add the following objective for all nodes in the dataset D : L mask = 1 N v (cid:88) v i D m i log( m i )+ (1 m i ) log(1 m i ) (10) where N v represents the number of nodes in the dataset.",
"Finally, the ML training objective takes the following form: L ml = L mask + L seq .",
"After maximum likelihood training with L ml , we further design a multiple choice cloze reward in a second-stage reinforcement learning (RL), leading the model to generate more faithful and informative summaries.",
"For RL, we use a self-critical policy gradient algorithm (Rennie et al., 2017).",
"During training, two summaries are generated: first, a summary y s , sampling tokens based on the probability distribution p ( y s | x ; ) at each decoding step; and second, a baseline summary y which greedily selects the tokens of the highest probability at each step.",
"The objective of RL is defined based on the rewards of the two summaries, R ( y s ) and R ( y ) , as follows: L rl = 1 | D | (cid:88) ( y s , x ) D ( R ( y s ) R ( y )) log p ( y s | x ; ) (11) Our reward function uses the combination of ROUGE and the multiple choice cloze score introduced below, i.e., R ( y ) = R rouge ( y ) + cloze R cloze .",
"For ROUGE, it considers F1 scores of ROUGE-1, ROUGE-2, and ROUGE-L calculated against the reference summary, and takes the form of R rouge ( y ) = 1 R rouge 1 ( y ) + 2 R rouge 2 ( y ) + (1 1 2 ) R rouge L ( y ) .",
"Multiple Choice Cloze Reward.",
"Here, we present a novel multiple choice cloze reward to work with our knowledge graph and guide the summarization model towards improved awareness of entity interactions.",
"We treat the system-generated summary as context .",
"We provide a set of questions automatically constructed from the corresponding reference summary written by a human.",
"We separately train a question answering (QA) model to address the questions by reading the context.",
"Intuitively, if the system summary shares salient information with the reference, the QA model will assign the correct answers with high probability.",
"We decide to use the average probability of the correct answers as our cloze reward .",
"Below, we give details on how to construct the questions and candidate answers with examples shown in Fig. 3. Question Construction.",
"We run the OpenIE tool on human-written summaries, retaining triples with arguments not longer than 5 words.",
"For each triple of (cid:104) subject, predicate, object (cid:105) , we create two types of questions: (1) argument pair questions , by removing the subject and object, and (2) predicate questions , by removing the predicate.",
"Candidate Answer Construction.",
"Because fill-in-the-blank style cloze may incorrectly penalize QA systems with answers paraphrased from the ground-truth, we opt for a multiple choice cloze.",
"We construct three candidate answers in addition to the Reference Summary : Federal Reserve increases interest rates .",
"Salient Context : Federal Reserve signals positivity about the market.",
"Fed increases benchmark interest rate again this May.",
"American economy keeps the high growth rate.",
"Jerome H. Powell discussed potential risks .",
"1. (cid:104) Federal Reserve , signals , positivity (cid:105) 2. (cid:104) American economy , keeps , the high growth rate (cid:105) 3. (cid:104) Jerome H. Powell, discussed , potential risks (cid:105) Multiple Choice Cloze Questions : Argument Pair Question : increases .",
"A. Federal Reserve , interest rates ( (cid:68) ) B. interest rates , Federal Reserve (swapping args in A) C. American economy , interest rates (replacing arg using triple 2) D. Federal Reserve , potential risks (replacing arg using triple 3) Predicate Question : Federal Reserve interest rates.",
"A. increases ( (cid:68) ) B. signals C. keeps D. discussed Figure 3: Sample construction of multiple choice cloze questions and candidate answers from reference summary and salient context.",
"gold-standard from the salient context , which are summary-worthy sentences selected from the input.",
"Specifically, we use greedy search to select the best combination of sentences that maximizes ROUGE-2 F1 with reference to human summary.",
"We further include a sentence in the salient context if it has a ROUGE-L recall greater than 0 .",
"6 when compared with any sentence in the reference.",
"We first select OpenIE triples from the salient context and filter out those that have any overlapping content word with the correct answer.",
"For argument pair questions , we create one candidate answer by swapping the subject and the object (e.g. candidate B as in Fig. 3) and two candidates by replacing the subject or the object with another argument of the same role extracted from the salient context (e.g. candidates C and D).",
"If not enough answers are created, we further consider randomly selecting sentences from the input.",
"For predicate questions , we use predicates in other triples from the context as candidate answers.",
"Among all candidates, we select the three that are able to construct the most fluent questions using perplexity predicted by BERT (Devlin et al., 2019).",
"In case reference summaries do not yield OpenIE triples, we create additional entity pair questions.",
"We remove two co-occurring entities from the summary and create three candidate answers in the same way as described above.",
"QA Model.",
"We fine-tune RoBERTa (Liu et al., 2019) to build our QA model.",
"We use the salient context described above as the context for training.",
"We then concatenate the context, the question, and each of the four candidate answers, and pass the final [CLS] representation through a fully-connected layer, from which the answer is predicted.",
"Datasets.",
"We experiment with two popular summarization datasets with summaries containing multiple sentences: the New York Times annotated corpus (NYT) (Sandhaus, 2008) and the CNN/Daily Mail dataset (CNN/DM) (Hermann et al., 2015).",
"We follow the preprocessing steps and experimental setups from prior work (Paulus et al., 2018; See et al., 2017) for both datasets.",
"For NYT, the training, validation, and test sets contain 588 , 909 , 32 , 716 , and 32 , 703 samples.",
"For CNN/DM, the numbers are 287 , 188 , 13 , 367 , and 11 , 490 .",
"To train our cloze QA model for NYT, we construct 1 , 414 , 336 question-answer pairs from human-written summaries in the training set based on the method described in 5. On CNN/DM, we collect 1 , 361 , 175 question-answer samples from the training set.",
"For both datasets, we set aside 20 , 000 samples as a validation set and 20 , 000 samples as a test set.",
"Our QA model achieves an accuracy of 97% on NYT and 95% on CNN.",
"Training Details and Parameters.",
"We use the base version of RoBERTa model to extract token features for all experiments.",
"We truncate input articles to 1024 (NYT) and 512 (CNN/DM) BPEs.",
"We employ LSTM models with 256 -dimensional hidden states for the document encoder ( 128 each direction) and the decoder.",
"For the residual connection of the graph encoder, we use 4 heads, each with a dimension of 72 .",
"For DocGraph training and inference, we prune isolated graphs with fewer than three nodes to increase robustness and reduce redundancy.",
"We set 1 = 0 , 2 = 0 .",
"75 on NYT and 1 = 0 .",
"33 , 2 = 0 .",
"33 on CNN/DM after tuning on the validation set.",
"For both datasets, we set cloze = 0 .",
"05 .",
"More details about parameters and graph statistics are in the Appendices.",
"Baselines and Comparisons.",
"For both datasets, System ROUGE-1 ROUGE-2 ROUGE-LLEAD -3 32.59 16.49 29.17 POINTGEN + COV 41.06 25.71 37.28 DEEPREINFORCE 47.03 30.72 43.10 BOTTOMUP 47.38 31.23 41.81 DCA 48.08 31.19 42.33 SENECA 47.94 31.77 44.34 BART 53.25 36.61 48.78 Our Models NOGRAPH 47.15 32.02 43.65 + R rouge 49.17 33.19 46.44 ASGARD-DOC 49.51 33.82 45.72 + R rouge 50.18 33.91 46.84 + R rouge + R cloze 50.59 33.98 48.24 ASGARD-SEG 49.54 33.84 45.75 + R rouge 50.47 33.95 47.43 + R rouge + R cloze 51.29 34.97 48.26 Table 1: Automatic evaluation with ROUGE on New York Times.",
"we include an extractive baseline LEAD -3. We further add the following abstractive models for comparison: (1) a pointer-generator model with coverage (See et al., 2017) (POINTGEN + COV ); (2) a deep reinforcement learning-based model (Paulus et al., 2018) (DEEPREINFORCE ); (3) a bottom-up model (Gehrmann et al., 2018) (BOTTOMUP ); (4) a deep communicating agents-based summarization model (Celikyilmaz et al., 2018) (DCA).",
"We also report results by fine-tuning BART model (Lewis et al., 2019).",
"In Lewis et al. (2019), fine-tuning is only performed on CNN/Daily Mail.",
"We apply the same method for NYT.",
"For NYT, we add results by SENECA model (Sharma et al., 2019) from our prior work, which previously achieved the best ROUGE-2.",
"On CNN/Daily Mail, we include comparisons of a two-stage fine-tuned model (first on an extractor, then on an abstractor) with BERT (Liu and Lapata, 2019) (BERTSUMEXTABS ), and a unified pretrained language model for generation (Dong et al., 2019) (UNILM).",
"In addition to ASGARD-DOC and ASGARD-SEG , which are trained with an ML objective, we report results trained with ROUGE as the reward ( R rouge ), and with an additional cloze reward ( R cloze ).",
"Lastly, we consider a variant NOGRAPH by ablating the graph encoder.",
"Results on NYT.",
"As displayed in Table 1, our ASGARD-SEG model trained with ROUGE and cloze rewards achieves better ROUGE scores (Lin and Hovy, 2003) than all other comparisons except the fine-tuned BART.",
"However, our ASGARD-SEG 's ROUGE-L score is comparable to BART.",
"This indicates the effectiveness of our graph-augmented summarization framework.",
"Moreover, both our ASGARD-DOC and ASGARD-SEG models yield significantly higher ROUGE scores than the variant without the graph encoder (NOGRAPH ).",
"This demonstrates the benefit of using structured representation to encode entity interactions.",
"Furthermore, both ASGARD-DOC and ASGARD-SEG with cloze reward ( R cloze ) obtain significantly higher scores compared to the models trained with ROUGE reward only.",
"This signifies that our multi-choice cloze reward can guide better semantic interpretation of content, leading to the generation of more informative summaries.",
"We also find that ASGARD-SEG outperforms ASGARD-DOC , indicating that ASGARD-SEG better captures topic drift through multiple paragraphs.",
"Results on CNN/DM.",
"We observe similar trends on the CNN/DM articles as shown in Table 2. No-NYT CNN/DM 55 60 65 70 75 80 85 90 C l o z e S c o r e 91.1 90.8 68.3 66.7 72.7 75.9 71.0 75.7 Probability NYT CNN/DM 70 75 80 85 90 95 100 97.8 96.6 78.7 76.1 82.1 84.2 80.9 83.9 Accuracy Human NoGraph+ R rouge ASGARD-doc+ R rouge + R cloze ASGARD-seg+ R rouge + R cloze Figure 4: Evaluation with QA model prediction probability and accuracy on our multiple choice cloze test, with higher numbers indicating better summaries.",
"ticeably, ASGARD-DOC trained with the combined ROUGE and cloze reward produces better ROUGE scores than BERTSUMEXTABS and UNILM, which are carefully fine-tuned from large pretrained language models, and the numbers are also comparable to the fine-tuned BART.",
"Evaluation with Cloze Test.",
"We further evaluate model-generated summaries with our proposed cloze test.",
"Here, we report two scores in Fig. 4: the average probability of the correct answers output by our QA model, and its prediction accuracy .",
"We first calculate one score per summary, then take the average over all summaries.",
"We can see that our models with graph encoders perform better than the variant without it.",
"We further conduct human evaluation to analyze the informativeness and fluency of the generated summaries, as well as to investigate the unfaithful errors made by different models.",
"We sample 100 articles from the NYT test set and hire three native or fluent speakers of English to rate summaries generated by our two systems, NOGRAPH + R rouge and ASGARD-SEG + R rouge + R cloze , along with outputs by BART and human-written summaries (presented in random order).",
"After reading the articles, each judge scores summaries on a Likert scale from 1 (worst) to 5 (best) on informativeness whether the summary covers important information from the input, and fluency whether the summary is grammatically correct.",
"We consider three types of unfaithful errors:",
"(i) hallucination error creating content not present in the input,",
"(ii) out-of-context error generating facts without including required context or within System Inf.",
"Summary by Human: Family Court in Burlington County, NJ, rules that lesbian couple can list both their names as parents on birth certificate of newborn; state attorney general's office drops opposition to move; court ruling negates couple's having to go through adoption proceedings to establish full parental rights for both.",
"NoGraph + R rouge : Lesbian couple in South Jersey wins court approval to have both of their names listed as parents on birth certificate of their newborn.",
"it will no longer oppose such applications ASGARD-doc + R rouge + R cloze : Lesbian couple in South Jersey, won court approval to have both of their names listed as parents on birth certificate of their newborn.",
"attorney general's office says it will no longer oppose such applications ASGARD-seg + R rouge + R cloze : Lesbian couple in South Jersey wins court approval to have both of their names listed as parents on birth certificate of newborn and attorney general 's office will no longer oppose such applications.",
"decision stems from Oct 0 ruling by New Jersey Supreme Court holding that same-sex couples are entitled to same legal rights and protections as heterosexual couples Figure 5: Sample summaries for an NYT article.",
"incorrect context, and",
"(iii) deletion or substitution error mistakenly deleting or substituting subjects, objects, or clauses.",
"We ask the annotators to label each type as 1 for existence of errors, and 0 otherwise.",
"Detailed guidelines are in the Appendices.",
"From Table 3, we can see that our ASGARD-SEG model obtains better scores in informativeness and fluency, compared to the variant without the graph encoder.",
"This indicates the effectiveness of leveraging knowledge graph representation.",
"Sample output summaries by our models can be found in Fig. 5. Meanwhile, fine-tuned BART model produces outputs with similar informativeness and fluency of human-constructed summaries, suggesting a future direction of building our model on top of a large-pretrained encoder-decoder model.",
"For unfaithful errors , we report the percentage of errors calculated by majority voting (i.e., more than one annotator vote as incorrect).",
"First, we find that our ASGARD-SEG model has a comparable error pattern as human summaries.",
"Specifically, for out-of-context and deletion or substitution errors, our graph-enhanced model produces significantly fewer mistakes in these categories, compared to the model without graph information.",
"This implies that knowledge graph-enhanced models can improve summary faithfulness.",
"Interestingly, human-written summaries are also discerned to contain a nontrivial amount of hallucination errors.",
"After inspection, we find that human tends to leverage world knowledge to include content that is not covered by the articles.",
"For instance, for an article discussing events in Boston, the human writer may describe them as happening in Massachusetts in the summary.",
"We further plot the distributions of automatic evaluation scores regarding the three types of unfaithful errors based on majority voting in Fig. 6. First, summaries with out-of-context and deletion or substitution errors receive lower cloze and ROUGE scores overall.",
"Nevertheless, with regard to hallucination errors, we do not see such pattern; there is even a slightly reversed relation with both cloze scores and ROUGE scores, wherein summaries with more hallucination errors tend to score higher.",
"This echos our previous observation that human summaries can be hallucinatory too, where world knowledge is used for writing the summaries.",
"2 Furthermore, we find a weak correlation between the three variants of ROUGE scores and three types of errors, e.g., the minimum and the maximum values of Pearson's r are 0 .",
"19 and 0 .",
"14 .",
"This suggests that new metrics should be designed to better gauge summary quality.",
"We plan to study this direction in future work.",
"2 During human evaluation, we do not ask human judges to distinguish the source of hallucination errors, i.e. from world knowledge or out of fabrication, since this requires significant domain knowledge.",
"We presented a novel knowledge graph-augmented abstractive summarization framework, along with a novel multiple choice cloze reward for reinforcement learning.",
"Our models capture both local characteristics and global interactions of entities from the input, thus generating summaries of higher quality.",
"In tandem with the graph representation, our cloze reward further improves summary content.",
"Human evaluation further confirms that our graph-augmented models trained with the cloze reward produce more informative summaries and significantly reduces unfaithful errors.",
"This research is supported in part by National Science Foundation through Grant IIS-1813341, and by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"We thank the anonymous reviewers for their suggestions."
] |
[
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"method",
"result",
"objective",
"other",
"other",
"other",
"other"
] |
[
"The problem of comparing two bodies of text and searching for words that differ in their usage between them arises often in digital humanities and computational social science.",
"This is commonly approached by training word embeddings on each corpus, aligning the vector spaces, and looking for words whose cosine distance in the aligned space is large.",
"However, these methods often require extensive filtering of the vocabulary to perform well, andas we show in this workresult in unstable, and hence less reliable, results.",
"We propose an alternative approach that does not use vector space alignment, and instead considers the neighbors of each word.",
"The method is simple, interpretable and stable.",
"We demonstrate its effectiveness in 9 different setups, considering different corpus splitting criteria (age, gender and profession of tweet authors, time of tweet) and different languages (English, French and Hebrew).",
"Analyzing differences in corpora from different sources (different time periods, populations, geographic regions, news outlets, etc) is a central use case in digital humanities and computational social science.",
"A particular methodology is to identify individual words that are used differently in the different corpora.",
"This includes words that have their meaning changed over time periods (Kim et al., 2014; Kulkarni et al., 2015; Hamilton et al., 2016b; Kutuzov et al., 2018; Tahmasebi et al., 2018), and words that are used differently by different populations (Azarbonyad et al., 2017; Rudolph et al., 2017).",
"It is thus desired to have an automatic , robust and simple method for detecting such potential changes in word usage and surfacing them for human analysis.",
"In this work we present such a method.",
"A popular method for performing the task ( 4) is to train word embeddings on each corpus and then to project one space to the other using a vector-space alignment algorithm.",
"Then, distances between a word-form to itself in the aligned space are used as an estimation of word usage change (Hamilton et al., 2016b).",
"We show that the common alignment-based approach is unstable, and hence less reliable for the usage change detection task ( 3, 7).",
"In addition, it is also sensitive to proper nouns and requires filtering them.",
"We propose a new and simple method for detecting usage change, that does not involve vector space alignment ( 5).",
"Instead of trying to align two different vector spaces, we propose to work directly in the shared vocabulary space : we take the neighbors of a word in a vector space to reflect its usage, and consider words that have drastically different neighbours in the spaces induced by the different corpora to be words subjected to usage change.",
"The intuition behind this approach is that words that are used significantly differently across corpora are expected to have different contexts and thus to have only few neighboring words in common.",
"In order to determine the extent of the usage change of a word, we simply consider its top-k neighbors in each of the two corpora, and compute the size of the intersection of the two lists.",
"The smaller the intersection is, the bigger we expect the change to be.",
"The words are ranked accordingly.",
"The advantages of our method are the following: 1. Simplicity : the method is extremely simple to implement and apply, with no need for space alignment, hyperparameter tuning, and vocabulary filtering, except for simple frequency cutoffs.",
"embeddings trained on the same corpora, in contrast to the alignment-based approach.",
"3. Interpretability : The ranking produced by our method is very intuitive to analyze.",
"Looking at the neighborhood of a word in the two corpora reveals both the meaning of the word in each, and the extent to which the word has changed.",
"4. Locality : The interpretability aspect is closely linked to the locality of the decision.",
"In our approach, the score of each word is determined only by its own neighbours in each of the spaces.",
"In contrast, in the projection based method the similarity of a pair of words after the projection depends on the projection process, which implicitly takes into account all the other words in both spaces and their relations to each other , as well as the projection lexicon itself, and the projection algorithm.",
"This makes the algorithmic predictions of the projection-based methods opaque and practically impossible to reason about.",
"We demonstrate the applicability and robustness of the proposed method ( 7) by performing a series of experiments in which we use it to identify word usage changes in a variety of corpus pairs, reflecting different data division criteria.",
"We also demonstrate the cross-linguistic applicability of the method by successfully applying it to two additional languages beyond English: French (a Romance language) and Hebrew (a Semitic language).",
"We argue that future work on detecting word change should use our method as an alternative to the now dominant projection-based method.",
"To this end, we provide a toolkit for detecting and visualizing word usage change across corpora.",
"1 2 Task Definition Our aim is to analyze differences between corpora by detecting words that are used differently across them.",
"This task is often referred to as detect-ing meaning change (Azarbonyad et al., 2017; Del Tredici et al., 2019).",
"However, we find the name meaning change to be misleading.",
"Words may have several meanings in the different corpora, but different dominant sense in each corpus, indicating different use of the 1 https://github.com/gonenhila/usage_ change word.",
"For this reason, we refer to this task as de-tecting usage change.",
"We define our task as follows: given two corpora with substantial overlapping vocabularies, identify words that their predominant use is different in the two corpora.",
"The algorithm should return a ranked list of words, from the candidate that is most likely to have undergone usage-change, to the least likely.",
"Since the primary use of such algorithm is corpus-based research, we expect a human to manually verify the results.",
"To this end, while the method does not need to be completely accurate, it is desirable that most of the top returned words are indeed those that underwent change, and it is also desirable to provide explanations or interpretations as to the usage of the word in each corpus.",
"Lastly, as humans are susceptible to be convinced by algorithms, we prefer algorithms that reflect real trends in the data and not accidental changes in environmental conditions.",
"A desired property of an analysis method is stability : when applied several times with slightly different conditions, we expect the method to return the same, or very similar, results.",
"Insignificant changes in the initial conditions should result in insignificant changes in the output.",
"This increases the likelihood that the uncovered effects are real and not just artifacts of the initial conditions.",
"Recent works question the stability of word embedding algorithms, demonstrating that different training runs produce different results, especially with small underlying datasets.",
"Antoniak and Mimno (2018) focuses on the cosine-similarity between words in the learned embedding space, showing large variability upon minor manipulations on the corpus.",
"Wendlandt et al. (2018) make a similar argument, showing that word embeddings are unstable by looking at the 10-nearest neighbors (NN) of a word across the different embeddings, and showing that larger lists of nearest neighbors are generally more stable.",
"In this work, we are concerned with the stability of usage-change detection algorithms, and present a metric for measuring this stability.",
"A usage-change detection algorithm takes as input two corpora, and returns a ranked list r of candidate words, sorted from the most likely to have changed to the least likely.",
"For a stable algorithm, we expect different runs to return similar lists.",
"While we do not care about the exact position of a word within a list, we do care about the composition of words at the top of the list.",
"We thus propose a measure we call intersection@ k , measuring the percentage of shared words in the the top-k predictions of both outputs: intersection@ k ( r 1 , r 2 ) = | r k 1 r k 2 | k (1) where r 1 and r 2 are the two ranked lists, and r ki is the set of top k ranked words in ranking r i .",
"A value of 0 in this measure means that there are no words in the intersection, which indicates high level of variability in the results, while a value of 1 means that all the words are in the intersection, indicating that the results are fully consistent.",
"We expect to see higher intersection@ k as k grows.",
"This expectation is confirmed by our experiments in Section 7.2.",
"We measure the stability of the usage-change detection algorithms with respect to a change in the underlying word embeddings: we apply the intersection@ k metric to two runs of the usage-change detection algorithm on the same corpus-pair, where each run is based on a different run of the underlying word embedding algorithm.",
"The most prominent method for detecting usage change is that of Hamilton et al. (2016b), originally applied to detect shifts in dominant word senses across time.",
"It is still the predominant approach in practice, 2 with recent works building upon it (Yao et al., 2018; Rudolph and Blei, 2018).",
"This method was also shown to be the best performing one among several others (Schlechtweg et al., 2019).",
"It works by training word embeddings on the two corpora, aligning the spaces, and then ranking the words by the cosine-distance between their representations in the two spaces, where large distance is expected to indicate significant change in meaning.",
"We refer to this method as AlignCos.",
"The alignment is performed by finding an orthogonal linear transformation Q that, when given matrices X and Y , projects X to Y while mini-mizng the squared loss: Q = arg min Q || QX Y || 2 , s.t. Q is orthogonal 2 This is also indicated by the large number of citations: 350 according to Google Scholar.",
"The rows of X correspond to embeddings of words in space A, while the rows of Y are the corresponding embeddings in space B. This optimization is solved using the Orthogonal Procrustes (OP) method (Schonemann, 1966), that provides a closed form solution.",
"Vector space alignment methods are extensively studied also outside of the area of detecting word change, primarily for aligning embedding spaces across language pairs (Xing et al., 2015; Artetxe et al., 2018b; Lample et al., 2018a; Artetxe et al., 2018a).",
"Also there, the Orthogonal Procrustes method is taken to be a top contender (Lample et al., 2018b; Kementchedjhieva et al., 2018).",
"Self-contradicting objective.",
"Note that the optimization procedure in the (linear) alignment stage attempts to project each word to itself.",
"This includes words that changed usage, and which therefore should not be near each other in the space.",
"While one may hope that other words and the linearity constraints will intervene, the method may succeed, by mistake, to project words that did change usage next to each other, at the expense of projecting words that did not change usage further apart than they should be.",
"This is an inherent problem with any alignment based method that attempts to project the entire vocabulary onto itself.",
"Requires non-trivial filtering to work well.",
"In addition, the alignment-based method requires nontrivial vocabulary filtering to work well.",
"For example, Hamilton et al. (2016b) extensively filter proper nouns.",
"Indeed, without such filtering, proper-nouns dominate the top of the changed words list.",
"This does not indicate real word usage change, but is an artifact of names being hard to map across embedding spaces.",
"In that respect, it makes sense to filter proper nouns.",
"However, some cases of word usage change do involve names.",
"For example, the word Harlem, which is used as either a name of a neighborhood in NY or as a name of a basketball team, was detected by our method as a word whose usage changed between tweets of celebrities with different occupations ( 7.1).",
"Not stable across runs.",
"As we discuss in Section 3 and show in Section 7.2, the approach is not very stable with respect to different random seeds in the embeddings algorithm.",
"Rather than attempting to project two embedding spaces into a shared space (which may not even map 1:1), we propose to work at the shared vocabulary space.",
"The underlying intuition is that words whose usage changed are likely to be interchangeable with different sets of words, and thus to have different neighbors in the two embedding spaces.",
"This gives rise to a simple and effective algorithm: we represent each word in a corpus as the set of its top k nearest neighbors (NN).",
"We then compute the score for word usage change across corpora by considering the size of the intersection of the two sets (not to be confused with intersection@ k defined in Section 3): score k ( w ) = | NN k 1 ( w ) NN k 2 ( w ) | (2) where NN ki ( w ) is the set of k-nearest neighbors of word w in space i .",
"Words with a smaller intersection are ranked higher as their meaning-change potential.",
"We only consider the words in the intersection of both vocabularies, as words that are rare in one of the corpora are easy to spot using the frequency in the two spaces, and do not neatly fit the definition of usage change.",
"Note that our method does not require extensive filtering of words we only filter words based on their frequency in the corpus 3 .",
"We use a large value of k = 1000 4 in practice, because large neighbor sets are more stable than small ones (Wendlandt et al., 2018), leading to improved stability for our algorithm as well.",
"Limitations Similar to previous methods, our method assumes high quality embeddings, and 3 For English experiments we also filter stopwords according to the predefined list from NLTK.",
"4 While this value may seem arbitrary, we tested several values in that range which yielded very similar results.",
"However, the appropriate range may change when used with smaller corpora, or substantially different vocabulary sizes.",
"We consider k to be the only hyperparameter of our method, and note that it is rather easy to set.",
"hence also a relatively large corpus.",
"Indeed, in many cases we can expect large quantities of data to be available to the user, especially when considering the fact that the data needed is raw text rather than labeled text.",
"Using a limited amount of data results in lower quality embeddings, but also with smaller vocabulary size, which might affect our method.",
"For high-quality embeddings with small vocabulary sizes, we believe that changing k accordingly should suffice.",
"Naturally, results will likely degrade as embeddings quality deteriorate.",
"It is also important to note that, like previous approaches, our method does not attempt to provide any guarantees that the detected words have indeed undergone usage change.",
"It is only intended to propose and highlight candidates for such words.",
"These candidates are meant to later be verified by a user who needs to interpret the results in light of their hypothesis and familiarity with the domain.",
"Unlike previous methods, as we discuss in Section 7.4, our method also provides intuitive means to aid in such an interpretation process.",
"We compare our proposed method ( NN ) to the method of Hamilton et al. (2016b) described in Section 4 ( AlignCos ), in which the vector spaces are first aligned using the OP algorithm, and then words are ranked according to the cosine-distance between the word representation in the two spaces.",
"5 This method was shown to outperform all others that were compared to it by Schlechtweg et al. (2019).",
"We demonstrate our approach by using it to detect change in word usage in different scenarios.",
"We use the following corpora, whose statistics are listed in Table 1. We consider three demographics-based distinctions (age, gender, occupation), a day-of-week 5 Some extensions may yield improved results (filtering out proper names, as done in Hamilton et al. (2016b), or jointly learning and aligning the spaces (Bamler and Mandt, 2017; Rudolph et al., 2017; Rudolph and Blei, 2018; Yao et al., 2018), but we stick to this setting as it is the most general out of this line of work, and the one most commonly used in practice, for which an open implementation is available.",
"based distinction, and short-term (4y) diachronic distinctions.",
"We also compare to the longer-term (90y) diachronic setup of Hamilton et al. (2016b), which is based on Google books.",
"Author Demographics The Celebrity Profiling corpus (Wiegmann et al., 2019) consists of tweets from celebrities along with their traits such as age, gender and occupation.",
"Based on these labels, we create the following splits: (1) Age : Young (birthyear 19902009) vs. Older (birthyear 1950 1969); (2) Gender : Male vs. Female; (3) Occupation : pairwise splits with Performer, Sports and Creator.",
"Day-of-week Yang and Leskovec (2011) collect 580 million tweets in English from June 2009 to February 2010, along with their time-stamps.",
"As this is a fairly large corpus, we consider the tweets of a single month (November 2009).",
"We create a split based on the Day-of-Week: weekday (tweets created on Tuesday and Wednesday) vs. weekend (tweets created on Saturday and Sunday).",
"We remove duplicated tweets, as preliminary experiments revealed odd behavior of the representations due to heavily duplicated spam tweets.",
"French Diachronic (4y, tweets) Abitbol et al. (2018) compile a collection of tweets in French between the years 2014 and 2018.",
"The authors utilize several heuristics based on the users' spatial information to consider tweets from users based in French territory only.",
"We use the 2014 and 2018 portions of the data, and create a split accordingly.",
"Hebrew Diachronic (4y, tweets) The Hebrew data we use is taken from a collection of Hebrew tweets we collected for several consecutive years, up to 2018.",
"The collection was performed by using the streaming API and filtering for tweets containing at least one of the top 400 most frequent Hebrew words.",
"We use the 2014 and 2018 portions of the data, and create a split accordingly.",
"English Diachronic (90y, books) For diachronic study on English corpora, we make use of the embeddings trained on Fiction from Google Books (Davies, 2015) provided by the authors of Hamilton et al. (2016b), specifically for the two years, 1900 and 1990.",
"These embeddings are originally aligned using Orthogonal Procrustes and the words whose relative frequencies are above 10 5 in both the time periods are ranked using cosine distance.",
"Tokenization and Word Embeddings We use 300 dimensions word2vec vectors with 4 words context window.",
"Further details of embeddings algorithm and tokenization are available in the appendix.",
"Vocabulary and Filtering We perform frequency-based filtering of the vocabulary, removing stop words (the most frequent 200 words for each corpus, as well as English stop words as defined in nltk 6 ), as well as low frequency words (we discard the 20% least frequent words in each corpus, and require a minimum of 200 occurrences).",
"Notably, we do not perform any other form of filtering , and keep proper-nouns and person-names intact.",
"We consider neighbors having a raw frequency greater than 100 and identify 1000 such nearest neighbors ( k = 1000) to perform the intersection.",
"We run our proposed method and AlignCos (Hamil-ton et al., 2016b) on the different scenarios described in Section 6, and manually inspect the results.",
"While somewhat subjective, we believe that the consistent success on a broad setting, much larger than explored in any earlier work, is convincing.",
"We provide examples for two of the setups (English Diachronic and Performer vs. Sports), with the rest of the setups in the appendix.",
"For each one, we list a few interesting words detected by the method, accompanied by a brief explanation (according to the neighbors in each corpus).",
"In addition, we depict the top-10 words our method yields for the Age split (Table 2), accompanied by the nearest neighbors in each corpus (excluding words in the intersection), to better understand the context.",
"For comparison, we also mention the top-10 words according to the AlignCos method.",
"Similar tables for the other splits are provided in the Appendix.",
"Across all splits, our method is able to detect high quality words as words that undergo usage change, most of them easily explained by their neighboring words in the two corpora.",
"As expected, we see that the AlignCos method (Hamilton et al., 6 https://www.nltk.org/ AGE (YOUNG VS . OLDER ) NN neighbors in each corpus dem dese, yuh, them, nuh, dey, ayye, dats, tha, betta, fuk repub, democrats, centrist, manchin, primaries, party's, alp, dfl, gopers, repubs dam damm, mannnnn, mannnn, mane, huh, ahh, oo, buggin, koo, mannn dams, basin, river, dredging, reservoir, drainage, wastewater, sewerage, refinery, canal rep reppin, wear, allegiance, all-american, wildcat, alumni, tryout, hoosier, recruit, ua",
"2016b) is highly sensitive to names, featuring many in the top-10 lists across the different splits.",
"As opposed to AlignCos, our method is robust to global changes in the embedding space, since it looks at many neighbors.",
"As a result, it is not sensitive to groups of words that move together in the embedding space (which might be the case with names).",
"English (diachronic, 90y) Top-100 words identified by our method cover all the words attested as real semantic shift in Hamilton et al. (2016b)'s top-10 except the word wanting'.",
"Specifically, three attested words, gay', major' and check' are present in our top-10, which also has more interesting words not present in Hamilton et al. (2016b)'s top-10 (1900 vs. 1990): van (captain vs. vehicle), press (printing vs. places), oxford (loca-tion vs. university).",
"In addition, interesting words that came up in the top-30 list are the following: headed (body part vs. move in a direction), mystery (difficulty in understanding vs. book genre).",
"Occupation (performer vs. sports) Interesting words found at the top-10 list are the following: cc (carbon copy vs. country club), duo (duet vs. pair of people), wing (politics vs. football player position).",
"In addition, interesting words that came up in the top-30 list are the following: jazz (music genre vs. basketball team), worlds (general meaning vs. championships), stages (platforms vs. com-pany(bikes)), record (music record vs. achieve-ment), harlem (neighborhood vs. basketball team).",
"We compare the stability of our method to that of the AlignCos method (Hamilton et al., 2016b) using the intersection@ k metric, as defined in Section 3. We use k 10 , 20 , 50 , 100 , 200 , 500 , 1000",
"In Figure",
"1(a) we plot the intersection@ k for different values of k for all splits, with solid lines for the results of our method and dashed lines for the results of AlignCos method.",
"It is clear that our method is significantly more stable, for all k values and across all splits.",
"To better understand the parameters that affect the stability of the different methods, we also examine how the intersection changes with different values of frequency cut-off.",
"In Figure",
"1(b) we plot intersection@100 as a function of the frequency cut-off (minimum word occurrences required for a word to be included in the ranking).",
"Here, our method is again more stable for all corpus splits.",
"In addition, our method is similarly stable, regardless the frequency cut-off, unlike the AlignCos method.",
"We also examine how the size of NN lists considered for the intersection 0 200 400 600 800 1000 k 0.5 0.6 0.7 0.8 0.9 i n t e r s e c t i o n @ k",
"affects the stability.",
"In Figure",
"1(c) we plot the in-tersection@100 against number of neighbors taken into consideration using our method.",
"We get that from around k = 250 , our method is substantially more stable for all splits.",
"This field of semantic change suffers from lack of proper evaluation datasets, and there is no common benchmark that is being used.",
"Two new datasets were recently introduced, and used to extensively compare between previous methods (Schlechtweg et al., 2019): the DURel dataset (Schlechtweg et al., 2018) focuses on diachronic changes, while the SURel dataset (Hatty et al., 2019) focuses on domain-based semantic changes.",
"We use them to verify the quality of our results and compare against AlignCos (Hamilton et al., 2016b).",
"Both datasets include a limited number of German words, along with human annotations of the degrees of semantic relatedness between contexts of the words (across the different texts).",
"However, they are not ideal as they are extremely limited (22 words each) 7 .",
"Evaluation Metrics Spearman correlation is the standard measure used in this field to compare between methods with respect to gold rankings.",
"However, it is extremely important to note its limitations in this setting, since comparing to a very small gold ranking might be tricky.",
"Specifically, it does 7 For our experiments, we follow the setup of Schlechtweg et al. (2019) and use 19/21 words for DURel/ SURel respectively.",
"not take into account the global ranking of each method, but only the relative position of each of the gold words in each method's ranking.",
"For example, a method that ranks all the gold words at the bottom of the ranking (out of all the words in the vocabulary) in the same order, would be considered perfect, even though it is clearly not the case.",
"As a possible solution for this problem, we suggest to use Discounted Cumulative Gain (DCG), which better captures also global rankings.",
"As opposed to Spearman, this measure takes into account not only the order of the words, but also their actual scores: DCG(M) = (cid:88) w W GoldScore ( w ) log 2 ( rank M ( w ) + 1) (3) where W are the words in the gold dataset, and M is the model being evaluated.",
"We report the results in Table 3. We compute AlignCos results with the best parameters reported in Schlechtweg et al. (2019) 8 .",
"Our method outperforms AlignCos on SURel, both when measur-8 We were unable to reproduce the exact results from the paper: spearman correlation of 0.866 and 0.851 on SURel and DURel, respectively.",
"ing with spearman correlation 9 and with DCG.",
"For DURel, AlignCos gets better results when measuring with spearman, but both methods are on par when using DCG.",
"We find that in many cases, it is not clear why the returned candidate words were chosen, and questions such as why is the word dam' different across age groups? often arise.",
"The NN method lends itself to interpretation, by considering the top-10 neighbors, as shown in Table 2. We note that this interpretation approach is very reliable in our method, as we are guaranteed to gain insights about the usage change when looking at neighboring words, since most of the neighbors will be different for the identified words.",
"While we can definitely attempt at looking at the NN also for the OP-based meth-9 Average Spearman score over model runs with different numbers of iterations, as done in (Schlechtweg et al., 2019).",
"ods, there we are not guaranteed at all to even spot a difference between the neighbors: it may absolutely be the case that the identified word moved in the embedding space together with most of its neighbors.",
"In this case, looking at the neighbors will provide no insight on the nature of this change.",
"We observed this phenomenon in practice.",
"Nonetheless, comparing flat word lists is hard, and 10 words are often insufficient.",
"We present a visualization method that aids in understanding the model's suggestions.",
"The visualization consists of projecting the word of interest and its top-50 neighbors from each corpus into two dimensions using t-SNE (Maaten and Hinton, 2008), and plotting the result while coloring the neighbors in the intersection in one color and the neighbors unique to each corpus in other colors.",
"We expect the neighbors of a word of interest to have distinct neighbors across the corpora.",
"Figures 2 and 3 show the visualizations for the word clutch in the Gender split, with cyan for female and violet for male, and the word dam in the Age split, with cyan for older and violet for young (in both cases they were no shared neighbours).",
"We plot the projection of the words twice one plot for each embedding space.",
"We can see that, as expected, the neighboring words are distinct, and that the target word belongs to the respective neighborhood in each space.",
"We conclude that this is a useful tool for interpreting the results of our model.",
"Extensive work has been done on detecting word usage change across corpora that predated the alignment-based methods (Mitra et al., 2014; Jatowt and Duh, 2014; Kenter et al., 2015; Ho et al., 2016; Frermann and Lapata, 2016).",
"In addition, two works are more closely related to our approach.",
"In Azarbonyad et al. (2017), the authors also use the neighbors of a word in order to determine its stability (and therefore, the extent to which it changes).",
"Their best model combines the traditional alignment-based approach with weighting the neighbors according to their rank and their stability.",
"The algorithm is iterative, and they update the stability of all the words in the vocabulary in each update step.",
"Our method uses the neighbors of the words directly, does not include an iterative process, and does not rely on cosine-distance in the aligned embeddings.",
"In addition, their method requires computation for the whole vocabulary, while other methods, including ours, usually allow querying for a single word.",
"Another work that considers the neighbors of the word in order to determine the extent of change is that of Hamilton et al. (2016a), in which they suggest a measure that is based on the changes of similarities between the target word and its neighbors in both spaces.",
"They find that this method is more suitable for identifying changes that are due to cultural factors, rather than linguistic shift.",
"This may serve as another motivation to move from the global measures to a local one.",
"Recent works (Giullianelli, 2019; Martinc et al., 2019) explored the possibility of modeling diachronic and usage change using contextualized embeddings extracted from now ubiquitous Bert representations (Devlin et al., 2019).",
"Focusing on the financial domain, Montariol and Allauzen (2020) use, on top of Bert embeddings, a clustering method that does not need to predefine the number of clusters and which leads to interesting results on that domain.",
"Another approach from Hu et al. (2019) relies on the inclusion of example-based word sense inventories over time from the Oxford dictionary to a Bert model.",
"Doing so provides an efficient fine-grained word sense representation and enables a seemingly accurate way to monitor word sense change over time.",
"Most of those approaches could be easily used with our method, the inclusion of contextualized embeddings would be for example straightforward, we leave it for future work.",
"Detecting words that are used differently in different corpora is an important use-case in corpus-based research.",
"We present a simple and effective method for this task, demonstrating its applicability in multiple different settings.",
"We show that the method is considerably more stable than the popular alignment-based method popularized by Hamilton et al. (2016b), and requires less tuning and word filtering.",
"We suggest researchers to adopt this method, and provide an accompanying software toolkit.",
"We thank Marianna Apidianiaki for her insightful comments on an earlier version of this work.",
"This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT), and from the the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation programme.",
"The second and third authors were partially funded by the French Research Agency projects ParSiTi (ANR-16-CE33-0021), SoSweet (ANR15-CE38-0011-01) and by the French Ministry of Industry and Ministry of Foreign Affairs via the PHC Maimonide France-Israel cooperation programme."
] |
[
"abstain",
"abstain",
"result",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"objective",
"objective",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"objective",
"abstain",
"result",
"other",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"objective",
"result",
"method",
"other",
"other",
"other"
] |
[
"In the deep learning (DL) era, parsing models are extremely simplified with little hurt on performance, thanks to the remarkable capability of multi-layer BiLSTMs in context representation.",
"As the most popular graph-based dependency parser due to its high efficiency and performance, the biaffine parser directly scores single dependencies under the arc-factorization assumption, and adopts a very simple local token-wise cross-entropy training loss.",
"This paper for the first time presents a second-order TreeCRF extension to the biaffine parser.",
"For a long time, the complexity and inefficiency of the inside-outside algorithm hinder the popularity of TreeCRF.",
"To address this issue, we propose an effective way to batchify the inside and Viterbi algorithms for direct large matrix operation on GPUs, and to avoid the complex outside algorithm via efficient back-propagation.",
"Experiments and analysis on 27 datasets from 13 languages clearly show that techniques developed before the DL era, such as structural learning (global TreeCRF loss) and high-order modeling are still useful, and can further boost parsing performance over the state-of-the-art biaffine parser, especially for partially annotated training data.",
"We release our code at https: //github.com/yzhangcs/crfpar .",
"As a fundamental task in NLP, dependency parsing has attracted a lot of research interest due to its simplicity and multilingual applicability in capturing both syntactic and semantic information (Nivre et al., 2016).",
"Given an input sentence x = w 0 w 1 . . . w n , a dependency tree, as depicted in Figure 1, is defined as y = { ( i, j, l ) , 0 i n, 1 j n, l L} , where ( i, j, l ) is a dependency from the head word w i to the modifier word Corresponding author $ 0 I 1 saw 2 Sarah 3 with 4 a 5 telescope 6 nsubj dobj pobj det root prep Figure 1: An example full dependency tree.",
"w j with the relation label l L .",
"Between two mainstream approaches, this work focuses on the graph-based paradigm (vs. transition-based).",
"Before the deep learning (DL) era, graph-based parsing relies on many hand-crafted features and differs from its neural counterpart in two major aspects.",
"First, structural learning, i.e., explicit awareness of tree structure constraints during training, is indispensable.",
"Most non-neural graph-based parsers adopt the max-margin training algorithm, which first predicts a highest-scoring tree with the current model, and then updates feature weights so that the correct tree has a higher score than the predicted tree.",
"Second, high-order modeling brings significant accuracy gains.",
"The basic first-order model factors the score of a tree into independent scores of single dependencies (McDonald et al., 2005a).",
"Second-order models were soon propose to incorporate scores of dependency pairs, such as adjacent-siblings (McDonald and Pereira, 2006) and grand-parent-child (Carreras, 2007; Koo and Collins, 2010), showing significant accuracy improvement yet with the cost of lower efficiency and more complex decoding algorithms.",
"1 In contrast, neural graph-based dependency parsing exhibits an opposite development trend.",
"Pei et al. (2015) propose to use feed-forward neural 1 Third-order and fourth-order models show little accuracy improvement probably due to the feature sparseness problem (Koo and Collins, 2010; Ma and Zhao, 2012).",
"networks for automatically learning combinations of dozens of atomic features similar to Chen and Manning (2014), and for computing subtree scores.",
"They show that incorporating second-order scores of adjacent-sibling subtrees significantly improved performance.",
"Then, both Wang and Chang (2016) and Kiperwasser and Goldberg (2016) propose to utilize BiLSTM as an encoder and use minimal feature sets for scoring single dependencies in a first-order parser.",
"These three representative works all employ global max-margin training.",
"Dozat and Manning (2017) propose a strong and efficient biaffine parser and obtain state-of-the-art accuracy on a variety of datasets and languages.",
"The biaffine parser is also first-order and employs simpler and more efficient non-structural training via local head selection for each token (Zhang et al., 2017).",
"Observing such contrasting development, we try to make a connection between pre-DL and DL techniques for graph-based parsing.",
"Specifically, the first question to be addressed in this work is: can previously useful techniques such as structural learning and high-order modeling further improve the state-of-the-art 2 biaffine parser, and if so, in which aspects are they helpful?",
"For structural learning, we focus on the more complex and less popular TreeCRF instead of max-margin training.",
"The reason is two-fold.",
"First, estimating probability distribution is the core issue in modern data-driven NLP methods (Le and Zuidema, 2014).",
"The probability of a tree, i.e., p ( y | x ) , is potentially more useful than an unbounded score s ( x , y ) for high-level NLP tasks when utilizing parsing outputs.",
"Second, as a theoretically sound way to measure model confidence of subtrees, marginal probabilities can support Minimum Bayes Risk (MBR) decoding (Smith and Smith, 2007), and are also proven to be crucial for the important research line of token-level active learning based on partial trees (Li et al., 2016).",
"One probable reason for the less popularity of TreeCRF, despite its usefulness, is due to the complexity and inefficiency of the inside-outside algorithm, especially the outside algorithm.",
"As far as we know, all existing works compute the inside and outside algorithms on CPUs.",
"The inefficiency issue becomes more severe in the DL era, due to 2 Though many recent works report higher performance with extra resources, for example contextualized word representations learned from large-scale unlabeled texts under language model loss, they either adopt the same architecture or achieve similar performance under fair comparison.",
"the unmatched speed of CPU and GPU computation.",
"This leads to the second question : can we batchify the inside-outside algorithm and perform computation directly on GPUs?",
"In that case, we can employ efficient TreeCRF as a built-in component in DL toolkits such as PyTorch for wider applications (Cai et al., 2017; Le and Zuidema, 2014).",
"Overall, targeted at the above two questions, this work makes the following contributions.",
"We for the first time propose second-order TreeCRF for neural dependency parsing.",
"We also propose an efficient and effective triaffine operation for scoring second-order subtrees.",
"We propose to batchify the inside algorithm via direct large tensor computation on GPUs, leading to very efficient TreeCRF loss computation.",
"We show that the complex outside algorithm is no longer needed for the computation of gradients and marginal probabilities, and can be replaced by the equally efficient back-propagation process.",
"We conduct experiments on 27 datasets from 13 languages.",
"The results and analysis show that both structural learning and high-order modeling are still beneficial to the state-of-the-art biaffine parser in many ways in the DL era.",
"We re-implement the state-of-the-art biaffine parser (Dozat and Manning, 2017) with two modifica-tions, i.e., using CharLSTM word representation vectors instead of POS tag embeddings, and the first-order Eisner algorithm (Eisner, 2000) for projective decoding instead of the non-projective MST algorithm.",
"Input vectors.",
"The i th input vector is composed of two parts: the word embedding and the CharLSTM word representation vector of w i .",
"where CharLSTM( w i ) is obtained by feeding w i into a BiLSTM and then concatenating the two last hidden vectors (Lample et al., 2016).",
"We find that replacing POS tag embeddings with . . . e i . . . e k . . . e j . . . BiLSTM 3 MLP h MLP m Biane MLP h 0 MLP s MLP m 0 Triane h i h k h j r hi r mj r h 0 i r sk r m 0 j s( i, j ) s( i, k, j ) Figure 2: Scoring architecture with second-order extension.",
"CharLSTM( w i ) leads to consistent improvement, and also simplifies the multilingual experiments by avoiding POS tag generation (especially n-fold jackknifing on training data).",
"BiLSTM encoder.",
"To encode the sentential contexts, the parser applies three BiLSTM layers over e 0 . . . e n .",
"The output vector of the top-layer BiLSTM for the i th word is denoted as h i .",
"where r hi and r mi are the representation vector of w i as a head word and a modifier word respectively.",
"Local token-wise training loss.",
"The biaffine parser adopts a simple non-structural training loss, trying to independently maximize the local probability of the correct head word for each word.",
"For a gold-standard head-modifier pair ( w i , w j ) in a training instance, the cross-entropy loss is L ( i, j ) = log e s ( i,j ) (cid:80) 0 k n e s ( k,j ) (4) In other words, the model is trained based on simple head selection, without considering the tree structure at all, and losses of all words in a mini-batch are accumulated.",
"Handling dependency labels.",
"The biaffine parser treats skeletal tree searching and labeling as two independent (training phase) and cascaded (parsing phase) tasks.",
"This work follows the same strategy for simplicity.",
"Please refer to Dozat and Manning (2017) for details.",
"This work substantially extends the biaffine parser in two closely related aspects: using probabilistic TreeCRF for structural training and explicitly incorporating high-order subtree scores.",
"Specifically, we further incorporate adjacent-sibling subtree scores into the basic first-order model: 3 s ( x , y ) = (cid:88) i j y s ( i, j )+ (cid:88) i { k,j } y s ( i, k, j ) (6) where k and j are two adjacent modifiers of i and satisfy either i < k < j or j < k < i .",
"where Y ( x ) is the set of all legal (projective) trees for x , and Z ( x ) is commonly referred to as the normalization (or partition) term.",
"During training, TreeCRF employs the following structural training loss to maximize the conditional probability of the gold-standard tree y given x .",
"3 This work can be further extended to incorporate grand-parent-modifier subtree scores based on the viterbi algorithm of O ( n 4 ) time complexity proposed by Koo and Collins (2010), which we leave for future work.",
"To avoid major modification to the original scoring architecture, we take a straightforward extension to obtain scores of adjacent-sibling subtrees.",
"First, we employ three extra MLPs to perform similar feature extraction.",
"where r h (cid:48) i ; r si ; r m (cid:48) i are the representation vectors of w i as head, sibling, and modifier respectively.",
"4 Then, we propose a natural extension to the biaffine equation, and employ triaffine for score computation over three vectors.",
"5 s ( i, k, j ) = (cid:20) r sk 1 (cid:21) T r h (cid:48) i TW triaffine (cid:20) r m (cid:48) j 1 (cid:21) (10) where W triaffine R d (cid:48) d (cid:48) d (cid:48) is a three-way tensor.",
"The key to TreeCRF loss is how to efficiently compute log Z ( x ) , as shown in Equation 8.",
"This problem has been well solved long before the DL era for non-neural dependency parsing.",
"Straightforwardly, we can directly extend the viterbi decoding algorithm by replacing max product with sum 4 Another way is to use one extra MLP for sibling representation, and re-use head and modifier representation from the basic first-order components, which however leads to inferior performance in our preliminary experiments.",
"5 We have also tried the approximate method of Wang et al. (2019), which uses three biaffine operations to simulate the interactions of three input vectors, but observed inferior performance.",
"We omit the results due to the space limitation.",
"product, and naturally obtain log Z ( x ) in the same polynomial time complexity.",
"However, it is not enough to solely perform the inside algorithm for non-neural parsing, due to the inapplicability of the automatic differentiation mechanism.",
"In order to obtain marginal probabilities and then feature weight gradients, we have to realize the more sophisticated outside algorithm, which is usually at least twice slower than the inside algorithm.",
"This may be the major reason for the less popularity of TreeCRF (vs. max-margin training) before the DL era.",
"As far as we know, all previous works on neural TreeCRF parsing explicitly implement the inside-outside algorithm for gradient computation (Zhang et al., 2019; Jiang et al., 2018).",
"To improve efficiency, computation is transferred from GPUs to CPUs with Cython programming.",
"This work shows that the inside algorithm can be effectively batchified to fully utilize the power of GPUs.",
"Figure 3 and Algorithm 1 together illustrate the batchified version of the second-order inside algorithm, which is a direct extension of the second-order Eisner algorithm in McDonald and Pereira (2006) by replacing max product with sum product.",
"We omit the generations of incomplete, complete, and sibling spans in the opposite direction from j to i for brevity.",
"Basically, we first pack the scores of same-width spans at different positions ( i, j ) for all B sentences in the data batch into large tensors.",
"Then we can do computation and aggregation simultaneously on GPUs via efficient large tensor operation.",
"algorithm.",
"Due to space limitation, we omit the details.",
"It is noteworthy that the techniques described here are also applicable to other grammar formulations such as CKY-style constituency parsing (Finkel et al., 2008; Drozdov et al., 2019).",
"Eisner (2016) proposes a theoretical proof on the equivalence between the back-propagation mechanism and the outside algorithm in the case of constituency (phrase-structure) parsing.",
"This work empirically verifies this equivalence for dependency parsing.",
"Moreover, we also find that marginal probabilities p ( i j | x ) directly correspond to gradients after back-propagation with log Z ( x ) as the loss: log Z s( i, j ) = (cid:88) y :( i,j ) y p ( y | x ) = p ( i j | x ) (11) which can be easily proved.",
"For TreeCRF parsers, we perform MBR decoding (Smith and Smith, 2007) by replacing scores with marginal probabilities in the decoding algorithm, leading to a slight but consistent accuracy increase.",
"As an attractive research direction, studies show that it is more effective to construct or even col-lect partially labeled data (Nivre et al., 2014; Hwa, 1999; Pereira and Schabes, 1992), where a sentence may correspond to a partial tree | y p | < n in the case of dependency parsing.",
"Partial annotation can be very powerful when combined with active learning, because annotation cost can be greatly reduced if annotators only need to annotate sub-structures that are difficult for models.",
"Li et al. (2016) present a detailed survey on this topic.",
"Moreover, Peng et al. (2019) recently released a partially labeled multi-domain Chinese dependency treebank based on this idea.",
"Then, the question is how to train models on partially labeled data.",
"Li et al. (2016) propose to extend TreeCRF for this purpose and obtain promising results in the case of non-neural dependency parsing.",
"This work applies their approach to the neural biaffine parser.",
"We are particularly concerned at the influence of structural learning and high-order modeling on the utilization of partially labeled training data.",
"For the basic biaffine parser based on first-order local training, it seems the only choice is omitting losses of unannotated words.",
"In contrast, tree constraints allow annotated dependencies to influence the probability distributions of unannotated words, and high-order modeling further helps by promoting inter-token interaction.",
"Therefore, both structural learning and high-order modeling are intuitively very beneficial.",
"Under partial annotation, we follow Li et al. (2016) and define the training loss as: L ( x , y p ) = log (cid:88) y Y ( x ); y y p p ( y | x ) = log Z ( x , y p ) (cid:80) y Y ( x ); y y p e s ( x , y ) Z ( x ) (12) where Z ( x , y p ) only considers all legal trees that are compatible with the given partial tree and can also be efficiently computed like Z ( x ) .",
"Data.",
"We conduct experiments and analysis on 27 datasets from 13 languages, including two widely used datasets: the English Penn Treebank (PTB) data with Stanford dependencies (Chen and Manning, 2014), and the Chinese data at the CoNLL09 shared task (Hajic et al., 2009).",
"We also adopt the Chinese dataset released at the NLPCC19 cross-domain dependency parsing shared task (Peng et al., 2019), containing one source domain and three target domains.",
"For simplicity, we directly merge the train/dev/test data of the four domains into larger ones respectively.",
"One characteristic of the data is that most sentences are partially annotated based on active learning.",
"Finally, we conduct experiments on Universal Dependencies (UD) v2.2 and v2.3 following Ji et al. (2019) and Zhang et al. (2019) respectively.",
"We adopt the 300d multilingual pretrained word embeddings used in Zeman et al. (2018) and take the CharLSTM representations as input.",
"For UD2.2, to compare with Ji et al. (2019), we follow the raw text setting of the CoNLL18 shared task (Zeman et al., 2018), and directly use their sentence segmentation and tokenization results.",
"For UD2.3, we also report the results of using gold-standard POS tags to compare with Zhang et al. (2019).",
"Evaluation metrics.",
"We use unlabeled and labeled attachment score (UAS/LAS) as the main metrics.",
"Punctuations are omitted for PTB.",
"For the partially labeled NLPCC19 data, we adopt the official evaluation script, which simply omits the words without gold-standard heads to accommodate partial annotation.",
"We adopt Dan Bikel's randomized parsing evaluation comparator for significance test.",
"Parameter settings.",
"We directly adopt most parameter settings of Dozat and Manning (2017), including dropout and initialization strategies.",
"For CharLSTM, the dimension of input char embeddings is 50, and the dimension of output vector is 100, following Lample et al. (2016).",
"For the second-order model, we set the dimensions of r h (cid:48) /s/m (cid:48) i to 100, and find little accuracy improvement when increasing to 300.",
"We trained each model for at most 1,000 iterations, and stop training if the peak performance on the dev data does not increase in 100 consecutive epochs.",
"Models.",
"LOC uses local cross-entropy training loss and employs the Eisner algorithm for finding the optimal projective tree.",
"CRF and CRF 2 O denote the first-order and second-order TreeCRF model respectively.",
"LOCMST denotes the basic local model that directly produces non-projective tree based on the MST decoding algorithm of Dozat and Manning (2017).",
"Figure 4 compares the parsing speed of different models on PTB-test.",
"For a fair comparison, we run all models on the same machine with Intel Xeon CPU (E5-2650v4, 2.20GHz) and GeForce GTX 1080 Ti GPU.",
"C RF ( CPU ) refers to the model that explicitly performs the inside-outside algorithm using Cython on CPUs.",
"Multi-threading is employed since sentences are mutually independent.",
"However, we find that using more than 4 threads does not further improve the speed.",
"We can see that the efficiency of TreeCRF is greatly improved by batchifying the inside algorithm and implicitly realizing the outside algorithm by back-propagation on GPUs.",
"For the first-order CRF model, our implementation can parse about 500 sentences per second, over 10 times faster than the multi-thread C RF ( CPU ).",
"For the second-order CRF 2 O , our parser achieves the speed of 400 Dev Test UAS LAS UAS LAS PTB Biaffine17 -95.74 94.08 F&K19 --91.59 Li19 95.76 93.97 95.93 94.19 Ji19 95.88 93.94 95.97 94.31 Zhang19 --93.96 LOC 95.82 93.99 96.08 94.47 CRF w/o MBR 95.74 93.96 96.04 94.34 CRF 95.76 93.99 96.02 94.33 CRF 2 O w/o MBR 95.92 94.16 96.14 94.49 CRF 2 O 95.90 94.12 96.11 94.46 CoNLL09 Biaffine17 -88.90 85.38 Li19 88.68 85.47 88.77 85.58 LOC 89.07 86.10 89.15 85.98 CRF w/o MBR 89.04 86.04 89.14 86.06 CRF 89.12 86.12 89.28 86.18 CRF 2 O w/o MBR 89.29 86.24 89.49 86.39 CRF 2 O 89.44 86.37 89.63 86.52 NLPCC19 LOC 77.01 71.14 76.92 71.04 CRF w/o MBR 77.40 71.65 77.17 71.58 CRF 77.34 71.62 77.53 71.89 CRF 2 O w/o MBR 77.58 71.92 77.89 72.25 CRF 2 O 78.08 72.32 78.02 72.33 Table 1: Main results.",
"sentences per second, which is able to meet the requirements of a real-time system.",
"More discussions on efficiency are presented in Appendix A. 4.2 Main Results Table 1 lists the main results on the dev and test data.",
"The trends on dev and test are mostly consistent.",
"For a fair comparison with previous works, we only consider those without using extra resources such as ELMo (Peters et al., 2018) and BERT (De-vlin et al., 2019).",
"We can see that our baseline LOC achieves the best performance on both PTB and CoNLL09.",
"On PTB, both CRF and CRF 2 O fail to improve 100 200 300 400 91 92 93 94 CRF 2 OCRFLOC 100 200 300 400 83 84 85 86 CRF 2 OCRFLOC 100 200 300 400 66 68 70 72 CRF 2 OCRFLOC Figure 5: Convergence curves (LAS vs. training epochs) on dev data of PTB, CoNLL09, and NLPCC19.",
"the parsing accuracy further, probably because the performance is already very high.",
"However, as shown by further analysis in Section 4.3, the positive effect is actually introduced by structural learning and high-order modeling.",
"On CoNLL09, CRF significantly outperforms LOC , and CRF 2 O can further improve the performance.",
"On the partially annotated NLPCC19 data, CRF outperforms LOC by a very large margin, indicating the usefulness of structural learning in the scenario of partial annotation.",
"CRF 2 O further improves the parsing performance by explicitly modeling second-order subtree features.",
"These results con-firm our intuitions discussed in Section 3.4.",
"Please note that the parsing accuracy looks very low because the partially annotated tokens are usually difficult for models.",
"Impact of MBR decoding.",
"For CRF and CRF 2 O , we by default to perform MBR decoding, which employs the Eisner algorithm over marginal probabilities (Smith and Smith, 2007) to find the best tree.",
"Table 1 reports the results of directly finding 1-best trees according to dependency scores.",
"Except for PTB, probably due to the high accuracy already, MBR decoding brings small yet consistent improvements for both CRF and CRF 2 O .",
"Convergence behavior.",
"Figure 5 compares the convergence curves.",
"For clarity, we plot one data point corresponding to the peak LAS every 20 epochs.",
"We can clearly see that both structural learning and high-order modeling consistently improve the model.",
"CRF 2 O achieves steadily higher accuracy and converges much faster than the basic LOC .",
"Performance at suband full-tree levels.",
"Beyond the dependency-wise accuracy (UAS/LAS), we would like to evaluate the models regarding performance at sub-tree and full-tree levels.",
"Table 2 shows the results.",
"We skip the partially labeled NLPCC19 data.",
"UCM means unlabeled complete matching rate, i.e., the percent of sentences obtaining whole correct skeletal trees, while LCM further requires that all labels are also correct.",
"For SIB, we evaluate the model regarding unlabeled adjacent-sibling subtrees (system outputs vs. gold-standard references).",
"According to Equation 6, ( i, k, j ) is an adjacent-sibling subtree, if and only if w k and w j are both children of w i at the same side, and there are no other children of w i between them.",
"Given two trees, we can col-bg ca cs de en es fr it nl no ro ru Avg.",
"lect all adjacent-sibling subtrees and compose two sets of triples.",
"Then we evaluate the P/R/F values.",
"Please note that it is impossible to evaluate SIB for partially annotated references.",
"We can clearly see that by modeling adjacent-sibling subtree scores, the SIB performance obtains larger improvement than both CRF and LOC , and this further contributes to the large improvement on full-tree matching rates (UCM/LCM).",
"Capability to learn from partial trees.",
"To better understand why CRF 2 O performs very well on partially annotated NLPCC19, we design more comparative experiments by retaining either a proportion of random training sentences (full trees) or a proportion of random dependencies for each sentence (partial trees).",
"Figure 6 shows the results.",
"We can see that the performance gap is quite steady when we gradually reduce the number of training sentences.",
"In contrast, the gap clearly becomes larger when each training sentence has less annotated dependencies.",
"This shows that CRF 2 O is superior to the basic LOC in utilizing partial annotated data for model training.",
"Table 3 compares different models on UD datasets, which contain a lot of non-projective trees.",
"We adopt the pseudo-projective approach (Nivre and Nilsson, 2005) for handling the ubiquitous nonprojective trees of most languages.",
"Basically, the idea is to transform non-projective trees into projective ones using more complex labels for postprocessing recovery.",
"We can see that for the basic local parsers, the direct non-projective LOCMST and the pseudo-projective LOC achieve very similar performance.",
"More importantly, both CRF and CRF 2 O produce consistent improvements over the baseline in many languages.",
"On both UD2.2 and UD2.3, Our proposed CRF 2 O model achieves the highest accuracy for 10 languages among 12, and obtains significant improvement in more than 7 languages.",
"Overall, the averaged improvement is 0.45 and 0.29 on UD2.2 and UD2.3 respectively, which is also significant at p < 0 .",
"005 .",
"On average, our CRF 2 O parser outperforms Ji et al. (2019) by 2.30 on UD2.2 raw texts following CoNLL-2018 shared task setting, and Zhang et al. (2019) by 0.91 on UD2.3 data with gold POS tags.",
"It is noteworthy that the German (de) result is kindly provided by Tao Ji after rerunning their parser with predicted XPOS tags, since their reported result in Ji et al. (2019) accidentally used gold-standard sentence segmentation, tokenization, and XPOS tags.",
"Our CRF 2 O parser achieves an average LAS of 87.64 using their XPOS tags.",
"Batchification has been widely used in linear-chain CRF, but is rather complicated for tree structures.",
"Eisner (2016) presents a theoretical proof on the equivalence of outside and back-propagation for constituent tree parsing, and also briefly discusses other formalisms such as dependency grammar.",
"Unfortunately, we were unaware of Eisner's great work until we were surveying the literature for paper writing.",
"As an empirical study, we believe this work is valuable and makes it practical to deploy TreeCRF models in real-life systems.",
"Falenska and Kuhn (2019) present a nice analytical work on dependency parsing, similar to Gaddy et al. (2018) on constituency parsing.",
"By extending the first-order graph-based parser of Kiperwasser and Goldberg (2016) into second-order, they try to find out how much structural context is implicitly captured by the BiLSTM encoder.",
"They concatenate three BiLSTM output vectors ( i, k, j ) for scoring adjacent-sibling subtrees, and adopt max-margin loss and the second-order Eisner decoding algorithm (McDonald and Pereira, 2006).",
"Based on their negative results and analysis, they draw the conclusion that high-order modeling is redundant because BiLSTM can implicitly and effectively encode enough structural context.",
"They also present a nice survey on the relationship between RNNs and syntax.",
"In this work, we use a much stronger basic parser and observe more significant UAS/LAS improvement than theirs.",
"Particularly, we present an in-depth analysis showing that explicitly high-order modeling certainly helps the parsing model and thus is complementary to the BiLSTM encoder.",
"Ji et al. (2019) employ graph neural networks to incorporate high-order structural information into the biaffine parser implicitly.",
"They add a three-layer graph attention network (GAT) component (Velickovic et al., 2018) between the MLP and Biaffine layers.",
"The first GAT layer takes r hi and r mi from MLPs as inputs and produces new representation r h 1 i and r m 1 i by aggregating neighboring nodes.",
"Similarly, the second GAT layer operates on r h 1 i and r m 1 i , and produces r h 2 i and r m 2 i .",
"In this way, a node gradually collects multi-hop high-order information as global evidence for scoring single dependencies.",
"They follow the original local head-selection training loss.",
"In contrast, this work adopts global TreeCRF loss and explicitly incorporates high-order scores into the biaffine parser.",
"Zhang et al. (2019) investigate the usefulness of structural training for the first-order biaffine parser.",
"They compare the performance of local head-selection loss, global max-margin loss, and TreeCRF loss on multilingual datasets.",
"They show that TreeCRF loss is overall slightly superior to max-margin loss, and LAS improvement from structural learning is modest but significant for some languages.",
"They also show that structural learning (especially TreeCRF) substantially improves sentence-level complete matching rate, which is consistent with our findings.",
"Moreover, they explicitly compute the inside and outside algorithms on CPUs via Cython programming.",
"In contrast, this work proposes an efficient second-order TreeCRF extension to the biaffine parser, and presents much more in-depth analysis to show the effect of both structural learning and high-order modeling.",
"This paper for the first time presents second-order TreeCRF for neural dependency parsing using triaffine for explicitly scoring second-order subtrees.",
"We propose to batchify the inside algorithm to accommodate GPUs.",
"We also empirically verify that the complex outside algorithm can be implicitly performed via efficient back-propagation, which naturally produces gradients and marginal probabilities.",
"We conduct experiments and detailed analysis on 27 datasets from 13 languages, and find that structural learning and high-order modeling can further enhance the state-of-the-art biaffine parser in various aspects:",
"1) better convergence behavior;",
"2) higher performance on suband full-tree levels;",
"3) better utilization of partially annotated data.",
"The authors would like to thank:",
"1) the anonymous reviewers for the helpful comments,",
"2) Wenliang Chen for helpful discussions on high-order neural dependency parsing,",
"3) Tao Ji for kindly sharing the data and giving beneficial suggestions for the experiments on CoNLL18 datasets,",
"4) Wei Jiang, Yahui Liu, Haoping Yang, Houquan Zhou and Mingyue Zhou for their help in paper writing and polishing.",
"This work was supported by National Natural Science Foundation of China (Grant No. 61876116, 61525205, 61936010) and a Project Funded by the Priority Academic Program Development (PAPD) of Jiangsu Higher Education Institutions."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"We address the detection of abusive words.",
"The task is to identify such words among a set of negative polar expressions.",
"We propose novel features employing information from both corpora and lexical resources.",
"These features are calibrated on a small manually annotated base lexicon which we use to produce a large lexicon.",
"We show that the word-level information we learn cannot be equally derived from a large dataset of annotated microposts.",
"We demonstrate the effectiveness of our (domain-independent) lexicon in the cross-domain detection of abusive microposts.",
"Abusive or offensive language is commonly de-fined as hurtful, derogatory or obscene utterances made by one person to another person.",
"1 Examples are (1)-(3).",
"In the literature, closely related terms include hate speech (Waseem and Hovy, 2016) or cyber bullying (Zhong et al., 2016).",
"While there may be nuanced differences in meaning 2 , they are all compatible with the general definition above for abusive language.",
"3 (1) stop editing this, you dumbass .",
"(2) Just want to slap the stupid out of these bimbos",
"(3) Go lick a pig you arab muslim piece of scum .",
"Due to the rise of user-generated web content, in particular on social media networks, the amount of abusive language is also steadily growing.",
"NLP methods are required to focus human review efforts towards the most relevant microposts.",
"In this paper, we address the task of detecting abusive words (e.g. dumbass , bimbo , scum ).",
"Our 1 http://thelawdictionary.org/ 2 For example, several research efforts just focus on utterances addressed towards minorities.",
"3 The examples in this work are included to illustrate the severity of abusive language.",
"They are taken from actual web data and in no way reflect the opinion of the authors.",
"main assumption is that abusive words form a subset of negative polar expressions.",
"The classification task is to filter the abusive words from a given set of negative polar expressions .",
"We proceed as follows.",
"On a base lexicon that is a small subset of negative polar expressions where the abusive words among them have been marked via crowdsourcing (3), we calibrate a supervised classifier by examining various novel features (4).",
"A classifier trained on that base lexicon, which contains 551 abusive words, is then applied to a very large list of unlabeled negative polar expressions (from Wiktionary) to extract an expanded lexicon of 2989 abusive words (5).",
"We extrinsically evaluate our new lexicon in the novel task of cross-domain classification of abusive documents (6) where we use it as a high-level feature.",
"In this work, we consider microposts as documents.",
"While for in-domain classification, supervised classifiers trained on generic features, such as bag of words or word embeddings, usually score very well, on cross-domain classification they perform poorly since they latch on to domain-specific information.",
"In subjectivity, polarity and emotion classification, high-level features based on predictive domain-independent word lists have been proposed to bridge the domain mismatch (Dias et al., 2009; Mohammad, 2012; Wiegand et al., 2013).",
"New abusive words constantly enter natural language.",
"For example, according to Wiktionary 4 the word gimboid , which refers to an incompetent person, was coined in the British television series Red Dwarf , possibly from the word gimp and the suffix -oid .",
"According to Urban Dictionary 5 , the word twunt , which is a portmanteau of the swearwords twat and cunt , has been invented 4 https://en.wiktionary.org 5 www.urbandictionary.com 1046 by humourist Chris Morris for the Channel 4 series Jam' in 2000.",
"One of the most recent abusive words is remoaner which describes someone who complains about or rejects the outcome of the 2016 EU referendum on the UK's membership of the European Union.",
"It is a blend of moan and remainer .",
"Wiktionary states that this word has a pejorative connotation.",
"These examples show that the task of creating a lexicon of abusive words cannot be reduced to a one-time manual annotation effort.",
"Recent web corpora and crowdsourced dictionaries (e.g. Wiktionary) should be ideal resources to find evidence of such words.",
"Our contribution is that we present the first work that systematically describes the automatic construction of a lexicon of abusive words.",
"We examine novel features derived from various textual resources.",
"We show that the information we learn cannot be equally derived from a large dataset with labeled microposts.",
"The effectiveness of our expanded lexicon is demonstrated on cross-domain detection of abusive microposts.",
"This is also the first work to address this task in general.",
"The supplementary material to this paper 6 includes all resources newly created for our research.",
"We frame our task as a binary classification problem .",
"Each given expression is to be classified as either abusive or not.",
"We study this problem on English.",
"However, many of our features should also be applicable to other languages.",
"Lexical knowledge for the detection of abusive language has only received little attention in previous work.",
"Most approaches consider it as one feature among many.",
"Very often existing word lists from the web are employed (Xiang et al., 2012; Burnap and Williams, 2015; Nobata et al., 2016).",
"Their limited effectiveness may be due to the fact that they were not built for the task of abusive language detection.",
"Only the manually-compiled lexicon from Razavi et al. (2010) and the lexicon of hate verbs from Gitari et al. (2015) have been compiled for this specific task.",
"Since the latter lexicon is not publicly available we can only consider the former in our evaluation.",
"In both publications, very little is said on the creation of these resources.",
"6 https://github.com/miwieg/naacl2018",
"words) work well and word lists are less important.",
"There have been investigations examining features on various datasets (Nobata et al., 2016; Samghabadi et al., 2017), however, these studies always trained and tested on the same domain.",
"We show that a lexicon-based approach is effective in cross-domain classification.",
"For a more detailed overview on previous work on the detection of abusive language in general, we refer the reader to Schmidt and Wiegand (2017).",
"Base Lexicon.",
"Our base lexicon exclusively comprises negative polar expressions.",
"It is a small set which we have annotated via crowdsourcing .",
"We consider abusive words to be a proper subset of negative polar expressions.",
"By just focusing on these types of words, we are more likely to obtain a significant amount of abusive words than just considering a sample of arbitrary words.",
"This lexicon will be used as a gold standard for calibrating features of a classifier.",
"That classifier will be run on a large set of unlabeled negative polar expressions to produce our expanded lexicon (5).",
"We sampled 500 negative nouns, verbs and adjectives each from the Subjectivity Lexicon (Wil-son et al., 2005).",
"We chose that lexicon since we have extra information available for its entries that we want to examine, namely polar intensity (4.1.1) and sentiment views (4.1.2).",
"However, since we noted that the Subjectivity Lexicon misses some prototypical abusive words (e.g. nigger , slut , cunt ) we added another 10% (i.e. 150 words) which are abusive words frequently occurring in the word lists mentioned in Schmidt and Wiegand (2017).",
"Each of the negative polar expressions was judged by 5 annotators from the crowdsourcing platform ProlificAcademic .",
"7 Each annotator had to be a native speaker of English and possess a task approval rate of at least 90%.",
"For our base lexicon (Table 1), we considered a binary word categorization: abusive or non-abusive .",
"A word was only classified abusive if at least 4 out of the 5 raters judged the word to be abusive.",
"This threshold should prevent many ambiguous words from being classified as abusive, a general problem of existing resources (Davidson et al., 2017).",
"unlabeled corpora (Table 2).",
"The two larger corpora, the Amazon Review Corpus AMZ (Jindal and Liu, 2008) and the Web As Corpus WAC (Baroni et al., 2009), are used for inducing word embeddings (4.2).",
"AMZ and the smallest corpus, rateitall.com RIA 8 , are used for computing polar word intensity (4.1.1) from star ratings.",
"In the following, we describe the two types of features of our feature-based approach: novel linguistic features and generic word embeddings.",
"They will be examined against some baselines on our base lexicon.",
"As a classifier we use an SVM as implemented in SVM light (Joachims, 1999).",
"We chose that classifier since it is most commonly used for the detection of abusive language (Schmidt and Wiegand, 2017).",
"For all classifiers in this paper, the supplementary material 6 contains information regarding (hyper)parameter settings.",
"high polar intensity.",
"We inspect 3 different types.",
"Binary Intensity (INT bin ).",
"Our first feature is a simple binary intensity feature we obtain from the Subjectivity Lexicon.",
"In that resource, each entry is categorized as either a weak polar expression (e.g. dirty ) or a strong polar expression (e.g. filthy ).",
"Table 3 (left half), which shows the distribution of intensity on the intersection of our base lexicon and the Subjectivity Lexicon, confirms that abusive words are rarely weak polar expressions and more frequently strong polar expressions.",
"Fine-grained Intensity (INT fine ).",
"We also investigate a more fine-grained feature which assigns a real-valued intensity score to polar expressions.",
"It is computed by leveraging the star-rating assigned to the reviews comprising the AMZ corpus (Table 2), a large publicly available review 8 This is a crawl from the review website www.",
"corpus.",
"A review is awarded between 1 and 5 stars where 1 is the most negative score.",
"We infer the polar intensity of a word by the distribution of star-ratings associated with the reviews in which it occurs.",
"We assume negative polar expressions with a very high polar intensity to occur significantly more often in reviews assigned few stars (i.e. 1 or 2).",
"Ruppenhofer et al. (2014) established that the most effective method to derive such polar intensity is by ranking words by their weighted mean of star ratings (Rill et al., 2012).",
"All words of our base lexicon are ranked according to that score.",
"As a feature we use the rank of a word.",
"Intensity Directed towards Persons (INT person ).",
"Not all negative polar expressions with a high intensity are equally likely to be abusive.",
"The high intensity expressions should also be words typically directed towards persons.",
"Most polar statements in AMZ, however, are directed towards a movie, book or some electronic product.",
"In order to extract negative polar intensity directed towards persons, we replace the AMZ corpus with the RIA corpus (Table 2).",
"RIA contains reviews on arbitrary entities rather than just commercial products as in the case of AMZ.",
"Each review has a category label (e.g. computer , person , travel ) that very easily allows us to extract from RIA just those reviews that concern persons.",
"Table 4 compares a typical 1-star review from AMZ with one from RIA.",
"We consider the RIA-review an abusive comment.",
"It contains many words predictive of abusive language (e.g. self-absorbed , loser , arrogant or loud-mouthed ).",
"Wiegand et al. (2016b) define sentiment views as the perspective of the opinion holder of polar expressions.",
"They distinguish between expressions conveying the view of the implicit speaker of the utterance typically referred to as speaker views (e.g. cheating in (4); ugly and stinks in (5)), and expressions conveying the view of event participants typically referred to as actor views (e.g. disappointed and horrified in (6); protested in (7)).",
"WAC liar (19), coward (7), name (6), idiot (6), hero (5), horse (5), saint (5), fool (5), snob (4), genius (4) Twitter bitch (1534), hoe (432), liar (317), cunt (274), whore (254), pussy (228), nigger (226), loser (217), faggot (217), slut (197) Table 5: Comparison of the 10 most frequent pattern matches ( numbers in brackets indicate frequency ).",
"Wiegand et al. (2016b) provided sentiment-view annotations for the entries of the Subjectivity Lexicon.",
"(4) Peter is always cheating speaker view .",
"(holder: speaker) (5) Mary is an ugly speaker view girl that stinks speaker view .",
"(holder: speaker) (6) [ Peter ] holder was disappointed actor view and horrified actor view at the same time.",
"(7) [ The public ] holder protested actor view against that law.",
"Sentiment views have been used for improving the extraction of opinion holders and targets (Deng and Wiebe, 2016; Wiegand et al., 2016a).",
"In this paper we show that they also have relevance for the detection of abusive words.",
"Among actor-view words, there is a much lower proportion of abusive words than among speaker-view words (right half of Table 3).",
"This can be explained by the fact that verbal abuse usually originates from the speaker of an utterance rather than some other discourse entity.",
"We use sentiment-view information as a binary feature.",
"We also examine whether knowledge of emotion categories associated with words is helpful.",
"Potentially negative emotions, such as disgust or anger , should correlate with abusive words.",
"We use the NRC lexicon (Mohammad and Turney, 2013) and employ the categories associated with the words contained in that resource as a feature.",
"Noun Pattern (PAT noun ).",
"We found that the noun pattern (8) can be used to extract abusive nouns.",
"Since this pattern is very sparse even on our largest corpus (i.e. WAC), we also run our pattern as a query on Twitter and extracted all matching tweets coming in a time period of 14 days.",
"(We observed that by then we had reached a saturation point.) (8) pattern : called { me | him | her } a(n) < noun > (9) pattern match example : He called me a bitch .",
"Table 5 compares the most frequent matches for that pattern.",
"Our pattern matches much more frequently on Twitter than on WAC.",
"The quality of the matches on Twitter is also much better than on WAC, where we still find many false positives (e.g. name or saint ).",
"We assume that tweets, in general, are much more negative in tone than arbitrary web documents (as represented by WAC) which could explain the fewer false positives on Twitter.",
"Note that the ranking from Twitter is not restricted to just prototypical abusive words (as Table 5 might suggest).",
"The entire ranking also contains many less common words, such as weaboo , dudebro or butterface .",
"The frequency ranks of the nouns extracted from Twitter are used as a feature.",
"Adjective Pattern (PAT adj ).",
"Abusive adjectives often modify an abusive noun as in brainless idiot , smarmy liar or gormless twat .",
"Therefore, we mined Twitter for adjectives modifying mentions of our extracted nouns (PAT noun ).",
"(We were not able to find a construction identifying abusive verbs, so our output from PAT includes no verbs.) 4.1.5 WordNet (WN) and Wiktionary (WK) We compare WordNet (Miller et al., 1990) and Wiktionary 4 as two general-purpose lexical resources.",
"Unlike WordNet, Wiktionary is produced collaboratively by volunteers rather than linguistic experts.",
"It contains more abusive words from our base lexicon, i.e. 97% (WK) vs. 87% (WN).",
"A common way to harness a general-purpose lexicon for induction tasks in sentiment analysis is by using its glosses (Choi and Wiebe, 2014; Kang et al., 2014).",
"Assuming that the explanatory texts 1049 of glosses are similar among abusive words, we treat glosses as a bag-of-words feature.",
"We also exploit information on word usage .",
"Many abusive words are marked with tags such as pejorative , derogatory or vulgar .",
"Both WordNet and Wiktionary contain such information.",
"However, in Wiktionary more than 6 times as many of our entries include a tag compared to WordNet.",
"In order to incorporate a semantic representation more general than individual words, we employ supersenses .",
"Supersenses are only contained in WordNet.",
"They represent a set of 45 classes into which entries are categorized.",
"They have been found effective for sentiment analysis (Flekova and Gurevych, 2016).",
"Some categories correlate with abusive words.",
"For example, 76% of the words of our base lexicon that belong to the supersense person (e.g. loser , idiot ) are abusive words.",
"FrameNet (Baker et al., 1998) is a semantic resource which provides over 1200 semantic frames that comprise words with similar semantic behaviour.",
"We use the frame-memberships of a word as features, expecting that abusive and nonabusive words occur in separate frames.",
"We induce word embeddings from the two largest corpora, i.e. AMZ and WAC (Table",
"2) using Word2Vec (Mikolov et al., 2013) in default config-uration (i.e. 200 dimensions; cbow).",
"The best performance was obtained by concatenating for each word the vectors induced from the two corpora.",
"9 4.3 Baselines to Feature-based Approach In addition to a majority-class classifier we consider the following baselines: Weak Supervision (WSUP).",
"With this baseline we want to build a lightweight classifier that does not require proper labeled training data.",
"It is inspired by previous induction approaches for sentiment lexicons, such as Hatzivassiloglou and McK-eown (1997) or Velikovich et al. (2010) which heuristically label some seed instances and then apply graph-based propagation to label the remaining words of a dataset.",
"On the basis of word embeddings (4.2), we build a word-similarity graph, where the nodes represent our negative polar expressions and each edge denotes the seman-9 We also ran experiments with pretrained embeddings from GoogleNews but they did not improve classification.",
"tic similarity between two arbitrary words.",
"We compute it by the cosine of their word-embedding vectors.",
"The output of PAT from Twitter (4.1.4) is considered as positive class seed instances.",
"We chose PAT since it is an effective feature that does not depend on a lexical resource.",
"As negative class seeds, we use the most frequent words in the WAC corpus (Table 2).",
"Our rationale is that high-frequency words are unlikely to be abusive.",
"We chose WAC instead of Twitter since the evidence of PAT (Table",
"5) suggested less abusive language in that corpus.",
"This word-similarity graph is illustrated in Figure 1.",
"In order to propagate the labels to the unlabeled words from the seeds, we use the Adsorption algorithm (Talukdar et al., 2008).",
"Using Labeled Microposts (MICR).",
"With our last baseline we examine in how far we can detect abusive words by only using information from labeled microposts rather than labeled words.",
"These experiments are driven by the fact that labeled microposts already exist.",
"We consider two methods using the largest dataset comprising manually labeled microposts, Wulczyn (Table 8).",
"The class labels of the microposts and our base lexicon (3) are the same.",
"Our aim is to produce a ranking of words where the high ranks represent words more likely to be abusive.",
"Since we want to produce a strong baseline, we consider the best possible cut-off rank ( see supplementary material 6 ).",
"Every word higher than this rank is considered abusive and all other words not abusive.",
"The first method MICR:pmi ranks the words of our base lexicon by their Pointwise Mutual Information with the class label abusive that is assigned to microposts.",
"To be even more competitive, we introduce a second method MICR:proj that learns a projection of embeddings.",
"MICR:proj has the advantage over MICR:pmi that it does not only rank words observed in the labeled microposts but all words represented by embeddings.",
"Since our embeddings (4.2) are induced on the combination of AMZ and WAC corpora, which together are about 360 times the size of the Wulczyn dataset, MICR:proj is likely to cover more abusive words.",
"Let M = [ w 1 ,. . . , w n ] denote a labeled micropost of n words.",
"Each column w { 0 , 1 } v of M represents a word in a one-hot form.",
"Our aim is learning a one-dimensional projection S E where E R e v represents our unsupervised embeddings of dimensionality e over the vocabulary size v (4.2) and S R 1 e represents the learnt 1050 Figure 1: Illustration of word-similarity graph as used for weakly-supervised baseline (WSUP); seeds for abusive words (e.g. bitch ) are obtained by the output of feature PAT (4.1.4); seeds for non-abusive words (e.g. disagree ) are high-frequency negative polar expressions.",
"projection matrix.",
"We compute a projected micropost h = S E M which is an n -dimensional vector.",
"Each component represents a word from the micropost.",
"The value represents the predictability of the word towards being abusive.",
"We then apply a bag-of-words assumption to use that projected micropost to predict the binary class label y : p ( y | M ) exp ( h 1 ) where 1 { 1 } n .",
"This model is a feed-forward network trained using Stochastic Gradient Descent (Rumelhart et al., 1986).",
"On the basis of the projected embeddings we rank our negative polar expressions.",
"We conduct experiments on our base lexicon (Ta-ble",
"1) and report macro-average precision, recall and f-score.",
"SVMs are evaluated on a 10-fold crossvalidation.",
"Table 6 displays the performance of the different classifiers.",
"The least effective information source are labeled microposts (MICR), though, as expected, the projected embeddings (MICR:proj) outperform PMI.",
"The performance of weak supervision (WSUP) outperforms MICR.",
"Among the SVM configurations, embeddings are already effective.",
"The linguistic features outperform all other methods.",
"The best classifier is an SVM trained on embeddings, linguistic features and the output of WSUP as a further feature.",
"10 Table 7 shows the performance of SVMs using different linguistic features (4.1).",
"Among the three intensity types, the most effective one is the person-based intensity (INT person ).",
"However, it can be effectively combined with the remaining types.",
"Among the lexical sentiment resources used (i.e. NRC, INT bin and VIEW), VIEW is most effective.",
"Their combination also results in an improvement.",
"The surface patterns (PAT) are surprisingly predictive.",
"Of the general-purpose lexical resources (i.e. WN, WK and FN), WN and WK are both very effective resources.",
"Glosses from WN are the strongest individual feature.",
"Combining WK, WN and FN results in significant improvement.",
"The best feature set combines all features.",
"Our results also suggest that for languages other than English, there are some very strong features, such as PAT, WK or embeddings, that could be easily adopted since they do not depend on a resource which is only available in English.",
"We produce a large feature-based lexicon of abusive words by classifying all (unlabeled) negative polar expressions from Wiktionary.",
"We chose Wiktionary since our previous experiments indicated a high coverage of abusive words on that resource (4.1.5).",
"The negative polar expressions 10 We did not include MICR among the further features, as they are trained on the labeled microposts that we also use as test data in the extrinsic evaluation (6).",
"are identified by applying to the vocabulary of Wiktionary an SVM trained on the words from the Subjectivity Lexicon with their respective polarities.",
"As features, we use word embeddings (4.2).",
"In order to produce the feature-based lexicon of abusive words another SVM is trained on our base lexicon (Table",
"1) using the best feature set from Table 6.",
"With 2989 abusive words, our expanded lexicon is 5 times as large as the base lexicon.",
"In order to measure the impact of our proposed features on the quality of the resulting lexicon, we devised an alternative expansion which just employs word embeddings.",
"For this, we used SentProp , the most effective induction method from the SocialSent package (Hamilton et al., 2016).",
"11 6 Cross-domain Classification 6.1 Motivation and Set Up We now apply our expanded lexicon (5) to the classification of abusive microposts, i.e. we classify entire comments rather than words out of context.",
"Table 8 shows the datasets of labeled microposts that we use.",
"The difference between these datasets is the source from which they originate.",
"Consequently, different topics are represented in the different datasets.",
"Still, we find similar types 11 Since SentProp produces a ranking rather than a classification, we consider 2989 as a cut-off value to separate the instances into 2 classes.",
"This corresponds to the size of abusive words predicted by our feature-based lexicon (Table 9).",
"of abusive language (e.g. racism , sexism ).",
"For example, both (10)-(11) from Waseem and (12) from Wulczyn are sexist comments 12 but (10)-(11) discuss the role of women in sports while (12) addresses women's hygiene in Slavic countries.",
"(10) from Waseem dataset: maybe that's where they should focus?",
"Less cunts on football .",
"(11) from Waseem dataset: I would rather brush my teeth with sandpaper then watch football with a girl!!",
"(12) from Wulczyn dataset: slavic women don't like to wash ...",
"Their pussy stinks.",
"Since our aim is to produce the best possible cross-domain classifier, all classifiers are trained on one dataset and tested on another .",
"This is a real-life scenario.",
"Often when a classifier for abusive microposts is needed, sufficient labeled data is only available for other text domains.",
"Having different topics in training and test data makes cross-domain classification difficult.",
"For example, since a large proportion of sexist comments in Waseem relate to sports, traditional supervised classifiers (using bag of words or word embeddings) will learn correlations between words of that domain with the class labels.",
"For instance, the domain-specific word football occurs frequently in Waseem (i.e. 90 occurrences) with a strong correlation towards abusive language (precision: 95%).",
"Other words, such as sports and commentator , display a similar behaviour.",
"A supervised classifier will assign a high weight to such words.",
"While such domain-specific words may aid in-domain classification and enable a correct classification of microposts, such as (11), we will show that it has a detrimental effect on cross-domain classification.",
"We claim that the predictive words that abusive comments share across different domains are abusive words, just of the sort that our expanded lexicon contains, e.g. cunts in (10) and pussy in (12).",
"Our proposed classifier for labeling microposts is an SVM trained on features derived from our expanded lexicon (5).",
"We do not use a binary feature encoding the presence of abusive words.",
"Instead, we rank all abusive words of our lexicon 12 (12) is also a racist comment.",
"according to the confidence score of the classifier it produced and use their ranks as features.",
"As baseline classifiers we consider publicly available word lists (Table 9).",
"We include the resource from Razavi et al. (2010), henceforth referred to as Ottawa , the entries of Hatebase 13 , which has been used in Nobata et al. (2016) and Davidson et al. (2017), and the derogatory words from Wiktionary ( Derogatory ) 14 .",
"15 Finally, we also include our base lexicon (Table",
"1) in order to evaluate the expansion process of our two expanded lexicons (5).",
"For all lists, we train on a single feature indicating the frequency of abusive words in a micropost to be classified.",
"Ottawa also contains weights assigned to abusive words.",
"We weight the observed frequency with these weights.",
"We further evaluate 3 classifiers representing the state of the art of in-domain evaluations: FastText (Joulin et al., 2017), Gated Recurrent Units Recurrent Neural Networks RNN , which have been reported to work best on English microposts (Pavlopoulos et al., 2017), and Yahoo , an SVM 13 www.hatebase.org 14 https://en.wiktionary.org/wiki/ Category:English_derogatory_terms 15 There are also similar but smaller lists in Wiktionary, e.g. offensive terms .",
"trained on the sophisticated feature set proposed by Nobata et al. (2016).",
"Next to character and token n-grams, Yahoo includes word and comment embeddings, syntactic features and some linguistic diagnostics.",
"In Table 10, we list the performance of the 3 state-of-the-art classifiers along with our proposed classifier using our expanded lexicon on in-domain 10-fold crossvalidation.",
"Due to space limitations, we cannot list the other classifiers.",
"We only provide this list to demonstrate the strength of the state-of-the-art classifiers on in-domain evaluation.",
"On this setting, a lexicon-based approach is not competitive since domain-specific information is not included.",
"However, as we show in Table 11, for cross-domain classification, it is exactly that property that ensures that our feature-based lexicon provides best performance.",
"Compared to the in-domain setting, FastText , RNN and Yahoo display a huge drop in performance.",
"They all suffer from overfitting to domain-specific knowledge.",
"Of all lexicons, our proposed feature-based lexicon performs best.",
"We were surprised by the poor performance of Hatebase but attribute this to its small size and the high amount of ambiguous (and debatable) entries, such as Charlie , pancake , Pepsi .",
"Although our feature-based lexicon is the largest of all tested (i.e. 2989 words), our experiments do not support the general rule that larger lexicons always outperform smaller ones.",
"For instance, already our base lexicon with 551 abusive words is much better than the lexicons Derogatory or Ottawa which are about 3 times larger (Table 9).",
"Each word in our base lexicon was only included if 4 out of 5 raters judged it to be abusive.",
"This ensured a fairly reliable annotation.",
"In contrast, Derogatory and Ottawa suffer from many ambiguous entries (e.g. bag , Tim , yellow ).",
"The high precision of our base lexicon is what ensures that our expanded lexicon does not include much noise.",
"Another shortcoming of most of the other existing lexicons is that they overwhelmingly focus on nouns.",
"While nouns undoubtedly represent the most frequent abusive terms, there is, however, a substantial number of abusive words that belong to other parts of speech, particularly adjectives (e.g. vile , sneaky , slimy , moronic ).",
"In our base lexicon, more than 30% of the abusive words are of that part of speech.",
"Our expanded lexicon, 1053 SVM datasets baseline lexicons newly created lexicons test training majority FastText RNN Yahoo Hatebase Derogat.",
"which roughly preserves that ratio, includes about 800 adjectives in total.",
"Since abusive adjectives often co-occur with abusive nouns (4.1.4), they may compensate for abusive nouns that are missing from the lexicon.",
"Such unknown nouns often occur when authors of microposts try to obfuscate their abusive language, e.g. sneaky assh0le , slimy b*st*rd .",
"Interestingly, the modifying adjectives are not obfuscated, probably because they are considered slightly less offensive in tone.",
"Given that among the newly created lexicons our feature-based expanded lexicon performs best, we conclude that the expansion is effective (since we improve over the base lexicon), and the features are more effective than a generic induction approach (i.e. SentProp ).",
"The results in Table 11 also show that the cross-domain performance of our proposed feature-based lexicon is lower on the two datasets Warner and Waseem .",
"We observed that while on the other two datasets almost all abusive microposts can be considered explicitly abusive posts, i.e. they contain abusive words, a large proportion of microposts labeled abusive in Warner and Waseem are implicitly abusive (Waseem et al., 2017), i.e. the abuse is conveyed by other means, such as sarcasm or metaphorical language (11).",
"We asked raters from Prolific Academic to identify explicitly abusive microposts by marking abusive words in those posts.",
"The annotators were not given access to any lexicon of abusive words.",
"We then conducted cross-domain classification on those subsets where the abusive instances were only those rated as explicit.",
"The results are displayed in Table 12.",
"The table shows that our feature-based lexicon is much better on this subset, while the most sophisticated supervised classifier ( Yahoo ) still performs worse.",
"From that we conclude that only explicitly abusive microposts can be reliably detected in cross-domain classification.",
"We examined the task of inducing a lexicon of abusive words.",
"We presented novel features including surface patterns, sentiment views, polar intensity and general purpose lexical resources, particularly Wiktionary.",
"The information we thus acquire cannot be learnt all that effectively from labeled microposts, not even with a projection-based classifier.",
"While a lexicon of abusive words can only aid the detection of explicit abuse, its effectiveness was demonstrated on the novel task of cross-domain detection of abusive microposts, where our domain-independent lexicon outperforms previous supervised classifiers which suffer from overfitting to domain-specific features.",
"The authors would like to thank Thomas Kleinbauer, Katja Markert and Ines Rehbein for feedback on earlier drafts of this paper.",
"We are also grateful to William Warner and Diana Inkpen for granting us access to their data on abusive language detection.",
"Special thanks go to Stefan Kazalski for crawling the rateitall -website.",
"We also give thanks to John Pavlopoulos for helping us reconstructing the configurations of his RNN.",
"The authors were partially supported by the German Research Foundation (DFG) under grants RU 1873/2-1 and WI 4204/2-1."
] |
[
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"For many structured learning tasks, the data annotation process is complex and costly.",
"Existing annotation schemes usually aim at acquiring completely annotated structures, under the common perception that partial structures are of low quality and could hurt the learning process.",
"This paper questions this common perception, motivated by the fact that structures consist of interdependent sets of variables.",
"Thus, given a fixed budget, partly annotating each structure may provide the same level of supervision, while allowing for more structures to be annotated.",
"We provide an information theoretic formulation for this perspective and use it, in the context of three diverse structured learning tasks, to show that learning from partial structures can sometimes outperform learning from complete ones.",
"Our findings may provide important insights into structured data annotation schemes and could support progress in learning protocols for structured tasks.",
"Many machine learning tasks require structured outputs, and the goal is to assign values to a set of variables coherently.",
"Specifically, the variables in a structure need to satisfy some global properties required by the task.",
"An important implication is that once some variables are determined, the values taken by other variables are constrained.",
"For instance, in the temporal relation extraction problem in Fig. 1a, if met happened before leaving and leaving happened on Thursday , then we know that met must either be before Thursday (met (1)) or has to happen on Thursday , too (met (2)) (Ning et al., 2018a).",
"Similarly, in the semantic frame of the predicate gave (Kingsbury and Palmer, 2002) in Fig. 1b, if the boy is ARG 0 (short for argument 0), then it rules out the possibility of a frog met (2) leaving met (1) Thursday Time I met with him before leaving for Paris on Thursday.",
"or to the girl taking the same role.",
"Figure 1c further shows an example of part-labeling of images (Choi et al., 2018); given the position of FOREHEAD and LEFT EYE of the cat in the picture, we roughly know that its NECK should be somewhere in the red solid box, while the blue dashed box is likely to be wrong.",
"Data annotation for these structured tasks is complex and costly, thus requiring one to make the most of a given budget.",
"This issue has been investigated for decades from the perspective of active learning for classification tasks (Angluin, 1988; Atlas et al., 1990; Lewis and Gale, 1994) and for structured tasks (Roth and Small, 2006a,b, 2008; Hu et al., 2019).",
"While active learning aims at selecting the next structure to label, we try to investigate, from a different perspective, whether we should annotate each structure completely or partially.",
"Conventional annotation schemes typically require complete structures, under the common perception that partial annotation could adversely affect the performance of the learning algorithm.",
"But note that partial annotations will allow for more structures to be annotated (see Fig. 2).",
"Therefore, a fair comparison should be done while maintaining a fixed annotation budget, which was not done before.",
"Moreover, even if partial annotation leads to comparable learning performance to conventional complete schemes, it provides more flexibility in data annotation.",
"Another potential benefit of partial annotation is that it imposes constraints on the remaining parts of a structure.",
"As illustrated by Fig. 1, with partial annotations, we already have some knowledge about the unannotated parts.",
"Therefore, further annotations of these variables may use the available budget less efficiently; this effect was first discussed in Ning et al. (2018c).",
"Motivated by the observations in Figs.",
"1-2, we think it is important to study partialness systematically, before we hastily assume that completeness should always be favored in data collection.",
"To study whether the above benefits of partialness can offset its weakness for learning, our first contribution is the proposal of early stopping partial annotation (ESPA) scheme, which randomly picks up instances to label in the beginning, and stops before a structure is completed.",
"We do not claim that ESPA should always be preferred; instead, it serves as an alternative to conventional, complete annotation schemes that we should keep in mind, because, as we show later, it can be comparable to (and sometimes even better than) complete annotation schemes.",
"ESPA is straightforward to implement even in crowdsourcing; instances to annotate can be selected offline and distributed to crowdsourcers; this can be contrasted with the difficulties of implementing active learning protocols in these settings (Ambati et al., 2010; Laws et al., 2011).",
"We think that ESPA is a good representative for a systematic study of partialness.",
"Our second contribution is the development of an information theoretic formulation to explain the benefit of ESPA (Sec. 2), which we further demonstrate via three structured learning tasks in Sec. 4: temporal relation (TempRel) extraction (UzZaman et al., 2013), semantic role classification (SRC), 1 and shallow parsing (Tjong Kim Sang and Buch-holz, 2000).",
"These tasks are chosen because they each represent a wide spectrum of structures that we will detail later.",
"As a byproduct , we extend constraint-driven learning (CoDL) (Chang et al., 2007) to cope with partially annotated structures (Sec. 3); we call the algorithm Structured Self-learning with Partial ANnotations (SSPAN) to distinguish it from CoDL.",
"2 We believe in the importance of work in this direction.",
"First, partialness is inevitable in practice , either by mistake or by choice, so our theoretical analysis can provide unique insight into understanding partialness.",
"Second, it opens up opportunities for new annotation schemes.",
"Instead of considering partial annotations as a compromise, we can in fact annotate partial data intentionally , allowing us to design favorable guidelines and collect more important annotations at a cheaper price.",
"Many recent datasets that were collected via crowdsourcing are already partial, and this paper provides some theoretical foundations for them.",
"Furthermore, the setting described here addresses natural scenarios where only partial, indirect supervision is available, as in Incidental Supervision 1 A subtask of semantic role labeling (SRL) (Palmer et al., 2010) that only classifies the role of an argument.",
"2 There has been many works on learning from partial annotations, which we review in Sec. 3. SSPAN is only an experimental choice in demonstrating ESPA.",
"Whether SSPAN is better than other algorithms is out of the scope here, and a better algorithm for ESPA will only strengthen the claims in this paper.",
"It is important to clarify that we assume uniform cost over individual annotations (that is, all edges in Fig. 2 cost equally), often the default setting in crowdsourcing.",
"We realize that the annotation dif-ficulty can vary a lot in practice, sometimes incurring different costs.",
"To address this issue, we randomly select instances to label so that on average, the cost is uniform.",
"We agree that, even with this randomness, there could still be situations where the assumption does not hold, but we leave it for future studies, possibly in the context of active learning schemes.",
"In this section, we study whether the effect demonstrated by the examples in Fig. 1 exists in general.",
"First, we formally define structure and annotation .",
"Definition 1. A structure of size d is a vector of random variables (RV) Y = [ Y 1 , . . . , Y d ] 2 C ( L d ) , where L = { ` 1 , . . . , ` |L| } is the label set for each variable and C ( L d ) L d represents the constraints imposed by this type of structure.",
"It is necessary to model a structure as a set of random variables because when it is not completely annotated, there is still uncertainty in the annotation assignment.",
"To study partial annotations, we introduce the following: Definition 2. A k -step annotation ( 0 k d ) is a vector of RVs A k = [ A k, 1 , . . . , A k,d ] 2 ( L [ u ) d where u is a special character for null, such that d X i =1 1 ( A k,i 6 = u ) = k, (1) P ( Y | A k = a k ) = P ( Y | Y j = a k,j , j 2 J ) , (2) where J is the set of indices that a k,j 6 = u . Eq. (1) means that, in total, k variables are already annotated at step k . Obviously, A 0 means that no variables are labeled, and A d means that all variables in Y are determined. A k is what we call a k -step ESPA, so hereafter we use k/d to represent annotation completeness. Eq. (2) assumes no annotation mistakes, so if the i -th variable is labeled, then Y i must be the same as A k,i . To measure the theoretical benefit of A k , we propose the following quantity I k = log | C ( L d ) | \u0000 E [log f ( a k )] (3) for k = 0 , . . . , d , where f ( a k ) = |{ y 2 C ( L d ) : P ( y | a k ) > 0 }| is the total number of structures in C ( L d ) that are still valid given A k = a k .",
"Since we assume that the labeled variables in A k are selected uniformly randomly, E [ ] is simply the average of log f ( a k ) .",
"When k = 0 , f ( a k ) C ( L d ) and I 0 0 ; as k increases, I k increases since the structure has more and more variables labeled; fi-nally, when k = d , the structure is fully determined and I d log | C ( L d ) | .",
"The first-order finite difference, I k \u0000 I k \u0000 1 , is the benefit brought by annotating an additional variable at step k ; if I k is concave (i.e., a decaying I k \u0000 I k \u0000 1 ), the benefit from a new annotation attenuates, suggesting the potential benefit of the ESPA strategy.",
"In an extreme case where the structure is so strong that it requires all individual variables to share the same label, then labeling any variable is sufficient for determining the entire structure.",
"Intuitively, we do not need to annotate more than one variable.",
"Our I k quantity can support this intuition: The structural constraint, C ( L d ) , contains only |L| elements: { [ ` i , ` i , . . . , ` i ] } |L| i =1 , so I 0 = 0 , and I 1 = = I d = log |L| .",
"Since I k does not in-crease at all when k > = 1 , we should adopt first-step annotation A 1 .",
"Another extreme case is that of a trivial structure that has no constraints (i.e., C ( Y d ) = Y d ).",
"The annotation of all variables are independent and we gain no advantage from skipping any variables.",
"This intuition can be supported by our I k analysis as well: Since I k = k log |L| , 8 k = 0 , 1 , . . . , d , I k is linear and all steps contribute equally to improving I k by log |L| ; therefore ESPA is not necessary.",
"Real-world structures are often not as trivial as the two extreme cases above, but I k can still serve as a guideline to help determine whether it is beneficial to use ESPA.",
"We next discuss three diverse types of structures and how to obtain I k for them.",
"Example 1. The ranking problem is an important machine learning task and often depends on pairwise comparisons, for which the label set is L = { <, > } .",
"For a ranking problem with n items, there are d = n ( n \u0000 1) / 2 pairwise comparisons in total.",
"Its structure is a chain following the transitivity constraints, i.e., if A < B and B < C , then A < C .",
"A k -step ESPA A k for a chain means that only k (out of d ) pairs are compared and labeled, resulting in a directed acyclic graph (DAG).",
"In this case, f ( a k ) is actually counting the number of linear extensions of the DAG, which is known to be #P-complete (Brightwell and Winkler, 1991), so we do not have a closed-form solution to I k .",
"In practice, however, we can use the Kahn's algorithm and backtracking to simulate with a relatively small n , as shown by Fig. 3, where n = 10 and I k was obtained through averaging 1000 random simulations.",
"I k is concave, as reflected by the downward shape of I k \u0000 I k \u0000 1 .",
"Therefore, new annotations are less and less efficient for the chain structure, suggesting the usage of ESPA.",
"Example 2. The general assignment problem requires assigning d agents to d 0 tasks such that the agent nodes and the task nodes form a bipartite graph (without loss of generality, assume d d 0 ).",
"That is, an agent can handle exactly one task, and each task can only be handled by at most one agent.",
"Then from the agents' point of view, the label set for each of them is L = { 1 , 2 , . . . , d 0 } , de-noting the task assigned to the agent.",
"that k agents are already assigned with tasks, and f ( a k ) is to count the valid assignments of the remaining tasks to the remaining d \u0000 k agents, to which we have closed-form solutions: f ( a k ) = ( d 0 \u0000 k )!",
"( d 0 \u0000 d )!",
", 8 a k .",
"According to Eq.",
"(3), I k = log d 0 !",
"( d 0 \u0000 k )! regardless of d or the distribution of A k , and is concave (Fig. 4 shows an example of it when d = 4 , d 0 = 10 ).",
"Example 3. Sequence tagging is an important NLP problem, where the tags of tokens are interdependent.",
"Take chunking as an example.",
"A basic scheme is for each token to choose from three labels, B(egin), I(nside), and O(utside), to represent text chunks in a sentence.",
"That is, L = { B, I, O } .",
"Obviously, O cannot be immediately followed by I. Let d be the number of tokens in a sentence.",
"A k -step ESPA A k for chunking means that k tokens are already labeled by B/I/O, and f ( a k ) counts the valid BIO sequences that do not violate those existing annotations.",
"Again, as far as we know, there is no closed-form solution to f ( a k ) and I k , but in practice, we can use dynamic programming to obtain f ( a k ) and then I k using Eq.",
"(3).",
"We set d = 10 and show I k \u0000 I k \u0000 1 for this task in Fig. 4, where we observe the same effect we see in previous examples: The benefit provided by labeling a new token in the structure attenuates.",
"Interestingly, based on Fig. 4, we find that the slope of I k \u0000 I k \u0000 1 may be a good measure of the tightness or strength of a structure .",
"When there is no structure at all, the curve is flat (black).",
"The BIO structure is intuitively simple, and it indeed has the flattest slope among the three structured tasks (purple).",
"When the structure is a chain, the level of uncertainty goes down rapidly with every single annotation (think of standard sorting al-gorithms); the constraint is intuitively strong and in Fig. 4, it indeed has a steep slope (blue).",
"Finally, we want to emphasize that the definition of I k in Eq.",
"(3) is in fact backed by information theory.",
"When we do not have prior information about Y , we can assume that Y follows a uniform distribution over C ( L d ) .",
"Then, I k is essentially the mutual information between structure Y and annotation A k , I ( Y ; A k ) : I ( Y ; A k ) = H ( Y ) \u0000 H ( Y | A k ) = log | C ( L d ) | \u0000 E [ H ( Y | A k = a k )] = log | C ( L d ) | \u0000 E [log f ( a k )] , where H ( ) is the entropy function.",
"This is an important discovery, since it points out a new way to view a structure and its annotations.",
"It may be useful for studying active learning methods for structured tasks, and other annotation phenomena such as noisy annotations.",
"The usage of mutual information also aligns well with the information bottleneck framework (Shamir et al., 2010; Shwartz-Ziv and Tishby, 2017; Yu and Principe, 2018), although a more recent paper challenges the interpretation of information bottleneck (Saxe et al., 2018).",
"So far, we have been advocating the ESPA strategy to maximize the information we can get from a fixed budget.",
"Since early stopping leads to partial annotations, one missing component before we can benefit from it is an approach to learning from partial structures.",
"In this study, we assume the existence of a relatively small but complete dataset that can provide a good initialization for learning from a partial dataset, which is very similar to semi-supervised learning (SSL).",
"SSL, in its most standard form, studies the combined usage of a labeled set T = { ( x i , y i ) } i and an unlabeled set U = { x j } j , where the x 's are instances and y 's are the corresponding labels.",
"SSL gains information about p ( x ) through U , which may improve the estimation of p ( y | x ) .",
"Spe-cific algorithms range from self-training (Scud-der, 1965; Yarowsky, 1995), co-training (Blum and Mitchell, 1998), generative models (Nigam et al., 2000), to transductive SVM (Joachims, 1999) etc., among which one of the most basic algorithms is Expectation-Maximization (EM) (Dempster et al., 1977).",
"By treating them as hidden variables, EM marginalizes out the missing labels of U via expectation (i.e., soft EM) or maximization (i.e., hard EM).",
"For structured ML tasks, soft and hard EMs turn into posterior regularization (PR) (Ganchev et al., 2010) and constraint-driven learning (CoDL) (Chang et al., 2007), respectively.",
"Unlike unlabeled data, the partially annotated structures caused by early stopping urge us to gain information not only about p ( x ) , but also from their labeled parts.",
"There have been many existing work along this line (Tsuboi et al., 2008; Fernandes and Brefeld, 2011; Hovy and Hovy, 2012; Lou and Hamprecht, 2012), but in this paper, we decide to extend CoDL to cope with partial annotations due to two reasons.",
"First, CoDL, which itself can be viewed as an extension of self-training to structured learning, is a wrapper algorithm having wide applications.",
"Second, as its name suggests, CoDL learns from U by guidance of constraints, so partial annotations in U are technically easy to be added as extra equality constraints.",
"Algorithm 1 describes our Structured Self-learning with Partial ANnotations (SSPAN) algorithm that learns a model H .",
"The same as CoDL, SSPAN is a wrapper algorithm requiring two components: LEARN and INFERENCE .",
"LEARN attempts to estimate the local decision function for each individual instance regardless of the global constraints, while INFERENCE takes those local decisions and performs a global inference.",
"Lines 3-9 are the procedure of self-training, which iteratively completes the missing annotations in P and learns from both T and the completed version of P (i.e., P ).",
"3 Line 6 requires that the inference follows the structural constraints inherently in the task, turning the algorithm into CoDL; Line 7 enforces those partial annotations in a i , further turning it into SSPAN.",
"In practice, INFERENCE can be realized by the Viterbi or beam search algorithm in sequence tagging, or more generally, by Integer Linear Programming (ILP) 3 Line 9 can be interpreted in different ways, either as T [ P (adopted in this work) or as a weighted combination of LEARN ( T ) and LEARN ( P ) (adopted by (Chang et al., 2007)).",
"(Punyakanok et al., 2005); either way, the partial constraints of Line 7 can be easily incorporated.",
"In Sec. 2, we argued from an information theoretic view that ESPA is beneficial for structured tasks if we have a fixed annotation resource.",
"We then proposed SSPAN in Sec. 3 to learn from the resulting partial structures.",
"However , on one hand, there is still a gap between the I k analysis and the actual system performance; on the other hand, whether the benefit can be realized in practice also depends on how effective the algorithm exploits partial annotations.",
"Therefore, it remains to be seen how ESPA works in practice.",
"Here we use three NLP tasks: temporal relation (TempRel) extraction, semantic role classification (SRC), and shallow parsing, analogous to the chain, assignment, and BIO structures, respectively.",
"For all tasks, we compare the following two schemes in Fig. 5, where we use graph structures for demonstration.",
"Initially, we have a relatively small but complete dataset T 0 , an unannotated dataset U 0 , and some budget to annotate U 0 .",
"The conventional scheme I, also our baseline here, is to annotate each structure completely before randomly picking up the next one.",
"Due to the limited budget, some U 0 remain untouched (denoted by U ).",
"The proposed scheme II adopts ESPA so that all structures at hand are annotated but only partially.",
"For fair comparisons, we use CoDL to incorporate U into scheme I as well.",
"Finally, the systems trained on the dataset from I/II via CoDL/SSPAN are evaluated on unseen but complete testset T test .",
"Note that because ESPA is a new annotation scheme, there exists no dataset collected this way.",
"We use existing complete datasets and randomly throw out some annotations to mimic ESPA in the following.",
"Due to the randomness in selecting which structures/instances to keep in scheme I/II, we repeat the whole process multiple times and report the mean F 1 .",
"The budget, defined as the total number of individual instances that can be annotated, ranges from 10% to 100% with a stepsize of 10%, where x% means x% of all instances in U 0 can be annotated.",
"Temporal relations (TempRel) are a type of important relations representing the temporal ordering of events described by natural language text.",
"That is to answer questions like which event happens earlier or later in time (see Fig. 1a).",
"Since time is physically one-dimensional, if A is before B and B is also before C , then A must be before C .",
"In practice, the label set for TempRels can be more complex, e.g., with labels such as SIMULTANEOUS and VAGUE , but the structure can still be represented by transitivity constraints (see Table 1 of (Ning et al., 2018a)), which can be viewed as an analogy of the chain structure in Example 1. To avoid missing relations, annotators are required to exhaustively label every pair of events in a document (i.e., the complete annotation scheme), so it is necessary to study ESPA in this context.",
"Here we adopt the MATRES dataset (Ning et al., 2018b) for its better inter-annotator agreement and relatively large size.",
"Specifically, we use 35 documents as T 0 (the TimeBank-Dense section, 4 147 documents as U 0 (the TimeBank section minus those documents in T 0 ), and the Platinum section (a benchmark testset of 20 documents with 1K TempRels) as T test .",
"Note that both schemes I and II are mimicked by down-sampling the original annotations in MATRES, where the budget is defined as the total number of TempRels that are kept.",
"Following CogComp-Time (Ning et al., 2018d), we choose the same features and sparse-averaged perceptron algorithm as the LEARN component and ILP as INFERENCE for SSPAN.",
"Semantic role labeling (SRL) is to represent the semantic meanings of language and answer questions like Who did What to Whom and When, Where, How (Palmer et al., 2010).",
"Semantic Role Classification (SRC) is a subtask of SRL, which assumes gold predicates and argument chunks and only classifies the semantic role of each argument.",
"We use the Verb SRL dataset provided by the CoNLL-2005 shared task (Carreras and M`arquez, 2005), where the semantic roles include numbered arguments, e.g., ARG 0 and ARG 1, and argument modifiers, e.g., location (AM-LOC ), temporal (AM-TMP ), and manner (AM-MNR ) (see Prop-Bank (Kingsbury and Palmer, 2002)).",
"The structural constraints for SRC is that each argument can be assigned to exactly one semantic role, and the same role cannot appear twice for a single verb, so SRC is an assignment problem as in Example 2. Specifically, we use the Wall Street Journal (WSJ) part of Penn TreeBank III (Marcus et al., 1993).",
"We randomly select 700 sentences from the Sec. 24 of WSJ, among which 100 sentences as T 0 and 600 sentences as U 0 .",
"Our T test is 5700 sentences (about 40K arguments) from Secs.",
"00, 01, 23.",
"The budget here is defined as the total num-4 The original TimeBank-Dense section contains 36 documents, but in collecting MATRES, one of the documents was filtered out because it contained no TempRels between main-axis events.",
"ber of the arguments.",
"We adopt the SRL system in CogCompNLP (Khashabi et al., 2018) and uses the sparse averaged perceptron as LEARN and ILP as INFERENCE .",
"Shallow parsing, also referred as chunking , is a fundamental NLP task to identify constituents in a sentence, such as noun phrases (NP), verb phrases (VP), and adjective phrases (ADJP), which can be viewed as extending the standard BIO structure in Example 3 with different chunk types: B-NP, I-NP, B-VP, I-VP, B-ADJP, I-ADJP, . . . , O.",
"We use the chunking dataset provided by the CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000).",
"Specifically, we use 2K tokens' annotations as T 0 , 14K tokens as U 0 , and the benchmark testset (25K tokens) as T test .",
"The budget here is defined as the total number of tokens' BIO labels.",
"The algorithm we use here is the chun-ker provided in CogCompNLP, where the LEARN component is the sparse averaged perceptron and the INFERENCE is described in (Punyakanok and Roth, 2001).",
"We compare the F 1 performances of all three tasks in Fig. 6, averaged from 50 experiments with different randomizations.",
"As the budget increases, the system F 1 increases for both schemes I and II in all three tasks, which confirms the capability of the proposed SSPAN framework to learn from partial structures.",
"When the budget is 100% (i.e., the entire U 0 is annotated), schemes I and II have negligible differences; when the budget is not large enough to cover the entire U 0 , scheme II is consistently better than I in all tasks, which follows our expectations based on the I k analysis.",
"The strict improvement for all budget ratios indicates that the observation is definitely not by chance.",
"Figure 7 further compares the improvement from I to II across tasks.",
"When the budget goes down from 100%, the advantage of ESPA is more prominent; but when the budget is too low, the quality of P degrades and hurts the performance of SSPAN, leading to roughly hill-shaped curves in Fig. 7.",
"We have also conjectured based on Fig. 4 that the structure strength goes up from BIO chunks, to bipartite graphs, and to chains; interestingly, the improvement brought by ESPA is consistent with this order.",
"Admittedly, the improvement, albeit statistically significant, is small, but it does not diminish the contribution of this paper : Our goal is to remind people that the ESPA scheme (or more generally, partialness) is, at the least, comparable to (or sometimes even better than) complete annotation schemes.",
"Also, the comparison here is in fact unfair to the partial scheme II, because we assume equal cost for both schemes, although it often costs less in a partial scheme as a large problem is decomposed into smaller parts.",
"Therefore, the results shown here implies that the information theoretical benefit of partialness can possibly offset its disadvantages for learning.",
"In this paper, we investigate a less studied, yet important question for structured learning: Given a limited annotation budget (either in time or money), which strategy is better, completely annotating",
"annotating each structure until the budget runs out, or annotating more structures at the cost of leaving some of them partially annotated?",
"Neubig and Mori (2010) investigated this issue specifi-cally in annotating word boundaries and pronunciations for Japanese.",
"Instead of annotating full sentences, they proposed to annotate only some words in a sentence (i.e., partially) that can be chosen heuristically (e.g., skip those that we have seen or those low frequency words).",
"Conceptually, Neubig and Mori (2010) is an active learning work, with the understanding that if the order of annotation is deliberately designed, better learning can be achieved.",
"The current paper addresses the problem from a different angle: Even without active learning, can we still answer the question above?",
"The observation driving our questions is that when annotating a particular structure, the labels of the yet to be labeled variables may already be constrained by previous annotations and carry less information than those in a totally new structure.",
"Therefore, we systematically study the ESPA scheme stop annotating a given structure before it is completed and continue annotating another new structure.",
"An important notion is annotation cost .",
"Throughout the paper we have an ideal assumption that the cost is linear in the total number of annotations, but in practice the case can be more complicated.",
"First, the actual cost of each individual annotation may vary across different instances.",
"We try to eliminate this issue by enforcing random selection of annotation instances, rather than allowing the annotators to select arbitrarily by themselves.",
"This strategy may be useful in practice as well, to avoid people only annotating easy cases.",
"Second, even if we only require labeling partial structures, it is likely that the annotator still needs to comprehend the entire structure, incurring additional cost (usually in terms of time).",
"This issue, however, is not addressed in this paper.",
"Using this definition of cost, we provide a theoretical analysis for ESPA based on the mutual information between target structures and annotation processes.",
"We show that for structures like chains, bipartite graphs, and BIO chunks, the information brought by an extra annotation attenuates as the annotation of the structure is more complete, suggesting to stop early and move to a new structure (although it still remains unclear when it is optimal to stop).",
"This analysis is further supported by experiments on temporal relation extraction, semantic role classification, and shallow parsing, three tasks analogous to the three structures analyzed earlier, respectively.",
"The ratio of the attenuation curve as in Fig. 4 is also shown to be an actionable metric to quantify the strength of a type of structure, which can be useful in various analysis, including judging whether ESPA is worthwhile for a particular task.",
"For example, a more detailed I k -based analysis for SRC shows that predicates with more arguments are stronger structures than those with fewer arguments; we have investigated ESPA on those with more than 6 arguments and indeed, observed much larger improvement in SRC.",
"More details on this analysis are put in the appendix.",
"We think that the findings in this paper are very important.",
"First, as far as we know, we are the first to propose the mutual information analysis that provides a unique view of structured annotation, that of the reduction in the uncertainty of a target of interest Y by another random variable/process.",
"From this perspective, signals that have non-zero mutual information with Y can be viewed as an-notations.",
"These can be partially labeled structures (studied here), partial labels (restricting the possible labels rather than determining a single one as in e.g., Hu et al. (2019), noisy labels (e.g., generated by crowdsourcing or heuristic rules) or, generally, other indirect supervision signals that are correlated with Y .",
"As we proposed, these can be studied within our mutual information framework as well.",
"This paper thus provides a way to analyze the benefit of general incidental supervision signals (Roth, 2017)) and possibly even provides guidance in selecting good incidental supervision signals.",
"Second, the findings here open up opportunities for new annotation schemes for structured learning.",
"In the past, partially annotated training data have been either a compromise when completeness is infeasible (e.g., when ranking entries in gigantic databases), or collected freely without human annotators (e.g., based on heuristic rules).",
"If we intentionally ask human annotators for partial annotations, the annotation tasks can be more flex-ible and potentially, cost even less.",
"This is because annotating complex structures typically require certain expertise, and smaller tasks are often easier (Fernandes and Brefeld, 2011).",
"It is very likely that some complex annotation tasks require people to read dozens of pages of annotation guidelines, but once decomposed into smaller subtasks, even laymen can handle them.",
"Annotation schemes driven by crowdsourced question-answering, known to provide only partial coverage are successful examples of this idea (He et al., 2015; Michael et al., 2017).",
"Therefore, this paper is hopefully interesting to a broad audience.",
"This research is supported in part by a grant from the Allen Institute for Artificial Intelligence (al-lenai.org); the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR) a research collaboration as part of the IBM AI Horizons Network; Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA); and by the Army Research Laboratory (ARL) and was accomplished under Cooperative Agreement Number W911NF-09-2-0053 (the ARL Network Science CTA).",
"The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Multi-head attention is appealing for its ability to jointly extract different types of information from multiple representation subspaces.",
"Concerning the information aggregation, a common practice is to use a concatenation followed by a linear transformation, which may not fully exploit the expressiveness of multi-head attention.",
"In this work, we propose to improve the information aggregation for multi-head attention with a more powerful routing-by-agreement algorithm.",
"Specifically, the routing algorithm iteratively updates the proportion of how much a part (i.e. the distinct information learned from a specific subspace) should be assigned to a whole (i.e. the final output representation), based on the agreement between parts and wholes.",
"Experimental results on linguistic probing tasks and machine translation tasks prove the superiority of the advanced information aggregation over the standard linear transformation.",
"Attention model becomes a standard component of the deep learning networks, contributing to impressive results in machine translation (Bah-danau et al., 2015; Luong et al., 2015), image captioning (Xu et al., 2015), speech recognition (Chorowski et al., 2015), among many other applications.",
"Its superiority lies in the ability of modeling the dependencies between representations without regard to their distance.",
"Recently, the performance of attention is further improved by multi-head mechanism (Vaswani et al., 2017), which parallelly performs attention functions on different representation subspaces of the input sequence.",
"Consequently, different attention heads are able to capture distinct linguistic properties of Zhaopeng Tu is the corresponding author of the paper.",
"the input, which are embedded in different subspaces (Raganato and Tiedemann, 2018).",
"Subsequently, a linear transformation is generally employed to aggregate the partial representations extracted by different attention heads (Vaswani et al., 2017; Ahmed et al., 2018).",
"Most existing work focus on extracting informative or distinct partial-representations from different subspaces (e.g. Lin et al., 2017; Li et al., 2018), while few studies have paid attention to the aggregation of the extracted partial-representations.",
"Arguably, information extraction and aggregation are both crucial for multi-head attention to generate an informative representation.",
"Recent studies in multimodal learning show that a straightforward linear transformation for fusing features in different sets of representations usually limits the extent of abstraction (Fukui et al., 2016; Ben-Younes et al., 2017).",
"A natural question arises: whether the straightforward linear transformation is expressive enough to fully capture the rich information distributed in the extracted partial-representations?",
"In this work, we provide the first answer to this question.",
"We propose to empirically validate the importance of information aggregation in multihead attention, by comparing the performance of the standard linear function and advanced aggregation functions on various tasks.",
"Specifically, we cast information aggregation as the assigning-parts-to-wholes problem (Hinton et al., 2011), and investigate the effectiveness of the routing-by-agreement algorithm an appealing alternative to solving this problem (Sabour et al., 2017; Hinton et al., 2018).",
"The routing algorithm iteratively updates the proportion of how much a part should be assigned to a whole, based on the agreement between parts and wholes.",
"We leverage the routing algorithm to aggregate the information distributed in the extracted partial-representations.",
"We evaluate the performance of the aggregated representations on both linguistic probing tasks as well as machine translation tasks.",
"The probing tasks (Conneau et al., 2018) consists of 10 classification problems to study what linguistic properties are captured by input representations.",
"Probing analysis show that our approach indeed produces more informative representation, which embeds more syntactic and semantic information.",
"For translation tasks, we validate our approach on top of the advanced TRANSFORMER model (Vaswani et al., 2017) on both WMT14 English German and WMT17 Chinese English data.",
"Experimental results show that our approach consistently improves translation performance across languages while keeps the computational efficiency.",
"To our best knowledge, this is the first work to demonstrate the necessity and effectiveness of advanced information aggregation for multi-head attention.",
"Our work is among the few studies ( cf.",
"(Gong et al., 2018; Zhao et al., 2018; Dou et al., 2019)) which prove that the idea of capsule networks can have promising applications on natural language processing tasks.",
"Attention mechanism aims at modeling the relevance between representation pairs, thus a representation is allowed to build a direct relation with another representation.",
"Instead of performing a single attention function, Vaswani et al. (2017) found it is beneficial to capture different context features with multiple individual attention functions, namely multi-head attention.",
"Formally, attention function maps a sequence of query Q = { q 1 , . . . , q J } and a set of key-value pairs { K , V } = { ( k 1 , v 1 ) , . . . , ( k M , v M ) } to outputs, where Q RJ d , { K , V } RM d .",
"More specifically, multi-head attention model first transforms Q , K , and V into H subspaces with different, learnable linear projections: Q h , K h , V h = QW Qh , KW Kh , VW Vh , (1) where { Q h , K h , V h } are respectively the query, key, and value representations of the h -th head.",
"{ W Qh , W Kh , W Vh } R d dH denote parameter matrices associated with the h -th head, where d represents the dimensionality of the model hidden states.",
"Furthermore, H attention functions are applied in parallel to produce the output states { O 1 , . . . , OH } , among them: O h = ATT ( Q h , K h ) V h , (2) where O h RJ dH , ATT ( ) is an attention model.",
"In this work, we use scaled dot-product attention (Luong et al., 2015), which achieves similar performance with its additive counterpart (Bah-danau et al., 2015) while is much faster and more space-efficient in practice (Vaswani et al., 2017).",
"Finally, the H output states are concatenated and linearly transformed to produce the final state: Concat: (cid:98) O = [ O 1 , . . . , OH ] , (3) Linear: O = (cid:98) OWO , (4) where O RJ d denotes the final output states, WO R d d is a trainable matrix.",
"As shown in Equations 3 and 4, the conventional multi-head attention uses a straightforward concatenation and linear mapping to aggregate the output representations of multiple attention heads.",
"We argue that this straightforward strategy may not fully exploit the expressiveness of multi-head attention, which can benefit from advanced information aggregation by exploiting the intrinsic relationship among the learned representations.",
"Our work synthesizes two strands of research work, namely multi-head attention and information aggregation .",
"Multi-head attention has shown promising empirical results in many NLP tasks, such as machine translation (Vaswani et al., 2017; Domhan, 2018), semantic role labeling (Strubell et al., 2018), and subject-verb agreement task (Tang et al., 2018).",
"The strength of multi-head attention lies in the rich expressiveness by using multiple attention functions in different representation subspaces.",
"Previous work show that multi-head attention can be further enhanced by encouraging individual attention heads to extract distinct information.",
"For example, Lin et al. (2017) introduce a penalization term to reduce the redundancy of attention weights among different attention heads.",
"Li et al. (2018) propose disagreement regularizations to encourage different attention heads to capture distinct features, and Yang et al. (2019) model the interactions among attention heads.",
"Shen et al. (2018) explicitly use multiple attention heads to model different dependencies of the same word pair, and Strubell et al. (2018) employ different attention heads to capture different linguistic features.",
"Our approach is complementary to theirs, since they focus on extracting distinct information while ours aims at effectively aggregating the extracted information.",
"Our study shows that information aggregation is as important as information extraction for multi-head attention.",
"Information aggregation in multi-head attention (e.g. Equations 3 and 4) aims at composing the partial representations of the input captured by different attention heads to a final representation.",
"Recent work shows that representation composition benefits greatly from advanced functions beyond simple concatenation or mean/max pooling.",
"For example, Fukui et al. (2016) and Ben-Younes et al. (2017) succeed on fusing multi-modal features (e.g., visual features and textual features) more effectively via employing the higher-order bilinear pooling instead of vector concatenation or element-wise operations.",
"In NLP tasks, Peters et al. (2018) aggregate layer representations with linear combination, and Dou et al. (2018) compose deep representations with layer aggregation and multi-layer attention mechanisms.",
"Recently, the routing-by-agreement algorithm, which origins from the capsule networks (Hin-ton et al., 2011), becomes an appealing alternative to representation composition.",
"The majority of existing work on capsule networks has focused on computer vision tasks, such as MNIST tasks (Sabour et al., 2017; Hinton et al., 2018), CI-FAR tasks (Xi et al., 2017), and object segmentation task (LaLonde and Bagci, 2018).",
"The applications of capsule networks in NLP tasks, however, have not been widely investigated to date.",
"Zhao et al. (2018) testify capsule networks on text classification tasks and Gong et al. (2018) propose to aggregate a sequence of vectors via dynamic routing for sequence encoding.",
"Dou et al. (2019) use routing-by-agreement strategies to aggregate layer representations dynamically.",
"Inspired by these successes, we apply the routing algorithms to multi-head attention on both linguistic probing . . . .",
"In this work, we cast information aggregation in multi-head attention as the problem of assigning-parts-to-wholes .",
"Specifically, each attention head extracts different linguistic properties of the same input (Raganato and Tiedemann, 2018), and the goal of information aggregation is to compose the partial representations extracted by different heads to a whole representation.",
"An appealing solution to this problem is the routing-by-agreement algorithm, as shown in Figure 1.",
"The routing algorithm consists of two layers: input capsules and output capsules .",
"The input capsules are constructed from the transformation of the partial representations extracted by different attention heads.",
"For each output capsule, each input capsule proposes a distinct voting vector, which represents the proportion of how much the information is transformed from this input capsule (i.e parts) to the corresponding output capsule (i.e. wholes).",
"The proportion is iteratively updated based on the agreement between the voting vectors and the output capsule.",
"Finally, all output capsules are concatenated to form the final representation.",
"Mathematically, the input capsules in = { in 1 , . . . , inH } with in R n d are constructed from the outputs of multi-head attention:",
"where f h ( ) is a distinct non-linear transformation function associated with the input capsule",
"Each output capsule outn is calculated as the normalization of its total input, which is a weighted sum over all vote vectors V n :",
"The weight C h n with (cid:80) n C h n = 1 measures the agreement between vote vector V h n and output capsule outn , which is determined by the iterative routing as described in the next section.",
"Note that (cid:80) Hh =1 C h n is not necessarily equal to 1 .",
"After the routing process, following Gong et al. (2018), we concatenate the N output capsules to form the final representation: O = [ out 1 , . . . , outN ] .",
"To make the dimensionality of the final output be consistent with that of hidden layer (i.e. d ), we set the dimensionality of each output capsule be dN .",
"In this work, we explore two representative routing mechanisms, namely simple routing (Sabour et al., 2017) and EM routing (Hinton et al., 2018), which differ at how the agreement weights C h n are calculated.",
"Algorithm 1 lists a straightforward implementation of routing mechanism.",
"B h n measures the degree that the input capsule inh should be coupled to the output capsule outn , which is initialized as all 0 (Line 2).",
"The agreement weights C h n are then iteratively refined by measuring the agreement between vote vector V h n and output Algorithm 2 Iterative EM Routing.",
"capsule outn (Lines 4-6), which is implemented as a simple scalar product outn V h n (Line 6).",
"To represent the probability that the output capsule outn is activated, Sabour et al. (2017) use a non-linear squashing function: outn = || outn || 2 1 + || outn || 2 outn || outn || , (8) The scalar product outn V h n saturates at 1 , which makes it insensitive to the difference between a quite good agreement and a very good agreement.",
"In response to this problem, Hinton et al. (2018) propose a novel Expectation-Maximization (EM) routing algorithm.",
"Comparing with simple routing, EM routing has two modifications.",
"First, it explicitly assigns an activation probability A to represent the probability of whether each output capsule is activated, rather than the length of vector calculated by a squashing function (Equation 8).",
"Second, it casts the routing process as fitting a mixture of Gaussians using EM algorithm, where the output capsules play the role of Gaussians and the means of the input capsules play the role of the datapoints.",
"Accordingly, EM routing can better estimate the agreement by allowing activated output capsules to receive a cluster of similar votes.",
"Algorithm 2 lists the EM routing, which iteratively adjusts the means, variances, and activation probabilities ( , , A ) of the output capsules, as well as the agreement weights C of the input capsules (Lines 4-5).",
"The representation of output capsule outn is calculated as out n = A n n = A n (cid:80) H h =1 C h n V h n (cid:80) Hh =1 C h n , (9) The EM algorithm alternates between an E-step and an M-step.",
"The E-step determines, for each datapoint (i.e. input capsule), the probability of agreement (i.e. C ) between it and each of the Gaussians (i.e. output capsules).",
"The M-step holds the agreement weights constant, and for each Gaussian (i.e. output capsule) consists of finding the mean of these weighted datapoints (i.e. input capsules) and the variance about that mean.",
"n = (cid:80) Hh =1 C h n V h n (cid:80) Hh =1 C h n , (10) ( n ) 2 = (cid:80) Hh =1 C h n ( V h n n ) 2 (cid:80) Hh =1 C h n .",
"(11)",
"The incremental cost of using an active capsule outn is n = (cid:88) i (cid:0) log( in ) + 1 + log(2 ) 2 (cid:1) H (cid:88) h =1 C h n , where in denotes the i -th dimension of the variance vector n .",
"The activation probability of capsule outn is calculated by A n = logistic (cid:0) ( A H (cid:88) h =1 C h n n ) (cid:1) , where A is a fixed cost for coding the mean and variance of outn when activating it, is another fixed cost per input capsule when not activating it, and is an inverse temperature parameter set with a fixed schedule.",
"We refer the readers to (Hinton et al., 2018) for more details.",
"E-Step adjusts the assignment probabilities C h for each input in h .",
"First, we compute the negative log probability density of the vote V h n from inh under the Gaussian distribution fitted by the output capsule outn it gets assigned to: P h n = (cid:88) i 1 (cid:112) 2 ( in ) 2 exp( ( V ih n in ) 2 2( in ) 2 ) .",
"Again, i denotes the i -th dimension of the vectors { V h n , n , n } .",
"Accordingly, the agreement weight is re-normalized by C h n = A n P h n (cid:80) Nn (cid:48) =1 A n (cid:48) P h n (cid:48) .",
"In this section, we evaluate the performance of our proposed models on both linguistic probing tasks and machine translation tasks.",
"5.1.1 Setup Tasks Recently, Conneau et al. (2018) designed 10 probing tasks to study what linguistic properties are captured by input representations.",
"A probing task is a classification problem that focuses on simple linguistic properties of sentences.",
"Se-Len' is to predict the length of sentences in terms of number of words.",
"WC' tests whether it is possible to recover information about the original words given its sentence embedding.",
"TrDep' checks whether an encoder infers the hierarchical structure of sentences.",
"In ToCo' task, sentences should be classified in terms of the sequence of top constituents immediately below the sentence node.",
"Bshif' tests whether two consecutive tokens within the sentence have been inverted.",
"Tense' asks for the tense of the main-clause verb.",
"SubNm' focuses on the number of the subject of the main clause.",
"ObjNm' tests for the number of the direct object of the main clause.",
"In SOMO', some sentences are modified by replacing a random noun or verb with another noun or verb and the classifier should tell whether a sentence has been modified.",
"CoIn' benchmark contains sentences made of two coordinate clauses.",
"Half of the sentences are inverted the order of the clauses and the task is to tell whether a sentence is intact or modified.",
"We conduct probing tasks to study whether the routing-based aggregation benefits multi-head attention to produce more informative representation.",
"Data and Models The models on each classification task are trained and examined using the open-source dataset provided by Conneau et al. (2018), where each task is assigned 100k sentences for training and 10k sentences for validating and testing.",
"Each of our probing model consists of 3 encoding layers followed by a MLP classifier.",
"For each encoding layer, we employ a multihead self-attention block and a feed-forward block as in TRANSFORMER-BASE , which have achieved promising results on several NLP tasks (Dehghani et al., 2018; Devlin et al., 2018).",
"The mean of the top encoding layer is served as the sentence Model Surface Syntactic Semantic SeLen WC TrDep ToCo BShif Tense SubNm ObjNm SOMO CoIn BASE 97.22 97.92 44.48 84.44 49.30 84.20 87.66 82.94 50.24 68.77 SIMPLE 97.10 98.85 43.37 86.15 49.87 88.22 87.25 85.07 48.77 69.12 EM 96.26 98.75 47.72 87.00 51.82 88.17 89.97 86.40 51.55 69.86 Table 1: Classification accuracies on 10 probing tasks of evaluating the linguistic properties (Surface, Syntec-tic, and Semantic) learned by sentence encoder.",
"representation passed to the classifier.",
"The difference between the compared models merely lies in the aggregation mechanism of multiple attention heads: B ASE uses a standard concatenation and linear transformation, S IMPLE and E M are assigned simple routing and EM routing algorithms, respectively.",
"For routing algorithms, the number of output capsules and routing iterations are empirically set to 512 and",
"3. 5.1.2 Results Table 1 lists the classification accuracies of the three models on the 10 probing tasks.",
"We highlight the best accuracies in bold.",
"Several observations can be made here.",
"First, routing-based models produce more informative representation.",
"The representation produced by encoders with routing-based aggregation outperforms that by the baseline in most tasks, proving that routing mechanisms indeed aggregate attention heads more effectively.",
"The only exception is the sentence length classification task (Se-Len'), which is consistent with the conclusion in (Conneau et al., 2018): as a model captures deeper linguistic properties, it will tend to forget about this superficial feature.",
"Second, EM routing outperforms simple routing by embedding more syntactic and semantic information.",
"As shown in the last row, EM routing for multi-head aggregation consistently achieves best performances on most syntactic and semantic tasks.",
"Especially on task TrDep', Tense' and ObjNm', EM routing-based model surpasses the baseline more than 3 points, demonstrating that EM routing benefits multi-head attention to capture more syntax structure and sentence meaning.",
"Simple routing, however, underperforms the baseline model in some cases such as TrDep' and SubNm'.",
"We attribute the superiority of EM routing to generating more accurate agreement weights with the Gaussian estimation.",
"Data We conduct experiments on the widely-used WMT2014 English German (En De) and WMT2017 Chinese English (Zh En) machine translation tasks.",
"For the En De task, the dataset consists of 4.6M sentence pairs.",
"We use newstest2013 as the development set and new-stest2014 as the test set.",
"For the Zh En task, we use all of the available parallel data, consisting of about 20.6M sentence pairs.",
"We use news-dev2017 as the development set and newstest2017 as the test set.",
"We employ byte-pair encoding (BPE) (Sennrich et al., 2016) with 32K merge operations for both language pairs.",
"We use the case-sensitive 4-gram NIST BLEU score (Pap-ineni et al., 2002) as evaluation metric, and bootstrap resampling (Koehn, 2004) for statistical significance test.",
"Models We implement the proposed approaches on top of the advanced TRANSFORMER model (Vaswani et al., 2017).",
"We follow Vaswani et al. (2017) to set the configurations and have reproduced their reported results on the En De task.",
"The Base and Big models differ at hidden size (512 vs. 1024) and number of attention heads (8 vs. 16).",
"All the models are trained on eight NVIDIA P40 GPUs where each is allocated with a batch size of 4096 tokens.",
"TRANSFORMER consists of three attention components: encoder-side self-attention, decoder-side self-attention and encoder-decoder attention, all of which are implemented as multi-head attention.",
"For the information aggregation in multihead attention, we replace the standard linear transformation with the proposed routing mechanisms.",
"We experimentally set the number of iterations to 3 and the number of output capsules as model hidden size, which outperform other configurations during our investigation.",
"Table 2 lists the results on the En De translation task with TRANSFORMER-BASE .",
"As seen, the proposed routing mechanism outperforms the standard aggregation in all cases, demonstrating the necessity of advanced aggregation functions for multi-head attention.",
"Routing Mechanisms (Rows 3-4) We first apply simple routing and EM routing to encoder self-attention.",
"Both strategies perform better than the standard multi-head aggregation (Row 1), verifying the effectiveness of the non-linear aggregation mechanisms.",
"Specifically, the two strategies require comparable parameters and computational speed, but EM routing achieves better performance on translation qualities.",
"Considering the training speed and performance, EM routing is used as the default multi-head aggregation method in subsequent experiments.",
"(Rows 4-6), we found that the encoder and decoder self-attention benefit more from the routing-based information aggregation than the encoder-decoder attention.",
"This is consistent with the finding in (Tang et al., 2018), which shows that self-attention is a strong semantic feature extractor.",
"Encouragingly, applying EM routing in the encoder (Row 4) significantly improve the translation quality with almost no decrease in decoding speed, which matches the requirement of online MT systems.",
"We find that this is due to the auto-regressive generation schema, modifications on the decoder influence the decoding speed more than the encoder.",
"Compared with individual attention components, applying routing to multiple components (Rows 7-8) marginally improves translation performance, at the cost of a significant decrease of the training and decoding speeds.",
"Possible reasons include that the added complexity makes the model harder to train, and the benefits enjoyed by different attention components are overlapping to some extent.",
"To balance translation performance and efficiency, we only apply EM routing to aggregate multi-head self-attention at the encoder in subsequent experiments.",
"Encoder Layers As shown in Row 4 of Table 2, applying EM routing to all encoder layers significantly decreases the training speed by 37.5%, which is not acceptable since TRANSFORMER is best known for both good performance and quick training.",
"We expect applying to fewer layers can alleviate the training burden.",
"Recent studies show that different layers of NMT encoder can capture different levels of syntax and semantic features (Shi et al., 2016; Peters et al., 2018).",
"There-System Architecture En De Zh En # Para.",
"fore, an investigation to study whether EM routing works for multi-head attention at different layers is highly desirable.",
"As shown in Table 3, we respectively employ EM routing for multi-head attention at the high-level three layers (Row 3) and low-level three layers (Row 4).",
"The translation quality marginally drop while parameters are fewer and training speeds are quicker.",
"This phenomena verifies that it is unnecessary to apply the proposed model to all layers.",
"We further reduce the applied layers to low-level two (Row 5), the above phenomena still holds.",
"However, a big drop on translation quality occurs when the number of layer is reduced to 1 (Rows 6-7).",
"Accordingly, to balance translation performance and efficiency, we only apply EM routing for multi-head aggregation at the low-level two layers of the encoder , which we term Effective Aggregation in the following sections.",
"In this section, we validate the proposed Effective Aggregation for multi-head attenion on both WMT17 Zh En and WMT14 En De translation tasks.",
"The results are listed in Table",
"4. Our implementations of both TRANSFORMER-BASE and TRANSFORMER-BIG outperform the reported NMT systems on the same data and match the strong results of TRANSFORMER reported in previous works, which we believe make the evaluation convincing.",
"Incorporating the effective aggregation consistently and significantly improves translation performance for both base and big TRANSFORMER models across language pairs, demonstrating the efficiency and universality of our proposed multi-head aggregation mechanism.",
"Moreover, it is encouraging to see that TRANSFORMER-BASE with effective aggregation strategy even achieves comparable performances to that of TRANSFORMER-BIG , with about two thirds fewer parameters, which further demonstrates that our performance gains are not simply brought by additional parameters.",
"In this work, we provide first empirical validation on the importance of information aggregation for multi-head attention.",
"Instead of the conventional linear transformation, we propose to aggregate the partial representations learned by multiple attention heads via routing-by-agreement .",
"The routing algorithm iteratively updates the proportion of how much a partial representation should be assigned to the final output representation, based on the agreement between parts and wholes.",
"Experimental results across 10 linguistic probing tasks reveal that our EM routing-based model indeed produces more informative representation, which benefits multi-head attention to capture more syntactic and semantic information.",
"In addition, our approach on various machine translation tasks consistently and significantly outperforms the strong TRANSFORMER baseline.",
"Extensive analysis further suggests that only applying EM routing to low-level two layers of the encoder can best balance the translation performance and computational efficiency.",
"Future work includes combining our information aggregation techniques together with other advanced information extraction models for multihead attention (Li et al., 2018).",
"We expect that the two kinds of approaches can complement each other to further improve the expressiveness of multi-head attention.",
"Jian Li and Michael R. Lyu were supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14210717 of the General Research Fund), and Microsoft Research Asia (2018 Microsoft Research Asia Collaborative Research Award).",
"We thank the anonymous reviewers for their insightful comments and suggestions."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"result",
"result",
"abstain",
"method",
"result",
"other",
"other"
] |
[
"While automated essay scoring (AES) can reliably grade essays at scale, automated writing evaluation (AWE) additionally provides formative feedback to guide essay revision.",
"However, a neural AES typically does not provide useful feature representations for supporting AWE.",
"This paper presents a method for linking AWE and neural AES, by extracting Topical Components (TCs) representing evidence from a source text using the intermediate output of attention layers.",
"We evaluate performance using a feature-based AES requiring TCs.",
"Results show that performance is comparable whether using automatically or manually constructed TCs for",
"1) representing essays as rubric-based features,",
"2) grading essays.",
"Automated essay scoring (AES) systems reliably grade essays at scale, while automated writing evaluation (AWE ) systems additionally provide formative feedback to guide revision.",
"Although neural networks currently generate state-of-the-art AES results (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong et al., 2017; Farag et al., 2018; Jin et al., 2018; Li et al., 2018; Tay et al., 2018; Zhang and Litman, 2018), non-neural AES create feature representations more easily useable by AWE (Roscoe et al., 2014; Foltz and Rosenstein, 2015; Crossley and McNamara, 2016; Woods et al., 2017; Madnani et al., 2018; Zhang et al., 2019).",
"We believe that neural AES can also provide useful information for creating feature representations, e.g., by exploiting information in the intermediate layers.",
"Our work focuses on a particular source-based essay writing task called the response-to-text assessment (RTA) (Correnti et al., 2013).",
"Recently, an RTA AWE system (Zhang et al., 2019) was built by extracting rubric-based features related to the use of Topical Components (TCs) in an essay.",
"However, manual expert effort was first required to create the TCs.",
"For each source, the TCs consist of a comprehensive list of topics related to evidence which include:",
"1) important words indicating the set of evidence topics in the source, and",
"2) phrases representing specific examples for each topic that students need to find and use in their essays.",
"To eliminate this expert effort, we propose a method for using the interpretable output of the attention layers of a neural AES for source-based essay writing, with the goal of extracting TCs.",
"We evaluate this method by using the extracted TCs to support feature-based AES for two RTA source texts.",
"Our results show that",
"1) the feature-based AES with TCs manually created by humans is matched by our neural method for generating TCs , and",
"2) the values of the rubric-based essay features based on automatic TCs are highly correlated with human Evidence scores.",
"Three recent AWE systems have used non-neural AES to provide rubric-specific feedback.",
"Woods et al. (2017) developed an influence estimation process that used a logistic regresion AES to identify sentences needing feedback.",
"Shibani et al. (2019) presented a web-based tool that provides formative feedback on rhetorical moves in writing.",
"Zhang et al. (2019) used features created for a random forest AES to select feedback messages, although human effort was first needed to create TCs from a source text.",
"We automatically extract TCs using neural AES, thereby eliminating this expert effort.",
"Others have also proposed methods for preprocessing source information external to an essay.",
"Content importance models for AES predict the parts of a source text that students should include when writing a summary (Klebanov et al., Source Excerpt: Today, Yala Sub-District Hospital has medicine , free of charge , for all of the most common diseases .",
"Water is connected to the hospital , which also has a generator for electricity .",
"Bed nets are used in every sleeping site in Sauri...",
"Essay Prompt: The author provided one specific example of how the quality of life can be improved by the Millennium Villages Project in Sauri, Kenya.",
"Based on the article, did the author provide a convincing argument that winning the fight against poverty is achievable in our lifetime?",
"Explain why or why not with 3-4 examples from the text to support your answer.",
"Essay: In my opinion I think that they will achieve it in lifetime .",
"During the years threw 2004 and 2008 they made progress .",
"People didnt have the money to buy the stuff in 2004.",
"The hospital was packed with patients and they didnt have alot of treatment in 2004.",
"In 2008 it changed the hospital had medicine , free of charge , and for all the common dieases .",
"Water was connected to the hospital and has a generator for electricity .",
"Everybody has net in their site.",
"The hunger crisis has been addressed with fertilizer and seeds , as well as the tools needed to maintain the food .",
"The school has no fees and they serve lunch .",
"To me thats sounds like it is going achieve it in the lifetime.",
"2014).",
"Methods for extracting important keywords or keyphrases also exist, both supervised (unlike our approach) (Meng et al., 2017; Mahata et al., 2018; Florescu and Jin, 2018) and unsupervised (Florescu and Caragea, 2017).",
"Rahimi and Litman (2016) developed a TC extraction LDA model (Blei et al., 2003).",
"While the LDA model considers all words equally, our model takes essay scores into account by using attention to represent word importance.",
"Both the unsupervised keyword and LDA models will serve as baselines in our experiments.",
"In the computer vision area, attention cropped images have been used for further image classifi-cation or object detection (Cao et al., 2015; Yuxin et al., 2018; Ebrahimpour et al., 2019).",
"In the NLP area, Lei et al. (2016) proposed to use a generator to find candidate rationale and these are passed through the encoder for prediction.",
"Our work is similar in spirit to this type of work.",
"The essays in our corpus were written by students in grades 4 to 8 in response to two RTA source texts (Correnti et al., 2013): RT AMV P (2970 essays) and RT A Space (2076 essays).",
"Table 1 shows an excerpt from RT AMV P , the associated essay writing prompt, and a student essay.",
"The bolding in the source indicates evidence examples that experts manually labeled as important for students to discuss (i.e., TC phrases).",
"Evidence usage in each essay was manually scored on a scale of 1 to 4 (low to high).",
"The distribution of Evidence scores is shown in Table 2.",
"The essay in Table 1 received a score of 3, with the bolding indicating phrases semantically related to the TCs from the source text.",
"To date, two approaches to AES have been proposed for the RTA: AES rubric and AES neural .",
"To support the needs of AWE, AES rubric (Zhang and Litman, 2017) used a traditional supervised learning framework where rubric-motivated features were extracted from every essay before model training Number of Pieces of Evidence (NPE) 1 , Concentration (CON), Specificity (SPC) 2 , Word Count (WOC).",
"The two aspects of TCs introduced in Section 1 ( topic words , specific example phrases ) were used during feature extraction.",
"Motivated by improving stand-alone AES performance (i.e., when an interpretable model was not needed for subsequent AWE), Zhang and Litman (2018) developed AES neural , a hierarchical neural model with the co-attention mechanism in the sentence level to capture the relationship between the essay and the source.",
"Neither feature engineering nor TC creation were needed before training.",
"In this section we propose a method for extracting TCs based on the AES neural attention level outputs.",
"Since the self-attention and co-attention mechanisms were designed to capture sentence and phrase importance, we hypothesize that the attention scores can help determine if a sentence or 1 An integer feature based on the list of topic words for each topic.",
"2 A vector of integer values indicating the number of specific example phrases (semantically) mentioned in the essay per topic.",
"To provide intuition, Table 3 shows examples sentences from the student essay in Table 1.",
"Bolded are phrases with the highest self-attention score within the sentence.",
"Italics are specific example phrases that refer to the manually constructed TCs for the source.",
"Attn sent is the text to essay attention score that measures which essay sentences have the closest meaning to a source sentence.",
"Attn phrase is the self-attention score of the bolded phrase that measures phrase importance.",
"A sentence with a high attention score tends to include at least one specific example phrase, and vice versa.",
"The phrase with the highest attention score tends to include at least one specific example phrase if the sentence has a high attention score.",
"Based on these observations, we first extract the output of two layers from the neural network:",
"1) the attn sent of each sentence, and",
"2) the output of the convolutional layer as the representation of the phrase with the highest attn phrase in each sentence (denoted by cnn phrase ).",
"We also extract the plain text of the phrase with the highest attn phrase in each sentence (denoted by text phrase ).",
"Then, our T C attn method uses the extracted information in 3 main steps:",
"1) filtering out text phrase from sentences with low attn sent , 2) clustering all remaining text phrase based on cnn phrase , and",
"3) generating TCs from clusters.",
"The first filtering step keeps all text phrase where the original sentences have attn sent higher than a threshold.",
"The intuition is that lower attn sent indicates less source-related information.",
"The second step clusters these text phrase based on their corresponding representations cnn phrase .",
"We use k-medoids to cluster text phrase into M clusters, where M is the number of topics in the source text.",
"Then, for text phrase in each topic cluster, we use k-medoids to cluster them into N clusters, where N is the number of the specific example phrases we want to extract from each topic.",
"The outputs of this step are M N clusters.",
"tering to extract TCs.",
"As noted earlier, TCs include two parts: topic words, and specific example phrases.",
"Since our method is data-driven and students introduce their vocabulary into the corpus, essay text is noisy.",
"To make the TC output cleaner, we filter out words that are not in the source text.",
"To obtain topic words, we combine all text phrase from each topic cluster to calculate the word frequency per topic.",
"To make topics unique, we assign each word to the topic cluster in which it has the highest normalized word frequency.",
"We then include the top K topic words based on their frequency in each topic cluster.",
"To obtain example phrases, we combine all text phrase from each example cluster to calculate the word frequency per example, then include the top K example words based on their frequency in each example cluster.",
"Figure 1 shows an overview of four TC extraction methods to be evaluated.",
"T C manual (upper bound) uses a human expert to extract TCs from a source text.",
"T C attn is our proposed method and automatically extracts TCs using both a source text and student essays.",
"T C lda (Rahimi and Litman, 2016) (baseline) builds on LDA to extract TCs from student essays only, while T C pr (baseline) builds on PositionRank (Florescu and Caragea, 2017) to instead extract TCs from only the source text.",
"Since PositionRank is not designed for TC ex-Prompt Component Parameter TC lda TC pr TC attn RTAMVP Topic Words Number of Topics 9 19 16 Number of Words 30 20 25 Example Phrases Number of Topics 20 1 18 Number of Phrases 15 20 15 RTA Space Topic Words Number of Topics 15 20 10 Number of Words 10 10 20 Example Phrases Number of Topics 10 1 9 Number of Phrases 20 50 20 Table 5: Parameters for different models.",
"traction, we needed to further process its output to create T C pr .",
"To extract topic words, we extract all keywords from the output.",
"Next, we map each word to a higher dimension with word embedding.",
"Lastly, we cluster all keywords using k-medoids into P R topic topics.",
"To extract example phrases, we put them into only one topic and remove all redundant example phrases if they are subsets of other example phrases.",
"We configure experiments to test two hypotheses: H1) the AES rubric model for scoring Evidence (Zhang and Litman, 2017) will perform comparably when extracting features using either T C attn or T C manual , and will perform worse when using T C lda or T C pr ; H2) the correlation between the human Evidence score and the feature values (NPE and sum of SPC features) 3 will be comparable when extracted using T C attn and T C manual , and will be stronger than when using T C lda and T C pr .",
"The experiment for H1 tests the impact of using our proposed TC extraction method on the downstream AES rubric task, while the H2 experiment examines the impact on the essay representation itself.",
"Following Zhang and Litman (2017), we stratify essay corpora: 40% for training word embeddings and extracting TCs, 20% for selecting the best embedding and parameters, and 40% for testing.",
"We use the hyper-parameters from Zhang and Litman (2018) for neural training as shown in Table 4.",
"Table 5 shows all other parameters selected using the development set.",
"Results for H1.",
"H1 is supported by the results in Table 6, which compares the Quadratic Weighted Kappa (QWK) between human and AES rubric Evidence scores (values 1-4) when AES rubric uses T C manual versus each of the automatic methods.",
"T C attn always yields better performance, and even significantly better than T C manual .",
"base-3 These features are extracted based on TCs.",
"Prompt TC manual (1) TC lda (2) TC pr (3) TC attn (4) RTAMVP 0.643 (2,3) 0.614 (3) 0.525 0.648 (1,2,3) RTA Space 0.609 (3) 0.615 (3) 0.559 0.622 (1,3) Table 6: The performance (QWK) of AES rubric using different TC extraction methods for feature creation.",
"Qualitative Analysis.",
"The manually-created topic words for RT AMV P represent 4 topics, which are hospital, malaria, farming and school 4 .",
"Although Table 5 shows that the automated list has more topics for topic words and might have broken one topic into separate topics, a good automated list should have more topics related to the 4 topics above.",
"We manually assign a topic for each of the topic words from the different automated methods.",
"T C lda has 4 related topics out of 9 (44.44%), T C pr has 6 related topics out of 19 (31.58%), and T C attn has 10 related topics out of 16 (62.50%).",
"Obviously, T C attn preserves more related topics than our baselines.",
"Moving to the second aspect of TCs (specific example phrases), Table 8 shows the first 10 specific example phrases for a manually-created category that introduces the changes made by the MVP project 5 .",
"This category is a mixture of different topics because it talks about the hospital, malaria, school, and farming at the same time.",
"T C attn has overlap with T C manual on different topics.",
"However, T C lda mainly talks about hospi-tal, because the nature of the LDA model doesn't allow mixing specific example phrases about different topics in one category.",
"Unfortunately, T C pr 4 All Topic Words generated by different models can be found in the Appendix A.1.",
"5 All Specific Example Phrases generated by different models can be found in the Appendix A.2.",
"does not include any overlapped specific phrase in the first 10 items; they all refer to some general example phrases from the beginning of the source article.",
"Although there are some related specific example phrases in the full list, they are mainly about school.",
"This is because the PositionRank algorithm tends to assign higher scores to words that appear early in the text.",
"This paper proposes T C attn , a method for using the attention scores in a neural AES model to automatically extract the Topical Components of a source text.",
"Evaluations show the potential of T C attn for eliminating expert effort without degrading AES rubric performance or the feature representations themselves.",
"T C attn outperforms baselines and generates comparable or even better results than a manual approach.",
"Although T C attn outperforms all baselines and requires no human effort on TC extraction, annotation of essay evidence scores is still needed.",
"This leads to an interesting future investigation direction, which is training the AES neural using the gold standard that can be extracted automatically.",
"One of our next steps is to investigate the impact of TC extraction methods on a corresponding AWE system (Zhang et al., 2019), which uses the feature values produced by AES rubric to generate formative feedback to guide essay revision.",
"Currently, the T C lda are trained on student essays, while the T C pr only works on the source article.",
"However, T C attn uses both student essays and the source article for TC generation.",
"It might be hard to say that the superior performance of T C attn is due to the neural architecture and attention scores rather than the richer training resources.",
"Therefore, a comparison between T C attn and a model that uses both student essays and the source article is needed.",
"We would like to show our appreciation to every member of the RTA group for sharing their pearls of wisdom with us.",
"We are also immensely grateful to all members of the PETAL group and reviewers for their comments on an earlier version of the paper.",
"The research reported here was supported, in whole or in part, by the Institute of Education Sciences, U.S. Department of Education, through Grant R305A160245 to the University of Pittsburgh.",
"The opinions expressed are those of the authors and do not represent the views of the Institute or the U.S. Department of Education."
] |
[
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"method",
"abstain",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"We study learning from user feedback for extractive question answering by simulating feedback using supervised data.",
"We cast the problem as contextual bandit learning, and analyze the characteristics of several learning scenarios with focus on reducing data annotation.",
"We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers, and that one can use existing datasets to deploy systems in new domains without any annotation, but instead improving the system on-the-fly via user feedback.",
"Explicit feedback from users of NLP systems can be used to continually improve system performance.",
"For example, a user posing a question to a question-answering (QA) system can mark if a predicted phrase is a valid answer given the context from which it was extracted.",
"However, the dominant paradigm in NLP separates model training from deployment, leaving models static following learning and throughout interaction with users.",
"This approach misses opportunities for learning during system usage, which beside several exceptions we discuss in Section 8 is understudied in NLP.",
"In this paper, we study the potential of learning from explicit user feedback for extractive QA through simulation studies.",
"Extractive QA is a popular testbed for language reasoning, with rich prior work on datasets (e.g., Rajpurkar et al., 2016), task design (Yang et al., 2018; Choi et al., 2018), and model architecture development (Seo et al., 2017; Yu et al., 2018).",
"Learning from interaction with users remains relatively understudied, even though QA is well positioned to elicit user feedback.",
"An extracted answer can be clearly visualized within its supporting context, and a language-proficient user can then easily validate Figure 1: Illustration of an interaction setup for learning from user feedback for QA, and its potential.",
"if the answer is supported or not.",
"1 This allows for simple binary feedback, and creates a contextual bandit learning scenario (Auer et al., 2002; Langford and Zhang, 2007).",
"Figure 1 illustrates this learning signal and its potential.",
"We simulate user feedback using several widely used QA datasets, and use it as a bandit signal for learning.",
"We study the empirical characteristics of the learning process, including its performance, sensitivity to initial system performance, and tradeoffs between online and offline learning.",
"We also simulate zero-annotation domain adaptation, where we deploy a QA system trained from supervised 1 Answers could also come from erroneous or deceitful contexts.",
"This learning scenario can mitigate fundamental problems in extractive QA.",
"It reduces data collection costs, by delegating much of the learning to interaction with users.",
"It can avoid data collection artifacts because the data comes from the actual system deployment, unlike data from an annotation effort that often involves design decisions immaterial to the system's use case.",
"For example, sharing questionand answer-annotator roles (Rajpurkar et al., 2016), which is detrimental to emulate information seeking behavior (Choi et al., 2018).",
"Finally, it gives systems the potential to evolve over time as the world changes (Lazaridou et al., 2021; Zhang and Choi, 2021).",
"Our simulation experiments show that user feedback is an effective signal to continually improve QA systems across multiple benchmarks.",
"For example, an initial system trained with a small amount of SQUAD (Rajpurkar et al., 2016) annotations (64 examples) improves from 18 to 81.6 F1 score, and adapting a SearchQA (Dunn et al., 2017) system to SQUAD through user feedback improves it from 45 to 84 F1 score.",
"Our study shows the impact of initial system performance, trade-offs between online and offline learning, and the impact of source domain on adaptation.",
"These results create the base for future work that goes beyond simulation to use feedback from human users to improve extractive QA systems.",
"Our code is publicly available at https://github.com/ lil-lab/bandit-qa .",
"We study a scenario where a QA model learns from explicit user feedback.",
"We formulate learning as a contextual bandit problem.",
"The input to the learner is a question-context pair, where the context paragraph contains the answer to the question.",
"The output is a single span in the context paragraph that is the answer to the question.",
"Given a question-context pair, the model predicts an answer span.",
"The user then provides feedback about the model's predicted answer, which is used to update the model parameters.",
"We intentionally experiment with simple binary feedback and basic learning algorithms, to provide a baseline for what more advanced methods could achieve with as few assumptions as possible.",
"Background: Contextual Bandit Learning In a stochastic (i.i.d.) contextual bandit learning problem, at each time step t , the learner independently observes a context 2 x ( t ) D sampled from the data distribution D , chooses an action y ( t ) according to a policy , and observes a reward r ( t ) R .",
"The learner only observes the reward r ( t ) corresponding to the chosen action y ( t ) .",
"The learner aims to minimize the cumulative regret.",
"Intuitively, regret is the deficit suffered by the learner relative to the optimal policy up to a specific time step.",
"Formally, the cumulative regret at time T is computed with respect to the optimal policy arg max E ( x,y,r ) ( D, ) [ r ] : RT := T (cid:88) t =1 r ( t ) T (cid:88) t =1 r ( t ) , (1) where is the set of all policies, r ( t ) is the reward observed at time t and r ( t ) is the reward that the optimal policy would observe.",
"Minimising the cumulative regret is equivalent to maximising the total reward.",
"3 A key challenge in contextual bandit learning is to balance exploration and exploitation to minimize overall regret.",
"Scenario Formulation Let a question q be a sequence of m tokens (cid:104) q 1 , . . . , q m (cid:105) and a context paragraph c be a sequence of n tokens (cid:104) c 1 , . . . , c n (cid:105) .",
"An extractive QA model 4 predicts a span y = (cid:104) c i , . . . , c j (cid:105) where i, j [1 , n ] and i j in the context c as an answer.",
"When relevant, we denote as a QA model parameterized by .",
"We formalize learning as a contextual bandit process: at each time step t , the model is given a question-context pair ( q ( t ) , c ( t ) ) , predicts an answer span y , and receives a reward r ( t ) IR .",
"The learner's goal is to maximize the total reward (cid:80) Tt =1 r ( t ) .",
"This formulation reflects a setup where, given a question-context pair, the QA system interacts with a user, who validates the model-predicted answer in context, and provides feedback which is mapped to numerical reward.",
"2 The term context here refers to the input to the learner policy, and is different from the term context as we use it later in extractive QA, where the term context refers to the evidence document given as input to the model.",
"3 Equivalently, the problem is often formulated as loss minimization (Bietti et al., 2018).",
"Learning Algorithm We learn using policy gradient.",
"Our learner is similar to REINFORCE (Sut-ton and Barto, 1998; Williams, 2004), but we use arg max to predict answers instead of Monte Carlo sampling from the model's output distribution.",
"5 We study online and offline learning, also referred to as onand off-policy.",
"In online learning (Algorithm 1), the model identity is maintained between prediction and update; the parameter values that are updated are the same that were used to generate the output receiving reward.",
"This entails that a reward is only used once, to update the model after observing it.",
"In offline learning (Algorithm 2), this relation between update and prediction does not hold.",
"The learner observes reward, often across many examples, and may use it to update the model many times, even after the parameters drifted arbitrarily far from these that generated the prediction.",
"In practice, we observe reward for the entire length of the simulation ( T steps) and then update for E epochs.",
"The reward is re-weighted to provide an unbiased estimation using inverse propensity score (IPS; Horvitz and Thompson, 1952).",
"We clip the debiasing coefficient to avoid amplifying examples with large coefficients (line 10, Algorithm 2).",
"In general, offline learning is easier to implement because updating the model is not integrated with its deployment.",
"Offline learning also uses a training loop that is similar to optimization practices in supervised learning.",
"This allows to iterate over the data multiple times, albeit with the same feedback signal on each example.",
"However, online learning often has lower regret as the model is updated after each interaction.",
"It may also lead to higher overall performance, because as the model improves early on, it may observe more positive feedback overall, which is generally more informative.",
"We empiri-5 Early experiments showed that sampling is not as bene-ficial as arg max , potentially because of the relatively large output space of extractive QA.",
"Yao et al. (2020) made a similar observation for semantic parsing, and Lawrence et al. (2017) used arg max predictions for bandit learning in statistical machine translation.",
"Table 4 in Appendix A provides our experimental results with sampling.",
"Evaluating Performance We evaluate model performance using token-level F1 on a held-out test set, as commonly done in the QA literature (Ra-jpurkar et al., 2016).",
"We also estimate the learner regret (Equation 1).",
"Computing regret requires access to the an oracle .",
"We use human annotation as an estimate (Section 3).",
"6 Comparison to Supervised Learning In supervised learning, the data distribution is not dependent on the model, but on a fixed training set { ( q ( t ) , c ( t ) , y ( t ) ) } Tt =1 .",
"In contrast, bandit learners are provided with reward data that depends on the model itself: { ( q ( t ) , c ( t ) , y ( t ) , r ( t ) ) } Tt =1 where r is the reward for the model prediction y ( t ) = arg max y ( y | q ( t ) , c ( t ) ) at time step t .",
"Such feedback can be freely gathered from users interacting with the model, while building supervised datasets requires costly annotation.",
"This learning signal can also reflect changing task properties (e.g., world changes) to allow systems to adapt, and its origin in the deployed system use makes it more robust to biases introduced during annotation.",
"We initialize our model with supervised data, and then simulate bandit feedback using supervised data annotations.",
"Initialization is critical so the model does not return random answers, which are likely to be all bad because of the large output space.",
"We use relatively little supervised data from the same domain for in-domain experiments (Sec-tion 5 and 6) to focus on the data annotation re-6 Our oracle is an estimate because of annotation noise and ambiguity in exact span selection.",
"duction potential of user feedback.",
"For domain adaptation, we assume access to a large amount of training data in the source domain, and no annotated data in the target domain (Section 7).",
"Reward We use supervised data annotations to simulate the reward.",
"If the predicted answer span is an exact match index-wise to the annotated span, the learner observes a positive reward of 1.0, and a negative reward of -0.1 otherwise.",
"7 This reward signal is stricter than QA evaluation metrics (token-level F1 or exact match after normalization).",
"8 Noise Simulation We study robustness by simulating noisy feedback via reward perturbation: randomly flipping the binary reward with a fixed probability of 8% or 20% as the noise ratio.",
"9 4 Experimental Setup Data We use six English QA datasets that provide substantial amount of annotated training data taken from the MRQA training portion (Fisch et al., 2019): SQUAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2017), SearchQA (Dunn et al., 2017), TriviaQA (Joshi et al., 2017), HotpotQA (Yang et al., 2018), and NaturalQuestions (NQ; Kwiatkowski et al., 2019).",
"The MRQA benchmark simplifies all datasets so that each example has a single span answer with a limited evidence document length (truncated at 800 tokens).",
"Table 7 in Appendix B provides dataset details.",
"We compute performance measures and learning curves on development sets following prior work (Rajpurkar et al., 2016; Ram et al., 2021).",
"Model We conduct experiments with a pretrained SpanBERT model (Joshi et al., 2020).",
"We fine-tune the pre-trained SpanBERT-base model during initial learning and our simulations.",
"Implementation Details We use Hugging Face Transformers (Wolf et al., 2020).",
"When training initial models with little in-domain supervised data (Section 5; Section 6), we use a learning rate of 3e-5 with a linear schedule, batch size 10, and 10 epochs.",
"We obtain the sets of 64, 256, or 1,024 7 We experimented with other reward values, but did not observe a significant difference in performance (Appendix A).",
"8 Normalization includes lowercasing, modifying spacing, removing articles and punctuation, etc.",
"NaturalQuestions (NQ; Kwiatkowski et al., 2019) is an exception, with an exact index match measure that has similar strictness.",
"9 Even without our noise simulation, the simulated feedback inherits the noise from the annotation, either from crowdsourcing or distant supervision.",
"examples from prior work (Ram et al., 2021).",
"10 For models initially trained on complete datasets (Section 7), we use a learning rate 2e-5 with a linear schedule, batch size 40, and 4 epochs.",
"In simulation experiments, we use batch size 40.",
"We turn off dropout to simulate interaction with users in deployment.",
"For single-pass online learning experiments (Section 5; Section 7), we use a constant learning rate of 1e-5.",
"For offline learning experiments (Section 6), we train the model for 3 epochs on the collected feedback with a linear schedule learning rate of 3e-5.",
"Online experiments with SQUAD, HotpotQA, NQ, and NewsQA take 24h each on one NVIDIA GeForce RTX 2080 Ti; 2.56h for offline.",
"For TriviaQA and SearchQA, each online simulation experiment on one NVIDIA TITAN RTX takes 49.5h; 920h for offline.",
"We simulate a scenario where only a limited amount of supervised data is available, and the model mainly learns from explicit user feedback on predicted answers.",
"We use 64, 256, or 1,024 in-domain annotated examples to train an initial model.",
"This section focuses on online learning, where the learner updates the model parameters after each feedback is observed (Algorithm 1).",
"Figure 2 presents the performance of in-domain simulation with online learning.",
"The performance pattern varies across different datasets.",
"Bandit learning consistently improves performance on SQUAD, HotpotQA, and NQ across different amounts of supervised data used to train the initial model.",
"The performance gain is larger with weaker initial models (i.e., trained on 64 supervised exam-ples): 63.6 on SQUAD, 42.7 on HotpotQA, and 40.0 on NQ.",
"Bandit learning is not always effective on NewsQA, TriviaQA, and SearchQA, especially with weaker initial models.",
"This may be attributed to the quality of training set annotations, which determines the accuracy of reward in our setup.",
"SearchQA and TriviaQA use distant supervision to match questions and relevant contexts from the web, likely decreasing reward quality in our setup.",
"While NewsQA is crowdsourced, Trischler et al. (2017) report relatively low human performance (69.4 F1), possibly indicating data challenges that also decrease our reward quality.",
"Learning progres-10 We use the seed 46 sets publicly available at https: //github.com/oriram/splinter .",
"sion across datasets (Figure 3) shows that initial models trained with 1,024 examples can achieve peak performance with one third or even one quarter of feedback provided.",
"Feedback Noise Simulation Figure 3 shows learning curves with simulated noise via different amounts of feedback perturbation (0%, 8%, or 20%).",
"When perturbation-free simulation is effective, models remain robust to noise: 8% noise results in small fluctuations of the learning curve, but the final performance degrades minimally.",
"Starting with weaker initial models and learning with a higher noise ratio may cause learning to fail (e.g., simulation on SQUAD with 64 initial examples and 20% noise).",
"When online perturbation-free simulation fails, online learning with noisy feedback fails too.",
"Sensitivity Analysis Training Transformer-based models has been shown to have stability issues, especially when training with limited amount of data (Zhang et al., 2021).",
"Our nonstandard training procedure (i.e., one epoch with a fixed learning rate) may further increase instability.",
"We study the stability of the learning process using initial models trained on only 64 in-domain supervised examples on HotpotQA and TriviaQA: the former shows significant performance gain while the latter shows the opposite.",
"We experiment with five initial models trained on different sets of 64 supervised examples, each used to initiate a separate simulation experiment.",
"Four out of five experiments on HotpotQA show performance gains similar to what we observed so far, except one experiment that starts with very low initialization performance.",
"In contrast, nearly all experiments on TriviaQA collapse (mean F1 of 7.3).",
"We also conduct sensitivity analysis with stronger initial models trained with 1,024 examples, and observe that the final performance is stable across runs on both HotpotQA and TriviaQA (standard deviations are 0.5 and 2.6).",
"Table 5 in Appendix B provides detailed performance numbers.",
"We simulate offline bandit learning (Algorithm 2), where feedback is collected all at once with the initial model.",
"The learning scenario follows the previous section: only a limited amount of supervised data is available (64, 256, or 1,024 in-domain examples) to train initial models.",
"Table 1 shows the performance of offline simulation experiments compared to online simulations.",
"We observe mixed results.",
"On SQUAD, HotpotQA, NQ, and NewsQA, offline learning outperforms online learning when using stronger initial models (i.e., models trained on 256 and 1,024 exam-ples).",
"This illustrates the benefit of the more standard training loop, especially with our Transformer-based model that is better optimized with a linear learning rate schedule and multiple epochs, both incompatible with the online setup.",
"On TriviaQA and SearchQA, offline simulation is ineffective regardless of the performance of initial models.",
"This result echoes the learning challenges in the online counterparts on these two datasets.",
"Online vs. Offline Regret Table 2 compares online and offline regret.",
"Regret numbers are averaged over the number of feedback observations.",
"11 Online learning generally displays lower regret for similar initial models on SQUAD, HotpotQA, and NQ.",
"This is expected because later interactions in the simulation can benefit from early feedback in online learning.",
"In contrast, in our offline scenario, we only update after seeing all examples, so regret numbers depend on the initial model only.",
"Regret results on NewsQA, TriviaQA, and SearchQA are counterintuitive, generally showing that online learning has similar or higher regret.",
"The cases showing significantly higher online regret (64+sim on NewsQA and SearchQA) can be explained by the learning failing, which impacts online regret, but not our offline regret.",
"The others are more complex, and we hypothesize that they may be because of combination of",
"(a) inherent noise in the data; and",
"(b) in cases where online learning is effective, the gap between the strictly-defined reward that is used to compute regret and the relaxed F1 evaluation metric.",
"Further analysis is required for a more conclusive conclusion.",
"Learning from user feedback creates a compelling avenue to deploy systems that target new domains not addressed by existing datasets.",
"The scenario we simulate in this section starts with training a QA model on a complete existing annotated dataset, and deploying it to interact with users and learn from their feedback in a new domain.",
"We do not assume access to any annotated training data in 11 Table 8 in Appendix B lists the percentage of positive feedback in online and offline in-domain simulation.",
"the target domain.",
"We report experiments with online learning.",
"Offline adaptation experiments are discussed in Appendix B.3.",
"Figure 4 shows online domain adaptation performance.",
"On 22/30 configurations, online adaptation introduces significant performance gains ( > 2 F1 score).",
"For example, adapting from TriviaQA and SearchQA to the other four domains improves performance by 2772.8 F1.",
"On HotpotQA, the model initially trained on TriviaQA shows an impressive adaptation, improving from 0.2 F1 to 73 F1.",
"12 Our simulations show reduced effectiveness when the target domain is either TriviaQA or SearchQA, likely because the simulated feedback is based on noisy distantly supervised data.",
"For SearchQA, the low performance of initial models from other domains may also contribute to the adaptation failure.",
"As expected, this indicates the effectiveness of the process depends on the relation 12 We replicate this result with different model initializations to confirm it is not random.",
"between the source and target domains.",
"SearchQA seems farthest from the other domains, mirroring observations from prior work (Su et al., 2019).",
"Figure 5 shows learning curves for our simulation experiments.",
"Generally, we observe the choice of source and target domains influences adaptation rates.",
"Models quickly adapt to SQUAD, HotpotQA, and NQ, reaching near final performance with a quarter of the total feedback provided.",
"On NewsQA, models initially trained on TriviaQA and SearchQA adapt slower than those initially trained on other three datasets.",
"On TriviaQA, we observe little change in performance throughout simulation.",
"On SearchQA, only the model initially trained on TriviaQA shows a performance gain.",
"Both SearchQA and TriviaQA include context paragraphs from the web, potentially making domain adaptation from one to the other easier.",
"Lastly, we compare bandit learning with initial models trained on a small amount of in-domain data (Section 5) and initial models trained on a large amount of out-of-domain data.",
"Table 3 compares online learning with initial models trained on 1,024 in-domain supervised examples and online domain adaptation with a SQU AD-initialized model.",
"SQUAD initialization provides a robust starting point for all datasets except SearchQA.",
"On four out of five datasets, the final performance is better with SQU AD-initialized model.",
"This is potentially because the model is exposed to different signals from two datasets and overall sees more data, either as supervised examples or through feedback.",
"However, on SearchQA, learning with SQU AD-initialized model performs much worse than learning with the initial model trained on 1,024 in-domain examples, potentially because of the gap in initial model performance (23.5 vs. 65 F1).",
"Bandit learning has been applied to a variety of NLP problems including neural machine translation (NMT; Sokolov et al., 2017; Kreutzer et al., 2018a,b; Mendoncca et al., 2021), structured prediction (Sokolov et al., 2016), semantic parsing (Lawrence and Riezler, 2018), intent recognition (Falke and Lehnen, 2021), and summarization (Gunasekara et al., 2021).",
"Explicit human feedback has been studied as a direct learning signal for NMT (Kreutzer et al., 2018b; Mendoncca et al., 2021), semantic parsing (Artzi and Zettle-moyer, 2011; Lawrence and Riezler, 2018), and summarization (Stiennon et al., 2020).",
"Nguyen et al. (2017) simulates bandit feedback to improve an MT system fully trained on a large annotated dataset, including analyzing robustness to feedback perturbations.",
"Our work shows that simulated bandit feedback is an effective learning signal for extractive question answering tasks.",
"Our work differs in focus on reducing annotation costs by relying on few annotated examples only to train the initial model, or by eliminating the need for in-domain annotation completely by relying on data in other domains to train initial models.",
"Implicit human feedback, where feedback is derived from human behavior rather than explicitly requested, has also been studied, including for dialogue (Jaques et al., 2020) and instruction generation (Kojima et al., 2021).",
"We focus on explicit feedback, but implicit signals also hold promise to improve QA systems.",
"Alternative forms of supervision for QA have been explored in prior work, such as explicitly providing fine-grained information (Dua et al., 2020; Khashabi et al., 2020a).",
"Kratzwald et al. (2020) resembles our setting in seeking binary feedback to replace span annotation, but their goal is to create supervised data more economically.",
"Campos et al. (2020) proposes feedback-weighted learning to improves conversational QA using simulated binary feedback.",
"Their approach relies on multiple samples (i.e., feedback signals) per example, training for multiple epochs online by re-visiting the same questions repeatedly, and tuning two additional hy-perparameters.",
"In contrast, we study improving QA systems via feedback as a bandit learning problem.",
"In both online and offline setups, we assume only one feedback sample per example.",
"We also provide extensive sensitivity studies to the amount of annotations available, different model initialization, and noisy feedback across various datasets.",
"Domain adaptation for QA has been widely studied (Fisch et al., 2019; Khashabi et al., 2020b), including using data augmentation (Yue et al., 2021), adversarial training (Lee et al., 2019), contrastive method (Yue et al., 2021), back-training (Kul-shreshtha et al., 2021), and exploiting small lottery subnetworks (Zhu et al., 2021).",
"We present a simulation study of learning from user feedback for extractive QA.",
"We formulate the problem as contextual bandit learning.",
"We conduct experiments to show the effectiveness of such feedback, the robustness to feedback noise, the impact of initial model performance, the trade-offs between online and offline learning, and the potential for domain adaptation.",
"Our study design emphasizes the potential for reducing annotation costs by annotating few examples or by utilizing existing datasets for new domains.",
"We intentionally adopt a basic setup, including a simple binary reward and vanilla learning algorithms, to illustrate what can be achieved with a relatively simple variant of the contextual bandit learning scenario.",
"Our results already indicate the strong potential of learning from feedback, which more advanced methods are likely to further improve.",
"For example, the balance between online and offline learning can be further explored using proximal policy optimization (PPO; Schulman et al., 2017) or replay memory (Mnih et al., 2015).",
"With well-designed interface, human users may be able to provide more sophisticated feedback (Lamm et al., 2021), which will provide a stronger signal compared to our binary reward.",
"Our aim in this study is to lay the foundation for future work, by formalizing the setup and showing its potential.",
"This is a critical step in enabling future research, especially going beyond simulation to study using real human feedback for QA systems.",
"Another important direction for future work is studying user feedback for QA systems that do both context retrieval and answer generation (Lewis et al., 2020), where assigning the feedback to the appropriate stage in the process poses a challenge.",
"Beyond extractive QA, we hope our work will inspire research of user feedback as a signal to improve other types of NLP systems.",
"Our work's limitations are discussed in Section 1 and Section 9.",
"All six datasets we use are from prior work, are publicly available, and are commonly used for the study of extractive QA.",
"Section 4 reports our computational bud-get and experimental setup in detail.",
"Our code-base is available at https://github.com/ lil-lab/bandit-qa .",
"This research was supported by ARO W911NF-21-1-0106, NSF under grants No. 1750499, the NSF AI Institute for the Foundations of Machine Learning (IFML), and a Google Faculty Research Award.",
"Finally, we thank the action editor and the anonymous reviewers for detailed comments."
] |
[
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"abstain",
"objective",
"other",
"abstain",
"other",
"abstain",
"other",
"other",
"objective",
"method",
"method",
"other",
"method",
"method",
"result",
"objective",
"result",
"result",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"other",
"other"
] |
[
"In the automatic evaluation of generative question answering (GenQA) systems, it is difficult to assess the correctness of generated answers due to the free-form of the answer.",
"Especially, widely used n-gram similarity metrics often fail to discriminate the incorrect answers since they equally consider all of the tokens.",
"To alleviate this problem, we propose KPQA-metric, a new metric for evaluating the correctness of GenQA.",
"Specifically, our new metric assigns different weights to each token via keyphrase prediction, thereby judging whether a generated answer sentence captures the key meaning of the reference answer.",
"To evaluate our metric, we create high-quality human judgments of correctness on two GenQA datasets.",
"Using our human-evaluation datasets, we show that our proposed metric has a significantly higher correlation with human judgments than existing metrics.",
"Code for KPQA-metric will be available at https://github.com/ hwanheelee1993/KPQA .",
"Question answering (QA) has received consistent attention from the natural language processing community.",
"Recently, research on QA systems has reached the stage of generating free-form answers, called GenQA, beyond extracting the answer to a given question from the context (Yin et al., 2016; Song et al., 2017; Bauer et al., 2018; Nishida et al., 2019; Bi et al., 2019, 2020).",
"However, as a bottleneck in developing GenQA models, there are no proper automatic metrics to evaluate generated answers (Chen et al., 2019).",
"In evaluating a GenQA model, it is essential to consider whether a generated response correctly contains vital information to answer the question.",
"There exist several n-gram similarity metrics such This research was done while the author was affiliated with Adobe Research.",
"Context : ... , this process, called hypothesis testing, consists of four steps .",
", ...",
"Question : How many steps are involved in a hypothesis test?",
"Reference Answer : Four steps are involved in a hypothesis test.",
"Generated Answer : There are seven steps involved in a hypothesis test .",
"Human Judgment : 0.063 BLEU-1 : 0.778 BLEU-1-KPQA : 0.057 ROUGE-L : 0.713 ROUGE-L-KPQA : 0.127 Figure 1: An example from MS-MARCO (Bajaj et al., 2016) where widely used n-gram similarity metrics does not align with human judgments of correctness.",
"as BLEU (Papineni et al., 2002) and ROUGE-L (Lin, 2004), that measure the word overlaps between the generated response and the reference answer; however, these metrics are insufficient to evaluate a GenQA system (Yang et al., 2018a; Chen et al., 2019).",
"For instance, in the example in Figure 1 from the MS-MARCO (Bajaj et al., 2016), the generated answer receives a high score on BLEU-1 (0.778) and ROUGE-L (0.713) due to the many overlaps of words with those in the reference.",
"However, humans assign a low score of 0.063 on the scale from 0 to 1 due to the mismatch of critical information.",
"As in this example, we find that existing metrics often fail to capture the correctness of the generated answer that considers the key information for the question.",
"To overcome this shortcoming of the existing metrics, we propose a new metric called KPQA-metric for evaluating GenQA systems.",
"To derive the metric, we first develop Keyphrase Predictor for Question Answering (KPQA).",
"KPQA computes the importance weight of each word in both the generated answer and the reference answer by considering the question.",
"By integrating the output from the KPQA, we compute the KPQA-metric in two steps: (1) Given a { question , generated answer , reference answer }, we compute importance weights for each question-answer pair { question , generated answer } and { question , reference answer } using a KPQA; (2) We then compute a weighted similarity score by integrating the importance weights into existing metrics.",
"Our approach can be easily integrated into most existing metrics, including n-gram similarity metrics and the recently proposed BERTScore (Zhang et al., 2020).",
"Additionally, we newly create two datasets for assessing automatic evaluation metrics with regard to the correctness in the GenQA domain.",
"We first generate answers using state-of-the-art GenQA models on MS-MARCO and AVSD (Alamri et al., 2019) where the target answers are natural sentences rather than short phrases.",
"We then collect human judgements of correctness over the 1k generated answers for each dataset.",
"In experiments on the human-evaluation datasets, we show that our KPQA-metrics have significantly higher correlations with human judgments than the previous metrics.",
"For example, BERTScore-KPQA, one of our KPQA-integrated metrics, obtains Pearson correlation coefficients of 0.673 on MS-MARCO whereas the original BERTScore obtains 0.463.",
"Further analyses demonstrate that our KPQA-metrics are robust to the question type and domain shift.",
"Overall, our main contributions can be summarized as follows: We propose KPQA metric, an importance weighting based evaluation metric for GenQA.",
"We collect high-quality human judgments of correctness for the model generated answers on MS-MARCO and AVSD, where those two GenQA datasets aim to generate sentence-level answers.",
"We show that our proposed metric has a dramatically higher correlation with human judgments than the previous metrics for these datasets.",
"We verify the robustness of our metric in various aspects such as question type and domain effect.",
"We release the human-annotated benchmark dataset and pre-trained models to compute the KPQA-metric to the research community 1 .",
"We briefly review the current automated text evaluation metrics that have been used to evaluate GenQA systems.",
"BLEU is a popular evaluation metric for generated text based on n -gram precision.",
"BLEU scores a candidate by counting the number present in the reference among the n -gram of the candidate.",
"In general, n varies from 1 to 4, and the scores for varying n are aggregated with a geometric mean.",
"ROUGE is a set of evaluation metrics used for automatic text generation such as summarization and machine translation.",
"Typically, most studies use ROUGE-L, which is a F-measure based on the longest common subsequence between a candidate and the reference.",
"METEOR (Banerjee and Lavie, 2005) is an F1 score of a set of unigram alignments.",
"METEOR has a unique property that it considers stemmed words, synonyms, and paraphrases, as well as the standard exact word matches.",
"CIDER (Vedantam et al., 2015) is a consensus-based evaluation metric that is designed for a high correlation with human judgment in the image captioning problem.",
"CIDEr uses Term Frequency-Inverse Document Frequency (TF-IDF) weights for human-like evaluation.",
"BERTScore is a recently proposed text evaluation metric that use pre-trained representations from BERT (Devlin et al., 2019).",
"BERTScore first computes the contextual embeddings for given references and candidates independently with BERT, and then computes pairwise cosine similarity scores.",
"When computing similarity, BERTScore adopts Inverse Document Frequency (IDF) to apply importance weighting.",
"To build a better metric for GenQA, we first propose KPQA.",
"By considering the question, the KPQA assigns different weights to each token in the answer sentence such that salient tokens receive a high value.",
"We then integrate the KPQA into existing metrics to make them evaluate correctness as well.",
"For GenQA, we observe that each word has different levels of importance when assessing a gen-KPQA",
"erated answer.",
"As shown in Figure 1, there exist keywords or keyphrases that are considered significant when evaluating the correctness of the answer.",
"Additionally, some words, such as function words are mostly irrelevant to the correctness of the answer.",
"Inspired by this observation, we introduce KPQA, which can predict the importance of each word when evaluating GenQA systems.",
"As shown in Figure 3, KPQA is a BERT-based (Devlin et al., 2019) classifier that predicts salient tokens in the answer sentences depending on the question.",
"We regard it as a multi-class classification task where each token is a single class.",
"To train KPQA, we first prepare extractive QA datasets such as SQuAD (Rajpurkar et al., 2016), which consist of { passage , question , answer-span }.",
"We transform these datasets into pairs of { answer-sentences , question , answer-span }.",
"We extract the answer-sentences that contain answer-span in the passage since these sentences are short summaries for the given question.",
"Specifically, for a single-hop QA dataset such as SQuAD, we pick a single sentence that includes answer-span as the answer sentence.",
"For the answers in a multi-hop QA dataset such as HotpotQA (Yang et al., 2018b), there are multiple supporting sentences for the single answer span.",
"For these cases, we use SpanBERT (Joshi et al., 2020) to resolve the coreferences in the paragraphs and extract all of the supporting sentences to compose answer sentences.",
"The { question , [SEP], answer-sentences } is then fed into the KPQA to classify the answer-span, which is a set of salient tokens, in the given answer-sentences considering the question.",
"Since KPQA's training process allows KPQA to find essential words in the answer sentences to a given question, we use a pre-trained KPQA to get the importance weights that are useful for evaluating the correctness of generated answers in GenQA.",
"The overall flow of our KPQA-metric is described in Figure",
"2. We describe how we combine these weights with existing metrics to derive the KPQA-metric.",
"We first compute the importance weights for a given question Q = ( q 1 , ..., q l ), reference answer X = ( x 1 , ..., x n ) and generated answer X = ( x 1 , ..., x m ) using pre-trained KPQA.",
"We provide each pair { question , generated answer } and { question , reference answer } to pre-trained KPQA and get the output of the softmax layer.",
"We define these parts as KeyPhrase Weight (KPW) as shown in Figure",
"3. We note that KPW ( Q, X ) = ( w 1 , ..., w m ) is an importance weight of generated answer X for a given question Q .",
"These weights reflect the importance of each token for evaluating the correctness.",
"incorporating the KPW into several existing metrics modifying the precision and recall to compute the weighted similarity.",
"BLEU-1-KPQA: We derive BLEU-1-KPQA, which is an weighted precision of unigram ( P KPQAUnigram ) as follows: P KPQAUnigram = mi =1 nj =1 KPW ( Q, X ) i I ( i, j ) mi =1 KPW ( Q, X ) i , (1) where I ( i, j ) is an indicator function assigned the value of 1 if token x i is the same as x j and 0 otherwise.",
"ROUGE-L-KPQA: We also derive ROUGE-L-KPQA, which is a modified version of ROUGE-L using KPW to compute weighted precision( PKPQALCS ), recall( RKPQALCS ) and F1( F 1 KPQALCS ), as follows: PKPQALCS = LCSKPQA ( X , X ) mi =1 KPW ( Q, X ) i , (2) RKPQALCS = LCSKPQA ( X , X ) ni =1 KPW ( Q, X ) i , (3) FKPQALCS = (1 + 2 ) RKPQALCSPKPQALCSRKPQALCS + 2 PKPQALCS , (4) where LCS is the Longest Common Subsequence between a generated answer and a reference answer.",
"The LCSKPQA ( X , X ) is defined as follows: LCSKPQA ( X , X ) = mi =1 I i KPW ( Q, X ) i , (5) where I i is an indicator function which is 1 if each word is in the LCS and 0 otherwise.",
"BERTScore-KPQA Similar to ROUGE-L-KPQA, we compute BERTScore-KPQA using KPW.",
"We first compute contextual embedding x for generated answer X and x for reference X using the BERT model.",
"Then, we compute weighted precision( PKPQABERT ), recall( RKPQABERT ) and F1( F 1 KPQABERT ) with contextual embedding and KPW of each token as follows: PKPQABERT = mi =1 KPW ( Q, X ) i max x j x x i T x j m i =1 KPW ( Q, X ) i (6) RKPQABERT = ni =1 KPW ( Q, X ) i max x j x x i T x j ni =1 KPW ( Q, X ) i (7) F 1 KPQABERT = 2 PKPQABERT RKPQABERTPKPQABERT + RKPQABERT (8) PKPQALCS = LCSKPQA ( X , X ) mi =1 KPW ( Q, X ) i , (9) RKPQALCS = LCSKPQA ( X , X ) ni =1 KPW ( Q, X ) i , (10) FKPQALCS = (1 + 2 ) RKPQALCSPKPQALCSRKPQALCS + 2 PKPQALCS , (11) where LCS is the Longest Common Subsequence between a generated answer and a reference answer.",
"The LCSKPQA ( X , X ) is defined as follows: LCSKPQA ( X , X ) = mi =1 I i KPW ( Q, X ) i , (12) where I i is an indicator function which is 1 if each word is in the LCS and 0 otherwise.",
"is defined in (Lin, 2004).",
"Similar to ROUGE-L-KPQA, we also derive BLEU-1-KPQA and BERTScore-KPQA by intergating KPW and provide the formulas in Appendix.",
"GenQA Datasets: To evaluate GenQA metrics, it is necessary to measure the correlation between human judgments and automated text evaluation metrics for evaluating the model generated answers.",
"Recently, Chen et al. (2019) released human judgments of correctness for two GenQA datasets, NarrativeQA (Kocisk et al., 2018) and SemEval-2018 Task 11 (SemEval) (Ostermann et al., 2018).",
"However, we find that the average lengths of the answer sentence are 4.7 and 2.5 for NarrativeQA and SemEval, respectively, as shown in Table",
"1. These short answers are often short phrases and cannot be representative of GenQA, because the answers could be long and may deliver complex meaning.",
"We argue that evaluating long and abstractive answers is more challenging and suitable for studying the metrics for general form of GenQA.",
"To fill this gap, we collect the human judgments of correctness for model generated answers on two other GenQA datasets, MS-MARCO and AVSD, which have longer answers than NarrativeQA and SemEval as shown in Table",
"1. For the MS-MARCO, we use the Natural Language Generation (NLG) subset, which has more abstractive and longer answers than the Q&A subset.",
"GenQA Models: For each of the two datasets, we first generate answers for questions on validation sets using two trained GenQA models: UniLM (Dong et al., 2019) and MHPGM (Bauer et al., 2018) for MS-MARCO, MTN (Le et al., 2019) and AMF (Alamri et al., 2018; Hori et al., 2017) for AVSD.",
"Details on these QA models are in Appendix.",
"After training, we select 1k samples for each dataset in the validation set.",
"Specifically, we first randomly pick the 500 questions in the validation set of each dataset and collect the corresponding model generated answers for each model so that we have two generated answers for each sample.",
"Therefore, we collect a total of 1k samples, two different answers for 500 questions for each dataset.",
"Also, we discard samples if one of two GenQA models exactly generates the ground-truth answer since human evaluation is useless during the sampling.",
"We hire workers from the Amazon Mechanical Turk (MTurk) to rate the correctness of the generated answers from the models we trained.",
"We assign ten workers for each sample to get reliable data.",
"We ask the workers to annotate correctness using a 5-point Likert scale (Likert, 1932), where 1 means completely wrong, and 5 means completely correct.",
"We provide the full instruction in Appendix.",
"Filtering Noisy Workers: Some workers did not follow the instructions, producing poor-quality judgments.",
"To solve this problem, we filter noisy Dataset # Annotators(avg.) MS MARCO 0.817 7.08 AVSD 0.725 6.88 Table 2: Inter annotator agreement measured by Krippendorff's alpha( ) and the average of number of annotators for each dataset.",
"ratings using the z-score, as in (Jung and Lease, 2011).",
"We first compute the z-score among the ten responses for each sample.",
"Then, we consider the responses whose z-score is higher than 1 to be noise and remove up to five of them in the order of the z-score.",
"The average number of annotators after filtering is shown in Table",
"2. We use the average score of the annotators for each sample as a ground-truth evaluation score to assess the quality of the evaluation metric.",
"Inter-Annotator Agreement: The final dataset is further validated with Krippendorff's al-pha (Krippendorff, 1970, 2011), a statistical measure of inter-rater agreement for multiple annotators.",
"We observe that Krippendorff's is higher than 0.6 for both datasets and models after filtering, as shown in Table",
"2. These coefficient numbers indicate a substantial agreement according to one of the general guidelines (Landis and Koch, 1977) for kappa-like measures.",
"We choose three datasets SQuAD v1.1 (Rajpurkar et al., 2016), HotpotQA (Yang et al., 2018b) and MS-MARCO Q&A subset to train KPQA.",
"We combine the training set of the three datasets and use a 9:1 split to construct the training and development set of KPQA.",
"For HotpotQA, we exclude yes/no type questions where the answers are not in the passage.",
"For model parameters, we choose bert-base-uncased variants for the BERT model and use one fully-connected layer with softmax layer after it.",
"We train 5 epochs and choose the model that shows the minimum evaluation loss.",
"We provide more details in Appendix.",
"metric, we use the Pearson coefficient and Spearman coefficient.",
"We compute these correlation coefficients with human judgments of correctness.",
"We test using MS-MARCO, AVSD, from which we collected human judgments, and NarrativeQA and SemEval from (Chen et al., 2019).",
"Performance Comparison: We present the correlation scores for the baseline metrics and KPQA-augmented ones for multiple datasets in Table",
"3. The correlations between human judgment and most of the existing metrics such as BLEU or ROUGE-L are very low, and this shows that those widely used metrics are not adequate to GenQA.",
"Moreover, the performance of existing metrics is especially low for the MS-MARCO, which has longer and more abstractive answers than the other three datasets.",
"We observe a significantly higher correlation score for our proposed KPQA-metric compared to existing metrics especially for MS-MARCO and AVSD where the answers are full-sentences rather than short phrases.",
"For the NarrativeQA, where existing metrics also have higher correlations, the gap in performance between KPQA-metric and existing metrics is low.",
"We explain this is because the answers in NarrativeQA are often a single word or short phrases that are already keyphrases.",
"Comparison with IDF: The next best metric after our proposed metric is the original BERTScore, which uses contextual embeddings and adopts IDF based importance weighting.",
"Since IDF is dependent on the word-frequency among the documents, it can assign a lower weight to some important words to evaluate correctness if they frequently occur in the corpus as shown in Table 5.",
"On the other hand, our KPQA integrated metric assigns weights Dataset MS-MARCO Metric r BLEU-1-KPQA 0.675 0.634 ROUGE-L-KPQA 0.698 0.642 BERTScore-KPQA 0.673 0.655 BLEU-1-KPQA /MARCO 0.573 0.529 ROUGE-L-KPQA /MARCO 0.598 0.564 BERTScore-KPQA /MARCO 0.602 0.595 BLEU-1-KP 0.629 0.589 ROUGE-L-KP 0.671 0.640 BERTScore-KP 0.657 0.649 Table 4: Ablation studies for our proposed metrics on domain effect and using the question context.",
"to words in the answer sentence using the context of the question.",
"This approach provides dynamic weights for each word that leads to a better correlation with human evaluation as shown in Table",
"3. 5.3 Ablation Study Domain Effect: Our KPQA metric computes importance weights using a supervised model; thus our proposed method may suffer from a domain shift problem.",
"Although our metric is evaluated on out-of-domain datasets except MS-MARCO, we further examine the effect of the domain difference by changing the trainset of KPQA.",
"Since we train KPQA with the combination of SQuAD, HotpotQA and MS-MARCO Q&A, the original KPQA works as in-domain for MS-MARCO.",
"To measure the negative domain effect, we exclude the MS-MARCO Q&A in the training set of KPQA and measure the performance of KPQA-metric on MS-MARCO.",
"We annotate it -KPQA /MARCO \" and report the results in Table",
"4. This drop shows the effect of the negative domain shift for our KPQA-metric.",
"However, -KPQA /MARCO \" is still much higher than all PERSON NUMERIC DESCRIPTION LOCATION ENTITY 0.0 0.2 0.4 0.6 0.8 P e a r s o n C o rr e l a t i o n ( MSMARCO ) Pearson Correlation by Q-type BLEU-1 BLEU-1-KPQA ROUGE-L ROUGE-L-KPQA BERTScore BERTScore-KPQA Figure 4: Pearson correlation coefficient among question types on MS-MARCO dataset.",
"Using the Question Context: Our KPQA uses the question as an additional context to predict the keyphrases in the sentence, as shown in Figure",
"3. To examine the power of utilizing the question information for the keyphrase predictor, we remove the question part from the dataset and train the keyphrase prediction model.",
"With the newly trained model, we compute the importance weights for words in the target sentence and apply them to BLEU-1, ROUGE-L, and BERTScore.",
"We call this metric as -KP\" and report the results in Table",
"4. We observe that -KPQA\" metric is better than -KP\" metric for all of the three variants.",
"These results show that training keyphrase predictor to find the short answer candidate in the sentence is effective for capturing the key information in the generated answer, but it is more effective when the question information is integrated.",
"Correlation Among Question Type: Since MS-MARCO provides the question type information ( PERSON , NUMERIC , DESCRIPTION , LOCATION , ENTITY ) for each { question , answer } pair, we evaluate the various metrics by the question type.",
"We split the dataset into these five question types and measure the performance of various metrics with Pearson correlation coefficients.",
"As shown in Figure 4, our KPQA-metric variants outperform their original version in all of the question types.",
"KPQA-metric is especially effective for the NUMERIC question type, whose answer sentence often has shorter keyphrase such as a number.",
"For ENTITY and PERSON question types, the gap between KPQA-integrated metric and original metric Question : How to cook sausage peppers onions ?",
"Reference Answer : To cook sausage peppers onions first place the sausage in a large skillet over medium heat, and brown on all sides after that remove from skillet, and slice meelt butter in the skillet, stir in the yellow onion, red onion, and garlic, and cook 2 to 3 minutes and then mix in red bell pepper and green bell pepper season with basil, and oregano in last stir in white wine.",
"is lower for BERTScore.",
"We speculate that this is because the original BERTScore uses IDF-based importance weighting, unlike other metrics.",
"Multiple Sentence Answers: Most of the answers in MS-MARCO and AVSD consist of single sentences, but the answers for GenQA can be multiple sentences like (Fan et al., 2019).",
"To verify our KPQA-metric on multiple sentence answers, we collect additional 100 human judgments for the generated answer whose answers are multiple sentences in the MS-MARCO like the example in Figure 5, and evaluate the various metrics on this dataset.",
"As shown in Table 6, our KPQA integrated metric shows still higher correlations than other metrics.",
"We observe that the gap between KPQA integrated metrics and existing metrics is relatively lower than that of Table",
"3. We speculate this is because many of the multiple sentence answers are DESCRIPTION type answers whose keyphrases are sometimes vague, similar to the results in Figure",
"4. Error Analysis: We pick 100 error cases from MS-MARCO in the order of a large difference in ranks among 1k samples between human judgments and BERTScore-KPQA.",
"The importance weights have no ground-truth data; thus we manually visualize the weights as shown in Table 5 and analyze the error cases.",
"From the analysis, we observe some obvious reasons for the different judgments between humans and BERTScore-KPQA.",
"We first classify error cases by the question types and observe that 51 cases belong to NUMERIC , and 31 cases belong to DESCRIPTION .",
"We further analyze the NUMERIC question type and find that many parts of the errors Context ... , it can take 5-20 hours of walking to lose 1 pound ... , ...",
"are due to higher weights on units such as million\" or years.\"",
"There exist a total of ten error cases for this type, and we believe that there is room for improvement with regard to these errors through post-processing.",
"In the case of the DESCRIPTION question type, 17 out of 31 cases are due to inappropriate importance weights.",
"We speculate this result is because the keyphrases for the answers to questions belonging to the DESCRIPTION type are sometimes vague; thus, the entire answer needs to be considered when it is evaluated.",
"Rank-Pair: One practical usage of the text evaluation metric is ranking outputs of multiple models.",
"Using the collected human judgments of correctness for the same 500 { question , reference answer } pairs for two models on MS-MARCO and AVSD, we can compare the output of each models through the human-annotated score.",
"To see the alignment of ranking ability among the various metrics with that of human judges, we conduct a win-lose match\" experiment, counting the number of times that a metric ranks the output of two models as the same as human judges.",
"To prepare test samples, we chose only those whose gap between human judgment scores on the two models is greater than",
"2. Finally, we obtain 93 and 193 samples for MS-MARCO and AVSD, respectively.",
"Considering that the range of scores is 1-5, this approach ensures that each output of the models has a clear quality difference.",
"Table 7 shows the percentage of rank-pair matches for each metric with human judgments of correctness on two datasets.",
"Our KPQA-metric shows more matches than previous metrics in all of the datasets; thus, it is more useful for comparing the generated answers from different models.",
"One important next step for current QA systems is to generate answers in natural language for a given question and context.",
"Following this interest, several generative (abstractive) QA datasets (Bajaj et al., 2016; He et al., 2018; Kocisk et al., 2018; Fan et al., 2019), where the answer is not necessarily in the passage, have recently been released.",
"Since the task is to generate natural language for the given question, the QA system is often trained with seq2seq (Sutskever et al., 2014) objective similarly to other natural generation tasks such as neural machine translation.",
"Hence, researchers often use n-gram based similarity metrics such as BLEU to evaluate the GenQA systems, following other natural language generation tasks.",
"However, most of these n-gram metrics including BLEU were originally developed to evaluate machine translation and previous works (Liu et al., 2016; Nema and Khapra, 2018; Kryscinski et al., 2019) have shown that these metrics have poor correlations with human judgments in other language generation tasks such as dialogue systems.",
"As with other text generation systems, for GenQA, it is difficult to assess the performance through n-gram metrics.",
"Especially, n-gram similarity metrics can give a high score to a generated answer that is incorrect but shares many unnecessary words with the reference answer.",
"Previous works (Mar-ton and Radul, 2006; Yang et al., 2018a; Chen et al., 2019) have pointed out the difficulty of similar problems and studied automated metrics for evaluating QA systems.",
"Inspired by these works, we focus on studying and developing evaluation metrics for GenQA datasets that have more abstractive and diverse answers.",
"We analyze the problem of using existing n-gram similarity metrics across multiple GenQA datasets and propose alternative metrics for GenQA.",
"In this paper, we create high-quality human judgments on two GenQA datasets, MS-MARCO and AVSD, and show that previous evaluation metrics are poorly correlated with human judgments in terms of the correctness of an answer.",
"We propose KPQA-metric, which uses the pre-trained model that can predict the importance weights of words in answers to a given question to be integrated with existing metrics.",
"Our approach has a dramatically higher correlation with human judgments than existing metrics, showing that our model-based importance weighting is critical to measure the correctness of a generated answer in GenQA.",
"Our paper and dataset follow ethical standards.",
"We compensate the annotators with competitive pay.",
"Furthermore, we follow all ethical procedures for data collection, where we use public datasets to train the models.",
"K. Jung is with ASRI, Seoul National University, Korea.",
"This work was supported by AIRS Company in Hyundai Motor Company & Kia Corporation through HKMC-SNU AI Consortium Fund."
] |
[
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"result",
"objective",
"result",
"method",
"abstain",
"method",
"other",
"other"
] |
[
"We introduce a new language representation model called BERT , which stands for B idirectional E ncoder R epresentations from T ransformers.",
"Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pretrain deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers.",
"As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.",
"BERT is conceptually simple and empirically powerful.",
"It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).",
"Language model pre-training has been shown to be effective for improving many natural language processing tasks (Dai and Le, 2015; Peters et al., 2018a; Radford et al., 2018; Howard and Ruder, 2018).",
"These include sentence-level tasks such as natural language inference (Bowman et al., 2015; Williams et al., 2018) and paraphrasing (Dolan and Brockett, 2005), which aim to predict the relationships between sentences by analyzing them holistically, as well as token-level tasks such as named entity recognition and question answering, where models are required to produce fine-grained output at the token level (Tjong Kim Sang and De Meulder, 2003; Rajpurkar et al., 2016).",
"There are two existing strategies for applying pre-trained language representations to downstream tasks: feature-based and fine-tuning .",
"The feature-based approach, such as ELMo (Peters et al., 2018a), uses task-specific architectures that include the pre-trained representations as additional features.",
"The fine-tuning approach, such as the Generative Pre-trained Transformer (OpenAI GPT) (Radford et al., 2018), introduces minimal task-specific parameters, and is trained on the downstream tasks by simply fine-tuning all pre-trained parameters.",
"The two approaches share the same objective function during pre-training, where they use unidirectional language models to learn general language representations.",
"We argue that current techniques restrict the power of the pre-trained representations, especially for the fine-tuning approaches.",
"The major limitation is that standard language models are unidirectional, and this limits the choice of architectures that can be used during pre-training.",
"For example, in OpenAI GPT, the authors use a left-to-right architecture, where every token can only attend to previous tokens in the self-attention layers of the Transformer (Vaswani et al., 2017).",
"Such restrictions are sub-optimal for sentence-level tasks, and could be very harmful when applying fine-tuning based approaches to token-level tasks such as question answering, where it is crucial to incorporate context from both directions.",
"In this paper, we improve the fine-tuning based approaches by proposing BERT: B idirectional E ncoder R epresentations from T ransformers.",
"BERT alleviates the previously mentioned unidirectionality constraint by using a masked language model (MLM) pre-training objective, inspired by the Cloze task (Taylor, 1953).",
"The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context.",
"Unlike left-to-right language model pre-training, the MLM objective enables the representation to fuse the left and the right context, which allows us to pretrain a deep bidirectional Transformer.",
"In addition to the masked language model, we also use a next sentence prediction task that jointly pre-trains text-pair representations.",
"The contributions of our paper are as follows: We demonstrate the importance of bidirectional pre-training for language representations.",
"Unlike Radford et al. (2018), which uses unidirectional language models for pre-training, BERT uses masked language models to enable pre-trained deep bidirectional representations.",
"This is also in contrast to Peters et al. (2018a), which uses a shallow concatenation of independently trained left-to-right and right-to-left LMs.",
"We show that pre-trained representations reduce the need for many heavily-engineered task-specific architectures.",
"BERT is the first fine-tuning based representation model that achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures.",
"BERT advances the state of the art for eleven NLP tasks.",
"The code and pre-trained models are available at https://github.com/ google-research/bert .",
"Learning widely applicable representations of words has been an active area of research for decades, including non-neural (Brown et al., 1992; Ando and Zhang, 2005; Blitzer et al., 2006) and neural (Mikolov et al., 2013; Pennington et al., 2014) methods.",
"Pre-trained word embeddings are an integral part of modern NLP systems, offering significant improvements over embeddings learned from scratch (Turian et al., 2010).",
"To pretrain word embedding vectors, left-to-right language modeling objectives have been used (Mnih and Hinton, 2009), as well as objectives to discriminate correct from incorrect words in left and right context (Mikolov et al., 2013).",
"These approaches have been generalized to coarser granularities, such as sentence embeddings (Kiros et al., 2015; Logeswaran and Lee, 2018) or paragraph embeddings (Le and Mikolov, 2014).",
"To train sentence representations, prior work has used objectives to rank candidate next sentences (Jernite et al., 2017; Logeswaran and Lee, 2018), left-to-right generation of next sentence words given a representation of the previous sentence (Kiros et al., 2015), or denoising auto-encoder derived objectives (Hill et al., 2016).",
"ELMo and its predecessor (Peters et al., 2017, 2018a) generalize traditional word embedding research along a different dimension.",
"They extract context-sensitive features from a left-to-right and a right-to-left language model.",
"The contextual representation of each token is the concatenation of the left-to-right and right-to-left representations.",
"When integrating contextual word embeddings with existing task-specific architectures, ELMo advances the state of the art for several major NLP benchmarks (Peters et al., 2018a) including question answering (Rajpurkar et al., 2016), sentiment analysis (Socher et al., 2013), and named entity recognition (Tjong Kim Sang and De Meulder, 2003).",
"Melamud et al. (2016) proposed learning contextual representations through a task to predict a single word from both left and right context using LSTMs.",
"Similar to ELMo, their model is feature-based and not deeply bidirectional.",
"Fedus et al. (2018) shows that the cloze task can be used to improve the robustness of text generation models.",
"As with the feature-based approaches, the first works in this direction only pre-trained word embedding parameters from unlabeled text (Col-lobert and Weston, 2008).",
"More recently, sentence or document encoders which produce contextual token representations have been pre-trained from unlabeled text and fine-tuned for a supervised downstream task (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018).",
"The advantage of these approaches is that few parameters need to be learned from scratch.",
"At least partly due to this advantage, OpenAI GPT (Radford et al., 2018) achieved previously state-of-the-art results on many sentence-level tasks from the GLUE benchmark (Wang et al., 2018a).",
"Left-to-right language model-BERT BERTE [CLS] E 1 E [SEP] ...",
"There has also been work showing effective transfer from supervised tasks with large datasets, such as natural language inference (Conneau et al., 2017) and machine translation (McCann et al., 2017).",
"Computer vision research has also demonstrated the importance of transfer learning from large pre-trained models, where an effective recipe is to fine-tune models pre-trained with Ima-geNet (Deng et al., 2009; Yosinski et al., 2014).",
"We introduce BERT and its detailed implementation in this section.",
"There are two steps in our framework: pre-training and fine-tuning .",
"During pre-training, the model is trained on unlabeled data over different pre-training tasks.",
"For fine-tuning, the BERT model is first initialized with the pre-trained parameters, and all of the parameters are fine-tuned using labeled data from the downstream tasks.",
"Each downstream task has separate fine-tuned models, even though they are initialized with the same pre-trained parameters.",
"The question-answering example in Figure 1 will serve as a running example for this section.",
"Model Architecture BERT's model architecture is a multi-layer bidirectional Transformer encoder based on the original implementation described in Vaswani et al. (2017) and released in the tensor2tensor library.",
"1 Because the use of Transformers has become common and our implementation is almost identical to the original, we will omit an exhaustive background description of the model architecture and refer readers to Vaswani et al. (2017) as well as excellent guides such as The Annotated Transformer. 2 In this work, we denote the number of layers (i.e., Transformer blocks) as L , the hidden size as H , and the number of self-attention heads as A .",
"3 We primarily report results on two model sizes: BERTBASE (L=12, H=768, A=12, Total Param-eters=110M) and BERTLARGE (L=24, H=1024, A=16, Total Parameters=340M).",
"BERTBASE was chosen to have the same model size as OpenAI GPT for comparison purposes.",
"Critically, however, the BERT Transformer uses bidirectional self-attention, while the GPT Transformer uses constrained self-attention where every token can only attend to context to its left.",
"4 1 https://github.com/tensorflow/tensor2tensor 2 http://nlp.seas.harvard.edu/2018/04/03/attention.html 3 In all cases we set the feed-forward/filter size to be 4 H , i.e., 3072 for the H = 768 and 4096 for the H = 1024 .",
"4 We note that in the literature the bidirectional Trans-Input/Output Representations To make BERT handle a variety of down-stream tasks, our input representation is able to unambiguously represent both a single sentence and a pair of sentences (e.g., (cid:104) Question, Answer (cid:105) ) in one token sequence.",
"Throughout this work, a sentence can be an arbitrary span of contiguous text, rather than an actual linguistic sentence.",
"A sequence refers to the input token sequence to BERT, which may be a single sentence or two sentences packed together.",
"We use WordPiece embeddings (Wu et al., 2016) with a 30,000 token vocabulary.",
"The first token of every sequence is always a special classification token ( [CLS] ).",
"The final hidden state corresponding to this token is used as the aggregate sequence representation for classification tasks.",
"Sentence pairs are packed together into a single sequence.",
"We differentiate the sentences in two ways.",
"First, we separate them with a special token ( [SEP] ).",
"Second, we add a learned embedding to every token indicating whether it belongs to sentence A or sentence B .",
"As shown in Figure 1, we denote input embedding as E , the final hidden vector of the special [CLS] token as C RH , and the final hidden vector for the i th input token as T i RH .",
"For a given token, its input representation is constructed by summing the corresponding token, segment, and position embeddings.",
"A visualization of this construction can be seen in Figure 2. 3.1 Pre-training BERT Unlike Peters et al. (2018a) and Radford et al. (2018), we do not use traditional left-to-right or right-to-left language models to pre-train BERT.",
"Instead, we pre-train BERT using two unsupervised tasks, described in this section.",
"This step is presented in the left part of Figure 1. Task #1: Masked LM Intuitively, it is reasonable to believe that a deep bidirectional model is strictly more powerful than either a left-to-right model or the shallow concatenation of a left-to-right and a right-to-left model.",
"Unfortunately, standard conditional language models can only be trained left-to-right or right-to-left, since bidirectional conditioning would allow each word to indirectly see itself, and the model could trivially predict the target word in a multi-layered context.",
"former is often referred to as a Transformer encoder while the left-context-only version is referred to as a Transformer decoder since it can be used for text generation.",
"In order to train a deep bidirectional representation, we simply mask some percentage of the input tokens at random, and then predict those masked tokens.",
"We refer to this procedure as a masked LM (MLM), although it is often referred to as a Cloze task in the literature (Taylor, 1953).",
"In this case, the final hidden vectors corresponding to the mask tokens are fed into an output softmax over the vocabulary, as in a standard LM.",
"In all of our experiments, we mask 15% of all WordPiece tokens in each sequence at random.",
"In contrast to denoising auto-encoders (Vincent et al., 2008), we only predict the masked words rather than reconstructing the entire input.",
"Although this allows us to obtain a bidirectional pre-trained model, a downside is that we are creating a mismatch between pre-training and fine-tuning, since the [MASK] token does not appear during fine-tuning.",
"To mitigate this, we do not always replace masked words with the actual [MASK] token.",
"The training data generator chooses 15% of the token positions at random for prediction.",
"If the i -th token is chosen, we replace the i -th token with (1) the [MASK] token 80% of the time (2) a random token 10% of the time (3) the unchanged i -th token 10% of the time.",
"Then, T i will be used to predict the original token with cross entropy loss.",
"We compare variations of this procedure in Appendix C.2.",
"Task #2: Next Sentence Prediction (NSP) Many important downstream tasks such as Question Answering (QA) and Natural Language Inference (NLI) are based on understanding the relationship between two sentences, which is not directly captured by language modeling.",
"In order to train a model that understands sentence relationships, we pre-train for a binarized next sentence prediction task that can be trivially generated from any monolingual corpus.",
"Specifically, when choosing the sentences A and B for each pretraining example, 50% of the time B is the actual next sentence that follows A (labeled as IsNext ), and 50% of the time it is a random sentence from the corpus (labeled as NotNext ).",
"As we show in Figure 1, C is used for next sentence prediction (NSP).",
"5 Despite its simplicity, we demonstrate in Section 5.1 that pre-training towards this task is very beneficial to both QA and NLI.",
"6 5 The final model achieves 97%-98% accuracy on NSP.",
"The NSP task is closely related to representation-learning objectives used in Jernite et al. (2017) and Logeswaran and Lee (2018).",
"However, in prior work, only sentence embeddings are transferred to down-stream tasks, where BERT transfers all parameters to initialize end-task model parameters.",
"Pre-training data The pre-training procedure largely follows the existing literature on language model pre-training.",
"For the pre-training corpus we use the BooksCorpus (800M words) (Zhu et al., 2015) and English Wikipedia (2,500M words).",
"For Wikipedia we extract only the text passages and ignore lists, tables, and headers.",
"It is critical to use a document-level corpus rather than a shuffled sentence-level corpus such as the Billion Word Benchmark (Chelba et al., 2013) in order to extract long contiguous sequences.",
"Fine-tuning is straightforward since the self-attention mechanism in the Transformer allows BERT to model many downstream tasks whether they involve single text or text pairsby swapping out the appropriate inputs and outputs.",
"For applications involving text pairs, a common pattern is to independently encode text pairs before applying bidirectional cross attention, such as Parikh et al. (2016); Seo et al. (2017).",
"BERT instead uses the self-attention mechanism to unify these two stages, as encoding a concatenated text pair with self-attention effectively includes bidirectional cross attention between two sentences.",
"For each task, we simply plug in the task-specific inputs and outputs into BERT and fine-tune all the parameters end-to-end.",
"At the input, sentence A and sentence B from pre-training are analogous to (1) sentence pairs in paraphrasing, (2) hypothesis-premise pairs in entailment, (3) question-passage pairs in question answering, and (4) a degenerate text pair in text classification or sequence tagging.",
"At the output, the token representations are fed into an output layer for token-level tasks, such as sequence tagging or question answering, and the [CLS] representation is fed into an output layer for classification, such as entailment or sentiment analysis.",
"Compared to pre-training, fine-tuning is relatively inexpensive.",
"All of the results in the paper can be replicated in at most 1 hour on a single Cloud TPU, or a few hours on a GPU, starting from the exact same pre-trained model.",
"7 We describe the task-specific details in the corresponding subsections of Section 4. More details can be found in Appendix A.5.",
"The General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018a) is a collection of diverse natural language understanding tasks.",
"Detailed descriptions of GLUE datasets are included in Appendix B.1.",
"To fine-tune on GLUE, we represent the input sequence (for single sentence or sentence pairs) as described in Section 3, and use the final hidden vector C RH corresponding to the first input token ( [CLS] ) as the aggregate representation.",
"The only new parameters introduced during fine-tuning are classification layer weights W RK H , where K is the number of labels.",
"We compute a standard classification loss with C and W , i.e., log(softmax( CWT )) .",
"We use a batch size of 32 and fine-tune for 3 epochs over the data for all GLUE tasks.",
"For each task, we selected the best fine-tuning learning rate (among 5e-5, 4e-5, 3e-5, and 2e-5) on the Dev set.",
"Additionally, for BERTLARGE we found that fine-tuning was sometimes unstable on small datasets, so we ran several random restarts and selected the best model on the Dev set.",
"With random restarts, we use the same pre-trained checkpoint but perform different fine-tuning data shuffling and classifier layer initialization.",
"9 Results are presented in Table 1. Both BERTBASE and BERTLARGE outperform all systems on all tasks by a substantial margin, obtaining 4.5% and 7.0% respective average accuracy improvement over the prior state of the art.",
"Note that BERTBASE and OpenAI GPT are nearly identical in terms of model architecture apart from the attention masking.",
"For the largest and most widely reported GLUE task, MNLI, BERT obtains a 4.6% absolute accuracy improvement.",
"On the official GLUE leaderboard 10 , BERTLARGE obtains a score of 80.5, compared to OpenAI GPT, which obtains 72.8 as of the date of writing.",
"We find that BERTLARGE significantly outperforms BERTBASE across all tasks, especially those with very little training data.",
"The effect of model size is explored more thoroughly in Section 5.2.",
"The Stanford Question Answering Dataset (SQuAD v1.1) is a collection of 100k crowdsourced question/answer pairs (Rajpurkar et al., 2016).",
"Given a question and a passage from 9 The GLUE data set distribution does not include the Test labels, and we only made a single GLUE evaluation server submission for each of BERTBASE and BERTLARGE .",
"10 https://gluebenchmark.com/leaderboard Wikipedia containing the answer, the task is to predict the answer text span in the passage.",
"As shown in Figure 1, in the question answering task, we represent the input question and passage as a single packed sequence, with the question using the A embedding and the passage using the B embedding.",
"We only introduce a start vector S RH and an end vector E RH during fine-tuning.",
"The probability of word i being the start of the answer span is computed as a dot product between T i and S followed by a softmax over all of the words in the paragraph: P i = e S Ti (cid:80) j e S Tj .",
"The analogous formula is used for the end of the answer span.",
"The score of a candidate span from position i to position j is defined as S T i + E T j , and the maximum scoring span where j i is used as a prediction.",
"The training objective is the sum of the log-likelihoods of the correct start and end positions.",
"We fine-tune for 3 epochs with a learning rate of 5e-5 and a batch size of 32.",
"Table 2 shows top leaderboard entries as well as results from top published systems (Seo et al., 2017; Clark and Gardner, 2018; Peters et al., 2018a; Hu et al., 2018).",
"The top results from the SQuAD leaderboard do not have up-to-date public system descriptions available, 11 and are allowed to use any public data when training their systems.",
"We therefore use modest data augmentation in our system by first fine-tuning on TriviaQA (Joshi et al., 2017) befor fine-tuning on SQuAD.",
"Our best performing system outperforms the top leaderboard system by +1.5 F1 in ensembling and +1.3 F1 as a single system.",
"In fact, our single BERT model outperforms the top ensemble system in terms of F1 score.",
"Without TriviaQA fine-11 QANet is described in Yu et al. (2018), but the system has improved substantially after publication.",
"tuning data, we only lose 0.1-0.4 F1, still outperforming all existing systems by a wide margin.",
"12 4.3 SQuAD v2.0 The SQuAD 2.0 task extends the SQuAD 1.1 problem definition by allowing for the possibility that no short answer exists in the provided paragraph, making the problem more realistic.",
"We use a simple approach to extend the SQuAD v1.1 BERT model for this task.",
"We treat questions that do not have an answer as having an answer span with start and end at the [CLS] token.",
"The probability space for the start and end answer span positions is extended to include the position of the [CLS] token.",
"For prediction, we compare the score of the no-answer span: s null = S C + E C to the score of the best non-null span 12 The TriviaQA data we used consists of paragraphs from TriviaQA-Wiki formed of the first 400 tokens in documents, that contain at least one of the provided possible answers.",
"s i,j = max j i S T i + E T j .",
"We predict a non-null answer when s i,j > s null + , where the threshold is selected on the dev set to maximize F1.",
"We did not use TriviaQA data for this model.",
"We fine-tuned for 2 epochs with a learning rate of 5e-5 and a batch size of 48.",
"The results compared to prior leaderboard entries and top published work (Sun et al., 2018; Wang et al., 2018b) are shown in Table 3, excluding systems that use BERT as one of their components.",
"We observe a +5.1 F1 improvement over the previous best system.",
"The Situations With Adversarial Generations (SWAG) dataset contains 113k sentence-pair completion examples that evaluate grounded commonsense inference (Zellers et al., 2018).",
"Given a sentence, the task is to choose the most plausible continuation among four choices.",
"When fine-tuning on the SWAG dataset, we construct four input sequences, each containing the concatenation of the given sentence (sentence A ) and a possible continuation (sentence B ).",
"The only task-specific parameters introduced is a vector whose dot product with the [CLS] token representation C denotes a score for each choice which is normalized with a softmax layer.",
"We fine-tune the model for 3 epochs with a learning rate of 2e-5 and a batch size of 16.",
"Results are presented in Table 4. BERTLARGE outperforms the authors' baseline ESIM+ELMo system by +27.1% and OpenAI GPT by 8.3%.",
"In this section, we perform ablation experiments over a number of facets of BERT in order to better understand their relative importance.",
"Additional Dev Set Tasks MNLI-m QNLI MRPC SST-2 SQuAD (Acc) (Acc) (Acc) (Acc) (F1) BERTBASE 84.4 88.4 86.7 92.7 88.5 No NSP 83.9 84.9 86.5 92.6 87.9 LTR & No NSP 82.1 84.3 77.5 92.1 77.8 + BiLSTM 82.1 84.1 75.7 91.6 84.9 Table 5: Ablation over the pre-training tasks using the BERTBASE architecture.",
"We demonstrate the importance of the deep bidirectionality of BERT by evaluating two pretraining objectives using exactly the same pretraining data, fine-tuning scheme, and hyperparameters as BERTBASE :",
"next sentence prediction (NSP) task.",
"LTR & No NSP : A left-context-only model which is trained using a standard Left-to-Right (LTR) LM, rather than an MLM.",
"The left-only constraint was also applied at fine-tuning, because removing it introduced a pre-train/fine-tune mismatch that degraded downstream performance.",
"Additionally, this model was pre-trained without the NSP task.",
"This is directly comparable to OpenAI GPT, but using our larger training dataset, our input representation, and our fine-tuning scheme.",
"We first examine the impact brought by the NSP task.",
"In Table 5, we show that removing NSP hurts performance significantly on QNLI, MNLI, and SQuAD 1.1.",
"Next, we evaluate the impact of training bidirectional representations by comparing No NSP to LTR & No NSP.",
"The LTR model performs worse than the MLM model on all tasks, with large drops on MRPC and SQuAD.",
"For SQuAD it is intuitively clear that a LTR model will perform poorly at token predictions, since the token-level hidden states have no right-side context.",
"In order to make a good faith attempt at strengthening the LTR system, we added a randomly initialized BiLSTM on top.",
"This does significantly improve results on SQuAD, but the results are still far worse than those of the pre-trained bidirectional models.",
"The BiLSTM hurts performance on the GLUE tasks.",
"We recognize that it would also be possible to train separate LTR and RTL models and represent each token as the concatenation of the two models, as ELMo does.",
"However:",
"(a) this is twice as expensive as a single bidirectional model;",
"(b) this is non-intuitive for tasks like QA, since the RTL model would not be able to condition the answer on the question;",
"(c) this it is strictly less powerful than a deep bidirectional model, since it can use both left and right context at every layer.",
"In this section, we explore the effect of model size on fine-tuning task accuracy.",
"We trained a number of BERT models with a differing number of layers, hidden units, and attention heads, while otherwise using the same hyperparameters and training procedure as described previously.",
"Results on selected GLUE tasks are shown in Table 6. In this table, we report the average Dev Set accuracy from 5 random restarts of fine-tuning.",
"We can see that larger models lead to a strict accuracy improvement across all four datasets, even for MRPC which only has 3,600 labeled training examples, and is substantially different from the pre-training tasks.",
"It is also perhaps surprising that we are able to achieve such significant improvements on top of models which are already quite large relative to the existing literature.",
"For example, the largest Transformer explored in Vaswani et al. (2017) is (L=6, H=1024, A=16) with 100M parameters for the encoder, and the largest Transformer we have found in the literature is (L=64, H=512, A=2) with 235M parameters (Al-Rfou et al., 2018).",
"By contrast, BERTBASE contains 110M parameters and BERTLARGE contains 340M parameters.",
"It has long been known that increasing the model size will lead to continual improvements on large-scale tasks such as machine translation and language modeling, which is demonstrated by the LM perplexity of held-out training data shown in Table 6. However, we believe that this is the first work to demonstrate convincingly that scaling to extreme model sizes also leads to large improvements on very small scale tasks, provided that the model has been suffi-ciently pre-trained.",
"Peters et al. (2018b) presented mixed results on the downstream task impact of increasing the pre-trained bi-LM size from two to four layers and Melamud et al. (2016) mentioned in passing that increasing hidden dimension size from 200 to 600 helped, but increasing further to 1,000 did not bring further improvements.",
"Both of these prior works used a feature-based approach we hypothesize that when the model is fine-tuned directly on the downstream tasks and uses only a very small number of randomly initialized additional parameters, the task-specific models can benefit from the larger, more expressive pre-trained representations even when downstream task data is very small.",
"All of the BERT results presented so far have used the fine-tuning approach, where a simple classification layer is added to the pre-trained model, and all parameters are jointly fine-tuned on a downstream task.",
"However, the feature-based approach, where fixed features are extracted from the pre-trained model, has certain advantages.",
"First, not all tasks can be easily represented by a Transformer encoder architecture, and therefore require a task-specific model architecture to be added.",
"Second, there are major computational benefits to pre-compute an expensive representation of the training data once and then run many experiments with cheaper models on top of this representation.",
"In this section, we compare the two approaches by applying BERT to the CoNLL-2003 Named Entity Recognition (NER) task (Tjong Kim Sang and De Meulder, 2003).",
"In the input to BERT, we use a case-preserving WordPiece model, and we include the maximal document context provided by the data.",
"Following standard practice, we formulate this as a tagging task but do not use a CRF Hyperparams Dev Set Accuracy #L #H #A LM (ppl) MNLI-m MRPC SST-2 3 768 12 5.84 77.9 79.8 88.4 6 768 3 5.24 80.6 82.2 90.7 6 768 12 4.68 81.9 84.8 91.3 12 768 12 3.99 84.4 86.7 92.9 12 1024 16 3.54 85.7 86.9 93.3 24 1024 16 3.23 86.6 87.8 93.7 Table 6: Ablation over BERT model size.",
"layer in the output.",
"We use the representation of the first sub-token as the input to the token-level classifier over the NER label set.",
"To ablate the fine-tuning approach, we apply the feature-based approach by extracting the activations from one or more layers without fine-tuning any parameters of BERT.",
"These contextual embeddings are used as input to a randomly initialized two-layer 768-dimensional BiLSTM before the classification layer.",
"Results are presented in Table 7. BERTLARGE performs competitively with state-of-the-art methods.",
"The best performing method concatenates the token representations from the top four hidden layers of the pre-trained Transformer, which is only 0.3 F1 behind fine-tuning the entire model.",
"This demonstrates that BERT is effective for both fine-tuning and feature-based approaches.",
"Recent empirical improvements due to transfer learning with language models have demonstrated that rich, unsupervised pre-training is an integral part of many language understanding systems.",
"In particular, these results enable even low-resource tasks to benefit from deep unidirectional architectures.",
"Our major contribution is further generalizing these findings to deep bidirectional architectures, allowing the same pre-trained model to successfully tackle a broad set of NLP tasks."
] |
[
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"result",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective"
] |
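
The feature-based recipe described in the row above (freeze the pre-trained encoder, extract contextual activations, and train only a randomly initialized two-layer 768-dimensional BiLSTM plus classifier on top) can be sketched as follows. This is a minimal illustration assuming PyTorch and the Hugging Face `transformers` API; the checkpoint name, label count and tagger hyperparameters are stand-ins rather than the exact experimental configuration.

```python
# Minimal sketch of the feature-based approach: freeze BERT, concatenate the
# activations of the top four layers, and train only a BiLSTM tagger on top.
# Assumes PyTorch + Hugging Face `transformers`; checkpoint name, label count
# and tagger sizes are illustrative stand-ins, not the paper's exact setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)
encoder.eval()  # frozen: no BERT parameters are fine-tuned

class FeatureBasedTagger(nn.Module):
    """Randomly initialized two-layer 768-dim BiLSTM + token classifier."""
    def __init__(self, feat_dim=4 * 768, hidden=768, num_labels=9):
        super().__init__()
        self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def forward(self, feats):
        out, _ = self.bilstm(feats)
        return self.classifier(out)  # per-token logits over the NER label set

@torch.no_grad()
def extract_features(sentence: str) -> torch.Tensor:
    batch = tokenizer(sentence, return_tensors="pt")
    # hidden_states: embedding layer output plus one tensor per Transformer layer
    hidden_states = encoder(**batch).hidden_states
    return torch.cat(hidden_states[-4:], dim=-1)  # concatenate the top four layers

tagger = FeatureBasedTagger()
logits = tagger(extract_features("John Smith works at Acme Corp in Berlin ."))
print(logits.shape)  # (1, num_wordpieces, num_labels)
```

Because the encoder is frozen, the expensive contextual features can be computed once per corpus and reused across experiments, which is the computational benefit the text mentions.
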
[
"Visual referring expression recognition is a challenging task that requires natural language understanding in the context of an image.",
"We critically examine RefCOCOg , a standard benchmark for this task, using a human study and show that 83.7% of test instances do not require reasoning on linguistic structure, i.e., words are enough to identify the target object, the word order doesn't matter.",
"To measure the true progress of existing models, we split the test set into two sets, one which requires reasoning on linguistic structure and the other which doesn't.",
"Additionally, we create an out-of-distribution dataset Ref-Adv by asking crowdworkers to perturb in-domain examples such that the target object changes.",
"Using these datasets, we empirically show that existing methods fail to exploit linguistic structure and are 12% to 23% lower in performance than the established progress for this task.",
"We also propose two methods, one based on contrastive learning and the other based on multi-task learning, to increase the robustness of ViLBERT, the current state-of-the-art model for this task.",
"Our datasets are publicly available at https://github.com/ aws/aws-refcocog-adv .",
"Visual referring expression recognition is the task of identifying the object in an image referred by a natural language expression (Kazemzadeh et al., 2014; Nagaraja et al., 2016; Mao et al., 2016; Hu et al., 2016).",
"Figure 1 shows an example.",
"This task has drawn much attention due to its ability to test a model's understanding of natural language in the context of visual grounding and its application in downstream tasks such as image retrieval (Young et al., 2014) and question answering (Antol et al., 2015; Zhu et al., 2016).",
"To track Work done in part while AA was intern at Amazon AI.",
"progress on this task, various datasets have been proposed, in which real world images are annotated by crowdsourced workers (Kazemzadeh et al., 2014; Mao et al., 2016).",
"Recently, neural models have achieved tremendous progress on these datasets (Yu et al., 2018; Lu et al., 2019).",
"However, multiple studies have suggested that these models could be exploiting strong biases in these datasets (Cirik et al., 2018b; Liu et al., 2019).",
"For example, models could be just selecting a salient object in an image or a referring expression without recourse to linguistic structure (see Figure 1).",
"This defeats the true purpose of the task casting doubts on the actual progress.",
"In this work, we examine RefCOCOg dataset (Mao et al., 2016), a popular testbed for evaluating referring expression models, using crowdsourced workers.",
"We show that a large percentage of samples in the RefCOCOg test set indeed do not rely on linguistic structure (word order) of the expressions.",
"Accordingly, we split RefCOCOg test set into two splits, Ref-Easy and Ref-Hard , where linguistic structure is key for recognition in the latter but not the former ( 2).",
"In addition, we create a new out-of-distribution 1 dataset called Ref-Adv using Ref-Hard by rewriting a referring expression 1 This is a contrast set according to Gardner et al. (2020) such that the target object is different from the original annotation ( 3).",
"We evaluate existing models on these splits and show that the true progress is at least 12-23% behind the established progress, indicating there is ample room for improvement ( 4).",
"We propose two new models, one which make use of contrastive learning using negative examples, and the other based on multi-task learning, and show that these are slightly more robust than the current state-of-the-art models ( 5).",
"RefCOCOg is the largest visual referring expression benchmark available for real world images (Mao et al., 2016).",
"Unlike other referring expression datasets such as RefCOCO and RefCOCO+ (Kazemzadeh et al., 2014), a special care has been taken such that expressions are longer and diverse.",
"We therefore choose to examine the importance of linguistic structure in RefCOCOg .",
"Cirik et al. (2018b) observed that when the words in a referring expression are shuffled in random order, the performance of existing models on RefCOCOg drops only a little.",
"This suggests that models are relying heavily on the biases in the data than on linguistic structure, i.e., the actual sequence of words.",
"Ideally, we want to test models on samples where there is correlation between linguistic structure and spatial relations of objects, and any obscurity in the structure should lead to ambiguity.",
"To filter out such set, we use humans.",
"We randomly shuffle words in a referring expression to distort its linguistic structure, and ask humans to identify the target object of interest via predefined bounding boxes.",
"Each image in RefCOCOg test set is annotated by five Amazon Mechanical Turk (AMT) workers and when at least three annotators select a bounding box that has high overlap with the ground truth, we treat it as a correct prediction.",
"Following Mao et al. (2016), we set 0.5 IoU (intersection over union) as the threshold for high overlap.",
"Given that there are at least two objects in each image, the optimal performance of a random choice is less than 50%.",
"2 However, we observe that human accuracy on distorted examples is 83.7%, indicating that a large portion of RefCOCOg test set is insensitive to linguistic structure.",
"Based on this observation, we divide the test set into two splits for fine-grained evaluation of models: Ref-Easy contains samples insensitive 2 On average, there are 8.2 bounding boxes per image.",
"Due to unintended annotation artifacts in RefCOCOg , it is still possible that models could perform well on Ref-Hard without having to rely on linguistic structure, e.g., by selecting frequent objects seen during training time.",
"Essentially, Ref-Hard is an in-distribution split.",
"To avoid this, we create Ref-Adv , an adversarial test set with samples that may be fall out of training distribution.",
"We take each sample in Ref-Hard and collect additional referring expressions such that the target object is different from the original object.",
"We chose the target objects which humans are most confused with when the referring expression is shuffled (as described in the previous section).",
"For each target object, we ask three AMT workers to write a referring expression while retaining most content words in the original referring expression.",
"In contrast to the original expression, the modified expression mainly differs in terms of the structure while sharing several words.",
"For example, in Figure 1, the adversarial sample is created by swapping pastry and blue fork and making plate as the head of pastry .",
"We perform an extra validation step to filter out bad referring expressions.",
"In this step, three additional AMT workers select a bounding box to identify the target object, and we only select the samples where at least two workers achieve IoU > 0.5 with the target object.",
"Since the samples in Ref-Adv mainly differ in linguistic structure with respect to Ref-Hard , we hope that a model which does not make use of linguistic structure (and correspondingly spatial relations between objects) performs worse on Ref-Adv even when it performs well on Ref-Hard due to exploiting biases in the training data.",
"Ref-Easy and Ref-Hard (Figure 6 in appendix) and consists of rich and diverse spatial relationships (Figure 7 in appendix).",
"Concurrent to our work, Gardner et al. (2020) also propose perturbed test splits for several tasks by modifying in-domain examples.",
"Ref-Adv expressions are longer on average than Ref-Easy and Ref-Hard (Figure 6 in appendix) and consists of rich and diverse spatial relationships (Figure 7 in appendix).",
"In their setup, the original authors of each task create perturbed examples, whereas we use crowdworkers.",
"Closest to our work is from Kaushik et al. (2020) who also use crowdworkers.",
"While we use perturbed examples to evaluate robustness, they also use them to improve robustness (we propose complementary methods to improve robustness 5).",
"Moreover, we are primarily concerned with the robustness of models for visual expression recognition task, while Gardner et al. and Kaushik et al. focus on different tasks (e.g., sentiment, natural language inference).",
"3.1 Human Performance on Ref-Easy , Ref-Hard and Ref-Adv We conducted an additional human study (on AMT) to compare the human performance on Ref-Easy , Ref-Hard and Ref-Adv splits.",
"Concurrent to our work, Gardner et al. (2020) also propose perturbed test splits for several tasks by modifying in-domain examples.",
"In their setup, the original authors of each task create perturbed examples, whereas we use crowdworkers.",
"Closest to our work is from Kaushik et al. (2020) who also use crowdworkers.",
"While we use perturbed examples to evaluate robustness, they also use them to improve robustness (we propose complementary methods to improve robustness 5).",
"Moreover, we are primarily concerned with the robustness of models for visual expression recognition task, while Gardner et al. and Kaushik et al. focus on different tasks (e.g., sentiment, natural language inference).",
"First, we randomly sampled 100 referring expressions from each of the three splits.",
"Each referring expression is then assigned to three AMT workers and are asked to select a bounding box to identify the target object.",
"We considered a sample to be correctly annotated by humans if at least two out of three workers select the ground-truth annotation.",
"3.1 Human Performance on Ref-Easy , Ref-Hard and Ref-Adv We conducted an additional human study (on AMT) to compare the human performance on Ref-Easy , Ref-Hard and Ref-Adv splits.",
"First, we randomly sampled 100 referring expressions from each of the three splits.",
"Each referring expression is then assigned to three AMT workers and are asked to select a bounding box to identify the target object.",
"We considered a sample to be correctly annotated by humans if at least two out of three workers select the ground-truth annotation.",
"evaluation, we obtained human performance on each of the three splits Ref-Easy, Ref-Hard, and Ref-Adv as 98%, 95%, and 96% respectively.",
"4 Diagnosing Referring Expression Recognition models We evaluate the following models, most of which are designed to exploit linguistic structure.",
"Through this evaluation, we obtained human performance on each of the three splits Ref-Easy, Ref-Hard, and Ref-Adv as 98%, 95%, and 96% respectively.",
"2017; Andreas et al. 2016) grounds expressions using neural modules by decomposing an expression into < subject, relation, object > triples.",
"The subject and object are localized to the objects in the image using a localization module while the relation between them is modeled using a relationship module.",
"The full network learns to jointly decompose the input expression into a triple while also recognizing the target object.",
"GroundNet (Cirik et al., 2018a) is similar to CMN, however it makes use of rich linguistic structure (and correspondingly rich modules) as defined by an external syntactic parser.",
"CMN (Compositional Modular Networks; Hu et al. 2017; Andreas et al. 2016) grounds expressions using neural modules by decomposing an expression into < subject, relation, object > triples.",
"The subject and object are localized to the objects in the image using a localization module while the relation between them is modeled using a relationship module.",
"The full network learns to jointly decompose the input expression into a triple while also recognizing the target object.",
"MattNet (Yu et al., 2018) generalizes CMN to flex-ibly adapt to expressions that cannot be captured by the fixed template of CMN.",
"GroundNet (Cirik et al., 2018a) is similar to CMN, however it makes use of rich linguistic structure (and correspondingly rich modules) as defined by an external syntactic parser.",
"It introduces new modules and also uses an attention mechanism to weigh modules.",
"MattNet (Yu et al., 2018) generalizes CMN to flex-ibly adapt to expressions that cannot be captured by the fixed template of CMN.",
"It introduces new modules and also uses an attention mechanism to weigh modules.",
"ViLBERT (Lu et al., 2019), the state-of-the-art model for referring expression recognition, uses a Co-TRM TRM Co-TRM TRM Shared ViLBERT Layers Input image Input question/ Referring expression Task-Specific Layers Figure 3: Multi-task learning model for referring expression recognition with GQA pretrain-then-transfer learning approach to jointly learn visiolinguistic representations from large-scale data and utilizes them to ground expressions.",
"This is the only model that does not explicitly model compositional structure of language, but BERT-like models are shown to capture syntactic structure latently (Hewitt and Manning, 2019).",
"We trained on the full training set of RefCOCOg and performed hyperparameter tuning on a development set.",
"We used the development and test splits of Mao et al. (2016).",
"Table 2 shows the model accuracies on these splits and our proposed datasets.",
"The models are trained to select ground truth bounding box from a set of predefined bounding boxes.",
"We treat a prediction as positive if the predicted bounding box has IoU > 0.5 with the ground truth.",
"Although the overall performance on the test set seem high, in reality, models excel only at Ref-Easy while performing poorly on Ref-Hard .",
"The difference in performance between Ref-Easy and Ref-Hard ranges up to 15%.",
"This indicates that current models do not exploit linguistic structure effectively.",
"When tested on Ref-Adv , the performance goes down even further, increasing the gap between Ref-Easy and Ref-Adv (up to 26%).",
"This suggests that models are relying on reasoning shortcuts found in training than actual understanding.",
"Among the models, GroundNet performs worse, perhaps due to its reliance on rigid structure predicted by an external parser and the mismatches between the predicted structure and spatial relations between objects.",
"ViLBERT achieves the highest performance and is relatively more robust than other models.",
"In the next section, we propose methods to further increase the robustness of ViLBERT.",
"We extend ViLBERT in two ways, one based on contrastive learning using negative samples, and the other based on multi-task learning on GQA (Hudson and Manning, 2019), a task that requires linguistic and spatial reasoning on images.",
"Contrastive learning using negative samples Instead of learning from one single example, contrastive learning aims to learn from multiple examples by comparing one to the other.",
"In order to increase the sensitivity to linguistic structure, we mine negative examples that are close to the current example and learn to jointly minimize the loss on the current (positive) example and maximize the loss on negative examples.",
"We treat the triplets (cid:0) i, e, b (cid:1) in the training set as positive examples, where i , e , b stands for image, expression and ground truth bounding box.",
"For each triplet (cid:0) i, e, b (cid:1) , we sample another training example (cid:0) i (cid:48) , e (cid:48) , b (cid:48) (cid:1) , and use it to create two negative samples, defined by (cid:0) i (cid:48) , e, b (cid:48) (cid:1) and (cid:0) i, e (cid:48) , b (cid:1) , i.e., we pair wrong bounding boxes with wrong expressions.",
"For efficiency, we only consider negative pairs from the mini-batch.",
"We modify the batch loss function as follows: L (cid:0) i , e , b (cid:1) = F ( e , e (cid:48) ) (cid:2) (cid:96) (cid:0) i , e , b (cid:1) (cid:96) (cid:0) i , e (cid:48) , b (cid:1) (cid:3) + + F ( i , i (cid:48) ) (cid:2) (cid:96) (cid:0) i , e , b (cid:1) (cid:96) (cid:0) i (cid:48) , e , b (cid:48) (cid:1) (cid:3) + Model Dev Test Easy Hard Adv ViLBERT (VB) 83.39 83.63 85.93 72.00 70.90 VB+ Sum-H 81.61 83.00 85.93 70.60 72.30 VB+ Max-H 82.93 82.70 86.58 70.46 73.35 VB+ MTL (GQA) 83.45 84.30 86.23 73.79 73.92 Table 3: Accuracy of enhanced ViLBERT models.",
"Here (cid:96) ( i, e, b ) is the cross-entropy loss of ViLBERT, [ x ] + is the hinge loss defined by max (cid:0) 0 , x (cid:1) , and is the margin parameter.",
"F indicates a function over all batch samples.",
"We define F to be either sum of hinges (Sum-H) or max of hinges (Max-H).",
"While Sum-H takes sum over all negative samples, If batch size is n , for each (cid:0) i, e, b (cid:1) , there will be n 1 triplets of (cid:0) i (cid:48) , e, b (cid:48) (cid:1) and (cid:0) i, e (cid:48) , b (cid:1) .",
"For (cid:0) i, e, b (cid:1) , there will be one (cid:0) i (cid:48) , e, b (cid:48) (cid:1) and one (cid:0) i, e (cid:48) , b (cid:1) .",
"Similar proposals are known to increase the robustness of vision and language problems like visual-semantic embeddings and image description ranking (Kiros et al., 2014; Gella et al., 2017; Faghri et al., 2018).",
"Multi-task Learning (MTL) with GQA In order to increase the sensitivity to linguistic structure, we rely on tasks that require reasoning on linguistic structure and learn to perform them alongside our task.",
"We employ MTL with GQA (Hudson and Manning, 2019), a compositional visual question answering dataset.",
"Specifically, we use the GQA-Rel split which contains questions that require reasoning on both linguistic structure and spatial relations (e.g., Is there a boy wearing a red hat standing next to yellow bus? as opposed to Is there a boy wearing hat? ).",
"Figure 3 depicts the neural architecture.",
"We share several layers between the tasks to enable the model to learn representations useful for both tasks.",
"Each shared layer constitute a co-attention transformer block (Co-TRM; Lu et al. 2019) and a transformer block (TRM; Vaswani et al. 2017).",
"While in a transformer, attention is computed using queries and keys from the same modality, in a co-attention transformer they come from different modalities (see cross arrows in Figure 3).",
"The shared representations are eventually passed as input to task-specific MLPs.",
"We optimize each task using alternative training (Luong et al., 2015).",
"Results and discussion Table 3 shows the experimental results on the referring expression recognition task.",
"Although contrastive learning improves e1 : The ladder that is raised the tallest e2 : A wooden boat carries 5 boys with skis e1' : The ladder in front of the raised ladder e2' : A pair of skis in the boat ViLBERT MTL GT Figure 4: Predictions of ViLBERT and MTL model (GT denotes ground-truth).",
"the robustness of ViLBERT on Ref-Adv (+1.4% and +2.5% for Sum-H and Max-H respectively), it comes at a cost of slight performance drop on the full test (likely due to sacrificing biases shared between training and test sets).",
"Whereas MTL improves the robustness on all sets showing that multitask learning helps (we observe 2.3% increase on GQA A.5.2).",
"Moreover, the performance of MTL on Ref-Hard and Ref-Adv are similar, suggesting that the model generalizes to unseen data distribution.",
"Figure 4 shows qualitative examples comparing MTL predictions on Ref-Hard and Ref-Adv parallel examples.",
"These suggest that the MTL model is sensitive to linguistic structure.",
"However, there is still ample room for improvement indicated by the gap between Ref-Easy and Ref-Hard (12.4%).",
"Our work shows that current datasets and models for visual referring expressions fail to make effective use of linguistic structure.",
"Although our proposed models are slightly more robust than existing models, there is still significant scope for improvement.",
"We hope that Ref-Hard and Ref-Adv will foster more research in this area.",
"We would like to thank Volkan Cirik, Licheng Yu, Jiasen Lu for their help with GroundNet, MattNet and ViLBERT respectively, Keze Wang for his help with technical issues, and AWS AI data team for their help with Mechanical Turk.",
"We are grateful to the anonymous reviewers for their useful feedback."
] |
[
"abstain",
"result",
"method",
"method",
"result",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"other"
] |
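
The contrastive objective in the row above pairs each positive triplet (i, e, b) with in-batch negatives (i', e, b') and (i, e', b) and combines the per-negative hinges with either a sum (Sum-H) or a max (Max-H). Below is a minimal PyTorch sketch under stated assumptions: the n×n matrix of pairwise losses (ViLBERT's cross-entropy for every image/expression pairing in the batch) is taken as given, and the margin value is illustrative.

```python
# Minimal PyTorch sketch of the in-batch contrastive objective: the diagonal
# of `per_example_loss` holds positive triplets (i, e, b); off-diagonal row
# entries pair an image with wrong expressions (i, e', b) and off-diagonal
# column entries pair an expression with wrong images/boxes (i', e, b').
# The loss matrix and margin are assumptions; in the real model each entry
# would be ViLBERT's cross-entropy for that image/expression pairing.
import torch

def contrastive_batch_loss(per_example_loss, margin=0.5, variant="sum"):
    n = per_example_loss.size(0)
    diag = per_example_loss.diag()          # positive-pair losses l(i, e, b)
    mask = 1.0 - torch.eye(n)               # zero out the positives
    # hinge: push each negative's loss at least `margin` above the positive's
    # anchor j vs wrong expressions: entries along row j
    h_expr = torch.clamp(margin + diag.unsqueeze(1) - per_example_loss, min=0.0) * mask
    # anchor j vs wrong images/boxes: entries along column j
    h_img = torch.clamp(margin + diag.unsqueeze(0) - per_example_loss, min=0.0) * mask
    if variant == "sum":                    # Sum-H: sum over all in-batch negatives
        return (h_expr.sum(dim=1) + h_img.sum(dim=0)).mean()
    # Max-H: only the hardest negative of each kind contributes
    return (h_expr.max(dim=1).values + h_img.max(dim=0).values).mean()

# Toy usage: a 4-example batch with random pairwise losses.
losses = torch.rand(4, 4)
print(contrastive_batch_loss(losses, variant="sum"))
print(contrastive_batch_loss(losses, variant="max"))
```

Restricting negatives to the mini-batch, as the text notes, keeps this efficient: no extra forward passes are needed beyond the pairwise losses already available for the batch.
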
[
"The aspect-based sentiment analysis (ABSA) consists of two conceptual tasks, namely an aspect extraction and an aspect sentiment classification.",
"Rather than considering the tasks separately, we build an end-to-end ABSA solution.",
"Previous works in ABSA tasks did not fully leverage the importance of syntactical information.",
"Hence, the aspect extraction model often failed to detect the boundaries of multi-word aspect terms.",
"On the other hand, the aspect sentiment classifier was unable to account for the syntactical correlation between aspect terms and the context words.",
"This paper explores the grammatical aspect of the sentence and employs the self-attention mechanism for syntactical learning.",
"We combine part-of-speech embeddings, dependency-based embeddings and contextualized embeddings (e.g. BERT, RoBERTa) to enhance the performance of the aspect extractor.",
"We also propose the syntactic relative distance to de-emphasize the adverse effects of unrelated words, having weak syntactic connection with the aspect terms.",
"This increases the accuracy of the aspect sentiment classifier.",
"Our solutions outperform the state-of-the-art models on SemEval-2014 dataset in both two subtasks.",
"The process of understanding the sentiments expressed by consumers in a product review (opin-ionated text) is referred to as sentiment analysis .",
"Deep insights into the opinionated text are gained through a fine-grained entityor aspect-based sentiment labeling of the product being reviewed.",
"Such insights can be invaluable for business decision making.",
"Aspect-based sentiment analysis (ABSA) consists of two sub-tasks, namely an aspect extraction (AE) and an aspect sentiment classification (ASC).",
"However, the majority of reported works focused on one of the two sub-tasks alone.",
"Representative works include (Xu et al., 2018; Da'u and Salim, 2019; Poria et al., 2016) for aspect extraction and (Zeng et al., 2019; Huang et al., 2018; Song et al., 2019; Thet et al., 2010) for aspect sentiment classification.",
"Recent approaches (He et al., 2019; Wang et al., 2018; Li et al., 2019) attempted to develop an integrated solution to solve both tasks simultaneously by formulating both sub-tasks as a single sequence labelling with a unified tagging scheme.",
"Adding unified tokens introduces overhead and complexity in the original ABSA tasks.",
"Thus, multi-task models often have poorer performance compared with single-task models which are trained independently.",
"Recent advances in the NLU introduced contextualized language models, namely OpenAI GPT (Radford et al., 2018), BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019).",
"These models can capture the characteristics of word uses and account for different textual context in which words appear.",
"Upon investigating the latest BERT/RoBERTa-based architectures used in aspect extraction, it became apparent that they were unable to determine the boundaries of multi-word aspects.",
"For instance, the extractors broke the multi-word ex-pression,quality of food into quality of and food.",
"We hypothesize that this shortcoming is caused by the inability of the contextualized embeddings to encode rich syntactical information.",
"In this paper, we integrate syntactical information into contextualized embeddings and propose an ABSA solution consisting of an aspect extractor and an aspect sentiment classifier as illustrated by Fig.",
"1. The proposed AE architecture, named contextualized syntax-based aspect extraction (CSAE), consists of POS embeddings, dependency-based embeddings (Levy and Goldberg, 2014) and self-attention in addition to RoBERTa layer.",
"of Zeng et al. (2019) in which the local context focus (LCF) mechanism is exploited to down-weight the contribution of words that are far away from local context.",
"However, this approach simply regarded the word counts between two words as their semantic relative distance and neglected the mutual syntactical relationship.",
"Our approach employs the shortest path between two words in dependency parsing tree as a syntactic relative distance (SRD).",
"We name this model local context focus on syntax ASC (LCFS-ASC).",
"Comparative experiments are conducted on two SemEval-2014 datasets (Pontiki et al., 2014) to demonstrate the importance of syntactical features in improving both AE and ASC models.",
"The main contributions of this paper can be highlighted as: (1) We propose the multi-channel CSAE model which distils grammatical aspects into contextualized features for improving sequential tag-gings; (2) We contribute the LCFS-ASC which can analyze syntactical connections between words to better understand local contexts that are relevant to target aspect terms; (3) We study the importance of the SRD by exploring the attention score in the LCF layer.",
"This section details the evolution of ABSA solutions from word-embedding-based models to contextualized-embedding-based models and highlights their strengths and weaknesses.",
"Word-embedding-based Model Recent ABSA works used pre-trained word embeddings as a data processing layer and added subsequent layers for a richer feature learning.",
"Target-dependent Long Short-Term Memory (TD-LSTM) model (Tang et al., 2015) embedded the context words and target words into a vector space and employed LSTM cells to encode long-distance relationships in an input sequence.",
"TD-LSTM captured the relatedness of target words with context words to extract relevant information for ABSA.",
"Attention mechanism has been widely applied to the ABSA problem to overcome the vanishing gradients observed in long input sequence.",
"Attention-based LSTM with Aspect Embedding (ATAE-LSTM) (Wang et al., 2016) utilized attention mechanism in addition to LSTM layers.",
"Hence, the network can concentrate on crucial sentiment parts of a sentence in response to given aspects.",
"The quality of word representation is gauged by its capability to encode syntactical features and polysemic behaviour (i.e. word senses).",
"Traditional word embeddings only produced single-context word representations.",
"Recent works diverged from global word representations and considered context-dependent word embeddings which described the words differently in order to account for inherent word senses.",
"BERT (Devlin et al., 2018) is a masked language model (LM) which masked a percentage of words in sentences and set up the training objective to predict the masked words.",
"RoBERTa (Liu et al., 2019) improved upon BERT model by training the model longer with larger amount of data and eliminating next-sentence prediction objective.",
"There have been several applications of BERT to the ABSA problem.",
"AEN-BERT (Song et al., 2019) used BERT to embed a context sequence and a target sequence; and applied attention to draw semantic interaction between targets and context words.",
"LCF-BERT (Zeng et al., 2019) employed context dynamic masking/ context dynamic weighting to localize sentiment signals using semantic relative distance.",
"This distance is measured by the word counts between the context word and target aspect terms.",
"The local context layer allowed the model to emphasize semantic-relative contextual words.",
"However, critical sentiment words sometimes can be associated with the target aspect terms through grammatical rules despite their large semantic relative distance.",
"We hypothesize that using syntactical-relative-distance to identify unrelated words avoids mistakenly eliminating the contribution of crucial sentiment words.",
"There are examples of recent BERT-based approaches works that achieved promising results in AE tasks (see for example Xu et al. (2019)).",
"However, they required re-training a BERT model on a large domain-specific corpus which made it infeasible to achieve a domain-independent aspect extractor.",
"We abstain from such post-training approaches and look for a generic AE architecture.",
"Given a contextual sentence S consisting of n tokens, S = { w i | i [1 , n ] } , an end-to-end ABSA tasks aims to extract the set A of m aspect terms being mentioned where A = { a i | i [1 , m ] } ; and determine the polarity y p { P ositive, Negative, Neutral } associated with each extracted aspect.",
"Aspect extraction can be cast as a sequential labelling problem in which each input token w i is assigned a label y i .",
"The labels y i take on values from the set { B, I, O } ( Begin, Inside, Outside ), representing respectively the beginning of aspect term, inside of aspect term and the non-aspect tokens.",
"Fig. 2 depicts the overall architecture of the proposed contextualized syntax-based aspect extraction (CSAE) model.",
"The CSAE consists of a contextualized embedding (e.g., BERT or RoBERTa), a part-of-speech embedding and a dependency-based embedding.",
"The syntactical information in the final representation is enriched by concatenating the contextualized hidden states, attended POS states and attended dependency-based states.",
"The contextualized model requires a special classification token [ CLS ] at the beginning of the input sequence and the separator [ SEP ] appended to the end of input sequence.",
"The input sentence is converted to the format [ CLS ] + Input sequence + [ SEP ] .",
"The part-of-speech (POS) of each word is annotated by the Universal POS tags 1 ; subsequently the POS of an input sequence P = { p 1 , p 2 , ..., p n } is retrieved.",
"The POS embedding layer takes the sparse vector representation P to extract a dense vector representation VP = { v pi | i [1 , n ] } wherein v pi R h pos emb , and h pos emb refers to the hidden size of the POS embeddings.",
"Then, the self-attention layer is utilized to observe the entire sequence of POS taggers and extract the grammatical dependencies in the input sentence.",
"1 Universal POS Tags.",
"URL: https://universaldependencies.org/u/pos/ Figure 2: Overall architecture of the proposed CSAE 3.1.3 Dependency-based Embedding Instead of using a linear bag-of-words context to form a context window, the dependency-based embedding (Levy and Goldberg, 2014) (DE) uses dependency-based contexts based on the syntactical relations in which the word participates.",
"The process starts by using a dependency tree to parse the sentence.",
"For each target word w and the modifiers m 1 , m 2 , . . . , m n associated with w , the context C = { ( m 1 , rel 1 ) , ( m 2 , rel 2 ) , . . . , ( m n , rel n ) } is constructed.",
"In this consideration, rel i is the dependency relation (e.g., subj, amod, pobj ) between a target word w and a modifier m i , while rel 1 represents the inverse relations.",
"Before extracting the final contexts, the relations consisting of a preposition are collapsed by subsuming the preposition into a dependency label.",
"Fig. 3 describes the process of collapsing prepositions into a dependency relation and demonstrates the extracted contexts of each target word in a given sentence.",
"The DE can incorporate the distant relation which is out of reach in linear-context word embedding.",
"It also de-emphasizes irrelevant words accidentally falling into the context windows.",
"Specifically, the optimal parameters of the deep learning model are obtained from L ( ) = n (cid:88) i =1 y i log y i + (cid:88) 2 , (1) where is the regularization parameter and y i the predicted label corresponding to y i .",
"Given a contextual sentence S = { w i | i [1 , n ] } and extracted aspect terms A = { a i | i [1 , m ] } , we need to determine the polarity { P ositive, Neutral, Negative } of the aspect terms in the contextual sentence.",
"Fig. 4 illustrates the overall architecture of the proposed Local Context Feature-Aspect Sentiment Classification including two independent Contextualized Embedding for global and local contexts.",
"To comprehend the global context, the contextual sentence S and aspect terms A are combined to construct global contexts G .",
"The input format of global context G is G = [ CLS ]+ S +[ SEP ]+ A +[ SEP ] .",
"On the other hand, the local contexts L is the contextual sentence S whose format is [ CLS ] + S + [ SEP ] .",
"In BERT architecture, the global context G is explicitly represented as a pair of text consisting of a contextual sentence S and aspect terms A .",
"When a token in G belongs to a first or second segment of the sentence pair, its segment token is Figure 4: Overall architecture of the proposed LCF-ASC indexed as 1 or 2 respectively.",
"This next-sentence-prediction characteristic of the BERT model allows BERT-based ASC models to capture the semantic relationship between the contextual sentence and the aspect.",
"Since RoBERTa removed the next-sentence-prediction task when training the model, it is suspected that the RoBERTa representation is not as informative as the BERT representation for the ASC task.",
"The hidden state corresponding to a special classification token [ CLS ] represents the aggregation of the entire sentence.",
"The local context vectors V l = { v li | i [1 , n ] } are obtained by feeding the local contexts into the contextualized embedding.",
"Next, we apply context feature dynamic weight/context feature dynamic mask (CDW/CDM) (Zeng et al., 2019) techniques on V l to alleviate the negative influence of irrelevant opinion words which are distant from the target aspect terms.",
"Relative Distance The SRD between words is measured by the shortest distance between their corresponding nodes in the dependency-parsed tree.",
"If the aspect term is composed of multiple words, the SRD between an input word and a multi-word aspect term is computed as an average distance between each component word and an input word.",
"Fig. 5 illustrates the dependency-parsed tree constructed from a sample product review.",
"The SRD between an aspect term sound amplifier and sentiment word loudly is computed as: SRD(amplifier , loudly) = 2 SRD(sound , loudly) = 3 = SRD(sound amplifier , loudly) = 2 .",
"On the other hand, the semantic relative distance when counting words between sound amplifier and loudly is 7 (as demonstrated in (Zeng et al., 2019)) which might make key sentiment words being down-weighted undesirably.",
"Context dynamic mask (CDM) masks out the less-semantic context features whose SRD to target words is greater than the pre-defined threshold.",
"Given the local contexts V l , the mask vector V mi for each contextual word m i is computed based on certain SRD threshold : v mi = (cid:40) O SRD i > I SRD i M = [ v m 1 , v m 2 , ..., v mn ] VCDM = V l (cid:12) M (2) O and I are vectors of all zero and one respectively; O and I R h where h is the hidden size of a contextualized embedding and also the dimension of local context vector v li .",
"(cid:12) represents the element-wise dot product to mask out the local vector V l by using the mask matrix M Context dynamic weighting retains the contribution of less-semantic-relative context features but de-emphasizes them based on their distance to aspect terms.",
"Thus, v wi = (cid:40) (1 SRD i N ) I SRD i > I SRD i W = [ v w 1 , v w 2 , ..., v wn ] VCDW = V l (cid:12) W (3) where N is the length of the contextual sentence.",
"The hidden state of classification token [CLS] h pool is pooled out and fed into a softmax layer to predict the polarity from the set { Positive, Neutral, Negative } .",
"Similarly to the AE model, we use the cross-entropy loss with L 2 regularization as a loss function to fine-tune the entire ASC deep-learning model.",
"We evaluate and compare the proposed AE and ASC models on two benchmark datasets as described in Table",
"1. They are laptop-domain and restaurant-domain datasets taken from SemEval-2014 Task 4 challenge (Pontiki et al., 2014).",
"Each sample sentence in the datasets is annotated with marked aspect terms and their associated polarity.",
"The first group of models follow pipelining approach which train single-task models independently and pipeline the output of AE and ASC to build an end-to-end ABSA solution.",
"To highlight the improved performance of the contextualized embeddings in ABSA tasks, we pick top high-performing word-embedding-based and contextualized-embedding-based models in both AE and ASC tasks.",
"For a fair comparison, we only consider domain-independent models and eschew comparing with post-training approaches because they require re-purposing the entire model on large corpora before fine-tuning it for the in-domain end task.",
"For AE task, we select two word-embedding-based model and one contextualized-embedding-based model to demonstrate that a simple BERT Figure 5: Dependency-parsed tree of the product review layer can outperform a sophisticated network using word embeddings: BiLSTM (Liu et al., 2015) is a Named Entity Recognition model employing Bidirectional LSTM on top of a Word Embedding representation.",
"DTBCSNN (Ye et al., 2017) is a dependency tree based stacked convolutional neural network which used the inference layer for aspect extraction.",
"BERT-AE (Devlin et al., 2018) utilizes a BERT representation for AE.",
"This model acts as a reference to demonstrate the importance of our designed components adding to a contextualized representation.",
"For ASC task, we select two word-embedding-based models and four contextualized-embedding-based models.",
"Various BERT-based models are examined to demonstrate that the provided information about aspects can be employed to attend to relevant sentiment information and improve the BERT-based ASC models: AOA (Huang et al., 2018) uses multiple attention layers to model the interaction between aspects and sentences.",
"MGAN (Fan et al., 2018) uses fine-grained and coarse-grained attention to capture word-level interaction between aspects and sentences.",
"BERT-ASC (Devlin et al., 2018), utilizes a BERT representation for ASCBERT-PT (Xu et al., 2018) re-trains a contextualized BERT model on a large domain-specific corpus to enhance the quality of word representations to the end-task.",
"AEN-BERT (Song et al., 2019) adopts contextualized BERT model and attention mechanism to model the relationship between context and targets.",
"This model is used to show the improvements in ASC tasks when leveraging additional information about target terms in the given context.",
"LCF-BERT (Zeng et al., 2019) employs Local-Context-Focus design with Semantic-Relative-Distance (SeRD) to discard unrelated sentiment words.",
"This model acts as a reference to illustrate the importance of our proposed SRD metrics in improving ASC models.",
"Since the choice of BERT model is not indicated in the paper (Zeng et al., 2019) and we do not have an access to BERT large model, we re-implement the LCF-BERT model using the BERT base model based on their proposed methodology.",
"The second group consists of integrated approaches which aim to extract aspect terms and determine polarity simultaneously through a unified tagging scheme.",
"This group of models can model the joint information in both sub-tasks and leverage all available sources of training information to handle an end-to-end ABSA problem: MNN (Wang et al., 2018) employs attention mechanism to jointly learn the relationship between aspects and sentiments for a multi-task neural network.",
"UABSA (Li et al., 2019) is a unified model for ABSA, consisting of two stacked RNNs for the target boundary detection tasks (auxiliary) and the complete ABSA tasks (primary).",
"IMN (He et al., 2019) uses message passing architecture to transfer information iteratively through different tasks along latent variables.",
"For our proposed AE solution, we perform ablation study where certain modules are removed from the CSAE architecture to show their effects on the end performance: RoBERTa-AE utilizes a RoBERTa representation to demonstrate the improved quality of the RoBERTa representation in AE task.",
"RoBERTa-POS employs a RoBERTa representation and a POS embedding to demonstrate that POS is helpful to identify aspect terms in a sentence.",
"RoBERTa-Dep uses a RoBERTa representation and a dependency-based embedding to compare the effects of dependency-based features and POS features in AE tasks.",
"CSAE is a complete model, consisting of RoBERTa, POS embedding and dependency-based embedding layers.",
"For our proposed ASC solution, we experiment with the RoBERTa-ASC model without the LCF layer and a complete LCFS-ASC model with the LCF layer.",
"Hence, the impact of LCF layer on ASC tasks can be demonstrated.",
"RoBERTa-ASC utilizes a RoBERTa representation for ASC to compare the suitability of BERT and RoBERTa representations in ASC tasks.",
"LCFS-ASC-CDW is a LCFS-ASC model employing CDW technique.",
"LCFS-ASC-CDM is a LCFS-ASC model employing CDM technique.",
"Note that we used the BERT base to implement LCFS-ASC model due to the lack of adequate computing resources, as well as to ensure the fair comparison between the LCF-BERT and our proposed model.",
"Similarly, the CSAE model is built on top of the RoBERT a base model.",
"For AE task, we use the standard evaluation script provided by SemEval challenge to report F1-score.",
"On the other hand, the accuracy and macro F1-score over 3 classes of polarities are considered to be evaluation metrics for ASC task.",
"Table 2 compares the performance of the RoBERTa-AE-based model and the complete CSAE model.",
"It is noticeable that the CSAE model outperforms RoBERTa-AE model in defining the boundary of multi-word aspect terms.",
"Using a contextualized RoBERTa feature, the RoBERTa-AE is only able to identify the noun cocktail in a noun phrase, suggesting a RoBERTa representation fails to capture rich syntactical structure in a contextual sentence.",
"In the universal dependencies schema, Times and Square are a PROPN (proper noun) tag which is part of the name of spe-cific place, and have compound relation with the noun cocktail.",
"Being given explicit information about special syntactical properties of an example, CSAE successfully identifies a compound noun as an aspect term even though an aspect term Time Square cocktail does not appear in a training set.",
"Additionally, even though RoBERTa-AE can identify individual aspect terms espresso cup filled with and chocolate mousse in example 2, it fails to group them together to form a complete multiword term.",
"CSAE, on the other hand, is able to model the role of the preposition with and detect the true boundary of the aspect term.",
"Table 3 summarizes the results of our proposed models compared with the baseline models.",
"When compared with the word-embedding-based models, our CSAE model performs better than the BiLSTM and DTBCSNN models with gains of 3.93 percentage points (p.p), 1.99p.p and 5.23p.p, 2.68p.p in laptop and restaurant datasets respectively.",
"The performance of our model is close to IMN's in laptop domain and outperforms other integrated approaches in both settings.",
"Especially, our CSAE model has F1-score at least 3.32 p.p higher than other integrated approaches in the restaurant domain, suggesting that single-task models can sig-nificantly outperform integrated solutions with sophisticated architecture by simply improving the quality of feature representations.",
"To investigate the effects of different designed components in a CSAE, we start with a base model using just a RoBERTa representation for aspect extraction and add other components one at a time.",
"We found that our base model always gives superior performance compared to the BERT-based model.",
"The performance is improved when we introduce the POS embedding and dependency-based embedding to capture rich syntactical information.",
"The POS embeddings solely represent the POS of each individual word and leave the feature extraction job for the attention layer, while the dependency-based embeddings directly infuse the grammatical interaction between words into the word representation.",
"Hence, it is expected that RoBERTa with dependency-based features has slightly higher F1-score than RoBERTa with POS features.",
"Overall, CSAE with full complement of both components gained significant improvement.",
"It suggests that the RoBERTa model has not entirely comprehended the grammatical aspects of natural language and there is room for improvements in contextualized LM by further leveraging syntactical information of sentences.",
"The single-task, integrated and our proposed approach are displayed in the first, second and third parts, respectively.",
"Our proposed model outperforms the BERT-PT by a large margin without utilizing additional knowledge from a larger corpus to train domain-specific embeddings.",
"All BERT-based single-task models outperform the integrated models, suggesting that the unified tagging schema imposed overheads to the ASC tasks by introducing extra classes.",
"As discussed in Section 3.2.1, the removal of the next-sentence-pair task in RoBERTa makes the RoBERTa representation less suitable to the ASC Table 4: Comparison results of our best performing ASC model variants in terms of F1 scores and accuracy (%) with the state-of-the-art methods Domain Laptop Rest Model F1 Acc F1 Acc AOA -74.5 -81.2 MGAN 72.47 75.39 71.94 81.25 BERT-ASC * 72.68 76.25 76.98 84.46 BERT-PT 75.08 78.07 76.96 84.95 AEN-BERT 76.31 79.93 73.76 83.12 LCF-BERT-CDW * 76.20 80.21 79.12 85.91 LCF-BERT-CDM * 75.76 79.65 78.74 85.73 MNN 65.98 70.40 68.45 77.17 UABSA 68.24 72.30 68.38 79.68 IMN 72.02 75.36 75.66 83.89 RoBERTa-ASC 70.52 74.12 75.12 82.82 LCFS-ASC-CDW 77.13 80.52 80.31 86.71 LCFS-ASC-CDM 76.45 80.34 80.10 86.13 Note: The best result in each dataset is highlighted in bold.",
"The results of models we reproduced by following the methodology published in the paper are indicated by asterisk (*).",
"task leading to the underperformance of RoBERTa-ASC.",
"The proposed LCFS-ASC has a slightly improved performance compared with the LCF-BERT when using either CDM or CDW.",
"The result demonstrates the effectiveness of Syntactical Relative Distance in encoding syntactical information.",
"CDW helps to boost the performance of LCFS-ASC model more than the CDM.",
"Since CDM completely blocks the signals of the contexts being identified unimportant, it may falsely disregard useful signals.",
"On the other hand, CDW emphasizes flexibility and allows further signals to contribute small weights corresponding to its relatedness with the aspect terms in the dependency-based tree.",
"best-performing LCFS-ASC-CDW and LCF-BERT-CDW models.",
"For a given input sentence, LCFS-ASC assigns a correct positive polarity to the aspect term cuisine, while LCF-BERT gives a wrong prediction as negative .",
"Since LCF-BERT uses Semantic Relative Distance, the sentiment term with-out a doubt has been paid the most focus due to its close distance to the aspect term cuisine based on word counts metrics.",
"On the other hand, the signal of a key sentiment word delicious is mistakenly down-weighted because it is far away from the aspect term cuisine.",
"Nevertheless, the LCFS-ASC retains the importance of the word delicious because Syntactical Relative Distance accounts for the direct interaction between the adjective delicious and the aspect term cuisine in a dependency-based tree.",
"We proposed an end-to-end ABSA solution which pipelined an aspect extractor and an aspect sentiment classifier.",
"The results indicate that exploitation of syntactical structures of sentences empowers the contextualized models to improve on current works in both ASC and AE tasks.",
"Our proposed aspect sentiment classifier outperformed post-training ASC model and enabled the creation of a domain-independent solution.",
"The proposed SRD allows the aspect sentiment classifier to focus on critical sentiment words which modify the target aspect term through dependency-based structure.",
"The substantial improvements highlight the under-performance of recent contextualized embedding models in understanding syntactical features and suggests future directions in developing more syntax-learning contextualized embeddings.",
"One can try to adapt our proposed CSAE architecture for an integrated approach by applying the unified tagging scheme; thereby, aspect extraction and sentiment classification can be achieved simultaneously.",
"Thanks to Vinh Hung Ngo, who has provided insightful advice to improve my writings and experimental results."
] |
[
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"other"
] |
[
"The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing.",
"We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM.",
"Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees.",
"We perform model ablations to tease out the conditions under which composition helps.",
"When ablating the backward LSTM, performance drops and composition does not recover much of the gap.",
"When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information.",
"We take the backward LSTM to be related to lookahead features and the forward LSTM to the rich history-based features both crucial for transition-based parsers.",
"To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.",
"We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.",
"Recursive neural networks allow us to construct vector representations of trees or subtrees.",
"They have been used for constituency parsing by Socher et al. (2013) and Dyer et al. (2016) and for dependency parsing by Stenetorp (2013) and Dyer et al. (2015), among others.",
"In particular, Dyer et al. (2015) showed that composing representations of subtrees using recursive neural networks can be beneficial for transition-based dependency parsing.",
"These results were further strengthened in Kuncoro et al. (2017) who showed, using ablation experiments, that composition is key in the Recurrent Neural Network Grammar (RNNG) generative parser by Dyer et al. (2016).",
"In a parallel development, Kiperwasser and Goldberg (2016b) showed that using BiLSTMs for feature extraction can lead to high parsing accuracy even with fairly simple parsing architectures, and using BiLSTMs for feature extraction has therefore become very popular in dependency parsing.",
"It is used in the state-of-the-art parser of Dozat and Manning (2017), was used in 8 of the 10 highest performing systems of the 2017 CoNLL shared task (Zeman et al., 2017) and 10 out of the 10 highest performing systems of the 2018 CoNLL shared task (Zeman et al., 2018).",
"This raises the question of whether features extracted with BiLSTMs in themselves capture information about subtrees, thus making recursive composition superfluous.",
"Some support for this hypothesis comes from the results of Linzen et al. (2016) which indicate that LSTMs can capture hierarchical information: they can be trained to predict long-distance number agreement in English.",
"Those results were extended to more constructions and three additional languages by Gulordava et al. (2018).",
"However, Kuncoro et al. (2018) have also shown that although sequential LSTMs can learn syntactic information, a recursive neural network which explicitly models hierarchy (the RNNG model from Dyer et al. (2015)) is better at this: it performs better on the number agreement task from Linzen et al. (2016).",
"To further explore this question in the context of dependency parsing, we investigate the use of recursive composition (henceforth referred to as composition ) in a parser with an architecture like the one in Kiperwasser and Goldberg (2016b).",
"This allows us to explore variations of features and isolate the conditions under which composition is helpful.",
"We hypothesise that the use of a BiLSTM for feature extraction makes it possible to capture information about subtrees and therefore makes the use of subtree composition superfluous.",
"We hypothesise that composition becomes useful when part of the BiLSTM is ablated, the forward or the backward LSTM.",
"We further hypothesise that composition is most useful when the parser has no access to information about the function of words in the context of the sentence given by POS tags.",
"When using POS tags, the tagger has indeed had access to the full sentence.",
"We additionally look at what happens when we ablate character vectors which have been shown to capture information which is partially overlapping with information from POS tags.",
"We experiment with a wider variety of languages than Dyer et al. (2015) in order to explore whether the usefulness of different model variants vary depending on language type.",
"We define the parsing architecture introduced by Kiperwasser and Goldberg (2016b) at a high level of abstraction and henceforth refer to it as K&G.",
"A K&G parser is a greedy transition-based parser.",
"1 For an input sentence of length n with words w 1 , . . . , w n , a sequence of vectors x 1: n is created, where the vector x i is a vector representation of the word w i .",
"We refer to these as type vectors, as they are the same for all occurrences of a word type.",
"Type vectors are then passed through a feature function which learns representations of words in the context of the sentence.",
"We refer to the vector v i as a token vector, as it is different for different tokens of the same word type.",
"In Kiperwasser and Goldberg (2016b), the feature function used is a BiLSTM.",
"As is usual in transition-based parsing, parsing involves taking transitions from an initial configuration to a terminal one.",
"Parser configurations are represented by a stack, a buffer and set of dependency arcs (Nivre, 2008).",
"For each configuration c , the feature extractor concatenates the token representations of core elements from the stack and 1 Kiperwasser and Goldberg (2016b) also define a graph-based parser with similar feature extraction, but we focus on transition-based parsing.",
"buffer.",
"These token vectors are passed to a classifier, typically a Multilayer Perceptron (MLP).",
"The MLP scores transitions together with the arc labels for transitions that involve adding an arc.",
"Both the word type vectors and the BiLSTMs are trained together with the model.",
"Dyer et al. (2015) looked at the impact of using a recursive composition function in their parser, which is also a transition-based parser but with an architecture different from K&G.",
"They make use of a variant of the LSTM called a stack LSTM.",
"A stack LSTM has push and pop operations which allow passing through states in a tree structure rather than sequentially.",
"Stack LSTMs are used to represent the stack, the buffer, and the sequence of past parsing actions performed for a configuration.",
"The words of the sentence are represented by vectors of the word types, together with a vector representing the word's POS tag.",
"In the initial configuration, the vectors of all words are in the buffer and the stack is empty.",
"The representation of the buffer is the end state of a backward LSTM over the word vectors.",
"As parsing evolves, the word vectors are popped from the buffer, pushed to and popped from the stack and the representations of stack and buffer get updated.",
"Dyer et al. (2015) define a recursive composition function and compose tree representations incrementally, as dependents get attached to their head.",
"The composed representation c is built by concatenating the vector h of the head with the vector of the dependent d , as well as a vector r representing the label paired with the direction of the arc.",
"That concatenated vector is passed through an affine transformation and then through a tanh non-linear activation.",
"They create two versions of the parser.",
"In the first version, when a dependent is attached to a head, the word vector of the head is replaced by a composed vector of the head and dependent.",
"In the second version, they simply keep the vector of the head when attaching a dependent to a head.",
"They observe that the version with composition is substantially better than the version without, by 1.3 LAS points for English (on the Penn Treebank (PTB) test set) and 2.1 for Chinese (on the Chinese Treebank (CTB) test set).",
"Their parser uses POS tag information.",
"POS tags help to disambiguate between different functional uses of a word and in this way give information about the use of the word in context.",
"We hypothesise that the effect of using a recursive composition function is stronger when not making use of POS tags.",
"The parsing architectures of the stack LSTM parser (S-LSTM) and K&G are different but have some similarities.",
"2 In both cases, the configuration is represented by vectors obtained by LSTMs.",
"In K&G, it is represented by the token vectors of top items of the stack and the first item of the buffer.",
"In the S-LSTM, it is represented by the vector representations of the entire stack, buffer and sequence of past transitions.",
"Both types of parsers learn vector representations of word types which are passed to an LSTM.",
"In K&G, they are passed to an LSTM in a feature extraction step that happens before parsing.",
"The LSTM in this case is used to learn vectors that have information about the context of each word, a token vector.",
"In the S-LSTM, word type vectors are passed to Stack LSTMs as parsing evolves.",
"In this case, LSTMs are used to learn vector representations of the stack and buffer (as well as one which learns a representation of the parsing action his-tory).",
"When composition is not used in the S-LSTM, word vectors represent word types.",
"When composition is used, as parsing evolves, the stack and buffer vectors get updated with information about the subtrees they contain, so that they gradually become contextualised.",
"In this sense, those vectors become more like token vectors in K&G.",
"More specifically, as explained in the previous section, when a dependent is attached to its head, the composition function is applied to the vectors of head and dependent and the vector of the head is replaced by this composed vector.",
"We cannot apply composition on type vectors in the K&G architecture, since they are not used after the feature extraction step and hence cannot influence the representation of the configuration.",
"Instead, we apply composition on the token vectors.",
"We embed those composed representations in the same space as the token vectors.",
"In K&G, like in the S-LSTM, we can create a composition function and compose the representation of subtrees as parsing evolves.",
"We create two versions of the parser, one where word tokens are represented by their token vector.",
"The other where they are represented by their token vector and the vector of their subtree c i , which is initially just a copy of the token vector ( v i = f ( x 1: n , i ) c i ).",
"When a dependent word d is attached to a word h with a relation and direction r , c i is computed with the same composition function as in the S-LSTM defined in the previous section, repeated below.",
"3 This composition function is a simple recurrent cell.",
"Simple RNNs have known shortcomings which have been addressed by using LSTMs, as proposed by Hochreiter and Schmidhuber (1997).",
"A natural extension to this composition function is therefore to replace it with an LSTM cell.",
"We also try this variant.",
"We construct LSTMs for subtrees.",
"We initialise a new LSTM for each new subtree that is formed, that is, when a dependent d is attached to a head h which does not have any dependent yet.",
"Each time we attach a dependent to a head, we construct a vector which is a concatenation of h , d and r .",
"We pass this vector to the LSTM of h .",
"c is the output state of the LSTM after passing through that vector.",
"We denote those models with + rc for the one using an ungated recurrent cell and with + lc for the one using an LSTM cell.",
"c = tanh ( W [ h ; d ; r ] + b ) c = LSTM ([ h ; d ; r ]) As results show (see 5), neither type of composition seems useful when used with the K&G parsing model, which indicates that BiLSTMs capture information about subtrees.",
"To further investigate this and in order to isolate the conditions under which composition is helpful, we perform different model ablations and test the impact of recursive composition on these ablated models.",
"First, we ablate parts of the BiLSTMs: we ablate either the forward or the backward LSTM.",
"We therefore build parsers with 3 different feature functions f ( x, i ) over the word type vectors x i in the sentence x : a BiLSTM ( bi ) (our baseline), a backward LSTM ( bw ) (i.e., ablating the forward LSTM) and a forward LSTM ( fw ) (i.e., ablating 3 Note that, in preliminary experiments, we tried replacing the vector of the head by the vector of its subtree instead of concatenating the two but concatenating gave much better results. the backward LSTM): bi ( x, i ) = BILSTM ( x 1: n , i ) bw ( x, i ) = LSTM ( x n :1 , i ) fw ( x, i ) = LSTM ( x 1: n , i ) K&G parsers with unidirectional LSTMs are, in some sense, more similar to the S-LSTM than those with a BiLSTM, since the S-LSTM only uses unidirectional LSTMs.",
"We hypothesise that composition will help the parser using unidirectional LSTMs in the same way it helps an S-LSTM.",
"We additionally experiment with the vector representing the word at the input of the LSTM.",
"The most complex representation consists of a concatenation of an embedding of the word type e ( w i ) , an embedding of the (predicted) POS tag of w i , p ( w i ) and a character representation of the word obtained by running a BiLSTM over the characters ch 1: m of w i (BiLSTM ( ch 1: m ) ).",
"Without a POS tag embedding, the word vector is a representation of the word type.",
"With POS information, we have some information about the word in the context of the sentence and the tagger has had access to the full sentence.",
"The representation of the word at the input of the BiLSTM is therefore more contextualised and it can be expected that a recursive composition function will be less helpful than when POS information is not used.",
"Character information has been shown to be useful for dependency parsing first by Ballesteros et al. (2015).",
"Ballesteros et al. (2015) and Smith et al. (2018b) among others have shown that POS and character information are somewhat complementary.",
"Ballesteros et al. (2015) used similar character vectors in the S-LSTM parser but did not look at the impact of composition when using these vectors.",
"Here, we experiment with ablating either or both of the character and POS vectors.",
"We look at the impact of using composition on the full model as well as these ablated models.",
"We hypothesise that composition is most helpful when those vectors are not used, since they give information about the functional use of the word in context.",
"Parser We use UUParser, a variant of the K&G transition-based parser that employs the arc-hybrid transition system from Kuhlmann et al. (2011) extended with a SWAP transition and a Static-Dynamic oracle, as described in de Lhoneux et al. (2017b) 4 .",
"The SWAP transition is used to allow the construction of non-projective dependency trees (Nivre, 2009).",
"We use default hyperparameters.",
"When using POS tags, we use the universal POS tags from the UD treebanks which are coarse-grained and consistent across languages.",
"Those POS tags are predicted by UDPipe (Straka et al., 2016) both for training and parsing.",
"This parser obtained the 7th best LAS score on average in the 2018 CoNLL shared task (Zeman et al., 2018), about 2.5 LAS points below the best system, which uses an ensemble system as well as ELMo embed-dings, as introduced by Peters et al. (2018).",
"Note, however, that we use a slightly impoverished version of the model used for the shared task which is described in Smith et al. (2018a): we use a less accurate POS tagger (UDPipe) and we do not make use of multi-treebank models.",
"In addition, Smith et al. (2018a) use the three top items of the stack as well as the first item of the buffer to represent the configuration, while we only use the two top items of the stack and the first item of the buffer.",
"Smith et al. (2018a) also use an extended feature set as introduced by Kiperwasser and Goldberg (2016b) where they also use the rightmost and leftmost children of the items of the stack and buffer that they consider.",
"We do not use that extended feature set.",
"This is to keep the parser settings as simple as possible and avoid adding confounding factors.",
"It is still a near-SOTA model.",
"We evaluate parsing models on the development sets and report the average of the 5 best results in 30 epochs and 5 runs with different random seeds.",
"Data We test our models on a sample of treebanks from Universal Dependencies v2.1 (Nivre et al., 2017).",
"We follow the criteria from de Lhoneux et al. (2017c) to select our sample: we ensure typological variety, we ensure variety of domains, we verify the quality of the treebanks, and we use one treebank with a large amount of non-projective arcs.",
"However, unlike them, we do not use extremely small treebanks.",
"Our selection is the same as theirs but we remove the tiny treebanks and replace them with 3 others.",
"Our final set is: Ancient Greek (PROIEL), Basque, Chinese, Czech, English, Finnish, French, Hebrew and Japanese.",
"First, we look at the effect of our different recursive composition functions on the full model (i.e., the model using a BiLSTM for feature extraction as well as both character and POS tag informa-tion).",
"As can be seen from Figure 1, recursive composition using an LSTM cell ( + lc ) is generally better than recursive composition with a recurrent cell ( + rc ), but neither technique reliably improves the accuracy of a BiLSTM parser.",
"Figure 1 : LAS of models using a BiLSTM ( bi ) without composition, with a recurrent cell ( + rc ) and with an LSTM cell ( + lc ).",
"Bar charts truncated at 50 for visualization purposes.",
"Second, we only consider the models using character and POS information and look at the effect of ablating parts of the BiLSTM on the different languages.",
"The results can be seen in Figure",
"2. As expected, the BiLSTM parser performs considerably better than both unidirectional LSTM parsers, and the backward LSTM is considerably better than the forward LSTM, on average.",
"It is, however, interesting to note that using a forward LSTM is much more hurtful for some languages than others: it is especially hurtful for Chinese and Japanese.",
"This can be explained by language properties: the right-headed languages suffer more from ablating the backward LSTM than other languages.",
"We observe a correlation between how hurtful a forward model is compared to the baseline and the percentage of right-headed content dependency relations ( R = 0 . 838 , p < . 01 ), see Figure",
"3. 5 5 The reason we only consider content dependency relations is that the UD scheme focuses on dependency relations Figure 2 : LAS of models using a BiLSTM ( bi ), backward LSTM ( bw ) and forward LSTM ( fw ).",
"Figure 3 : Correlation between how hurtful it is to ablate the backward LSTM and right-headedness of languages.",
"There is no significant correlation between how hurtful ablating the forward LSTM is and the percentage of left-headed content dependency relations ( p > . 05 ) indicating that its usefulness is not dependent on language properties.",
"We hypothesise that dependency length or sentence length can play a role but we also find no correlation between how hurtful it is to ablate the forward LSTM and average dependency or sentence length in treebanks.",
"It is finally also interesting to note that the backward LSTM performance is close to the BiLSTMs performance for some languages (Japanese and French).",
"We now look at the effect of using recursive composition on these ablated models.",
"Results are given in Figure",
"4. First of all, we observe unsurprisingly that composition using an LSTM cell between content words and treats function words as features of content words to maximise parallelism across languages (de Marneffe et al., 2014).",
"Figure 4 : LAS of models using a BiLSTM ( bi ), backward LSTM ( bw ) and forward LSTM ( fw ), without recursive composition, with a recurrent cell ( + rc ) and with a LSTM cell ( + lc ).",
"Bar charts truncated at 50 for visualization purposes.",
"is much better than using a simple recurrent cell.",
"Second, both types of composition help the backward LSTM case, but neither reliably helps the bi models.",
"Finally, the recurrent cell does not help the forward LSTM case but the LSTM cell does to some extent.",
"It is interesting to note that using composition, especially using an LSTM cell, bridges a substantial part of the gap between the bw and the bi models.",
"These results can be related to the literature on transition-based dependency parsing.",
"Transition-based parsers generally rely on two types of features: history-based features over the emerging dependency tree and lookahead features over the buffer of remaining input.",
"The former are based on a hierarchical structure, the latter are purely sequential.",
"McDonald and Nivre (2007) and McDonald and Nivre (2011) have shown that history-based features enhance transition-based parsers as long as they do not suffer from error propagation.",
"However, Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.",
"In the model architectures considered here, the backward LSTM provides an improved lookahead.",
"Similarly to the lookahead in statistical parsing, it is sequential.",
"The difference is that it gives information about upcoming words with unbounded length.",
"The forward LSTM in this model architecture provides history-based information but unlike in statistical parsing, that information is built sequentially rather than hierarchically: the forward LSTM passes through the sentence in the linear order of the sentence.",
"In our results, we see that lookahead features are more important than the history-based ones.",
"It hurts parsing accuracy more to ablate the backward LSTM than to ablate the forward one.",
"This is expected given that some history-based information is still available through the top tokens on the stack, while the lookahead information is almost lost completely without the backward LSTM.",
"A composition function gives hierarchical information about the history of parsing actions.",
"It makes sense that it helps the backward LSTM model most since that model has no access to any information about parsing history.",
"It helps the forward LSTM slightly which indicates that there can be gains from using structured information about parsing history rather than sequential information.",
"We could then expect that composition should help the BiLSTM model which, however, is not the case.",
"This might be because the BiLSTM constructs information about parsing history and lookahead into a unique representation.",
"In any case, this indicates that BiLSTMs are powerful feature extractors which seem to capture useful information about subtrees.",
"Figure 5 : LAS of baseline, using char and/or POS tags to construct word representations 5.2 Ablating POS and character information Next, we look at the effect of the different word representation methods on the different languages, as represented in Figure",
"5. As is consistent with the literature (Ballesteros et al., 2015; de Lhoneux et al., 2017a; Smith et al., 2018b), using character-based word representations and/or POS tags consistently improves parsing accuracy but has a different impact in different languages and the benefits of both methods are not cumulative: using the two combined is not much better than using either on its own.",
"In particular, character models are an efficient way to obtain large improvements in morphologically rich languages.",
"We look at the impact of recursive compositions on all combinations of ablated models, see Table 1.",
"We only look at the impact of using an LSTM cell rather than a recurrent cell since it was a better technique across the board (see previous section).",
"Looking first at BiLSTMs, it seems that composition does not reliably help parsing accuracy, regardless of access to POS and character information.",
"This indicates that the vectors obtained from the BiLSTM already contain information that would otherwise be obtained by using composition.",
"Turning to results with either the forward or the backward LSTM ablated, we see the expected pattern.",
"Composition helps more when the model lacks POS tags, indicating that there is some redundancy between these two methods of building contextual information.",
"Composition helps recover a substantial part of the gap of the model with a backward LSTM with or without POS tag.",
"It recovers a much less substantial part of the gap in other cases which means that, although there is some redundancy between these different methods of building contextual information, they are still complementary and a recursive composition function cannot fully compensate for the lack of a backward LSTM or POS and/or character information.",
"There are some language idiosyncracies in the results.",
"While composition helps recover most of the gap for the backward LSTM models without POS and/or character information for Czech and English, it does it to a much smaller extent for Basque and Finnish.",
"We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.",
"However, we find no correlation between average arc depth of the treebanks and usefulness of composition.",
"It is an open question why composition helps some languages more than others.",
"Note that we are not the first to use composition over vectors obtained from a BiLSTM in the context of dependency parsing, as this was done by Qi and Manning (2017).",
"The difference is that they compose vectors before scoring transitions.",
"It was also done by Kiperwasser and Goldberg (2016a) who showed that using BiLSTM vectors for words in their Tree LSTM parser is helpful but they did not compare this to using BiLSTM vectors without the Tree LSTM.",
"Recurrent and recursive LSTMs in the way they have been considered in this paper are two ways of constructing contextual information and making it available for local decisions in a greedy parser.",
"The strength of recursive LSTMs is that they can build this contextual information using hierarchical context rather than linear context.",
"A possible weakness is that this makes the model sensitive to error propagation: a wrong attachment leads to using the wrong contextual information.",
"It is therefore possible that the benefits and drawbacks of using this method cancel each other out in the context of BiLSTMs.",
"To investigate further the information captured by BiLSTMs, we ensemble the 6 versions of the models with POS and character information with the different feature extractors ( bi , bw , fw ) with ( + lc ) and without composition.",
"We use the (un-weighted) reparsing technique of Sagae and Lavie pos+char+ pos+char-bi bi+lc bw bw+lc fw fw+lc bi bi+lc bw bw+lc fw fw+lc cs 87.9 88.2 85.9 87.7 84.9 85.0 86.7 87.0 84.5 86.2 83.6 83.6 en 82.0 82.3 80.3 81.9 75.1 75.6 81.5 81.5 79.7 81.4 74.3 75.0 eu 73.3 73.5 72.0 72.4 66.8 67.4 67.4 67.6 65.6 66.3 59.6 60.5 fi 79.3 79.7 77.7 79.2 73.7 74.7 72.5 72.7 69.8 71.7 66.7 67.4 fr 87.5 87.6 86.4 87.5 86.3 86.4 87.1 87.2 85.8 86.9 85.7 85.9 grc 75.4 76.1 72.8 75.0 70.9 71.1 72.2 72.5 69.6 71.4 67.4 67.8 he 80.0 80.1 78.0 80.0 77.9 78.2 79.4 79.2 77.2 79.0 76.9 77.3 ja 94.6 94.6 94.4 94.5 83.3 83.9 94.3 94.3 94.2 94.3 83.0 83.6 zh 72.9 72.7 71.3 72.4 57.4 58.7 71.5 71.3 69.9 70.8 56.4 57.9 av 81.4 81.6 79.8 81.2 75.1 75.7 79.2 79.2 77.4 78.7 72.6 73.2 pos-char+ pos-char-bi bi+lc bw bw+lc fw fw+lc bi bi+lc bw bw+lc fw fw+lc cs 88.1 88.4 86.0 87.8 84.7 84.9 84.3 84.5 81.3 83.1 79.9 79.8 en 82.2 82.1 79.8 81.6 73.2 73.8 80.0 79.9 77.5 79.2 70.5 71.5 eu 72.8 72.9 71.5 71.8 65.4 66.4 61.6 62.0 57.7 59.5 48.7 51.2 fi 78.2 78.6 75.8 77.9 72.0 73.0 62.8 63.1 56.6 60.2 52.8 54.7 fr 87.6 87.7 86.1 87.4 85.4 85.7 85.9 85.8 83.7 85.3 83.1 83.3 grc 74.4 74.8 71.3 73.7 69.2 69.6 68.3 69.0 64.6 67.3 62.6 63.4 he 79.9 80.1 77.4 79.9 76.5 77.3 77.5 77.4 74.4 77.2 74.2 74.7 ja 94.2 94.4 94.2 94.4 81.3 81.8 93.2 93.3 92.7 93.1 79.5 80.2 zh 72.7 72.5 70.8 72.2 56.5 58.2 69.1 69.3 66.7 68.1 53.4 55.0 av 81.1 81.3 79.2 80.8 73.8 74.5 75.9 76.0 72.8 74.8 67.2 68.2 Table 1 : LAS for bi , bw and fw , without and with composition ( + lc ) with an LSTM.",
"Table 2 : UAS ensemble (full) and ablated experiments.",
"(2006) 6 and ignoring labels.",
"As can be seen from the UAS scores in Table 2, the ensemble ( full ) largely outperforms the parser using only a BiLSTM, indicating that the information obtained from the different models is complementary.",
"To investigate the contribution of each of the 6 models, we ablate each one by one.",
"As can be seen from Table 2, ablating either of the BiLSTM models or the backward LSTM using composition, results in the least effective of the ablated models, further strengthening the conclusion that BiLSTMs are powerful feature extractors.",
"We investigated the impact of composing the representation of subtrees in a transition-based parser.",
"6 This method scores all arcs by the number of parsers predicting them and extracts a maximum spanning tree using the Chu-Liu-Edmonds algorithm (Edmonds, 1967).",
"We observed that composition does not reliably help a parser that uses a BiLSTM for feature extraction, indicating that vectors obtained from the BiLSTM might capture subtree information, which is consistent with the results of Linzen et al. (2016).",
"However, we observe that, when ablating the backward LSTM, performance drops and recursive composition does not help to recover much of this gap.",
"We hypothesise that this is because the backward LSTM primarily improves the lookahead for the greedy parser.",
"When ablating the forward LSTM, performance drops to a smaller extent and recursive composition recovers a substantial part of the gap.",
"This indicates that a forward LSTM and a recursive composition function capture similar information, which we take to be related to the rich history-based features crucial for a transition-based parser.",
"To capture this information, a recursive composition function is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM.",
"We further find that recursive composition helps more when POS tags are ablated from the model, indicating that POS tags and a recursive composition function are partly redundant ways of constructing contextual information.",
"Finally, we correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.",
"We acknowledge the computational resources provided by CSC in Helsinki and Sigma2 in Oslo through NeIC-NLPL (www.nlpl.eu).",
"We thank Sara Stymne and Aaron Smith for many discussions about this paper.",
"Joakim Nivre's contributions to this work were supported by grant 2016-01817 of the Swedish Research Council."
] |
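A minimal sketch of the two composition functions referenced in the list above (+rc and +lc), written in PyTorch for concreteness; the class names and the dimensions token_dim and rel_dim are illustrative assumptions, and this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class RecurrentComposition(nn.Module):
    """+rc: c = tanh(W [h; d; r] + b), an ungated recurrent cell."""
    def __init__(self, token_dim: int, rel_dim: int):
        super().__init__()
        self.affine = nn.Linear(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel):
        return torch.tanh(self.affine(torch.cat([head, dep, rel], dim=-1)))

class LSTMComposition(nn.Module):
    """+lc: one LSTM per subtree; each attachment feeds [h; d; r] into it."""
    def __init__(self, token_dim: int, rel_dim: int):
        super().__init__()
        self.cell = nn.LSTMCell(2 * token_dim + rel_dim, token_dim)

    def forward(self, head, dep, rel, state=None):
        h, c = self.cell(torch.cat([head, dep, rel], dim=-1), state)
        return h, (h, c)  # h is the new subtree vector; (h, c) is carried along

# Toy usage with 4-dim token vectors and 2-dim relation (label+direction) vectors.
head, dep, rel = torch.randn(1, 4), torch.randn(1, 4), torch.randn(1, 2)
rc = RecurrentComposition(4, 2)
composed = rc(head, dep, rel)                  # replaces the head's subtree vector
lc = LSTMComposition(4, 2)
subtree, state = lc(head, dep, rel)            # first dependent: fresh LSTM state
subtree, state = lc(subtree, dep, rel, state)  # later dependents reuse the state
```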
[
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"result",
"result",
"abstain",
"method",
"abstain",
"result",
"result",
"other",
"other",
"other"
] |
[
"Clustering short text streams is a challenging task due to its unique properties: infinite length, sparse data representation and cluster evolution.",
"Existing approaches often exploit short text streams in a batch way.",
"However, determine the optimal batch size is usually a difficult task since we have no prior knowledge when the topics evolve.",
"In addition, traditional independent word representation in the graphical model tends to cause term ambiguity problem in short text clustering.",
"Therefore, in this paper, we propose an Online Semantic-enhanced Dirichlet Model for short text stream clustering, called OSDM, which integrates the word-occurrence semantic information (i.e., context) into a new graphical model and clusters for each arriving short text automatically in an online way.",
"Extensive results have demonstrated that OSDM gives better performance compared to many state-of-the-art algorithms on both synthetic and real-world data sets.",
"A massive amount of short text data is constantly generated with online social platforms such as mi-croblogs, Twitter and Facebook.",
"Clustering of such short text streams has thus gained increasing attention in recent years due to many real-world applications like event tracking, hot topic detection, and news recommendation (Hadifar et al., 2019).",
"However, due to the unique properties of short text streams such as infinite length, evolving patterns and sparse data representation, short text stream clustering is still a big challenge (Aggarwal et al., 2003; Mahdiraji, 2009).",
"During the past decade, many approaches have been proposed to address the text stream clustering problem from different points of view, and each method comes with specific advantages and drawbacks.",
"Initially, traditional clustering algorithms for static data were enhanced and transformed for text streams (Zhong, 2005).",
"Very soon, they are replaced by model-based algorithms such as LDA (Blei et al., 2003), DTM (Blei and Lafferty, 2006), TDPM (Ahmed and Xing, 2008), GSDMM(Yin and Wang, 2016b), DPMFP (Huang et al., 2013), TM-LDA (Wang et al., 2012), NPMM (Chen et al., 2019) and MStream (Yin et al., 2018), to mention a few.",
"However, for most established approaches, they often work in a batch way, and assume the instances within a batch are interchangeable.",
"This assumption usually cannot hold for topic-evolving text data corpus.",
"Determining an optimal batch size is also a non-trivial task for different text streams (Howard and Ruder, 2018).",
"Additionally, unlike long text documents, short text clustering further suffers from the lack of supportive term occurrence to capture semantics (Gong et al., 2018).",
"For most existing short text clustering algorithms like Sumblr (Shou et al., 2013), DCT (Liang et al., 2016) and MStreamF (Yin et al., 2018), exploiting independent word representation in their cluster models tends to cause ambiguity.",
"Let us show the following four tweets, for example: T1: A regular intake of an Apple can improve your health and muscle stamina.",
"T1: A glass of fresh apple juice is recommended for breakfast.",
"T2: New Apple Watch can monitor your health.",
"Tweets of these two topics share few common terms, i.e., ' health ' or ' apple '.",
"It creates an ambiguity if the model deals with only single term representation to calculate the similarity.",
"However, the co-occurring terms representation (i.e., context) helps a model to identify the topic 1 correctly.",
"To solve these aforementioned issues, we propose an online semantic-enhanced dirichlet model for short text stream clustering.",
"Compared to existing approaches, it has following advantages.",
"(1) It allows processing each arriving short text in an online way.",
"The online model is not only free of determining the optimal batch size, but also lends itself to handling large-scale data streams efficiently; (2) To the best of our knowledge, it is the first work to integrate semantic information for model-based online clustering, which is able to handle term ambiguity\" problem effectively and finally support high-quality clustering; (3) Equipped with Poly Urn Scheme, the number of clusters (topics) are determined automatically in our cluster model.",
"During the past decade, many text stream clustering algorithms have been proposed.",
"Here, due to the space limitation, we only report some model-based approaches which are highly related to our work.",
"For more details, please refer to comprehensive surveys, e.g., (Mahdiraji, 2009; Silva et al., 2013; Nguyen et al., 2015; Aggarwal, 2018).",
"The early classical attempt for text clustering is Latent Dirichlet Allocation (LDA) (Blei et al., 2003).",
"However, it cannot handle the temporal data for text streams.",
"For this purpose, many LDA variants have been proposed to consider the text streams such as dynamic topic model (DTM) (Blei and Lafferty, 2006), dynamic mixture model (DMM) (Wei et al., 2007), temporal LDA (T-LDA) (Wang et al., 2012), streaming LDA (S-LDA) (Amoualian et al., 2016), and dirichlet mixture model with feature partition (DPMFP) (Zhao et al., 2016).",
"These models assume that each document contains rich content, and thus they are not suitable for dealing with the short text streams.",
"Later, Dirichlet multinomial mixture model-based dynamic clustering topic (DCT) model was designed to deal with short text streams by assigning each 1 Topic and cluster will be interchangeably used in this paper document with single topic (Liang et al., 2016).",
"Very soon, GSDMM was proposed to extend DMM with collapsed gibbs sampling to infer the number of clusters (Yin and Wang, 2014).",
"However, most of these models did not investigate the evolving topics (clusters) in text streams where the number of topics usually evolves over time.",
"To automatically detecting the number of clusters, (Ahmed and Xing, 2008) proposed a temporal dirichlet process mixture model (TDMP).",
"It divides the text stream into many chunks (batches), and assumes that the documents inside each batch are interchangeable.",
"Later, GSDPMM was proposed with collapsed gibbs sampling to infer the number of clusters in each batch.",
"In contrast to LDA, GSDPMM not only converges faster but also dynamically assigns the number of clusters over time (Yin and Wang, 2016a).",
"However, both TDMP and GSDPMM models do not examine the evolving topics, and, these models process the text stream for multiple times.",
"Thereafter, MStreamF (Yin et al., 2018) was thus proposed by incorporating a forgetting mechanism to cope with cluster evolution, and allows processing each batch only one time.",
"The NPMM model (Chen et al., 2019) was recently introduced by using the word-embeddings to eliminate a cluster generating parameter of the model.",
"In summary, for most existing approaches, they usually work in a batch way.",
"However, determining optimal batch sizes for different text streams is usually a difficult task.",
"More importantly, due to the intrinsic sparse data representation of short-text data, the semantics, is little investigated in established approaches.",
"Actually, they need to be carefully considered to decrease the term ambiguity in short text clustering.",
"Here, the problem statement is first given, followed with a brief introduction about dirichlet process and Poly Urn scheme.",
"Formally, a text stream is continuous arrival of text documents over time: S t = { d t } t =1 .",
"Where d t denotes a document arrived at time t .",
"Each document contains specific words d t = { w 1 , w 2 , . . . , w n } and may have different length.",
"The key objective of the clustering task is to group similar documents into clusters: Z = { z t } t =1 , and each cluster z t contains documents represented as z t = { d z t 1 , d z t 2 , . . . , d z t n } .",
"For short text clustering, each document is the member of only one topic, so z i z j = , where i (cid:54) = j .",
"Dirichlet Process (DP) is a non-parametric stochastic processes to model the data (Teh et al., 2006).",
"It is the process to draw a sample from (base) distribution, where each sample itself is a distribution, denoted as N DP( , N 0 ) .",
"Here, N is the drawn sample from the base distribution N 0 .",
"The drawing procedure of a sample from the distribution is controlled by a concentration parameter .",
"The procedure to draw the sequential samples N 1 , N 2 . . . from a distribution is described by the poly urn scheme (Blackwell et al., 1973).",
"It can be summarized as: N n |N 1: n 1 + n 1 + (cid:80) n 1 k =1 ( N n N k ) + n 1 Here, ( x ) = 1 if x = 0 and ( x ) = 0 otherwise.",
"Initially, the urn is empty, so we draw a color from the base distribution i.e. N 1 N 0 , and put a ball of drawn color into the urn.",
"In the next turn, either we draw a color from the distribution which is already drawn with probability of n 1 + n 1 , or draw a new color with probability of N 0 + n 1 .",
"Since, drawing samples from distribution is repeated, so the same color may appear more than once.",
"This defines that we have K number of distinct colors and n number of draws.",
"This condition is defined by a well-known process called Chinese restaurant process (CRP) (Ferguson and Thomas S Ferguson, 1973).",
"In CRP, we suppose that there are infinite number of tables in a restaurant, and each table surrounds infinite number of empty chairs.",
"The first customer sits on first table, and later on the next customer either chooses to sit on any occupied table with probability of n k + n 1 or chooses an empty table with probability of + n 1 .",
"Here, n k is number of customers sitting on a specific table.",
"A new customer is tend to be attracted towards a highly crowded table.",
"This phenomenon is one part of our equation to understand creation of clusters over time.",
"The CRP represents the draws from distribution G , while the stick-breaking process shows the property of G explicitly: G ( N ) = (cid:88) k =1 k ( N N k ) , N k N 0 (1) The mixture weights = { k } k =1 can be formalized by GEM ( ) (Neal, 2000).",
"We exploit Equation (1) for the generative process of the Dirichlet process multinomial mixture model (DPMM) as follows.",
"z d | Mult( ) d = 1 , . . . , N k | Dir( ) k = 1 , . . . , d | z d , {N k } k =1 p ( d |N z d ) Here, z d is the assigned documents to the cluster, which are multinomial distributed.",
"The probability of document d generated by topic z is summarized as: p ( d |N z ) = (cid:89) w d Mult ( w |N z ) (2) Here, the naive Bayes assumption is considered where words in a document are independently generated by the topic.",
"Whereas, the sequential draw of the sample can be derived by following the CRP.",
"It is also assumed that the position of words in a document is not considered while calculating the probability.",
"This section gives a brief discussion about the representation and formulation of the proposed algorithm.",
"We build our model upon the DPMM (Yin and Wang, 2016a), which is an extension of the DMM model to deal with evolving clusters.",
"We call our model as OSDM (Online Semantic-enhanced Dirichlet Model), aiming at incorporating the semantic information and cluster evolution simultaneously for short text stream clustering in an online way .",
"The graphical model of OSDM is given in Figure 1a.",
"We show two major differences in our model to highlight the novelty.",
"First, for word-topic distribution, we embed semantic information by capturing the ratio of word co-occurrence.",
"Thereby, independent word generating process and word co-occurrence weight are well considered in topic generation.",
"Secondly, our model works instance",
"by instance fashion to cluster the documents, instead of batch by batch.",
"For comparison, Figure 1b further show the MStreamF (Yin et al., 2018) model.",
"At initial stage before clustering documents of a batch, MStreamF update vocabulary set (active terms) from all the documents in a batch, then it starts the clustering each document of the batch.",
"However, OSDM does not consider fixed number of documents to create vocabulary set, instead it incrementally updates with each arriving document.",
"Defining the relationship between documents and clusters is the most crucial task while dealing with the text stream clustering problem.",
"The threshold-based methodology (Nguyen et al., 2015) adapts similarity measures to define the homogeneity threshold between a cluster and a document.",
"If the dissimilarity between the exiting clusters and a new arriving document is above the threshold, then a new cluster is created.",
"However, due to the dynamic nature of the stream, it is very hard to define the similarity threshold manually.",
"In contrast, we assume that documents are generated by DPMM (see Section 3).",
"Most recent algorithm MStreamF improved DPMM to cluster short text documents in the stream.",
"As a further study, we integrate the semantic component in DPMM model.",
"Additionally, we integrate term importance on the basis of cluster frequency.",
"The derived equation for calculating the probability of a document d choosing existing cluster z is given in Equation (3).",
"The first term of this Equation (cid:16) m z D 1+ D (cid:17) represents completeness of the cluster.",
"Here, m z is the number of documents contained by the cluster z and D is the number of current documents in active clusters 2 .",
"Whereas, is the concentration parameter of the model.",
"The middle term of the equation based on multinomial distribution (see Equation (2)) with psuedo weight of words defines the homogeneity between a cluster and a document.",
"N d and N wd represents total number of words and term frequency of word w in document d , respectively.",
"The symbol n wz is the term frequency of the word w in the cluster z .",
"The current vocabulary size of the model is represented by V .",
"n z is the number of words in the cluster z .",
"ICF w calculates the term importance over the active clusters in the model, which is defined as follows.",
"Here, | Z | represents the number of active clusters in the model.",
"The denominator part of Equation (4) is the number of those cluster which contains the word w .",
"The term (cid:16) 1 + (cid:80) w i d w j d cw ij (cid:17) defines the semantic weight of term co-occurrence between the cluster and a document.",
"Formally, we define a value of an entry cw ij in the co-occurrence matrix as follows.",
"Here, n d (cid:48) z is frequency count of word w i in document d (cid:48) .",
"The ratio between w i and w j must satisfy the property cw ij + cw ji = 1 .",
"We calculate the term co-occurrence weight of those terms which are 2 Active clusters refer to those clusters which are not yet deleted from the model.",
"common in the cluster z and document d .",
"Term co-occurrence matrix is constructed where two terms are co-occurred in a single document.",
"Therefore, if the size of cluster feature set (discussed in Section 4.3) is | V z | , then it is not necessary that the co-occurrence matrix would be | V z | | V z | .",
"So far, we have defined the probability of a document choosing existing cluster, then we have to define the probability for a document to creating a new cluster.",
"By following the DPMM for infinite number of clusters, which transform GEM ( ) into GEM ( D ) , because the hyper-parameter for the mixture model should be dynamically change over time.",
"Therefore, the probability of creating a new cluster is as follows.",
"Here, the pseudo number of clusters related documents in the model is represented as D , and is the pseudo term frequency of each word (exist in document) of the new cluster.",
"The similarity-based text clustering approaches usually follow vector space model (VSM) to represent the cluster feature space (Din and Shao, 2020).",
"However, a topic needs to be represented as the subspace of global feature space.",
"Here, we use a micro-cluster feature set to represent each cluster.",
"Namely, a cluster is represented as the summary statistics of a set of words of related documents.",
"In our model, a cluster feature (CF) set is defined as a 6-tuple { m z , n wz , cw z , len z , l z , u z } , where m z is the number of documents in the cluster z , n w z is the number of frequency of the word w in the cluster, cw z is the word to word co-occurrence matrix, len z is the number of words in the cluster z which is sum of all frequencies of words, l z is the cluster weight, and u z is the last updated time stamp.",
"Definition 1: A document d can be added to a cluster z by using the addition property .",
"m z = m z + 1 n wz = n wz + N wd w d cw z = cw z cw d Algorithm 1: OSDM Input: S t : { d t } t =1 , : concentration parameter, : pseudo weight of term in cluster, : decay factor Output: Cluster assignments z d 1 K = 2 while d t in S t do 3 t = t + 1 4 K = removeOldZ i ( K ) 5 K = reduceClusterW eight ( , K ) 6 foreach z i K do 7 PZ i = prob ( z i , d t ) using Eq.",
"Here, cw d is word to word co-occurrence of the document, and len d represents the number of total words in the document.",
"The complexity of updating a cluster by adding a document is O ( L ) , where L is the average length of the document.",
"This property is useful to update evolving micro-clusters in the text stream clustering procedure.",
"We propose a semantic-enhanced non-parametric dirichlet model to cluster the short text streams in an online way, called OSDM.",
"The proposed algorithm allows processing each instance incrementally and updates the model accordingly.",
"document and the document is assigned to the newly created CF set.",
"Afterward, each arriving document in the stream either choose an existing cluster or generate a new cluster.",
"The corresponding probability for choosing either of an existing cluster or a new cluster is computed using Equation (6) and (3), respectively.",
"The CF vector with the highest probability is updated using the addition property.",
"To deal with the cluster evolution (i.e., evolving topics) in text streams, many existing approaches often delete the old clusters by using some of the forgetting mechanisms (e.g., decay rate) (Zhong, 2005; Aggarwal and Yu, 2010; Islam et al., 2019).",
"Instead of deleting old clusters, MStreamF (Yin et al., 2018) deletes old batches.",
"In this study, we investigate the importance of each micro-cluster to handle the cluster evolution problem.",
"Specifically, the importance of each micro-cluster is decreased over time if it is not updated.",
"l z in CF stores weight of each cluster.",
"If the weight is approximately equals to zero, then the cluster is removed from the model, i.e., it cannot capture recent topics in the text stream.",
"For this purpose, we applied the exponential decay function, l z = l z 2 ( (cid:52) t ) .",
"Here, (cid:52) t is the elapsed time from the last update, and is the decay rate.",
"The decay rate must be adjusted depending upon the applications at hand.",
"The initial value of l z (See Line 16 of Algorithm 1) is set to 1. Afterward, the importance of micro-cluster is exponentially decreases over time.",
"We can also store the deleted clusters in a permanent disk for offline analysis.",
"Complexity Analysis.",
"The OSDM algorithm always maintains the average K number of current topics (CF sets).",
"Every CF set store average V number of words in n wz and at most | V z | | V z | in cw z .",
"Thus the space complexity of OSDM is O ( K ( V + V 2 ) + V D ) , where V is the size of active vocabulary and D is the number of active documents.",
"On other side, OSDM calculates the probability of arriving document with each cluster (see Line 6 of Algorithm 1).",
"Therefore, the time complexity of OSDM is O ( K ( L V )) , where L is the average size of arriving document.",
"To evaluate the performance of the proposed algorithm, we conduct experiments on three real and two synthetic datasets.",
"These datasets were also used in (Yin and Wang, 2016a; Liang et al., 2016; Qiang et al., 2018; Yin et al., 2018; Jia et al., 2018; Chen et al., 2019) to evaluate short text clustering models.",
"In the preprocessing step, we removed stop words, converted all text into lowercase, and stemming.",
"The description of the datasets is as follows.",
"News (Ns): This dataset is collected by (Yin and Wang, 2014), which contains 11,109 news title belong to 152 topics.",
"Reuters (Rs): Similar to (Yin and Wang, 2016b) we skip the documents with more than one class and obtained the dataset consists of 9,447 documents from 66 topics.",
"Tweets (Ts): This dataset contain 30,322 tweets which are relevant to 269 topics in the TREC 3 microblog.",
"News-T (Ns-T) and Reuters-T (Rs-T): Naturally, we may find a situation where topics in social media appear only for a certain time period and then disappear.",
"However, the documents of each topic in original dataset is observed for long period of time.",
"Therefore, to construct synthetic dataset we sorted documents datasets by topic in two datasets including Reuters and News .",
"After sorting, we then divide each dataset into sixteen equal chunks and shuffled them.",
"We adopted five different evaluation metrics for deep analysis of all algorithms, which include Normalized Mutual Information (NMI), Homogeneity (Ho.), V-Measure (VM), Accuracy (Acc.) and cluster Purity (Pur.).",
"We utilized sklearn 4 API to implement these metrics.",
"We compute the measures on overall clustering results (Yin and Wang, 2014).",
"Homogeneity measures that each cluster should have only members of a single class.",
"Whereas, V-measure calculates how successfully the criteria of completeness and homogeneity are satisfied.",
"Cluster purity measures the true positive instances in each cluster.",
"The typical NMI measure calculates the overall clustering quality.",
"We have selected four state-of-the-art representative algorithms for stream text clustering to com-3",
"pare OSDM (Os).",
"A brief description of these algorithms are given as follows.",
"(1) DTM (Blei and Lafferty, 2006) is an extension of Latent Dirichlet Allocation which traces the evolution of hidden topics from corpus over time.",
"It was designed to deal with the sequential documents.",
"(2) Sumblr (Sb) (Shou et al., 2013) is an online stream clustering algorithm for tweets.",
"With only one pass, it enables the model to cluster the tweets efficiently while maintaining cluster statistics.",
"(3) DMM (Yin and Wang, 2014) is a Dirichlet multinomial mixture model for short text clustering, which does not consider temporal dependency of instances.",
"(4) MStreamF (Yin et al., 2018) is the latest model to deal with infinite number of latent topics in short text while processing one batch at a time.",
"Two models of MStreamF were proposed, one with one-pass clustering process, and another with gibbs sampling.",
"We refer to the former algorithm as MStreamF-O (MF-O) and the latter as MStreamF-G (MF-G).",
"We try to find the optimal parameter values of all baseline algorithms with grid search.",
"Finally, we set = 0 .",
"01 for DTM, = 0 .",
"02 for Sumblr.",
"For MStreamF-O and MStreamF-G, we set = 0 .",
"03 and = 0 .",
"03 .",
"As defined in (Yin et al., 2018), we set the number of iterations to 10 and saved batches = 2 for MStreamF-G.",
"We set = 0 .",
"3 and = 0 .",
"3 for DMM.",
"The DTM, DMM and Sumblr needs fixed number of cluster as input therefore we set K = 300 , K = 170 and K = 80 for Tweets, News and Reuters datasets, respectively.",
"We set = 2 e 3 , = 4 e 5 and = 6 e 6 for OSDM.",
"The source code of OSDM is publicly available at: https://github.com/ JayKumarr/OSDM .",
"In this section, we provide a detailed comparative analysis of OSDM with state-of-the-art algorithms.",
"The overall results are summarized in Table 1. We report NMI, Homogeneity, v-measure, purity and accuracy of each algorithm.",
"Additionally, we also evaluate the performance of each algorithm over different time-stamps of the stream (see Figure 2).",
"From Table 1, we can see that OSDM outperformed all baseline algorithms on almost every dataset in terms of all measures.",
"Here, MStreamF-G yielded much better results on the Ns-T data in terms of NMI measure.",
"The reason behind might be the multiple iterations of each batch in the stream.",
"However, MStreamF-G requires more execution time to process the data.",
"In contrast, our proposed algorithm OSDM processes the data only once.",
"And we can also observe that OSDM achieves the highest NMI in other data sets.",
"In addition, the crucial part of evaluating the cluster similarity is measured by the homogeneity measure.",
"We can see that OSDM outperformed all previous algorithms.",
"It also shows the same statistics except for v-measure of DTM.",
"Likewise, our model generates more pure clusters.",
"Furthermore, to investigate the performance over time, we plot the performance of \u0000\u001c\u0000H \u0000\u0000\u0016 \u0000\u001c\u0000H \u0000\u0000\u0015 \u0000\u001c\u0000H \u0000\u0000\u0014 \u0000\u0013\u0000\u0011\u0000\u0016 \u0000\u0013\u0000\u0011\u0000\u0018 \u0000\u0013\u0000\u0011\u0000\u001a \u0000\u0013\u0000\u0011\u0000\u001c \u00001\u00000\u0000, \u0000+\u0000R \u00003\u0000X \u0000$\u0000F\u0000F \u00009\u00000",
"We perform sensitivity analysis for OSDM with respects to three input parameters: concentration parameter , , and decay function parameter on the Tweets dataset.",
"From Figure 3a, we can observe the effect of , which ranges from 9 e 3 to 9 e 1 .",
"The performance in terms of all evaluation measures is stable over the different values of parameters.",
"The parameter is responsible for finer clustering, that is why we can observe a little fluctuation in initial values.",
"Figure 3b shows the performance on different values of , which ranges from 1 e 4 to 1 e 2 .",
"As we already defined that we modified homogeneity part of the clustering model (see Equation (3)), and is the related hyper-parameter.",
"We can observe that after a certain range, the values of all the evaluation measure become stable.",
"The crucial point to be observed is the stability of homogeneity on different values of .",
"Figure 3c shows effect of ranges from 9 e 4 to 9 e 6 .",
"Our model follows the forgetting mechanism on decay factor and the clusters are deleted from model when the value is approximately equals to zero.",
"We can observe the performance of OSDM \u0000\u0013 \u0000\u0015 \u0000\u0017 \u0000\u0019 \u0000\u001b \u0000\u0014\u0000\u0013 \u00006\u0000W\u0000U\u0000H\u0000D\u0000P\u0000\u0003\u0000\u000b\u0000L\u0000Q\u0000\u0003\u00007\u0000K\u0000R\u0000X\u0000V\u0000D\u0000Q\u0000G\u0000\u0003\u00003\u0000R\u0000L\u0000Q\u0000W\u0000V\u0000\f \u0000\u0013 \u0000\u0018\u0000\u0013\u0000\u0013 \u0000\u0014\u0000\u0013\u0000\u0013\u0000\u0013 \u0000\u0014\u0000\u0018\u0000\u0013\u0000\u0013 \u0000\u0015\u0000\u0013\u0000\u0013\u0000\u0013 \u00007 \u0000L \u0000P \u0000H \u0000'\u00007\u00000 \u00000\u0000)\u0000\u0010\u0000* \u00000\u0000)\u0000\u0010\u00002 \u00006\u0000E \u0000'\u00000\u00000 \u00002\u00006\u0000'\u00000 Figure 4: The runtime of different text stream clustering algorithms.",
"To compare the runtime of different algorithms, we performed all experiments on a PC with core i5-3470 and 8GB memory.",
"Figure 4 shows the runtime of all algorithms on the tweets dataset.",
"We can observe that Sumblr required the highest execution time to cluster the instances.",
"Whereas, the runtime of other algorithms are comparable.",
"Due to simple execution process of each instance MStreamF-O took least time because it does not need to maintain semantic similarity.",
"Comparatively, MStreamF-G required much higher time than OSDM.",
"The reason is that it needs to execute each batch data multiple times.",
"Due to online nature, the overall speed of OSDM is more efficient than most existing algorithms, and the benefit is strengthened with more and more arriving instances.",
"In this paper, we propose a new online semantic-enhanced dirichlet model for short text stream clustering.",
"In contrast to existing approaches, OSDM does not require to specify the batch size and the dynamic number evolving clusters.",
"It dynamically assigns each arriving document into an existing cluster or generating a new cluster based on the poly urn scheme.",
"More importantly, OSDM tried to incorporate semantic information in the proposed graphical representation model to remove the term ambiguity problem in short-text clustering.",
"Building upon the semantic embedding and online learning, our method allows finding high-quality evolving clusters.",
"Extensive results further demonstrate that OSDM has better performance compared to many state-of-the-art algorithms.",
"This work is supported by the National Natural Science Foundation of China (61976044), Fundamental Research Funds for the Central Universities (ZYGX2019Z014), Fok Ying-Tong Education Foundation for Young Teachers in the Higher Education Institutions of China (161062), National key research and development program (2016YFB0502300)."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other"
] |
[
"We propose several ways of reusing subword embeddings and other weights in subword-aware neural language models.",
"The proposed techniques do not benefit a competitive character-aware model, but some of them improve the performance of syllableand morpheme-aware models while showing significant reductions in model sizes.",
"We discover a simple hands-on principle: in a multilayer input embedding model, layers should be tied consecutively bottom-up if reused at output.",
"Our best morpheme-aware model with properly reused weights beats the competitive word-level model by a large margin across multiple languages and has 20%87% fewer parameters.",
"A statistical language model (LM) is a model which assigns a probability to a sequence of words.",
"It is used in speech recognition, machine translation, part-of-speech tagging, information retrieval and other applications.",
"Data sparsity is a major problem in building traditional n -gram language models, which assume that the probability of a word only depends on the previous n words.",
"To deal with potentially severe problems when confronted with any n -grams that have not explicitly been seen before, some form of smoothing is necessary.",
"Recent progress in statistical language modeling is connected with neural language models (NLM), which tackle the data sparsity problem by representing words as vectors.",
"Typically this is done twice: at input (to embed the current word of a sequence into a vector space) and at output (to embed candidates for the next word of a se-quence).",
"Especially successful are the models in which the architecture of the neural network between input and output is recurrent (Mikolov et al., 2010), which we refer to as recurrent neural network language models (RNNLM).",
"Tying input and output word embeddings in word-level RNNLM is a regularization technique, which was introduced earlier (Bengio et al., 2001; Mnih and Hinton, 2007) but has been widely used relatively recently, and there is empirical evidence (Press and Wolf, 2017) as well as theoretical justi-fication (Inan et al., 2017) that such a simple trick improves language modeling quality while decreasing the total number of trainable parameters almost two-fold, since most of the parameters are due to embedding matrices.",
"Unfortunately, this regularization technique is not directly applicable to subword-aware neural language models as they receive subwords at input and return words at output.",
"This raises the following questions: Is it possible to reuse embeddings and other parameters in subword-aware neural language models?",
"Would it benefit language modeling quality?",
"We experimented with different subword units, embedding models, and ways of reusing parameters, and our answer to both questions is as follows: There are several ways to reuse weights in subword-aware neural language models, and none of them improve a competitive character-aware model, but some of them do benefit syllableand morpheme-aware models, while giving significant reductions in model sizes.",
"A simple morpheme-aware model that sums morpheme embeddings of a word benefits most from appropriate weight tying, showing a significant gain over the competitive word-level baseline across different languages and data set sizes.",
"Another contribution of this paper is the discovery of a hands-on principle that in a multi-layer input embedding model, layers should be tied consecutively bottom-up if reused at output.",
"The source code for the morpheme-aware model is available at https://github.com/ zh3nis/morph-sum .",
"Subword-aware NLM: There has been a large number of publications in the last 23 years on subword-level and subword-aware NLMs, 1 especially for the cases when subwords are characters (Ling et al., 2015; Kim et al., 2016; Verwimp et al., 2017) or morphemes (Botha and Blunsom, 2014; Qiu et al., 2014; Cotterell and Schutze, 2015).",
"Less work has been done on syllable-level or syllable-aware NLMs (Mikolov et al., 2012; Assylbekov et al., 2017; Yu et al., 2017).",
"For a thor-ough and up-to-date review of the previous work on subword-aware neural language modeling we refer the reader to the paper by Vania and Lopez (2017), where the authors systematically compare different subword units (characters, character trigrams, BPE, morphs/morphemes) and different representation models (CNN, Bi-LSTM, summation) on languages with various morphological typology.",
"Tying weights in NLM: Reusing embeddings in word-level neural language models is a technique which was used earlier (Bengio et al., 2001; Mnih and Hinton, 2007) and studied in more details recently (Inan et al., 2017; Press and Wolf, 2017).",
"However, not much work has been done on reusing parameters in subword-aware or subword-level language models.",
"Jozefowicz et al. (2016) reused the CharCNN architecture of Kim et al. (2016) to dynamically generate softmax word embeddings without sharing parameters with an input word-embedding sub-network.",
"They managed to significantly reduce the total number of parameters for large models trained on a huge dataset in English (1B tokens) with a large vocabulary (800K tokens) at the expense of deteriorated performance.",
"Labeau and Allauzen (2017) used similar approach to augment the output word representations with subword-based embeddings.",
"They experimented with characters and morphological decompositions, and tried different compositional models (CNN, Bi-LSTM, concatenation) on Czech dataset consisting of 4.7M tokens.",
"They were not tying weights between input and output representations, since their preliminary experiments with tied weights gave worse results.",
"Our approach differs in the following aspects: 1 Subword-level LMs rely on subword-level inputs and make predictions at the level of subwords; subword-aware LMs also rely on subword-level inputs but make predictions at the level of words.",
"we focus on the ways to reuse weights at output, seek both model size reduction and performance improvement in subword-aware language models, try different subword units (characters, syllables, and morphemes), and make evaluation on small (1M2M tokens) and medium (17M51M tokens) data sets across multiple languages.",
"Let W be a finite vocabulary of words.",
"We assume that words have already been converted into indices.",
"Let E in W R |W| d W be an input embedding matrix for words i.e., it is a matrix in which the w th row (denoted as w ) corresponds to an embedding of the word w W .",
"Based on word embeddings w 1 : k = w 1 , . . . , w k for a sequence of words w 1: k , a typical word-level RNN language model produces a sequence of states h 1 : k according to h t = RNNCell ( w t , h t 1 ) , h 0 = 0 .",
"(1) The last state h k is assumed to contain information on the whole sequence w 1: k and is further used for predicting the next word w k +1 of a sequence according to the probability distribution Pr( w k +1 | w 1: k ) = softmax( h k E out W + b ) , (2) where E out W R d LM |W| is an output embedding matrix, b R |W| is a bias term, and d LM is a state size of the RNN.",
"Subword-based word embeddings: One of the more recent advancements in neural language modeling has to do with segmenting words at input into subword units (such as characters, syllables, morphemes, etc.) and composing each word's embedding from the embeddings of its subwords.",
"Formally, let S be a finite vocabulary of subwords, 2 and let E in S R |S| d S be an input embedding matrix for subwords.",
"Any word w W is a sequence of its subwords ( s 1 , s 2 , . . . , s n w ) = ( w ) , and hence can be represented as a sequence of the corresponding subword vectors: [ s 1 , s 2 , . . . , s n w ] .",
"A subword-based word embedding model E ( ; E in S , in ) with parameters ( E in S , in ) constructs a word vector x from the sequence of subword vectors (3), i.e.",
"which is then fed into a RNNLM (1) instead of a plain embedding w .",
"The additional parameters in correspond to the way the embedding model constructs the word vector: for instance, in the CharCNN model of Kim et al. (2016), in are the weights of the convolutional and highway layers.",
"W (cid:0) W (cid:1) under the assumption that d W = d LM .",
"Although being useful for word-level language modeling (Press and Wolf, 2017; Inan et al., 2017), this regularization technique is not directly applicable to subword-aware language models, as they receive subword embeddings at input and return word embeddings at output.",
"In the next section we describe a simple technique to allow reusing subword embeddings E in S as well as other parameters in in a subword-aware RNNLM.",
"Let E out S be an output embedding matrix for subwords and let us modify the softmax layer (2) so that it utilizes E out S instead of the word embedding matrix E out W .",
"The idea is fairly straightforward: we reuse an embedding model (4) to construct a new word embedding matrix: E out W = [ E ( ( w ); E out S , out ) for w W ] , (5) and use E out W instead of E out W in the softmax layer (2).",
"Such modification of the softmax layer will be referred to as subword-based softmax .",
"The overall architecture of a subword-aware RNNLM with subword-based softmax is given in Figure 1. Such a model allows several options for reusing embeddings and weights, which are discussed below.",
"Reusing neither subword embeddings nor embedding model weights: As was shown by Jozefowicz et al. (2015), this can significantly reduce the total number of parameters for large models trained on huge datasets (1B tokens) with large vocabularies (800K tokens).",
"However, we do not expect significant reductions on smaller data sets (1-2M tokens) with smaller vocabularies (10-30K tokens), which we use in our main experiments.",
"Reusing subword embeddings ( RE ) can be done by setting E out S = E in S in (5).",
"This will give a significant reduction in model size for models with | E in S | (cid:29) | in | , 3 such as the morpheme-aware model of Botha and Blunsom (2014).",
"Reusing weights of the embedding model ( RW ) can be done by setting out = in .",
"Unlike the previous option, this should significantly reduce sizes of models with | E in S | (cid:28) | in | , such as the character-aware model of Kim et al. (2016).",
"Reusing both subword embeddings and weights of the embedding model ( RE+RW ) can be done by setting E out S = E in S and out = in simultaneously in (5).",
"This should significantly reduce the number of trainable parameters in any subword-aware model.",
"Here we use exactly the same word representations both at input and at output, so this option corresponds to the reusing of plain word embeddings in pure word-level language models.",
"Data sets: All models are trained and evaluated on the PTB (Marcus et al., 1993) and the WikiText-2 (Merity et al., 2017) data sets.",
"For the PTB we utilize the standard training (0-20), validation (21-22), and test (23-24) splits along with preprocessing per Mikolov et al. (2010).",
"WikiText-2 is an alternative to PTB, which is approximately two times as large in size and three times as large 3 | A | denotes number of elements in A .",
"in vocabulary (Table 1).",
"experiment with existing representational models which have previously proven effective for language modeling.",
"CharCNN (Kim et al., 2016) is a character-aware convolutional model, which performs on par with the 20142015 state-of-the-art word-level LSTM model (Zaremba et al., 2014) despite having 60% fewer parameters.",
"SylConcat is a simple concatenation of syllable embeddings suggested by Assylbekov et al. (2017), which underperforms CharCNN but has fewer parameters and is trained faster.",
"MorphSum is a summation of morpheme embeddings, which is similar to the approach of Botha and Blunsom (2014) with one important difference: the embedding of the word itself is not included into the sum.",
"We do this since other models do not utilize word embeddings.",
"In all subword-aware language models we inject a stack of two highway layers (Srivastava et al., 2015) right before the word-level RNNLM as done by Kim et al. (2016), and the non-linear activation in any of these highway layers is a ReLU.",
"The highway layer size is denoted by d HW .",
"Word-level RNNLM: There is a large variety of RNN cells to choose from in (1).",
"To make our results directly comparable to the previous work of Inan et al. (2017), Press and Wolf (2017) on reusing word embeddings we select a rather conventional architecture a stack of two LSTM cells (Hochreiter and Schmidhuber, 1997).",
"Hyperparameters: We experiment with two configurations for the state size d LM of the word-level RNNLM: 200 (small models) and 650 (medium-sized models).",
"In what follows values outside brackets correspond to small models, and values within brackets correspond to medium models.",
"CharCNN: We use the same hyperparameters as in the work of Kim et al. (2016), where large model stands for what we call medium-sized model.",
"SylConcat: d S = 50 ( 200 ), d HW = 200 ( 800 ).",
"These choices are guided by the work of Assylbekov et al. (2017).",
"MorphSum : d S = d HW = 200 ( 650 ).",
"These choices are guided by Kim et al. (2016).",
"Optimizaton method is guided by the previous works (Zaremba et al., 2014; Gal and Ghahra-mani, 2016) on word-level language modeling with LSTMs.",
"See Appendix A for details.",
"Syllabification and morphological segmentation: True syllabification of a word requires its grapheme-to-phoneme conversion and then its splitting up into syllables based on some rules.",
"True morphological segmentation requires rather expensive morphological analysis and disambiguation tools.",
"Since these are not always available for under-resourced languages, we decided to utilize Liang's widely-used hyphenation algorithm (Liang, 1983) and an unsupervised morphological segmentation tool, Morfessor 2.0 (Vir-pioja et al., 2013), as approximations to syllabification and morphological segmentation respectively.",
"We use the default configuration of Morfessor 2.0.",
"Syllable and morpheme vocabulary sizes for both PTB and WikiText-2 are reported in Table 1. 6 Results In order to investigate the extent to which each of our proposed options benefits the language modeling task, we evaluate all four modifications (no reusing, RE, RW, RE+RW) for each subword-aware model against their original versions and word-level baselines.",
"The results of evaluation are given in Table 2. We have both negative and positive findings which are summarized below.",
"The no reusing' and RW options should never be applied in subword-aware language models as they deteriorate the performance.",
"Neither of the reusing options benefits CharCNN when compared to the original model with a plain softmax layer.",
"Positive results: The RE+RW option puts CharCNN 's performance close to that of the original version, while reducing the model size by 3075%.",
"The RE and RE+RW are the best reusing options for SylConcat , which make it on par with the original CharCNN model, despite having 35 75% fewer parameters.",
"The RE and RE+RW configurations benefit MorphSum making it not only better than its original version but also better than all other models and significantly smaller than the word-level model with reused embeddings.",
"In what follows we proceed to analyze the obtained results.",
"We hypothesize that the reason CharCNN does not benefit from tied weights is that CNN over character embeddings is an excessively flexible model which learns to adapt to a surface form more than to semantics.",
"To validate this hypothesis we pick several words 4 from the English PTB vocabulary and consider their nearest neighbors under cosine similarity as produced by the medium-sized models (with the regular softmax layer) at input (Ta-ble 3).",
"As we can see from the examples, the CharCNN model is somewhat more biased towards surface forms at input than SylConcat and 4 We pick the same words as Kim et al. (2016).",
"MorphSum .",
"5 When CharCNN is reused to generate a softmax embedding matrix this bias is propagated to output embeddings as well (Table 3).",
"From Table 2 one can notice that tying weights without tying subword embeddings ( RW ) always results in worse performance than the tying both weights and embeddings ( RE+RW ).",
"Recall that subword embedding lookup is done before the weights of subword-aware embedding model are used (see Figure 1).",
"This leads us to the following Conjecture.",
"Let E in S = in 0 , in 1 , in 2 , . . . , in n be the parameters of the consecutive layers of a subword-aware input embedding model (4), i.e. x = x ( n ) = f n (cid:0) x ( n 1) ; in n (cid:1) , . . . , x (1) = f 1 (cid:0) x (0) ; in 1 (cid:1) , x (0) = f 0 (cid:0) ( w ); E in S (cid:1) and let E out S = out 0 , out 1 , out 2 , . . . , out n be the parameters of the consecutive layers of a subword-aware embedding model used to generate the output projection matrix (5).",
"Let A be a subword-aware neu-5 A similar observation for character-aware NLMs was made by Vania and Lopez (2017).",
"ral language model in which the first ( j +1) layers of input and output embedding sub-networks have tied weights: i = 0 , j : in i = out i , and let B be a model in which at least one layer below the ( j + 1) th layer has untied weights: i = 0 , j 1 : in i 6 = out i , in j = out j .",
"Then model B performs at most as well as model A , i.e. PPLA PPLB .",
"To test this conjecture empirically, we conduct the following experiments: in all three embedding models ( CharCNN , SylConcat , and MorphSum ), we reuse different combinations of layers.",
"If an embedding model has n layers, there are 2 n ways to reuse them, as each layer can either be tied or untied at input and output.",
"However, there are two particular configurations for each of the embedding models that do not interest us:",
"(i) when neither of the layers is reused, or",
"(ii) when only the very first embedding layer is reused.",
"Hence, for each model we need to check 2 n 2 configurations.",
"For faster experimentation we evaluate only small-sized models on PTB.",
"The results are reported in Table 4. As we can see, the experiments in general reject our conjecture: in SylConcat leaving an untied first highway layer between tied embedding and second highway layers (denote this as HW 2 +Emb) turned out to be slightly better than tying all three layers (HW 2 +HW 1 +Emb).",
"Recall, that a highway is a weighted average between nonlinear and identity transformations of the incoming vector: x 7 t (cid:12) ReLU ( xA + b ) + ( 1 t ) (cid:12) x , where t = ( xW + c ) is a transform gate, A , W , b and c are trainable parameters, and (cid:12) is the element-wise multiplication operator.",
"To find out why leaving an untied highway below a tied one is beneficial in SylConcat , we compare the distributions of the transform gate values t from the first highway layers of both configurations, HW 2 +Emb and HW 2 +HW 1 +Emb, in SylConcat and MorphSum (Figure 2).",
"We can see that SylConcat heavily relies on nonlinearity in the first highway layer, while MorphSum does not utilize much of it.",
"This means that in MorphSum, the highway is close to an identity operator ( t 0 ), and does not transform the sum of morpheme vectors much, either at input or at output.",
"Therefore, tying the first highway layer is natural to Morh-Sum .",
"SylConcat , on the other hand, applies non-linear transformations to the concatenation of syllable vectors, and hence makes additional preparations of the word vector for the needs of the RNNLM at input and for Softmax prediction at output.",
"These needs differ from each other (as shown in the next subsection).",
"This is why SylConcat benefits from an additional degree of freedom when the first highway is left untied.",
"Despite not being true in all cases, and due to being true in many cases, we believe that the above-mentioned conjecture is still useful.",
"In short it can be summarized as a practical hands-1418 HW 2 HW 1 CNN Emb PPL X 94.1 X X 92.8 X 94.6 X X 94.5 X X 93.1 X X X 90.1 X 94.9 X X 99.2 X X 94.1 X X X 92.5 X X 94.3 X X X 97.8 X X X 96.3 X X X X 91.0 HW 2 HW 1 Emb PPL X 95.4 X X 87.4 X 99.0 X X 87.9 X X 96.2 X X X 88.4 HW 2 HW 1 Emb PPL X 90.0 X X 84.7 X 89.9 X X 85.7 X X 89.4 X X X 85.1 Table 4: Reusing different combinations of layers in small CharCNN (left), small SylConcat (top right) and small MorphSum on PTB data.",
"i.e. one should not leave untied layer(s) below a tied one.",
"Keep in mind that this rule does not guarantee a performance increase as more and more layers are tied.",
"It only says that leaving untied weights below the tied ones is likely to be worse than not doing so.",
"One can notice from the results of our experiments (Table 4) that having an untied second highway layer above the first one always leads to better performance than when it is tied.",
"This means that there is a benefit in letting word embeddings slightly differ at input and output, i.e. by specializing them for the needs of RNNLM at input and of Softmax at output.",
"This specialization is quite natural, as input and output representations of words have two different purposes: input representations send a signal to the RNNLM about the current word in a sequence, while output representations are needed to predict the next word given all the preceding words.",
"The difference between input and output word representations is discussed in greater detail by Garten et al. (2015) and Press and Wolf (2017).",
"Here we decided to verify the difference indirectly: we test whether intrinsic dimensionality of word embeddings significantly differs at input and output.",
"For this, we apply principal component analysis to word embeddings produced by all models in no reusing mode.",
"The results are given in Figure 3, where we can see that dimensionalities of input and output embeddings differ in the word-level model, CharCNN , and SylConcat models, but the difference is less significant in MorphSum model.",
"Interestingly, in word-level and MorphSum models the output embeddings have more principal components than the input ones.",
"In CharCNN and SylConcat , however, results are to other way around.",
"We defer the study of this phenomenon to the future work.",
"One may expect larger units to work better than smaller units, but smaller units to generalize better than larger units.",
"This certainly depends on how one defines generalizability of a language model.",
"If it is an ability to model unseen text with unseen words, then, indeed, character-aware models may perform better than syllableor morpheme-aware ones.",
"This can be partially seen from Table 3, where the OOV words are better handled by CharCNN in terms of in-vocabulary nearest neighbors.",
"However, to fully validate the abovementioned expectation we conduct additional experiments: we train two models, CharCNN and MorphSum , on PTB and then we evaluate them on the test set of Wikitext-2 (245K words, 10K word-types).",
"Some words in Wikitext-2 contain characters or morphemes that are not present in PTB, and therefore such words cannot be embedded by CharCNN or MorphSum correspondingly.",
"Such words were replaced by the <unk> token, and we call them new OOVs 6 .",
"The results of our experiments are reported in Table 5. Indeed, CharCNN Model # new OOVs PPL CharCNN + RE + RW 3659 306.8 MorphSum + RE + RW 4195 316.2 Table 5: Training on PTB and testing on Wikitext-2.",
"faces less OOVs on unseen text, and thus generalizes better than MorphSum .",
"According to Table 2, MorphSum+RE+RW comfortably outperforms the strong baseline Word+RE",
"(Inan et al., 2017).",
"It is interesting to see whether this advantage extends to non-English languages which have richer morphology.",
"For this purpose we conduct evaluation of both models on small (1M tokens) and medium (17M51M tokens) data in five languages (see corpora statistics in Appendix B).",
"Due to hardware constraints we only train the small models on medium-sized data.",
"We used the same architectures for all languages and did not perform any language-specific tuning of hyperparameters, which are specified in Appendix A. The results are provided in Table 6. As one can see, the advantage of the morpheme-aware model over the word-level one is even more pronounced for non-English data.",
"Also, we can notice that the gain is larger for small data sets.",
"We hypothesize that the advantage of MorphSum+RE+RW over Word+RE diminishes with the decrease of type-token ratio (TTR).",
"A scatterplot of PPL change versus TTR (Figure 4) supports this hypothesis.",
"Moreover, there is a strong correlation between these two quantities: (PPL , TTR) = 0 .",
"84 , i.e. one can predict the mean decrease in PPL from the TTR of a text with a simple linear regression: PPL 2 , 109 TTR .",
"The empirical perplexities in Table 2 are way above the current state-of-the-art on the same datasets (Melis et al., 2018).",
"However, the approach of Melis et al. (2018) requires thousands of evaluations and is feasible for researchers who have access to hundreds of GPUs.",
"Unfortunately, we do not have such access.",
"Also, the authors do not disclose the optimal hyperparameters they found, and thus we could not reproduce their models.",
"There is another state-of-the-art language model, AWD-LSTM (Merity et al., 2018), which has open-source code.",
"We replaced this model's word embedding layer with the MorphSum subnetwork and fully reused morpheme embeddings and other weights of MorphSum at output.",
"We refer to such modification as AWD-LSTM-MorphSum + RE + RW .",
"We trained both models without fine-tuning (due to time constraints) and we did not use embedding dropout (section 4.3 of Merity et al. (2018)) in either model, as it is not obvious how embeddings should be dropped in the case of AWD-LSTM-MorphSum .",
"The results of evaluation on the PTB, Wikitext-2, and non-English datasets are given in Table 7. Although AWD-LSTM-MorphSum is on par with AWD-LSTM-Word on PTB and is slightly better on Wikitext-2, replacing plain word embeddings with the subword-aware model with appropriately reused parameters is crucial for non-English data.",
"Notice that AWD-LSTM underperforms LSTM (used by us) on Czech dataset (cf. Table 6).",
"We think that the hyperparameters of AWD-LSTM in Merity et al. (2018) are thoroughly tuned for PTB and Wikitext-2 and may poorly generalize to other datasets.",
"There is no single best way to reuse parameters in all subword-aware neural language models: the reusing method should be tailored to each type of subword unit and embedding model.",
"However, instead of testing an exponential (w.r.t. sub-network depth) number of configurations, it is sufficient to check only those where weights are tied consecutively bottom-up.",
"Despite being similar, input and output embeddings solve different tasks.",
"Thus, fully tying input and output embedding sub-networks in subword-aware neural language models is worse than letting them be slightly different.",
"This raises the question whether the same is true for pure word-level models, and we defer its study to our future work.",
"One of our best configurations, a simple morpheme-aware model which sums morpheme embeddings and fully reuses the embedding subnetwork, outperforms the competitive word-level language model while significantly reducing the number of trainable parameters.",
"However, the performance gain diminishes with the increase of training set size.",
"We gratefully acknowledge the NVIDIA Corporation for their donation of the Titan X Pascal GPU used for this research.",
"The work of Zhenisbek Assylbekov has been funded by the Committee of Science of the Ministry of Education and Science of the Republic of Kazakhstan, contract # 346/018-2018/33-28, IRN AP05133700.",
"The authors would like to thank anonymous reviewers for their valuable feedback, and Dr. J. N. Washington for proofreading an early version of the paper."
] |
[
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Abuse on the Internet represents a significant societal problem of our time.",
"Previous research on automated abusive language detection in Twitter has shown that community-based profiling of users is a promising technique for this task.",
"However, existing approaches only capture shallow properties of online communities by modeling follower following relationships.",
"In contrast, working with graph convolutional networks ( GCN s), we present the first approach that captures not only the structure of online communities but also the linguistic behavior of the users within them.",
"We show that such a heterogeneous graph-structured modeling of communities significantly advances the current state of the art in abusive language detection.",
"Matthew Zook (2012) carried out an interesting study showing that the racist tweets posted in response to President Obama's re-election were not distributed uniformly across the United States but instead formed clusters.",
"This phenomenon is known as homophily : i.e., people, both in real life and online, tend to cluster with those who appear similar to themselves.",
"To model homophily, recent research in abusive language detection on Twitter (Mishra et al., 2018a) incorporates embeddings for authors (i.e., users who have composed tweets) that encode the structure of their surrounding communities.",
"The embeddings (called author profiles ) are generated by applying a node embedding framework to an undirected unlabeled community graph where nodes denote the authors and edges the followerfollowing relationships amongst them on Twitter.",
"However, these profiles do not capture the linguistic behavior of the authors and their communities and do not convey whether their tweets tend to be abusive or not.",
"In contrast, we represent the community of authors as a heterogeneous graph consisting of two types of nodes, authors and their tweets, rather than a homogeneous community graph of authors only.",
"The primary advantage of such heterogeneous representations is that they enable us to model both community structure as well as the linguistic behavior of authors in these communities.",
"To generate richer author profiles, we then propose a semi-supervised learning approach based on graph convolutional networks ( GCN s) applied to the heterogeneous graph representation.",
"To the best of our knowledge, our work is the first to use GCN s to model online communities in social media.",
"We demonstrate that our methods provide significant improvements over existing techniques.",
"Supervised learning for abusive language detection was first explored by Spertus (1997) who extracted rule-based features to train their classifier.",
"Subsequently, manually-engineered lexical syntactic features formed the crux of most approaches to the task (Yin et al., 2009; Warner and Hirschberg, 2012).",
"Djuric et al. (2015) showed that dense comment representations generated using paragraph2vec outperform bag-of-words features.",
"Several works have since utilized (deep) neural architectures to achieve impressive results on a variety of abuse-annotated datasets (Nobata et al., 2016; Pavlopoulos et al., 2017a).",
"Recently, the research focus has shifted towards extraction of features that capture behavioral and social traits of users.",
"Pavlopoulos et al. (2017b) showed that including randomly-initialized user embeddings improved the performance of their RNN methods.",
"Qian et al. (2018) employed LSTM s to generate inter and intra-user representations based on tweets, but they did not leverage community information.",
"Following previous work (Mishra et al., 2018a), we experiment with a subset of the Twitter dataset compiled by Waseem and Hovy (2016).",
"Waseem and Hovy released a list of 16 , 907 tweet IDs along with their corresponding annotations, 1 labeling each tweet as racist , sexist or neither (clean) .",
"Recently, Mishra et al. (2018a) could only retrieve 16 , 202 of these tweets since some of them are no longer available.",
"This is the dataset we use in our experiments.",
"1 , 939 ( 12% ) of 16 , 202 tweets are racist , 3 , 148 ( 19 . 4% ) are sexist , and the remaining 11 , 115 ( 68 . 6% ) are clean .",
"The tweets have been authored by a total of 1 , 875 unique users.",
"Tweets in the racist class come from 5 of the users, while those in the sexist class come from 527 of them.",
"We create two different graphs: the first one is identical to the community graph of Mishra et al. (2018a) (referred to as the community graph).",
"It contains 1 , 875 nodes representing each of the authors in the dataset.",
"Two authors/nodes are connected by a single undirected edge if either one follows the other on Twitter.",
"There are 453 solitary authors in the graph who are neither followed by nor follow any other author in the dataset.",
"This graph is homogeneous, i.e., it has nodes (and hence edges) of a single type only.",
"Our second graph is an extended version of the first (referred to as the extended graph) that additionally contains nodes representing the tweets of the authors.",
"Specifically, in addition to the 1 , 875 author nodes, the graph contains 16 , 202 tweet nodes.",
"Each tweet node is connected to a single author node, denoting that the tweet is elicited from that particular author.",
"This graph is no longer homogeneous since it contains nodes and edges of two different types.",
"We first describe the approach of Mishra et al. (2018a) that learns author embeddings using node2vec (Grover and Leskovec, 2016); this serves as our baseline.",
"We then move on to our semi-supervised approach based on graph convolutional networks (Kipf and Welling, 2017).",
"Node2vec.",
"Node2vec extends the word2vec skip-gram model (Mikolov et al., 2013) to graphs in order to create low-dimensional embeddings for nodes based on their position and neighborhood.",
"Specifically, for a given graph with nodes V = { v 1 , v 2 , . . . , v n } , node2vec aims to maximize the following log probability: (cid:88) v V log P ( N s ( v ) | v ) where N s ( v ) denotes the neighbor set of node v generated using neighbor sampling strategy s .",
"The framework utilizes two different strategies for sampling neighbor sets of nodes: Depth-First Sampling (DFS) and Breadth-First Sampling (BFS).",
"The former captures the structural role of nodes, while the latter captures the local neighborhood around them.",
"Two hyper-parameters control the overall contribution of each of these strategies.",
"Following Mishra et al. (2018a), we initialize these parameters to their default value of 1 and set the embedding size and number of iterations to 200 and 25 respectively.",
"Since node2vec cannot produce embeddings for nodes without edges, we map the solitary authors to a single zero embedding as done by Mishra et al.",
"Graph convolutional networks.",
"We propose an approach for learning author profiles using GCN s applied to the extended graph.",
"In contrast to node2vec , our method allows us to additionally propagate information with respect to whether tweets composed by authors and their communities are abusive or not.",
"Specifically, as labels are available for a subset of nodes in our graph (i.e., the tweet nodes), we frame the task as a graph-based semi-supervised learning problem, allowing the model to distribute gradient information from the supervised loss on the labeled tweet nodes.",
"This, in turn, allows us to create profiles for authors that not only capture the structural traits of their surrounding community but also their own linguistic behavior based on the types of tweets that they have composed.",
"We consider a graph G = ( V, E ) , where V is the set of nodes ( | V | = n ) and E is the set of edges.",
"A denotes the adjacency matrix of G .",
"We assume that A is symmetric ( A ij = A ji ), and that all nodes in G have self loops ( A ii = 1 ).",
"The sig-nificance of these assumptions is explained in Kipf and Welling (2017).",
"Let D be the diagonal degree matrix defined as D ii = (cid:80) j A ij , and F R n m be the input feature matrix that holds feature vectors of length m for the nodes in G .",
"We can now recursively define the computation that takes place at the i th convolutional layer of a k -layer GCN as: O ( i ) = ( (cid:101) A O ( i 1) W ( i ) ) with the computation at the first layer being: O (1) = ( (cid:101) A F W (1) ) Here, denotes an activation function; (cid:101) A = D 12 A D 12 is the normalized adjacency matrix; W ( i ) R d i 1 d i is the weight matrix of the i th convolutional layer; O ( i 1) R n d i 1 represents the output from the preceding convolutional layer, where d i is the number of hidden units in the i th layer (note that d 0 = m , i.e., the length of the input feature vectors).",
"In our experiments, we apply a 2-layer GCN to the extended graph.",
"2 Specifically, our GCN performs the following computation, yielding a softmax distribution over the 3 classes in the dataset for each of the nodes: O = softmax ( (cid:101) A ReLU ( (cid:101) A F W (1) ) W (2) ) We set the input feature vectors in F to be the binary bag-of-words representations of the nodes (following Kipf and Welling 2017); for author nodes, these representations are constructed over the entire set of their respective tweets.",
"Note that F is row-normalized prior to being fed to the GCN .",
"We set the number of hidden units in the first convolutional layer to 200 in order to extract 200 -dimensional embeddings for author nodes so that they are directly comparable with those from node2vec .",
"The number of hidden units in the second convolutional layer is set to 3 for the output O R n 3 of the GCN to be a softmax distribution over the 3 classes in the data.",
"The GCN is trained by minimizing the cross-entropy loss with respect to the labeled nodes of the graph.",
"Once the model is trained, we extract 200 -dimensional embeddings E = (cid:101) A F W (1) from the first layer (i.e., the layer's output without activation).",
"This contains embeddings for author nodes as well as tweet nodes.",
"For our experiments on author profiles, we make use of the former.",
"2 Stacking more layers does not improve results on the validation set further.",
"We experiment with five different supervised classification methods for tweets in the dataset.",
"The first three ( LR , LR + AUTH , LR + EXTD ) serve as our baselines, 3 and the last two with GCN s 4 are the methods we propose.",
"LR .",
"This method is adopted from Waseem and Hovy (2016) wherein they train a logistic regression classifier on character n -grams (up to 4 grams) of the tweets.",
"Character n-grams have been shown to be highly effective for abuse detection due to their robustness to spelling variations.",
"LR + AUTH .",
"This is the state of the art method (Mishra et al., 2018a) for the dataset we are using.",
"For each tweet, the profile of its author (gen-erated by node2vec from the community graph) is appended onto the tweet's character n-gram representation for training the LR classifier as above.",
"LR + EXTD .",
"This method is identical to LR + AUTH , except that we now run node2vec on the extended graph to generate author profiles.",
"Intuitively, since node2vec treats both author and tweet nodes as the same and does not take into account the labels of tweets, the author profiles generated should exhibit the same properties as those generated from the community graph.",
"GCN .",
"Here, we simply assign a label to each tweet based on the highest score from the softmax distribution provided by our GCN model for the (tweet) nodes of the extended graph.",
"LR + GCN .",
"Identical to LR + EXTD , except that we replace the author profiles from node2vec with those extracted by our GCN approach.",
"We run every method 10 times with random initializations and stratified traintest splits.",
"Specifically, in each run, the dataset is split into a randomly-sampled train set ( 90% ) and test set ( 10% ) with identical distributions of the 3 classes in each.",
"In methods involving our GCN , a small part of the train set is held out as validation data to prevent over-fitting using early-stopping regularization.",
"When training the GCN , we only have 3 The implementations of the baselines are taken from https://github.com/pushkarmishra/ AuthorProfilingAbuseDetection .",
"labeled tweet nodes for those tweets in the extended graph that are part of the train set.",
"Our GCN is trained using the parameters from the original paper (Kipf and Welling, 2017): Glorot initialization (Glorot and Bengio, 2010), ADAM optimizer (Kingma and Ba, 2015) with a learning rate of 0 .",
"01 , dropout regularization (Srivastava et al., 2014) rate of 0 .",
"5 , 200 training epochs with an early-stopping patience of 10 epochs.",
"In Table 1, we report the mean precision, recall, and F 1 on the racism and sexism classes over the 10 runs.",
"We further report the mean macro-averaged precision, recall, and F 1 for each method (Overall') to investigate their overall performance on the data.",
"LR + GCN significantly ( p < 0 . 05 on paired t-test) outperforms all other methods.",
"The author profiles from node2vec only capture the structural and community information of the authors; however, those from the GCN also take into account the (abusive) nature of the tweets composed by the authors.",
"As a result, tweets like #MKR #mkr2015 Who is gonna win the peoples choice? that are misclassified as sexist by LR + AUTH (because their author is surrounded by others producing sexist tweets) are correctly classi-fied as clean by LR + GCN .",
"GCN on its own achieves a high performance, particularly on the sexism class where its performance is typical of a community-based profiling approach, i.e., high recall at the expense of precision.",
"However, on the racism class, its recall is hindered by the same factor that Mishra et al. (2018a) highlighted for their node2vec -only method, i.e., that racist tweets come from 5 unique authors only who have also contributed sexist or clean tweets.",
"The racist activity of these authors is therefore eclipsed, leading to misclassifications of their tweets.",
"LR + GCN alleviates this problem by incorporating character n-gram representations of the tweets, hence not relying solely on the linguistic behavior of their authors.",
"Figure 1 shows the tSNE (van der Maaten and Hinton, 2008) visualizations of node2vec author profiles from the community and extended graphs.",
"Both visualizations show that some authors belong to densely-connected communities while others are part of more sparse ones.",
"The results from LR + AUTH and LR + EXTD have insignificant differences, further confirming that their author profiles have similar properties.",
"In essence, node2vec is unable to gain anything more from the extended graph than what it does from the community graph.",
"(a) Author profiles from the community graph",
"Figure 2 shows a tSNE visualization of the author profiles generated using our GCN approach.",
"Red dots denote the authors who are abusive (sex-ist or racist) according to our model (i.e., as per Figure 2: Visualization of the author profiles extracted from our GCN .",
"the softmax outputs for the author nodes).",
"5 The red dots are mostly clustered in a small portion of the visualization, which corroborates the notion of homophily amongst abusive authors.",
"Despite the addition of improved author profiles, several abusive tweets remain misclassified.",
"As per our analysis, many of these tend to contain URL s to abusive content but not the content itself, e.g., @MENTION: Logic in the world of Islam http://t.co/6nALv2HPc3 and @MENTION Yes. http://t.co/ixbt0uc7HN .",
"Since Twitter shortens all URL s into a standard format, there is no indication of what they refer to.",
"One possible way to address this limitation could be to append the content of the URL to the tweet; however this can lead to misclassifications in cases where the tweet is disagreeing with the URL .",
"Another factor in misclassifications is the deliberate obfuscation of words and phrases by authors in order to evade detection, e.g., Kat, a massive c*nt. The biggest ever on #mkr #cuntandandre .",
"Mishra et al. (2018b) demonstrate in their work that character-based word composition models can be useful in dealing with this aspect.",
"In this paper, we built on the work of Mishra et al. (2018a) that introduces community-based profiling of authors for abusive language detection.",
"We proposed an approach based on graph convolutional networks to show that author profiles that directly capture the linguistic behavior of authors along with the structural traits of their community significantly advance the current state of the art. 5 Note that there are no such gold labels for authors in the dataset itself.",
"We would like to thank the anonymous reviewers for their useful feedback.",
"Helen Yannakoudakis was supported by Cambridge Assessment, University of Cambridge."
] |
[
"result",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other",
"other"
] |
[
"Neural encoder-decoder models have been successful in natural language generation tasks.",
"However, real applications of abstractive summarization must consider additional constraint that a generated summary should not exceed a desired length.",
"In this paper, we propose a simple but effective extension of a sinusoidal positional encoding (Vaswani et al., 2017) to enable neural encoder-decoder model to preserves the length constraint.",
"Unlike in previous studies where that learn embeddings representing each length, the proposed method can generate a text of any length even if the target length is not present in training data.",
"The experimental results show that the proposed method can not only control the generation length but also improve the ROUGE scores.",
"Neural encoder-decoder models have been successfully applied to various natural language generation tasks including machine translation (Sutskever et al., 2014), summarization (Rush et al., 2015), and caption generation (Vinyals et al., 2015).",
"Still, it is necessary to control the output length for abstractive summarization, which generates a summary for a given text while satisfying a space constraint.",
"In fact, Figure 1 shows a large variance in output sequences produced by a widely used encoder-decoder model (Luong et al., 2015), which has no mechanism for controlling the length of the output sequences.",
"Fan et al. (2018) trained embeddings that correspond to each output length to control the output sequence length.",
"Since the embeddings for different lengths are independent, it is hard to generate a sequence of the length that is infrequent in training data.",
"Thus, a method that can model any lengths continuously is required.",
"Kikuchi et al. (2016) proposed two learning based methods for an LSTM encoder-decoder: LenEmb and LenInit.",
"LenEmb inputs an embedding representing the remaining length in each decoding step.",
"Since this approach also prepares embeddings for each length independently, it suffers from the same problem as that in Fan et al. (2018).",
"On the other hand, LenInit can handle arbitrary lengths because it combines the scalar value of a desired length with a trainable embedding.",
"LenInit initializes the LSTM cell of the decoder with the embedding depending on the scalar value of the desired length.",
"Liu et al. (2018) incorporated such scalar values into the initial state of the decoder in a CNN encoder-decoder.",
"These approaches deal with any length but it is reasonable to incorporate the distance to the desired terminal position into each decoding step such as in LenEmb.",
"In this study, we focused on Transformer (Vaswani et al., 2017), which recently achieved the state-of-the-art score on the machine translation task.",
"We extend the sinusoidal positional encoding, which represents a position of each token in Transformer (Vaswani et al., 2017), to represent a distance from a terminal position on the decoder side.",
"In this way, the proposed method considers the remaining length explicitly at each decoding step.",
"Moreover, the proposed method can handle any desired length regardless of its appearance in a training corpus because it uses the same continuous space for any length.",
"We conduct experiments on the headline generation task.",
"The experimental results show that our proposed method is able to not only control the output length but also improve the ROUGE scores from the baselines.",
"Our code and constructed test data are publicly available at: https://github.com/takase/control-length.",
"Transformer (Vaswani et al., 2017) uses a sinusoidal positional encoding to represent the position of an input.",
"Transformer feeds the sum of the positional encoding and token embedding to the input layer of its encoder and decoder.",
"Let pos be the position and d be the embedding size.",
"Then, the i th dimension of the sinusoidal positional encoding P E ( pos,i ) is as follows: P E ( pos, 2 i ) = sin (cid:18) pos 10000 2 id (cid:19) , (1) P E ( pos, 2 i +1) = cos (cid:18) pos 10000 2 id (cid:19) .",
"(2) In short, each dimension of the positional encoding corresponds to a sinusoid whose period is 10000 2 i/d 2 .",
"Since this function returns an identical value at the same position pos , the above positional encoding can be interpreted as representing the absolute position of each input token.",
"In this paper, we extend Equations (1) and (2) to depend on the given output length and the distance from the terminal position.",
"We propose two extensions: length-difference positional encoding ( LDP E ) and length-ratio positional encoding ( LRP E ).",
"Then we replace Equations (1) and (2) with (3) and (4) (or (5) and (6)) on the decoder side to control the output sequence length.",
"We define LDP E and LRP E as follows: LDP E ( pos,len, 2 i ) = sin (cid:18) len pos 10000 2 id (cid:19) , (3) LDP E ( pos,len, 2 i +1) = cos (cid:18) len pos 10000 2 id (cid:19) , (4) LRP E ( pos,len, 2 i ) = sin (cid:18) pos len 2 id (cid:19) , (5) LRP E ( pos,len, 2 i +1) = cos (cid:18) pos len 2 id (cid:19) , (6) where len presents the given length constraint.",
"LDP E returns an identical value at the position where the remaining length to the terminal position is the same.",
"LRP E returns a similar value at the positions where the ratio of the remaining length to the terminal position is similar.",
"Let us consider the d -th dimension as the simplest example.",
"Since we obtain sin( pos/len ) (or cos( pos/len ) ) at this dimension, the equations yield the same value when the remaining length ratio is the same, e.g., pos = 5 , len = 10 and pos = 10 , len = 20 .",
"We add LDP E (or LRP E ) to the input layer of Transformer in the same manner as in Vaswani et al. (2017).",
"In the training step, we assign the length of the correct output to len .",
"In the test phase, we control the output length by assigning the desired length to len .",
"We conduct experiments on the headline generation task on Japanese and English datasets.",
"The purpose of the experiments is to evaluate the ability of the proposed method to generate a summary of good quality within a specified length.",
"We used JAMUL corpus as the Japanese test set (Hitomi et al., 2019).",
"This test set contains three kinds of headlines for 1,181 1 news articles written by professional editors under the different upper bounds of headline lengths.",
"The upper bounds are 10, 13, and 26 characters ( len = 10 , 13 , 26 ).",
"This test set is suitable for simulating the real process of news production because it is constructed by a Japanese media company.",
"In contrast, we have no English test sets that contain headlines of multiple lengths.",
"Thus, we randomly extracted 3,000 sentence-headline 1 We obtained this test set by applying the pre-processing script at https://github.com/asahi-research/Gingo to the original JAMUL corpus.",
"pairs that satisfy a length constraint from the test set constructed from annotated English Gigaword (Napoles et al., 2012) by pre-processing scripts of Rush et al. (2015) 2 .",
"We set three configu-rations for the number of characters as the length constraint: 0 to 30 characters ( len = 30 ), 30 to 50 characters ( len = 50 ), and 50 to 75 characters ( len = 75 ).",
"Moreover, we also evaluate the proposed method on the DUC-2004 task 1 (Over et al., 2007) for comparison with published scores in previous studies.",
"Unfortunately, we have no large supervision data with multiple headlines of different lengths associated with each news article in both languages.",
"Thus, we trained the proposed method on pairs with a one-to-one correspondences between the source articles and headlines.",
"In the training step, we regarded the length of the target headline as the desired length len .",
"For Japanese, we used the JNC corpus, which contains a pair of the lead three sentences of a news article and its headline (Hitomi et al., 2019).",
"The training set contains about 1.6M pairs 3 .",
"For English, we used sentence-headline pairs extracted from the annotated English Gigaword with the same pre-processing script used in the construction of the test set.",
"The training set contains about 3.8M pairs.",
"In this paper, we used a character-level decoder to control the number of characters.",
"On the encoder side, we used subword units to construct the vocabulary (Sennrich et al., 2016; Kudo, 2018).",
"We set the hyper-parameter to fit the vocabulary size to about 8k for Japanese and 16k for English.",
"We implemented two methods proposed by previous studies to control the output length and handle arbitrary lengths.",
"We employed them and Transformer as baselines.",
"LenInit Kikuchi et al. (2016) proposed LenInit, which controls the output length by initializing the LSTM cell m of the decoder as follows: m = len b, (7) where b is a trainable vector.",
"We incorporated this method with a widely used LSTM encoder-decoder model (Luong et al., 2015) 4 .",
"For a fair 2 https://github.com/facebookarchive/NAMAS 3 We obtained this training set by applying the preprocessing script at https://github.com/asahi-research/Gingo.",
"4 We used an implementation at https://github.com/mlpnlp/mlpnlp-nmt.",
"comparison, we set the same hyper-parameters as in Takase et al. (2018) because they indicated that the LSTM encoder-decoder model trained with the hyper-parameters achieved a similar performance to the state-of-the-art on the headline generation.",
"Length Control (LC) Liu et al. (2018) proposed a length control method that multiplies the desired length by input token embeddings.",
"We trained the model with their hyper-parameters.",
"Transformer Our proposed method is based on Transformer (Vaswani et al., 2017) 5 .",
"We trained Transformer with the equal hyper-parameters as in the base model in Vaswani et al. (2017).",
"Table 1 shows the recall-oriented ROUGE-1 (R-1), 2 (R-2), and L (R-L) scores of each method on the Japanese test set 6 .",
"This table indicates that Transformer with the proposed method (Transformer+ LDP E and Transformer+ LRP E ) outperformed the baselines for all given constraints ( len = 10 , 13 , 26 ).",
"Transformer+ LRP E performed slightly better than Transformer+ LDP E .",
"Moreover, we improved the performance by incorporating the standard sinusoidal positional encoding (+ P E ) on len = 10 and 26 .",
"The results imply that the absolute position also helps to generate better headlines while controlling the output length.",
"Table 2 shows the recall-oriented ROUGE scores on the English Gigaword test set.",
"This table indicates that LDP E and LRP E significantly improved the performance on len = 75 .",
"Moreover, the absolute position ( P E ) also improved the performance in this test set.",
"In particular, P E was very effective in the setting of very short headlines ( len = 30 ).",
"However, the proposed method slightly lowered ROUGE-2 scores from the bare Transformer on len = 30 , 50 .",
"We infer that the bare Transformer can generate headlines whose lengths are close to 30 and 50 because the majority of the training set consists of headlines whose lengths are less than or equal to 50.",
"However, most of the generated headlines breached the length constraints, as explained in Section 3.4.",
"5 We used an implementation at https://github.com/pytorch/fairseq.",
"6 To calculate ROUGE scores on the Japanese dataset, we used https://github.com/asahi-research/Gingo.",
"desired length ( len ) from the training data.",
"The lower parts of Table 1 and 2 show ROUGE scores of the proposed method trained on the modified training data.",
"These parts show that the proposed method achieved comparable scores to ones trained on whole training dataset.",
"These results indicate that the proposed method can generate high-quality headlines even if the length does not appear in the training data.",
"Table 3 shows the recall-oriented ROUGE scores on the DUC-2004 test set.",
"Following the evaluation protocol (Over et al., 2007), we truncated characters over 75 bytes.",
"The table indicates that LDP E and LRP E significantly improved the performance compared to the bare Transformer, and achieved better performance than the baselines except for R-2 of LenInit.",
"This table also shows the scores reported in the previous studies.",
"The proposed method outperformed the previous methods that control the output length and achieved the competitive score to the state-of-the-art scores.",
"Since the proposed method consists of a character-based decoder, it sometimes generated Variance Japanese dataset English Gigaword Model len = 10 len = 13 len = 26 len = 30 len = 50 len = 75 BaselinesLenInit 0.047 0.144 0.058 0.114 0.112 0.091 LC 0.021 0.028 0.040 0.445 0.521 0.871 Transformer 181.261 115.431 38.169 193.119 138.566 620.887 Proposed method Transformer+ LDPE 0.000 0.000 0.000 0.015 0.012 0.013 + PE 0.003 0.001 0.001 0.016 0.009 0.007 Transformer+ LRPE 0.121 0.210 0.047 0.082 0.071 0.187 + PE 0.119 0.144 0.058 0.142 0.110 0.173 Proposed method trained on the dataset without headlines consisting of the target lengths Transformer+ LDPE 0.000 0.002 0.000 0.018 0.009 0.009 + PE 0.021 0.001 0.003 0.021 0.013 0.010 Transformer+ LRPE 0.191 0.362 0.043 0.120 0.058 0.133 + PE 0.183 0.406 0.052 0.138 0.081 0.154 Table 4: Variances of generated headlines.",
"words unrelated to a source sentence.",
"Thus, we applied a simple re-ranking to each n -best headlines generated by the proposed method ( n = 20 in this experiment) based on the contained words.",
"Our re-ranking strategy selects a headline that contains source-side words the most.",
"Table 3 shows that Transformer+ LRP E + P E with this re-ranking (+Re-ranking) achieved better scores than the state-of-the-art (Suzuki and Nagata, 2017).",
"Following Liu et al. (2018), we used the variance of the generated summary lengths against the desired lengths as an indicator of the preciseness of the output lengths.",
"We calculated variance ( var ) for n generated summaries as follows 7 : var = 1 n n (cid:88) i =1 | l i len | 2 , (8) where len is the desired length and l i is the length of the generated summary.",
"Table 4 shows the values of Equation (8) computed for each method and the desired lengths.",
"This table indicates that LDP E could control the length of headlines precisely.",
"In particular, LDP E could generate headlines with the identical length to the desired one in comparison with LenInit and LC.",
"LRP E also generated headlines with a precise length but its variance is larger than those of previous studies in very short lengths, i.e., len = 10 and 13 in Japanese.",
"However, we consider LRP E is enough for real applications because the averaged difference between its output and the desired length is small, e.g., 0 .",
"1 for len = 10 .",
"7 Liu et al. (2018) multiplies Equation (8) by 0 .",
"001 .",
"The lower part of Table 4 shows the variances of the proposed method trained on the modified training data that does not contain headlines whose lengths are equal to the desired length, similar to the lower parts of Table 1 and 2.",
"The variances for this part are comparable to the ones obtained when we trained the proposed method with whole training dataset.",
"This fact indicates that the proposed method can generate an output that satisfies the constraint of the desired length even if the training data does not contain instances of such a length.",
"In this paper, we proposed length-dependent positional encodings, LDP E and LRP E , that can control the output sequence length in Transformer.",
"The experimental results demonstrate that the proposed method can generate a headline with the desired length even if the desired length is not present in the training data.",
"Moreover, the proposed method significantly improved the quality of headlines on the Japanese headline generation task while preserving the given length constraint.",
"For English, the proposed method also generated headlines with the desired length precisely and achieved the top ROUGE scores on the DUC-2004 test set.",
"The research results have been achieved by Re-search and Development of Deep Learning Technology for Advanced Multilingual Speech Trans-lation, the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other"
] |
[
"Fine-tuning is the de facto way of leveraging large pretrained language models for downstream tasks.",
"However, fine-tuning modifies all the language model parameters and therefore necessitates storing a full copy for each task.",
"In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, which we call the prefix .",
"Prefix-tuning draws inspiration from prompting for language models, allowing subsequent tokens to attend to this prefix as if it were virtual tokens.",
"We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization.",
"We show that by modifying only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics that are unseen during training.",
"Fine-tuning is the prevalent paradigm for using large pretrained language models (LMs) (Radford et al., 2019; Devlin et al., 2019) to perform downstream tasks (e.g., summarization), but it requires updating and storing all the parameters of the LM.",
"Consequently, to build and deploy NLP systems that rely on large pretrained LMs, one currently needs to store a modified copy of all the LM parameters for each task.",
"This can be prohibitively expensive given the size of current LMs; for example, GPT-2 has 774M parameters (Radford et al., 2019) and GPT-3 has 175B parameters (Brown et al., 2020).",
"A natural approach to this problem is lightweight fine-tuning , which freezes most of the pretrained parameters and only tunes a smaller set of parameters.",
"For example, adapter-tuning (Rebuffi et al., Figure 1: Fine-tuning (top) updates all LM parameters (the red Transformer box) and requires storing a full model copy for each task.",
"2017; Houlsby et al., 2019) inserts additional task-specific layers between the layers of pretrained language models.",
"Adapter-tuning has promising performance on natural language understanding and generation benchmarks, attaining comparable performance with fine-tuning while adding only around 24% task-specific parameters (Houlsby et al., 2019; Lin et al., 2020).",
"At the limit, GPT-3 (Brown et al., 2020) can be deployed using in-context learning, which is a form of prompting , without modifying any LM parameters.",
"In in-context learning, Brown et al. (2020) prepend a natural language task instruction (e.g., TL;DR for summarization) and a few examples to the task input, and then generate the task output from the LM.",
"However, since Transformers can only condition on a bounded-length context (e.g., 2048 tokens for GPT-3), in-context learning is restricted to very small training sets.",
"In this paper, we propose prefix-tuning , a lightweight alternative to fine-tuning for natural language generation (NLG) tasks, inspired by prompting.",
"Consider the task of generating a textual description of a data table, as shown in Figure 1, where the task input is a linearized table (e.g., name: Starbucks | type: coffee shop) and the output is a textual description (e.g., Starbucks serves coffee.).",
"Prefix-tuning prepends a sequence of continuous task-specific vectors to the input, which we call a prefix , depicted by red blocks in Figure 1 (bottom).",
"To generate each token, the LM can attend to the prefix as if it were a sequence of virtual tokens, but unlike prompting, the prefix consists entirely of free parameters which do not correspond to real tokens.",
"In contrast to fine-tuning in Figure 1 (top), which updates all LM parameters and thus requires storing a tuned copy of the model for each task, prefix-tuning only optimizes the prefix.",
"Consequently, we only need to store one copy of the large LM and a learned task-specific prefix, yielding a very small overhead for each additional task (e.g., 250K parameters for table-to-text).",
"In contrast to full fine-tuning, prefix-tuning is also modular: we train an upstream prefix which steers an unmodified LM, and therefore, a single LM can support many tasks at once.",
"In the context of personalization where the tasks correspond to users (Shokri and Shmatikov, 2015; McMahan et al., 2016), we would have a separate prefix for each user trained only on that user's data, thereby avoiding data cross-contamination.",
"Moreover, the prefix-based architecture enables us to even process examples from multiple users/tasks in a single batch, something that is not possible with other lightweight fine-tuning approaches like adapter-tuning.",
"We evaluate prefix-tuning on table-to-text generation using GPT-2 and abstractive summarization using BART.",
"In terms of storage, prefix-tuning stores 1000x fewer parameters than full fine-tuning.",
"In terms of performance when trained on full datasets, prefix-tuning and fine-tuning are comparable for table-to-text ( 6.1), while prefix-tuning suffers a small degradation for summarization ( 6.2).",
"In low-data settings, prefix-tuning outperforms finetuning on both tasks ( 6.3).",
"Prefix-tuning also extrapolates better to tables (for table-to-text) and articles (for summarization) with unseen topics ( 6.4).",
"Fine-tuning for natural language generation.",
"Current state-of-the-art systems for natural language generation (NLG) are based on fine-tuning pretrained LMs.",
"For table-to-text generation, Kale (2020) fine-tunes a sequence-to-sequence model (T5; Raffel et al., 2020).",
"For extractive and abstractive summarization, researchers fine-tune masked language models (e.g., BERT; Devlin et al., 2019) and encode-decoder models (e.g., BART; Lewis et al., 2020), respectively (Zhong et al., 2020; Liu and Lapata, 2019; Raffel et al., 2020).",
"For other conditional NLG tasks such as machine translation and dialogue generation, fine-tuning is also the prevalent paradigm (Zhang et al., 2020c; Stickland et al., 2020; Zhu et al., 2020; Liu et al., 2020).",
"In this paper, we focus on table-to-text using GPT-2 and summarization using BART, but prefix-tuning in principle can be applied to other generation tasks and pretrained models, such as masked LMs.",
"Lightweight fine-tuning.",
"Prefix-tuning falls under the broad class of lightweight fine-tuning methods, which freeze most of the pretrained parameters and only tune a smaller set of parameters.",
"The key question is how to augment the LM architecture and decide which subset of pretrained parameters to tune.",
"One line of research learns a task-specific parameter mask (Zhao et al., 2020; Radiya-Dixit and Wang, 2020).",
"Another line of research inserts new modules with trainable parameters.",
"For example, Zhang et al. (2020a) trains a side network that is fused with the pretrained model via summation; adapter-tuning inserts task-specific layers (adapters) between each layer of the pretrained LM (Houlsby et al., 2019; Lin et al., 2020; Rebuffi et al., 2017; Pfeiffer et al., 2020).",
"Compared to this line of work, which tunes around 3 .",
"6% of the LM parameters, our method obtains a further 30x reduction in task-specific parameters, tuning only 0.1% while maintaining comparable performance on table-to-text tasks.",
"Prompting.",
"Prompting is a way of leveraging a pretrained LM by prepending instructions and a few examples to the task input and generating the task output from the LM.",
"For autoregressive LMs, the most successful form of prompting is GPT-3's in-context learning (Brown et al., 2020), which uses manually designed prompts to adapt its generation for different tasks in few-shot settings.",
"For masked LMs like BERT and RoBERTa (Liu et al., 2019), prompt engineering has been explored for natural language understanding tasks (Jiang et al., 2020; Schick and Schutze, 2020).",
"For example, AutoPrompt (Shin et al., 2020) searches for a sequence of discrete trigger words and concatenates it with each input to elicit sentiment or factual knowledge from BERT and RoBERTa.",
"In contrast with AutoPrompt, our method optimizes continuous prefixes, which are more expressive ( 7.2); moreover, we focus on language generation tasks.",
"Continuous vectors have been used to steer LMs; for example, Subramani et al. (2020) showed that a pretrained LSTM language model can reconstruct arbitrary sentences by optimizing a continuous vector for each sentence, making the vector input-specific .",
"In contrast, prefix-tuning optimizes a task-specific prefix that applies to all instances of that task.",
"As a result, unlike the previous work whose application is limited to sentence reconstruction, prefix-tuning can be applied to NLG tasks.",
"Controllable generation.",
"Controllable generation aims to steer a pretrained language model to match a sentence-level attribute (e.g., positive sentiment or sports).",
"Such control can happen at training time: Keskar et al. (2019) pretrains the language model (CTRL) to condition on metadata such as keywords or URLs.",
"The control can also happen at decoding time, by weighted decoding (GeDi, Krause et al., 2020) or iteratively updating the past activations (PPLM, Dathathri et al., 2020).",
"However, there is no straightforward way to apply these controllable generation techniques to enforce fine-grained control over generated contents, as demanded by tasks like table-to-text and summarization.",
"P*-tuning.",
"Prefix tuning is an instance of a new class of methods that has emerged, which we call p*-tuning (since the other prominent instances, p-tuning and prompt-tuning, also start with p), all based on the idea of optimizing a continuous prefix or prompt.",
"Concurrent with our work, Qin and Eisner (2021) learn mixtures of soft fill-in-the-blank prompts to elicit knowledge from LMs such as BERT and BART.",
"Hambardzumyan et al. (2021) learns task-specific embeddings that adapts BERT for sentiment classification.",
"Both works show that tuning soft prompts outperforms previous work, which optimizes over discrete prompts.",
"P-tuning (Liu et al., 2021) shows that jointly updating the prompt embeddings and LM parameters improves GPT-2's performance on natural language understanding tasks, in both few-shot and full data settings.",
"In a followup work, Prompt-tuning (Lester et al., 2021) simplifies our approach and applies it to T5 (Raffel et al., 2020), demonstrating that the performance gap between fine-tuning and p*-tuning vanishes as the model size grows.",
"Consider a conditional generation task where the input x is a context and the output y is a sequence of tokens.",
"We focus on two tasks, shown in Figure 2 (right): In table-to-text, x corresponds to a linearized data table and y is a textual description; in summarization, x is an article and y is a summary.",
"Assume we have an autoregressive neural language model p ( y | x ) parametrized by (e.g., GPT-2; Radford et al., 2019).",
"As shown in Figure 2 (top), let z = [ x ; y ] be the concatenation of x and y ; let X idx denote the sequence of indices that corresponds to x , and Y idx denote the same for y .",
"The activation vector at time step i is h i R d , where h i = [ h (1) i ; ; h ( n ) i ] is a concatenation of all activation layers at this time step, and h ( j ) i is the activation vector of the j -th layer at time step i .",
"1 An autoregressive neural LM computes h i as a function of z i and the past activations in its left context, as follows: h i = LM ( z i , h <i ) , (1) where the last layer of h i is used to compute the distribution for the next token: p ( z i +1 | h i ) = softmax( W h ( n ) i ) and W is a matrix that maps h ( n ) i to logits over the vocabulary.",
"We can also use an encoder-decoder architecture (e.g., BART; Lewis et al., 2020) to model p ( y | x ) , where x is encoded by the bidirectional encoder, and the decoder predicts y autoregressively (condi-tioned on the encoded x and its left context).",
"We use the same indexing and activation notation, as shown in Figure 2 (bottom): each h i for i X idx is computed by the a bidirectional encoder; each h i for i Y idx is computed by an autoregressive decoder using the same equation (1).",
"1 In GPT-2, h ( n ) i consists of a key-value pair, and the dimension of each key and value is 1024 .",
"In the full fine-tuning framework, we initialize with the pretrained parameters .",
"Here p is a trainable language model distribution and we perform gradient updates on the following log-likelihood objective: max log p ( y | x ) = max (cid:88) i Y idx log p ( z i | h <i ) .",
"We propose prefix-tuning as an alternative to full fine-tuning for conditional generation tasks.",
"We first provide intuition in 4.1 before defining our method formally in 4.2.",
"Prompting has demonstrated that conditioning on a proper context can steer the LM without changing its parameters.",
"For example, if we want the LM to generate a word (e.g., Obama), we can prepend its common collocations as context (e.g., Barack), and the LM will assign much higher probability to the desired word.",
"Extending this intuition beyond generating a single word or sentence, we want to find a context that steers the LM to solve an NLG task.",
"Intuitively, the context could influence the encoding of the task input x by guiding what to extract from x , and it could influence the generation of the task output y by steering the next token distribution.",
"However, it's non-obvious whether such a context exists.",
"Using natural language task instructions (e.g., summarize the following table in one sentence) for the context might guide a human to solve the task, but this fails for moderately-sized pretrained LMs.",
"2 Optimizing over the discrete instructions might help, but discrete optimization is computationally challenging.",
"Instead of optimizing over discrete tokens, we can optimize the instruction as continuous word embeddings, whose effects will be propagated upward to all Transformer activation layers and rightward to subsequent tokens.",
"This is strictly more expressive than a discrete prompt which is constrained to the embeddings of real words.",
"Prefix-tuning goes one step further in increasing expressivity by optimizing the activations of all the layers, not just the embedding layer.",
"As another benefit, prefix-tuning can directly modify representations deeper in the network, therefore, avoiding long computation paths across the depth of the network.",
"Prefix-tuning prepends a prefix for an autoregressive LM to obtain z = [ PREFIX ; x ; y ] , or prepends prefixes for both encoder and decoder to obtain z = [ PREFIX ; x ; PREFIX (cid:48) ; y ] , as shown in Figure",
"2. Here, P idx denotes the sequence of prefix indices, and we use | P idx | to denote the length of the prefix.",
"We follow the recurrence relation in equation (1), except that the activations of the prefix indices are free parameters, given by a matrix P (parametrized by ) of dimension | P idx | dim( h i ) .",
"2 In our preliminary experiments, GPT-2 and BART fail in this setting; the only exception is GPT-3.",
"The training objective is the same as equation (2), but the set of trainable parameters changes: the language model parameters are fixed and the prefix parameters are the only trainable parameters.",
"Here, each h i is a function of the trainable P .",
"When i P idx , this is clear because h i copies directly from P .",
"When i (cid:54) P idx , h i still depends on P , because the prefix activations are always in the left context and will therefore affect any activations to the right.",
"Empirically, directly updating the P parameters leads to unstable optimization and a slight drop in performance.",
"3 So we reparametrize the matrix P [ i, :] = MLP ( P (cid:48) [ i, :]) by a smaller matrix ( P (cid:48) ) composed with a large feedforward neural network (MLP ).",
"Now, the trainable parameters include P (cid:48) and the parameters of MLP .",
"Note that P and P (cid:48) has the same number of rows (i.e., the prefix length), but different number of columns.",
"4 Once training is complete, these reparametriza-tion parameters can be dropped, and only the prefix ( P ) needs to be saved.",
"We evaluate on three standard neural generation datasets for the table-to-text task: E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Radev et al., 2020), as shown in Table",
"1. The datasets are ordered by increasing complexity and size.",
"E2E only has 1 domain (i.e. restaurant reviews); WebNLG has 14 domains, and DART is open-domain, using open-domain tables from Wikipedia.",
"For evaluation, we report the metrics using the official evaluation scripts (see details in Appendix A.1).",
"For the summarization task, we use the XSUM (Narayan et al., 2018) dataset, which is an abstractive summarization dataset on news articles.",
"We report ROUGE-1, ROUGE-2 and ROUGE-L.",
"For table-to-text generation, we compare prefix-tuning with three other methods: full fine-tuning",
"3 We find in preliminary experiments that directly optimizing the prefix is very sensitive to initialization.",
"4 P has dimensions | P idx | dim( h i ) while P has dimensions | P idx | k .",
"We choose k = 512 for table-to-text and 800 for summarization.",
"MLP maps from k to dim( h i ) .",
"(FT-FULL ), fine-tuning only the top 2 layers (FT-TOP 2), and adapter-tuning (ADAPTER ).",
"5 We also report the current state-of-the-art results on these datasets: On E2E, Shen et al. (2019) uses a pragmatically informed model without pretraining.",
"On WebNLG, Kale (2020) fine-tunes T5-large.",
"On DART, no official models trained on this dataset version are released.",
"6 For summarization, we compare against fine-tuning BART (Lewis et al., 2020).",
"For table-to-text, we use GPT-2 MEDIUM and GPT-2 LARGE .",
"For summarization, we use BARTLARGE .",
"Our implementation is based on the Hugging Face Transformers (Wolf et al., 2020).",
"At training time, we use the AdamW optimizer (Loshchilov and Hutter, 2019) and a linear learning rate scheduler, as suggested by the Hugging Face default setup.",
"The hyperparameters we tune include the number of epochs, batch size, learning rate, and prefix length.",
"Hyperparameter details are in the appendix.",
"The default setting is 10 epochs, batch size 5 , learning rate 5 10 5 and prefix length 10 .",
"The table-to-text models are trained on TITAN Xp or GeForce GTX TITAN X machines.",
"Prefix-tuning takes 0 .",
"2 hours per epoch to train on 22K examples, whereas fine-tuning takes around 0 .",
"3 hours per epoch.",
"The summarization models are trained on Tesla V100 machines, taking 1 .",
"25 hours per epoch on the XSUM dataset.",
"For time efficiency , prefix-tuning is around 30% faster than fine-tuning.",
"For GPU memory efficiency , prefix-tuning with batchsize 1 takes 18% of the total GPU memory, whereas fine-tuning takes 50%.",
"At decoding time, for table-to-text, we use beam search with beam size 5 .",
"For summarization, we use beam size 6 and length normalization 0 .",
"8 .",
"Decoding takes 1 .",
"2 seconds per sentence (without 5 Same implementation as Lin et al. (2020).",
"batching) for table-to-text, and 2 .",
"6 seconds per batch (using a batch size of 10) for summarization.",
"We find that by updating only 0.1% task-specific parameters, 7 prefix-tuning is effective in table-to-text generation, outperforming other lightweight baselines (ADAPTER and FT-TOP 2) even by updating 30x fewer parameters and achieving a comparable performance with (full) fine-tuning.",
"This trend holds for all datasets: E2E, WebNLG, 8 and DART.",
"If we match the number of parameters for prefix-tuning and adapter-tuning to be 0.1%, Table 2 shows that prefix-tuning is significantly better than ADAPTER (0.1%), attaining 4 .",
"1 BLEU improvement per dataset on average.",
"Even when we compare with fine-tuning (100%) and adapter-tuning (3.0%), which update significantly more parameters than prefix-tuning, prefix-tuning still achieves results comparable or better than those two systems.",
"This demonstrates that prefix-tuning is more Pareto efficient than adapter-tuning, significantly reducing parameters while improving generation quality.",
"Additionally, attaining good performance on DART suggests that prefix-tuning can generalize to tables with diverse domains and a large number of relations.",
"We will delve deeper into extrapolation performance (i.e., generalization to unseen categories or topics) in 6.4.",
"In summary, prefix-tuning is an effective and space-efficient method to adapt GPT-2 to table-to-text generation.",
"It also maintains the performance gains when scaling up to GPT-2 LARGE , suggesting it has the potential to scale to even larger models with a similar architecture, like GPT-3.",
"As shown in Table 3, with 2% parameters, prefix-tuning obtains slightly lower performance than finetuning (36.05 vs. 37.25 in ROUGE-L).",
"With only 0.1% parameters, prefix-tuning underperforms full fine-tuning (35.05 vs. 37.25).",
"There are several differences between XSUM and the three table-to-text datasets which could account for why prefix-tuning has comparative advantage in table-to-text: 7 250K for E2E, 250K for WebNLG, and 500K for DART versus 345M GPT-2 parameters.",
"8 The S,U,A columns in WebNLG represents SEEN, UNSEEN, and ALL respectively; SEEN categories appear at training time; UNSEEN categories only appears at test time; and ALL is the combination of the two.",
"(1) XSUM contains 4x more examples than the three table-to-text datasets on average; (2) the input articles are 17x longer than the linearized table input of table-to-text datasets on average; (3) summarization is more complex than table-to-text because it requires selecting key contents from an article.",
"Based on the results from table-to-text ( 6.1) and summarization ( 6.2), we observe that prefix-tuning has a comparative advantage when the number of training examples is smaller.",
"To explore the low-data setting more systematically, we subsample the full dataset (E2E for table-to-text and XSUM for summarization) to obtain small datasets of size { 50 , 100 , 200 , 500 } .",
"For each size, we sample 5 different datasets and average over 2 training random seeds.",
"Thus, we average over 10 models for each low-data setting.",
"9 Figure 3 (right) shows that prefix-tuning outperforms fine-tuning in low-data regimes by 2 .",
"9 BLEU on average, in addition to requiring much fewer parameters, but the gap narrows as the dataset size increases.",
"Qualitatively, Figure 3 (left) shows 8 examples generated by both prefix-tuning and fine-tuning models trained on different data levels.",
"While both methods tend to undergenerate (missing table contents) in low data regimes, prefix-tuning tends to be more faithful than fine-tuning.",
"For example, finetuning (100, 200) 10 falsely claims a low customer rating while the true rating is average, whereas prefix-tuning (100, 200) generates a description that is faithful to the table.",
"We now investigate extrapolation performance to unseen topics for both table-to-text and summarization.",
"In order to construct an extrapolation setting, we split the existing datasets so that training and test cover different topics.",
"For table-to-text, the WebNLG dataset is labeled with table topics.",
"There are 9 categories that appear in training and dev, denoted as SEEN and 5 categories that only appear at test time, denoted as UNSEEN.",
"So we evaluate extrapolation by training on the SEEN categories and testing on the UNSEEN categories.",
"For summarization, we construct two extrapolation data splits: 9 We also sample a dev split (with dev size = 30% training size) for each training set.",
"training data size training data size Figure 3: (Left) qualitative examples in lowdata settings.",
"(Right) prefix-tuning (orange) outperforms fine-tuning (blue) in low-data regimes in addition to requiring many fewer parameters.",
"The top two plots correspond to summarization, measured by ROUGE-1 and ROUGE-2.",
"The bottom two plots correspond to table-to-text, measured by BLEU and ROUGE-L.",
"The x-axis is the training size and the y-axis is the evaluation metric (higher is better).",
"In news-to-sports , we train on news articles and test on sports articles.",
"In within-news , we train on { world, UK, business } news and test on the remaining news categories (e.g., health, tech).",
"On both table-to-text and summarization, prefix-tuning extrapolates better than fine-tuning under all metrics, as shown in Table 4 and the U' columns of Table 2 (middle).",
"We also find that adapter-tuning achieves good extrapolation performance, comparable with prefix-tuning, as shown in Table",
"2. This shared trend suggests that preserving LM parameters indeed has a positive impact on extrapolation.",
"However, how prefix-tuning improves extrapolation is an open question and we will discuss this further in 8.",
"We compare different variants of prefix-tuning to study the impact of various design decisions.",
"7.1 studies the impact of the prefix length.",
"7.2 studies tuning only the embedding layer, which is more akin to tuning a discrete prompt.",
"7.3 compares prefixing and infixing, which inserts trainable activations between x and y .",
"7.4 studies the impact of various prefix initialization strategies.",
"7.5 further studies the data efficiency of prefix-tuning.",
"A longer prefix means more trainable parameters, and therefore more expressive power.",
"11 Figure 4 shows that performance increases as the prefix 11 Empirically, longer prefixes have a negligible impact on training and inference speed per batch, because attention computation over the entire prefix is parallellized on GPUs.",
"length increases up to a threshold ( 200 for summarization, 10 for table-to-text) and then a slight performance drop occurs.",
"Prefixes longer than the threshold lead to lower training loss, but slightly worse test performance, suggesting that they tend to overfit the training data.",
"Recall in 4.1, we discussed optimizing the continuous embeddings of the virtual tokens.",
"We instantiate that idea and call it embedding-only .",
"The word embeddings are free parameters, and the remaining activation layers are computed by the Transformer.",
"Table 5 (top) shows that the performance drops significantly, suggesting that tuning only the embedding layer is not sufficiently expressive.",
"Embedding-only upper bounds the performance of discrete prompt optimization (Shin et al., 2020), because discrete prompt restricts the embedding layer to exactly match the embedding of a real word.",
"Consequently, we have this chain of increasing expressive power: discrete prompting < embedding-only < prefix-tuning.",
"prefix-tuning, we place them at the beginning [ PREFIX ; x ; y ] .",
"We can also place the trainable activations between x and y (i.e. [ x ; INFIX ; y ] ) and call this infix-tuning.",
"Table 5 (bottom) shows that infix-tuning slightly underperforms prefix-tuning.",
"We believe this is because prefix-tuning can affect the activations of x and y whereas infix-tuning can only influence the activations of y .",
"We find that how the prefix is initialized has a large impact in low-data settings.",
"Random initialization leads to low performance with high variance.",
"Initializing the prefix with activations of real words significantly improves generation, as shown in Figure 5.",
"In particular, initializing with task relevant words such as summarization and table-to-text obtains slightly better performance than task irrelevant words such as elephant and divide, but using real words is still better than random.",
"Moreover, in full data settings, the initialization trick has no impact, and random initialization leads to equally good performance.",
"Since we initialize the prefix with activations of real words computed by the LM, this initialization strategy is concordant with prefix-tuning's philosophy, which preserves the pretrained LM as much as possible.",
"We also investigate the data efficiency of prefix-tuning (without initialization trick, a.k.a random initialization) and full fine-tuning by comparing their performance on 5 different data scales of the E2E task (10%, 20%, 40%, 60%, and 80%).",
"Figure 6 shows that prefix-tuning has better performance than fine-tuning when using more than 20% of the data.",
"For data scale of 10%, prefix-tuning with random initialization yields comparable or slightly lower performance than full fine-tuning, 20 40 60 80 percentage of training data 65 66 67 68 69 BLEU FT-full Prefix 20 40 60 80 percentage of training data 68 69 70 71 ROUGE-L FT-full Prefix Figure 6: Data efficiency curves: percentage of training set vs. performance on table-to-text (E2E).",
"Personalization.",
"As we note in 1, prefix-tuning is advantageous when there are a large number of tasks that needs to be trained independently.",
"One practical setting is user privacy (Shokri and Shmatikov, 2015; McMahan et al., 2016).",
"In order to preserve user privacy, each user's data needs to be separated and a personalized model needs to be trained independently for each user.",
"Consequently, each user can be regarded as an independent task.",
"If there are millions of users, prefix-tuning can scale to this setting and maintain modularity, enabling flexible addition or deletion of users by adding or deleting their prefixes without cross-contamination.",
"Batching across users.",
"Under the same personalization setting, prefix-tuning allows batching different users' queries even though they are backed by different prefixes.",
"When multiple users query a cloud GPU device with their inputs, it is computationally efficient to put these users in the same batch.",
"Prefix-tuning keeps the shared LM intact; consequently, batching requires a simple step of prepending the personalized prefix to user input, and all the remaining computation is unchanged.",
"In contrast, we can't batch across different users in adapter-tuning, which has personalized adapters between shared Transformer layers.",
"This batching benefit could also help create efficient ensembles of multiple prefixes trained on the same task (Lester et al., 2021).",
"Inductive bias of prefix-tuning.",
"Recall that finetuning updates all pretrained parameters, whereas prefix-tuning and adapter-tuning preserve them.",
"Since the language models are pretrained on general purpose corpora, preserving the LM parameters might help generalization to domains unseen during training.",
"In concordance with this intuition, we observe that both prefix-tuning and adapter-tuning have significant performance gain in extrapolation settings ( 6.4); however, how these methods improve extrapolation is an open question.",
"While prefix-tuning and adapter-tuning both freeze the pretrained parameters, they tune different sets of parameters to affect the activation layers of the Transformer.",
"Recall that prefix-tuning keeps the LM intact and uses the prefix and the pretrained attention blocks to affect the subsequent activations; adapter-tuning inserts trainable modules between LM layers, which directly add residual vectors to the activations.",
"Moreover, we observe that prefix-tuning requires vastly fewer parameters compared to adapter-tuning while maintaining comparable performance.",
"We think this gain in parameter efficiency is because prefix-tuning keeps the pretrained LM intact as much as possible, and therefore exploits the LM more than adapter-tuning.",
"Recent work by Aghajanyan et al. (2020) uses intrinsic dimension to show that there exists a low-dimensional reparameterization that is as effective for fine-tuning as the full parametrization.",
"This explains why good accuracy on downstream tasks can be obtained by updating only a small number of parameters.",
"Our work echoes this finding by showing that good generation performance can also be attained by updating a very small prefix.",
"However, prefix-tuning is not just about the size of trainable parameters, but more importantly, which subset of parameters to modify.",
"Therefore, it would be interesting future work to explore other lightweight fine-tuning methods that achieve an even better accuracy-size tradeoff.",
"We thank the members of p-lambda group as well as anonymous reviewers for valuable feedback.",
"We gratefully acknowledge the support of a PECASE award.",
"XLL is supported by a Stanford Graduate Fellowship."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"The ability to control for the kinds of information encoded in neural representation has a variety of use cases, especially in light of the challenge of interpreting these models.",
"We present Iterative Null-space Projection (INLP), a novel method for removing information from neural representations.",
"Our method is based on repeated training of linear classifiers that predict a certain property we aim to remove, followed by projection of the representations on their null-space.",
"By doing so, the classifiers become oblivious to that target property, making it hard to linearly separate the data according to it.",
"While applicable for multiple uses, we evaluate our method on bias and fairness use-cases, and show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.",
"What is encoded in vector representations of textual data, and can we control it?",
"Word embeddings, pre-trained language models, and more generally deep learning methods emerge as very effective techniques for text classification.",
"Accordingly, they are increasingly being used for predictions in real-world situations.",
"A large part of the success is due to the models' ability to perform representation learning , coming up with effective feature representations for the prediction task at hand.",
"However, these learned representations, while effective, are also notoriously opaque: we do not know what is encoded in them.",
"Indeed, there is an emerging line of work on probing deep-learning derived representations for syntactic (Linzen et al., 2016; Hewitt and Manning, 2019; Goldberg, 2019), semantic (Tenney et al., 2019) and factual knowledge (Petroni et al., 2019).",
"There is also evidence that they capture a lot of information regarding the Figure 1: t-SNE projection of GloVe vectors of the most gender-biased words after t=0, 3, 18, and 35 iterations of INLP.",
"What can we do in situations where we do not want our representations to encode certain kinds of information?",
"For example, we may want a word representation that does not take tense into account, or that does not encode part-of-speech distinctions.",
"We may want a classifier that judges the formality of the text, but which is also oblivious to the topic the text was taken from.",
"Finally, and also our empirical focus in this work, this situation often arises when considering fairness and bias of language-based classification.",
"We may not want our word-embeddings to encode gender stereotypes, and we do not want sensitive decisions on hiring or loan approvals to condition on the race , gender or age of the applicant.",
"We present a novel method for selectively removing specific kinds of information from a representation.",
"Previous methods are either based on projection on a pre-specified, user-provided direction (Bolukbasi et al., 2016), or on adding an adversarial objective to an end-to-end training process (Xie et al., 2017).",
"Both of these have benefits and limitations, as we discuss in the related work section ( 2).",
"Our proposed method, Iterative Nullspace Projection (INLP), presented in section 4, can be seen as a combination of these approaches, capitalizing on the benefits of both.",
"Like the projection methods, it is also based on the mathematical notion of linear projection, a commonly used deterministic operator.",
"Like the adversarial methods, it is data-driven in the directions it removes: we do not presuppose specific directions in the latent space that correspond to the protected attribute, but rather learn those directions, and remove them.",
"Empirically, we find it to work well.",
"We evaluate the method on the challenging task of removing gender signals from word embeddings (Bolukbasi et al., 2016; Zhao et al., 2018).",
"Recently, Gonen and Goldberg (2019) showed several limitations of current methods for this task.",
"We show that our method is effective in reducing many, but not all, of these ( 4).",
"We also consider the context of fair classification, where we want to ensure that a classifier's decision is oblivious to a protected attribute such as race, gender or age.",
"There, we need to integrate the projection-based method within a pre-trained classifier.",
"We propose a method to do so in section 5, and demonstrate its effectiveness in a controlled setup ( 6.2) as well as in a real-world one ( 6.3).",
"Finally, while we propose a general purpose information-removal method, our main evaluation is in the realm of bias and fairness applications.",
"We stress that this calls for some stricter scrutiny, as the effects of blindly trusting strong claims can have severe real-world consequences on individuals.",
"We discuss the limitations of our model in the context of such applications in section 7.",
"The objective of controlled removal of specific types of information from neural representation is tightly related to the task of disentanglement of the representations (Bengio et al., 2013; Mathieu et al., 2016), that is, controlling and separating the different kinds of information encoded in them.",
"In the context of transfer learning, previous methods have pursued representations which are invariant to some properties of the input, such as genre or topic, in order to ease domain transfer (Ganin and Lempitsky, 2015).",
"Those methods mostly rely on adding an adversarial component (Goodfellow et al., 2014; Ganin and Lempitsky, 2015; Xie et al., 2017; Zhang et al., 2018) to the main task objective: the representation is regularized by an adversary network, that competes against the encoder, trying to extract the protected information from its representation.",
"While adverserial methods showed impressive performance in various machine learning tasks, and were applied for the goal of removal of sensitive information (Elazar and Goldberg, 2018; Coavoux et al., 2018; Resheff et al., 2019; Barrett et al., 2019), they are notoriously hard to train.",
"Elazar and Goldberg (2018) have evaluated adverserial methods for the removal of demographic information from representations.",
"They showed that the complete removal of the protected information is nontrivial: even when the attribute seems protected, different classifiers of the same architecture can often still succeed in extracting it.",
"Another drawback of these methods is their reliance on a main-task loss in addition to the adverserial loss, making them less suitable for tasks such as debiasing pre-trained word embeddings.",
"Xu et al. (2017) utilized a nullspace cleaning operator for increasing privacy in classifiers.",
"They remove from the input a subspace that contains (but is not limited to) the nullspace of a pre-trained classifier, in order to clean information that is not used for the main task (and might be protected), while minimally impairing classification accuracy.",
"While similar in spirit to our method, several key differences exist.",
"As the complementary setting removing the nullsapce of the main-task classifier vs. projection onto the nullspace of protected attribute classifiers aims to achieve a distinct goal (privacy preserving), there is no notion of exhaustive cleaning.",
"Furthermore, they do not remove protected attributes that are used by the classifier (e.g. when it conditions on gender).",
"A recent line of work focused on projecting the representation to a subspace which does not encode the protected attributes.",
"Under this method, one identifies directions in the latent space that correspond to the protected attribute, and removes them.",
"In a seminal work, Bolukbasi et al. (2016) aimed to identify a gender subspace in word-embedding space by calculating the main directions in a subspace spanned by the differences between gendered word pairs, such as # he # she .",
"They suggested to zero out the components of neutral words in the direction of the gender subspace first principle components, and actively pushed neutral words to be equally distant from male and female-gendered words.",
"However, Gonen and Goldberg (2019) have shown that these methods only cover up the bias and that in fact, the information is deeply ingrained in the representations.",
"A key drawback of this approach is that it relies on an intuitive selection of a few (or a single) gender directions, while, as we Figure 2: Nullspace projection for a 2-dimensional binary classifier.",
"reveal in our experiments, the gender subspace is actually spanned by dozens to hundreds of orthogonal directions in the latent space, which are not necessarily as interpretable as the # he # she direction.",
"This observation aligns with the analysis of Ethayarajh et al. (2019) who demonstrated that debiasing by projection is theoretically effective provided that one removes all relevant directions in the latent space.",
"Our main goal is to guard sensitive information, so that it will not be encoded in a representation.",
"Given a set of vectors x i R d , and corresponding discrete attributes Z , z i { 1 , ..., k } (e.g. race or gender), we aim to learn a transformation g : R d R d , such that z i cannot be predicted from g ( x i ) .",
"In this work we are concerned with linear guarding: we seek a guard g such that no linear classifier w ( ) can predict z i from g ( x i ) with an accuracy greater than that of a decision rule that considers only the proportion of labels in Z .",
"We also wish for g ( x i ) to stay informative: when the vectors x are used for some end task, we want g ( x ) to have as minimal influence as possible on the end task performance, provided that z remains guarded.",
"We use the following definitions: Guarded w.r.t. a hypothesis class Let X = x 1 , ..., x m X R d be a set of vectors, with corresponding discrete attributes Z , z i { 1 , ..., k } .",
"We say the set X is guarded for Z with respect to hypothesis class H (conversely Z is guarded in X ) if there is no classifier W H that can predict z i from x i at better than guessing the majority class.",
"Guarding function A function g : R n R n is said to be guarding X for Z (w.r.t. to class H ) if the set { g ( x ) | x X } is guarded for Z w.r.t. to H .",
"We use the term linearly guarded to indicate guarding w.r.t. to the class of all linear classifiers.",
"Given a set of vectors x i R d and a set of corresponding discrete 1 protected attributes z i Z , we seek a linear guarding function g that remove the linear dependence between Z and X .",
"We begin with a high-level description of our approach.",
"Let c be a trained linear classifier, parameterized by a matrix W R k d , that predicts a property z with some accuracy.",
"We can construct a projection matrix P such that W ( P x ) = 0 for all x , rendering W useless on dataset X .",
"We then iteratively train additional classifiers W (cid:48) and perform the same procedure, until no more linear information regarding Z remains in X .",
"Constructing P is achieved via nullspace projection, as described below.",
"This method is the core of the INLP algorithm (Algorithm 1).",
"Nullspace Projection The linear interaction between W and a new test point x has a simple geometric interpretation: x is projected on the subspace spanned by W 's rows, and is classified according to the dot product between x and W 's rows, which is proportional to the components of x in the direction of W 's rowpsace.",
"Therefore, if we zeroed all components of x in the direction of W 's row-space, we removed all information used by W for prediction: the decision boundary found by the classifier is no longer useful.",
"As the orthogonal component of the rowspace is the nullspace, zeroing those components of x is equivalent to projecting x on W 's nullspace.",
"Figure 2 illustrates the idea for the 2 dimensional binary-classification setting, in which W is just a 2-dimensional vector.",
"For an algebraic interpretation, recall that the null-space of a matrix W is defined as the space N ( W ) = { x | W x = 0 } .",
"Given the basis vectors of N ( W ) we can construct a projection matrix PN ( W ) into N ( W ) , yielding W ( PN ( W ) x ) = 0 x .",
"This suggests a simple method for rendering z linearly guarded for a set of vectors X : training a linear classifier that is parameterized by W 0 to predict Z from X , calculating its nullspace, finding the orthogonal projection matrix PN ( W 0 ) onto the nullspace, and using it to remove from X those components that were used by the classifier for predicting Z .",
"1 While this work focuses on the discrete case, the extension to a linear regression setting is straightforward: A projection to the nullspace of a linear regressor w enforces wx = 0 for every x , i.e., each input is regressed to the non-informative value of zero.",
"Note that the orthogonal projection PN ( w 0 ) is the least harming linear operation to remove the linear information captured by W 0 from X , in the sense that among all maximum rank (which is not full, as such transformations are invertiblehence not linearly guarding) projections onto the nullspace of W 0 , it carries the least impact on distances.",
"This is so since the image under an orthogonal projection into a subspace is by definition the closest vector in that subspace.",
"Iterative Projection Projecting the inputs X on the nullspace of a single linear classifier does not suffice for making Z linearly guarded: classifiers can often still be trained to recover z from the projected x with above chance accuracy, as there are often multiple linear directions (hyper-planes) that can partially capture a relation in multidimensional space.",
"This can be remedied with an iterative process: After obtaining PN ( W 0 ) , we train classifier W 1 on PN ( W 0 ) X , obtain a projection matrix PN ( W 1 ) , train a classifier W 2 on PN ( W 1 ) PN ( W 0 ) X and so on, until no classifier W m +1 can be trained.",
"We return the guarding projection matrix P = PN ( W m ) PN ( W m 1 ) ...P N ( W 0 ) , with the guarding function g ( x ) = P x .",
"Crucially, the i th classifier W i is trained on the data X after the projection on the nullspaces of classifiers W 0 , ..., W i 1 and is therefore trained to find separating planes that are independent of the separating planes found by previous classifiers.",
"In Appendix A.1 we prove three desired proprieties of INLP: (1) any two protected-attribute classifiers found in INLP are orthogonal (Lemma A.1); (2) while in general the product of projection matrices is not a projection, the product P calculated in INLP is a valid projection (Corollary A.1.2); and (3) it projects any vector to the intersection of the nullspaces of each of the classifiers found in INLP, that is, after n INLP iterations, P is a projection to N ( W 0 ) N ( W 1 ) N ( W n ) (Corollary A.1.3).",
"We further bound the damage P causes to the structure of the space (Lemma A.2).",
"INLP can thus be seen as a linear dimensionality-reduction method, which keeps only those directions in the latent space which are not indicative of the protected attribute.",
"During iterative nullspace projection, the property z becomes increasingly linearly-guarded in P x .",
"For binary protected attributes, each intermediate W j is a vector, and the nullspace rank is d 1 .",
"Therefore, after n iterations, if the original rank of X was r , the rank of the projected input g ( X ) is at least r n .",
"The entire process is formalized in Algorithm",
"1. Algorithm 1 Iterative Nullspace Projection (INLP) Input : ( X, Z ) : a training set of vectors and protected attributes n: Number of rounds Result: A projection matrix P Function GetProjectionMatrix( X, Z ) : X projected X P I for i 1 to n do W i TrainClassifier( X projected , Z ) B i GetNullSpaceBasis( W i ) PN ( W i ) B i Bi TP PN ( W i ) P X projected PN ( W i ) X projected end return P INLP bears similarities to Partial Least Squares (PLS; Geladi and Kowalski (1986); Barker and Rayens (2003)), an established regression method.",
"Both iteratively find directions that correspond to Z : while PLS does so by maximizing covariance with Z (and is thus less suited to classification), INLP learns the directions by training classifiers with an arbitrary loss function.",
"Another difference is that INLP focuses on learning a projection that neutralizes Z , while PLS aims to learn low-dimensional representation of X that keeps information on Z .",
"Implementation Details A naive implementation of Algorithm 1 is prone to numerical errors, due to the accumulative projection-matrices multiplication P PN ( W i ) P .",
"To mitigate that, we use the formula of Ben-Israel (2015), which connects the intersection of nullspaces with the projection matrices to the corresponding rowspaces: N ( w 1 ) N ( w n ) = N ( PR ( w 1 ) + + PR ( w n )) (1) Where PR ( W i ) is the orthogonal projection matrix to the row-space of a classifier W i .",
"Accordingly, in practice, we do not multiply P PN ( W i ) P but rather collect rowspace projection matrices PR ( W i ) for each classifier W i .",
"In place of each input projection X projected PN ( W i ) X projected , we recalculate P := PN ( w 1 ) ... N ( w i ) according to 1, and perform a projection X projected P X .",
"Upon termination, we once again apply 1 to return the final nullspace projection matrix PN ( W 1 ) ... N ( W n ) .",
"The code is publicly available.",
"2 5 Application to Fair Classification The previous section described the INLP method for producing a linearly guarding function g for a set of vectors.",
"We now turn to describe its usage in the context of providing fair classification by a (possibly deep) neural network classifier.",
"In this setup, we are given, in addition to X and Z also labels Y , and wish to construct a classifier f : X Y , while being fair with respect to Z .",
"Fairness in classification can be defined in many ways (Hardt et al., 2016; Madras et al., 2019; Zhang et al., 2018).",
"We focus on a notion of fairness by which the predictor f is oblivious to Z when making predictions about Y .",
"To use linear guardedness in the context of a deep network, recall that a classification network f ( x ) can be decomposed into an encoder enc followed by a linear layer W : f ( x ) = W enc ( x ) , where W is the last layer of the network and enc is the rest of the network.",
"If we can make sure that Z is linearly guarded in the inputs to W , then W will have no knowledge of Z when making its prediction about Y , making the decision process oblivious to Z .",
"Adversarial training methods attempt to achieve such obliviousness by adding an adversarial objective to make enc ( x ) itself guarding.",
"We take a different approach and add a guarding function on top of an already trained enc .",
"We propose the following procedure.",
"Given a training set X , Y and protected attribute Z , we first train a neural network f = W enc ( X ) to best predict Y .",
"This results in an encoder that extracts effective features from X for predicting Y .",
"We then consider the vectors enc ( X ) , and use the INLP method to produce a linear guarding function g that guards Z in enc ( X ) .",
"At this point, we can use the classifier W g ( enc ( x )) to produce oblivious decisions, however by introducing g (which is lower rank than enc ( x ) ) we may have harmed W s performance.",
"We therefore freeze the network and fine-tune only W to predict Y from g ( enc ( x )) , producing the final fair classifier f (cid:48) ( x ) = W (cid:48) g ( enc ( x )) .",
"Notice that W (cid:48) only sees vectors which are linearly guarded for Z during its training, and therefore cannot take Z into ac-2 https://github.com/Shaul1321/nullspace projection count when making its predictions, ensuring fair classification.",
"We note that our notion of fairness by obliviousness does not, in the general case, correspond to other fairness metrics, such as equality of odds or of opportunity.",
"It does, however, correlate with fairness metrics, as we demonstrate empirically.",
"Further refinement.",
"Guardedness is a property that holds in expectation over an entire dataset.",
"For example, when considering a dataset of individuals from certain professions (as we do in 6.3), it is possible that the entire dataset is guarded for gender, yet if we consider only a subset of individuals (say, only nurses), we may still be able to recover gender with above majority accuracy, in that sub-population.",
"As fairness metrics are often concerned with classification behavior also within groups, we propose the following refinement to the algorithm, which we use in the experiments in 6.2 and 6.3: in each iteration, we train a classifier to predict the protected attribute not on the entire training set, but only on the training examples belonging to a single (randomly chosen) main-task class (e.g. profession).",
"By doing so, we push the protected attribute to be linearly guarded in the examples belonging to each of the main-task labels.",
"In the first set of experiments, we evaluate the INLP method in its ability to debias word embeddings (Bolukbasi et al., 2016).",
"After debiasing the embeddings, we repeat the set of diagnostic experiments of Gonen and Goldberg (2019).",
"Data.",
"Our debiasing targets are the uncased version of GloVe word embeddings (Zhao et al., 2018), after limiting the vocabulary to the 150,000 most common words.",
"To obtain labeled data X , Z for this classifier, we use the 7,500 most male-biased and 7,500 most female-biased words (as measured by the projection on the # he # she direction), as well as 7,500 neutral vectors, with a small component (smaller than 0.03) in the gender direction.",
"The data is randomly divided into a test set (30%), and training and development sets (70%, further divided into 70% training and 30% development examples).",
"Classification.",
"Initially, a linear SVM classifier perfectly discriminates between the two genders (100% accuracy).",
"The accuracy drops to 49.3% following INLP.",
"To measure to what extent gender is still encoded in a nonlinear way, we train a 1-layer ReLU-activation MLP.",
"The MLP recovers gender with accuracy of 85.0%.",
"This is expected, as the INLP method is only meant to achieve linear guarding 3 .",
"Human-selected vs. Learned Directions.",
"Our method differs from the common projection-based approach by two main factors: the numbers of directions we remove, and the fact that those directions are learned iteratively from data.",
"Perhaps the benefit is purely due to removing more directions?",
"We compare the ability to linearly classify words by gender bias after removing 10 directions by our method (running Algorithm 1 for 10 iterations) with the ability to do so after removing 10 manually-chosen directions defined by the difference vectors between gendered pairs 4 .",
"INLP-based debiasing results in a very substantial drop in classification accuracy (54.4%), while the removal of the predefined directions only moderately decreases accuracy (80.7%).",
"This shows that data-driven identification of gender-directions outperforms manually selected directions: there are many subtle ways in which gender is encoded, which are hard for people to imagine.",
"Discussion.",
"Both the previous method and our method start with the main gender-direction of # he # she .",
"However, while previous attempts take this direction as the information that needs to be neutralized, our method instead considers the labeling induced by this gender direction, and then iteratively finds and neutralizes directions that correlate with this labeling.",
"It is likely that the # he # she direction is one of the first to be removed, but we then go on and learn a set of other directions that correlate with the same labeling and which are predictive of it to some degree, neutralizing each of 3 Interestingly, RBF-kernel SVM (used by Gonen and Goldberg (2019)) achieves random accuracy.",
"4 We use the following pairs, taken from Bolukbasi et al. (2016): (woman, man), (girl, boy), (she, he), (mother, father), (daughter, son), (gal, guy), (female, male), (her, his), (herself, himself), (mary, john).",
"them in turn.",
"Compared to the 10 manually identi-fied gender-directions from Bolukbasi et al. (2016), it is likely that our learned directions capture a much more diverse and subtle set of gender clues in the embedding space.",
"Effect of debiasing on the embedding space.",
"In appendix A.2 we provide a list of 40 random words and their closest neighbors, before and after INLP, showing that INLP doesn't significantly damage the representation space that encodes lexical semantics.",
"We also include a short analysis of the influence on a specific subset of inherently gendered words: gendered surnames (Appendix A.4).",
"Additionally, we perform a semantic evaluation of the debiased embeddings using multiple word similarity datasets (e.g. SimLex-999 (Hill et al., 2015)).",
"We find large improvements in the quality of the embeddings after the projection (e.g. on SimLex-999 the correlation with human judgements improves from 0.373 to 0.489) and we elaborate more on these findings in Appendix A.3.",
"Clustering.",
"Figure 1 shows t-SNE (Maaten and Hinton, 2008) projections of the 2,000 most female-biased and 2,000 most male-biased words, originally and after t = 3 , t = 18 and t = 35 projection steps.",
"The results clearly demonstrate that the classes are no longer linearly separable: this behavior is qualitatively different from previous word vector debiasing methods, which were shown to maintain much of the proximity between female and male-biased vectors (Gonen and Goldberg, 2019).",
"To quantify the difference, we perform K-means clustering to K = 2 clusters on the vectors, and calculate the V-measure (Rosenberg and Hirschberg, 2007) which assesses the degree of overlap between the two clusters and the gender groups.",
"For the t-SNE projected vectors, the measure drops from 83.88% overlap originally, to 0.44% following the projection; and for the original space, the measure drops from 100% to 0.31%.",
"WEAT.",
"While our method does not guarantee attenuating the bias-by-neighbors phenomena that is discussed in Gonen and Goldberg (2019), it is still valuable to quantify to what extent it does mitigate this phenomenon.",
"We repeat the Word Embedding Association Test (WEAT) from Caliskan et al. (2017) which aims to measure the association in vector space between male and female concepts and stereotypically male or female professions.",
"Following Gonen and Goldberg (2019), we represent the male and female groups with common names of males and females, rather than with explicitly gendered words (e.g. pronouns).",
"Three tests evaluate the association between a group of male names and a groups of female names to (1) career and family-related words; (2) art and mathematics words; and (3) artistic and scientific fields.",
"In all three tests, we find that the strong association between the groups no longer exists after the projection (non-significant p-values of 0.855, 0.302 and 0.761, respectively).",
"Bias-by-Neighbors.",
"To measure bias-by-neighbors as discussed in (Gonen and Goldberg, 2019), we consider the list of professions provided in (Bolukbasi et al., 2016) and measure the correlation between bias-by projection and bias by neighbors, quantified as the percentage of the top 100 neighbors of each profession which were originally biased-by-projection towards either of the genders.",
"We find strong correlation of 0.734 (compared with 0.852 before), indicating that much of the bias-by-neighbors remains.",
"5 6.2 Fair Classification: Controlled Setup We now evaluate using INLP with a deeper classifier, with the goal of achieving fair classification.",
"Classifier bias measure: TPR-GAP.",
"To measure the bias in a classifier, we follow De-Arteaga et al. (2019) and use the TPR-GAP measure.",
"This measure quantifies the bias in a classifier by considering the difference (GAP) in the True Positive Rate (TPR) between individuals with different protected attributes (e.g. gender/race).",
"The TPR-GAP is tightly related to the notion of fairness by equal opportunity (Hardt et al., 2016): a fair classifier is expected to show similar success in predicting the task label Y for the two populations, when conditioned on the true class.",
"Formally, for a binary protected attribute z and a true class y , define: T P R z,y = P [ Y = y | Z = z , Y = y ] (2) GAP TPRz,y = T P R z,y T P R z (cid:48) ,y (3) where Z is a random variable denoting binary protected attribute, z and z (cid:48) denote its two values, and 5 Note that if, for example, STEM-related words are originally biased towards men, the word chemist after the projection may still be regarded as male-biased by neighbors, not because an inherent bias but due to its proximity to other originally biased words (e.g. other STEM professions).",
"Experiment setup.",
"We begin by experimenting with a controlled setup, where we control for the proportion of the protected attributes within each main-task class.",
"We follow the setup of Elazar and Goldberg (2018) which used a twitter dataset, collected by Blodgett et al. (2016), where each tweet is associated with race information and a sentiment which was determined by their belonging to some emoji group.",
"Naturally, the correlation between the protected class labels and the main-class labels may influence the fairness of the model, as high correlation can encourage the model to condition on the protected attributes.",
"We measure the TPR-GAP on predicting sentiment for the different race groups (African American English (AAE) speakers and Standard American English (SAE) speakers), with different imbalanced conditions, with and without application of our classifier debiasing procedure.",
"In all experiments, the dataset is overly balanced with respect to both sentiment and race (50k instances for each).",
"We change only the proportion of each race within each sentiment class (e.g., in the 0.7 condition, the happy sentiment class is composed of 70% AAE / 30% SAE, while the sad class is composed of 30% AAE / 70% SAE).",
"Our classifier is based on the DeepMoji encoder (Felbo et al., 2017), followed by a 1-hideen-layer MLP.",
"The DeepMoji model was trained on millions of tweets in order to predict their emojis; a model which was proven to perform well on different classification tasks (Felbo et al., 2017), but also encodes demographic information (Elazar and Goldberg, 2018).",
"We train this classifier to predict sentiment.",
"We then follow the procedure in 5: training a guarding function on the hidden layer of the MLP, and re-training the final linear layer on the guarded vectors.",
"Table 1 presents the results.",
"As expected the TPR-GAP grows as we increase the correlation between class labels and protected attributes.",
"The accuracy grows as well.",
"Applying our debiasing technique significantly reduced the TPR gap in all settings, although hurting more the main task accuracy in the highly-imbalanced setting.",
"In Appendix A.5, we give some more analysis on the balance between performance and TPR-Gap and show that one can control for this ratio, by using more iterations of INLP.",
"We now evaluate the fair classification approach in a less artificial setting, measuring gender bias in biography classification, following the setup of De-Arteaga et al. (2019).",
"They scraped the web and collected a dataset of short biographies, annotated by gender and profession.",
"They trained logistic regression classifiers to predict the profession of the biography's subject based on three different input representation: bag-of-words (BOW), bag of word-vectors (BWV), and RNN based representation.",
"We repeat their experiments, using INLP for rendering the classifier oblivious of gender.",
"Setup.",
"Our data contains 393,423 biographies.",
"6 We follow the train:dev:test split of De-Arteaga et al. (2019), resulting in 255,710 training examples (65%), 39,369 development examples (10%) and 98,344 (25%) test examples.",
"The dataset has 28 classes (professions), which we predict using a multiclass logistic classifier (in a one-vs-all set-ting).",
"We consider three input representations: BOW, BWV and BERT (Devlin et al., 2019) based classification.",
"In BOW, we represent each biography as the sum of one-hot vectors, each representing one word in the vocabulary.",
"In the BWV representation, we sum the FastText token representations (Joulin et al., 2017) of the words in the biography.",
"In BERT representation, we represent each biography as the last hidden state of BERT over the CLS token.",
"Each of these representations is then fed into the logistic classifier to get final prediction.",
"We do not finetune FastText or BERT.",
"We run INLP with scikit-learn Pedregosa et al. (2011) linear classifiers.",
"We use 100 logistic classifiers for BOW, 150 linear SVM classifiers for BWV, and 300 linear SVM classifiers for BERT.",
"Bias measure.",
"We use the TPR-GAP measure for each profession.",
"Following Romanov et al. (2019), we also calculate the root-mean square of GAP TPRg,y over all professions y , to get a single per-gender bias score: GAP TPR,RMSg = (cid:115) 1 | C | (cid:88) y C ( GAP TPRg,y ) 2 (4) where C is the set of all labels (professions).",
"De-Arteaga et al. (2019) have shown that GAP TPRg,y strongly correlates with the percentage of women in profession y , indicating that the true positive rate of the model is influenced by gender.",
"Main results The results are summarized in Table",
"2. INLP moderately changes main-task accuracy, with a 1.9% increase in BOW, a 5.1% decrease in performance in BWV and a 5.51% decrease in BERT.",
"GAP TPR,RMSg is significantly 6 The original dataset had 399,000 examples, but 5,557 biographies were no longer available on the web.",
"decreased, indicating that on average, the true positive rate of the classifiers for male and female become closer: in BOW representation, from 0.203 to 0.124 (a 38.91% decrease); in BWV, from 0.184 to 0.089 (a 51.6% decrease); and in BERT, from 0.184 to 0.095 (a 48.36% decrease).",
"We measure the correlation between GAP TPRy,female for each profession y , and the percentage of biographies of women in that profession.",
"In BOW representation, the correlation decreases from 0.894 prior to INLP to 0.670 after it (a 33.4% decrease).",
"In BWV representation, the correlation decreases from 0.896 prior to INLP to 0.425 after it (a 52.5% decrease).",
"In BERT representation, the correlation decreases from 0.883 prior to INLP to 0.470 following it (a 46.7% decreases; Figure 4b).",
"De-Arteaga et al. (2019) report a correlation of 0.71 for BWV representations when using a scrubbed version of the biographies, with all pronouns and names removed.",
"INLP significantly outperforms this baseline, while maintaining all explicit gender markers in the input.",
"Analysis.",
"How does imposing fairness influence the importance the logistic classifier attribute to different words in the biography?",
"We take advantage of the BOW representation and visualize which features (words) influence each prediction (profes-sion), before and after the projection.",
"According to Algorithm 1, to debias an input x , we multiply W ( P x ) .",
"Equivalently, we can first multiply W by P to get a debiased weight matrix W (cid:48) .",
"We begin by testing how much the debiased weights of words that are considered to be biased were changed during the debiasing, compared to random vocabulary words.",
"We compare the relative change before and after the projection of these words, for every occupation.",
"Biased words undergo an average relative change of x1.23 compared to the average change of the entire vocabulary, demonstrating that biased words indeed change more.",
"The per-profession breakout is available in Figure 2 in Appendix A.6.1.",
"Next, we test the words that were changed the most during the INLP process.",
"We compare the weight difference before and after the projection.",
"We sort each profession words by weight, and average their location index for each professions.",
"Many words indeed seem gender specific (e.g. ms. , mr. , his , her , which appears in locations 1, 2, 3 and 4 re-spectively), but some seem unrelated, perhaps due to spurious correlations in the data.",
"The complete list is available in Table 4 in the Appendix A.6.1; an analogous analysis for the FastText representation is available at Appendix A.6.2.",
"A limitation of our method when used in the context of fairness is that, like other learning approaches, it depends on the data X , Z that is fed to it, and works under the assumption that the training data is sufficiently large and is sampled i.i.d from the same distribution as the test data.",
"This condition is hard to achieve in practice, and failure to provide sufficiently representative training data may lead to biased classifications even after its application.",
"Like other methods, there are no magic guarantees, and the burden of verification remains on the user.",
"It is also important to remember that the method is designed to achieve a very specific sense of protection: removal of linear information regarding a protected attribute.",
"While it may correlate with fairness measures such as demographic parity, it is not designed to ensure them.",
"Finally, it is designed to be fed to a linear decoder, and the attributes are not protected under non-linear classifiers.",
"We present a novel method for removing linearly-represented information from neural representations.",
"We focus on bias and fairness as case studies, and demonstrate that across increasingly complex settings, our method is capable of attenuating societal biases that are expressed in representations learned from data.",
"While we focused on bias, Iterative Nullspace Projection has broader possible use-cases, and can be utilized to remove specific components from a representation, in a controlled manner.",
"This method can be applicable for other end goals, such as style-transfer, disentanglement of neural representations and increasing their interpretability.",
"We aim to explore those directions in a future work.",
"We thank Jacob Goldberger and Jonathan Berant for fruitful discussions.",
"This project received funding from the Europoean Research Council (ERC) under the Europoean Union's Horizon 2020 research and innovation programme, grant agreement No. 802774 (iEXTRACT)."
] |
[
"abstain",
"objective",
"objective",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"method",
"abstain",
"result",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"abstain",
"objective",
"other",
"other"
] |
[
"We introduce DynaSent (Dynamic Senti-ment'), a new English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis.",
"DynaSent combines naturally occurring sentences with sentences created using the open-source Dynabench Platform, which facilities human-and-model-in-the-loop dataset creation.",
"DynaSent has a total of 121,634 sentences, each validated by five crowdworkers, and its development and test splits are designed to produce chance performance for even the best models we have been able to develop; when future models solve this task, we will use them to create DynaSent version 2, continuing the dynamic evolution of this benchmark.",
"Here, we report on the dataset creation effort, focusing on the steps we took to increase quality and reduce artifacts.",
"We also present evidence that DynaSent's Neutral category is more coherent than the comparable category in other benchmarks, and we motivate training models from scratch for each round over successive fine-tuning.",
"Sentiment analysis is an early success story for NLP, in both a technical and an industrial sense.",
"It has, however, entered into a more challenging phase for research and technology development: while present-day models achieve outstanding results on all available benchmark tasks, they still fall short when deployed as part of real-world systems (Burn-Murdoch, 2013; Grimes, 2014, 2017; Gossett, 2020) and display a range of clear shortcomings (Kiritchenko and Mohammad, 2018; Hanwen Shen et al., 2018; Wallace et al., 2019; Tsai et al., 2019; Jin et al., 2019; Zhang et al., 2020).",
"ing version 1 of the DynaSent dataset for English-language ternary (positive/negative/neutral) sentiment analysis.",
"1 DynaSent is intended to be a dynamic benchmark that expands in response to new models, new modeling goals, and new adversarial attacks.",
"We present the first two rounds here and motivate some specific data collection and modeling choices, and we propose that, when future models solve these rounds, we use those models to create additional DynaSent rounds.",
"This is an instance of the moving post' dynamic target for NLP that Nie et al. (2020) envision.",
"Figure 1 summarizes our method, which incorporates both naturally occurring sentences and sentences created by crowdworkers with the goal of fooling a top-performing model.",
"The starting point is Model 0, which is trained on standard sentiment 1 https://github.com/cgpotts/dynasent benchmarks and used to find challenging sentences in existing data.",
"These sentences are fed into a human validation task, leading to the Round 1 Dataset.",
"Next, we train Model 1 on Round 1 in addition to publicly available datasets.",
"In Round 2, this model runs live on the Dynabench Platform for human-and-model-in-the-loop dataset creation; 2 crowdworkers try to construct examples that fool Model 1. These examples are human-validated, which results in the Round 2 Dataset.",
"Taken together, Rounds 1 and 2 have 121,634 sentences, each with five human validation labels.",
"Thus, with only two rounds collected, DynaSent is already a substantial new resource for sentiment analysis.",
"In addition to contributing DynaSent, we seek to address a pressing concern for any dataset collection method in which workers are asked to construct original sentences: human creativity has intrinsic limits.",
"Individual workers will happen upon specific strategies and repeat them, and this will lead to dataset artifacts.",
"These artifacts will certainly reduce the value of the dataset, and they are likely to perpetuate and amplify social biases.",
"We explore two methods for mitigating these dangers.",
"First, by harvesting naturally occurring examples for Round 1, we tap into a wider population than we can via crowdsourcing, and we bring in sentences that were created for naturalistic reasons, rather than the more artificial goals present during crowdsourcing.",
"Second, for the Dynabench cases created in Round 2, we employ a Prompt' setting, in which crowdworkers are asked to modify a naturally occurring example rather than writing one from scratch.",
"We compare these sentences with those created without a prompt, and we find that the prompt-derived sentences are more like naturally occurring sentences in length and lexical diversity.",
"Of course, fundamental sources of bias remain we seek to identify these in the Datasheet (Gebru et al., 2018) distributed with our dataset but we argue that these steps help, and can inform crowdsourcing efforts in general.",
"As noted above, DynaSent presently uses the labels Positive, Negative, and Neutral.",
"This is a minimal expansion of the usual binary (Posi-tive/Negative) sentiment task, but a crucial one, as it avoids the false presupposition that all texts convey binary sentiment.",
"We chose this version of the problem to show that even basic sentiment analysis poses substantial challenges for our field.",
"We find that the Neutral category is especially difficult.",
"While it is common to synthesize such a category from middle-scale product and service reviews, we use an independent validation of the Stanford Sentiment Treebank (Socher et al., 2013) dev set to argue that this tends to blur neutrality together with mixed sentiment and uncertain sentiment (Section 5.2).",
"DynaSent can help tease these phenomena apart, since it already has a large number of Neutral examples and a large number of examples displaying substantial variation in validation.",
"Finally, we argue that the variable nature of the Neutral category is an obstacle to fine-tuning (Section 5.3), which favors our strategy of training models from scratch for each round.",
"Sentiment analysis was one of the first natural language understanding tasks to be revolutionized by data-driven methods.",
"Rather than trying to survey the field (see Pang and Lee 2008; Liu 2012; Grimes 2014), we focus on the benchmark tasks that have emerged in this space, and then seek to situate these benchmarks with respect to challenge (adversarial) datasets and crowdsourcing methods.",
"Many sentiment datasets are derived from customer reviews of products and services (Pang and Lee, 2004, 2005; Socher et al., 2013; Maas et al., 2011; Jindal and Liu, 2008; Ni et al., 2019; McAuley et al., 2012; Zhang et al., 2015).",
"This is an appealing source of data, since such texts are accessible and abundant in many languages and regions of the world, and they tend to come with their own author-provided labels (star ratings).",
"On the other hand, over-reliance on such texts is likely also limiting progress; DynaSent begins moving away from such texts, though it remains rooted in this domain.",
"Not all sentiment benchmarks are based in review texts.",
"The MPQA Opinion Corpus of Wiebe et al. (2005) contains news articles labeled at the phrase-level for a variety of subjective states; it presents an exciting vision for how sentiment analysis might become more multidimensional.",
"Se-mEval 2016 and 2017 (Nakov et al., 2016; Rosenthal et al., 2017) offered Twitter-based sentiment datasets.",
"And of course there are numerous additional datasets for specific languages, domains, and emotional dimensions; Google's Dataset Search currently reports over 100 datasets for sentiment.",
"Challenge and adversarial datasets (Winograd, 1972; Levesque, 2013) have risen to prominence in response to the sense that benchmark results are over-stating the quality of the models we are developing (Linzen, 2020).",
"These efforts seek to determine whether models have met specific learning targets (Alzantot et al., 2018; Glockner et al., 2018; Naik et al., 2018; Nie et al., 2019), exploit relatively superficial properties of their training data, (Jia and Liang, 2017; Kaushik and Lipton, 2018; Zhang et al., 2020), or inherit social biases in the data they were trained on (Kiritchenko and Mohammad, 2018; Rudinger et al., 2017, 2018; Sap et al., 2019; Schuster et al., 2019).",
"For the most part, challenge and adversarial datasets are meant to be used primarily for evaluation (though Liu et al. (2019a) show that even small amounts of training on them can be fruitful in some scenarios).",
"However, there are existing adversarial datasets that are large enough to support full-scale training efforts (Zellers et al., 2018, 2019; Chen et al., 2019; Dua et al., 2019; Bartolo et al., 2020).",
"DynaSent falls into this class; it has large train sets that can support from-scratch training as well as fine-tuning.",
"Our approach is closest to, and directly inspired by, the Adversarial NLI (ANLI) project, which is reported on by Nie et al. (2020) and which continues on Dynabench.",
"In ANLI, human annotators construct new examples that fool a top-performing model but make sense to other human annotators.",
"This is an iterative process that allows the annotation project itself to organically find phenomena that fool current models.",
"The resulting dataset has, by far, the largest gap between estimated human performance and model accuracy of any benchmark in the field right now.",
"We hope DynaSent follows a similar pattern, and that its naturally occurring sentences and prompt-derived sentences bring beneficial diversity.",
"Within NLP, Snow et al. (2008) helped establish crowdsourcing as a viable method for collecting data for at least some core language tasks.",
"Since then, it has become the dominant mode for dataset creation throughout all of AI, and the scientific study of these methods has in turn grown rapidly.",
"For our purposes, a few core findings from research into crowdsourcing are centrally important.",
"First, crowdworkers are not fully representative of the general population (Hube et al., 2019), and any crowdsourcing project will reach only a small population of workers (Gadiraju et al., 2017).",
"This narrowness seems to be an underlying cause of many of the artifacts that have been identified in prominent NLU benchmarks (Poliak et al., 2018; Gururangan et al., 2018; Tsuchiya, 2018; Belinkov et al., 2019).",
"DynaSent's naturally occurring sentences and prompt sentences can help, but we acknowledge that those texts come from people who write online reviews, which is also a special group.",
"Second, as with all work, quality varies across workers and examples, which raises the question of how best to infer individual labels from response distributions.",
"Dawid and Skene (1979) is an early contribution to this problem leveraging Expectation Maximization (Dempster et al., 1977).",
"Much subsequent work has pursued similar strategies; for a full review, see Zheng et al. 2017.",
"Our corpus release uses the true majority (3/5 labels) as the gold label where such a majority exists, leaving examples unlabeled otherwise, but we include the full response distributions in our corpus release and make use of those distributions when training Model 1. For additional details, see Section 3.3.",
"We now begin to describe our method for constructing DynaSent (Figure 1).",
"The current section focuses on Model 0 and Round 1, and Section 4 explains how these feed into Model 1 and Round 2. 3.1 Model 0 Our Model 0 begins with the RoBERTa-base parameters (Liu et al., 2019b) and adds a three-way sentiment classifier head.",
"The model was trained on a number of publicly-available datasets, as summarized in Table 2. See Appendix A for details on these datasets and how we processed them for our ternary task.",
"We evaluate this and subsequent models on three datasets (Table 1): SST-3 dev and test, and the assessment portion of the Yelp and Amazon datasets from Zhang et al. 2015.",
"For Yelp and Amazon, the original distribution contained only (very large) test files.",
"We split them in half (by line number) to create dev and test splits.",
"In Table 3, we summarize our Model 0 assessments on these datasets.",
"Across the board, our model does extremely well on the Positive and Negative categories, and less well on Neutral.",
"We trace SST-3 Yelp Amazon Dev Test Dev Test Dev Test Pos 444 909 9,577 10,423 130,631 129,369 Neg 428 912 10,222 9,778 129,108 130,892 Neu 228 389 5,201 4,799 65,261 64,739 Total 1,100 2,210 25,000 25,000 325,000 325,000 Table 1: External assessment datasets.",
"this to the fact that the Neutral categories for all these corpora were derived from three-star reviews, which actually mix a lot of different phenomena: neutrality, mixed sentiment, and (in the case of the reader judgments in SST) uncertainty about the author's intentions.",
"We return to this issue in Section 5.2, arguing that DynaSent marks progress on creating a more coherent Neutral category.",
"Finally, Table 3 includes results for our Round 1 dataset, as we are defining it.",
"Performance is at-chance across the board by construction (see Section 3.4 below).",
"We include these columns to help with tracking the progress we make with Model 1. We also report performance of this model on our Round 2 dataset (described below in Section 4), again to help with tracking progress and understanding the two rounds.",
"Our first round of data collection focused on finding naturally occurring sentences that would challenge our Model 0. To do this, we harvested sentences from the Yelp Academic Dataset, using the version of the dataset that contains 8,021,122 reviews.",
"3 The sampling process was designed so that 50% of the sentences fell into two groups: those that occurred in 1-star reviews but were predicted by Model 0 to be Positive, and those that occurred in 5-star reviews but were predicted by Model 0 to be Negative.",
"The intuition here is that these would likely be examples that fooled our model.",
"Of course, negative reviews can (and often do) contain positive sentences, and vice-versa.",
"This motivates the validation stage that we describe next.",
"Our validation task was conducted on Mechanical Turk.",
"Workers were shown ten sentences and asked to label them according to the categories Positive , Negative , Neutral , and Mixed .",
"See Appendix B for the full interface, including glosses for the categories and the task instructions.",
"For this round, 1,978 workers participated in the validation process.",
"In the final version of the corpus, each sentence is validated by five different workers.",
"To obtain these ratings, we employed an iterative strategy.",
"Sentences were uploaded in batches of 35K and, after each round, we measured each worker's rate of agreement with the majority.",
"We then removed from the potential pool those workers who disagreed more than 80% of the time with their co-annotators, using a method of unqualifying' workers that does not involve rejecting their work or blocking them (Turk, 2017).",
"We then obtained additional labels for examples that those unqualified' workers annotated.",
"The final version of DynaSent keeps only the responses from the highest-rated workers.",
"This led to a substantial increase in dataset quality by removing a lot of labels that seemed to us to be randomly assigned.",
"Appendix B describes the process in more detail, and our Datasheet enumerates the known unwanted biases that this process can introduce.",
"The Round 1 dataset is summarized in Table 5, and Table 4 gives randomly selected short examples.",
"Because each sentence has five ratings, there are two perspectives we can take on the dataset: Distributional Labels We can repeat each example with each of its labels (de Marneffe et al., 2012; Pavlick and Kwiatkowski, 2019).",
"For instance, the first sentence in Table 4 would be repeated three times with Mixed' as the label and twice with Negative'.",
"For many classifier models, this reduces to labeling each example with its probability distribution over the labels.",
"This is an appealing approach to creating training data, since it allows us to make use of all the examples, 4 even those that do not have a majority label, and it allows us to make maximal use of the labeling information.",
"In our experiments, we found that training on the distributional labels consistently led to slightly better 4 For Mixed' labels, we create two copies of the example, one labeled Positive', the other Negative'.",
"models, suggesting that annotator disagreement is stable and informative.",
"Majority Label We can take a more traditional route and infer a label based on the distribution of labels.",
"In Table 5, we show the labels inferred by assuming that an example has a label just in case at least three of the five annotators chose that label.",
"This is a conservative approach that creates a fairly large No Majority' category.",
"More sophisticated approaches might allow us to make fuller use of the examples and account for biases relating to annotator quality and example complexity (see Section 2.3).",
"We set these options aside for now because our validation process placed more weight on the best workers we could recruit (Section 3.3).",
"The Majority Label splits given by Table 5 are designed to ensure five properties: (1) the classes are balanced, (2) Model 0 performs at chance, (3) the review-level rating associated with the sentence has no predictive value, (4) at least four of the five workers agreed, and (5) the majority label is Positive, Negative, or Neutral.",
"(This excludes examples that received a Mixed majority and examples without a majority label at all.)",
"Over the entire round, 47% of cases are such that the validation majority label is Positive, Negative, or Neutral and Model 0 predicted a different label.",
"Table 6a provides a conservative estimate of human F1 in order to have a quantity that is comparable to our model assessments.",
"To do this, we randomize the responses for each example to create five synthetic annotators, and we calculate the precision, recall, and F1 scores for each of these annotators with respect to the gold label.",
"We average those scores.",
"This heavily weights the single annotator who disagreed for the cases with 4/5 majorities.",
"We Dev Test Pos 88.1 87.8 Neg 89.2 89.3 Neu 86.6 86.9 Avg 88.0 88.0",
"(a) Round 1. Fleiss : 0.62 dev, 0.62 test.",
"614 of 1,280 workers never disagreed with the gold label.",
"(b) Round 2. Fleiss : 0.68 dev, 0.67 test.",
"116 of 244 workers never disagreed with the gold label.",
"can balance this against the fact that 614 of 1,280 workers never disagreed with the majority label (see Appendix B for the full distribution).",
"However, it seems reasonable to say that a model has solved the round if it achieves comparable scores to our aggregate F1 a signal to start a new round.",
"In Round 2, we leverage Dynabench to begin creating a new dynamic sentiment benchmark.",
"Model 1 was created using the same general methods as for Model 0 (Section 3.1): we begin with RoBERTa parameters and add a three-way sentiment classifier head.",
"The differences between the two models lie in the data they were trained on.",
"The train set is summarized in Table 7, and Appendix A provides additional details.",
"Table 8 summarizes the performance of our model on the same evaluation sets as are reported in Table 8 for Model 0. Overall, we see a small performance drop on the external datasets, but a huge jump in performance on our dataset (Round 1).",
"While it is unfortunate to see a decline in performance on the external datasets, this is expected if we are shifting the label distribution with our new dataset it might be an inevitable consequence of hill-climbing in our intended direction.",
"Our data distribution provides the Dynabench interface we created for DynaSent as well the complete instructions and training items given to workers.",
"The essence of the task is that the worker chooses a label y to target and then seeks to write an example that the model (currently, Model 1) assigns a label other than y but that other humans would label y .",
"Workers can try repeatedly to fool the model, and they get feedback on the model's predictions as a guide for how to fool it.",
"We consider two conditions.",
"In the Prompt condition, workers are shown a sentence and given the opportunity to modify it as part of achieving their goal.",
"Prompts are sampled from parts of the Yelp Academic Dataset not used for Round 1. In the No Prompt condition, workers wrote sentences from scratch, with no guidance beyond their goal of fooling the model.",
"We piloted both versions and compared the results.",
"Our analyses are summarized in Section 5.1.",
"The findings led us to drop the No Prompt condition and use the Prompt condition exclusively, as it clearly leads to examples that are more naturalistic and linguistically diverse.",
"For Round 2, our intention was for each prompt to be used only once, but prompts were repeated in a small number of cases.",
"We have ensured that our dev and test sets contain only sentences derived from unique prompts (Section 4.5).",
"We used the identical validation process as described in Section 3.3, getting five responses for each example as before.",
"This again opens up the possibility of using label distributions or inferring individual labels.",
"395 workers participated in this round.",
"See Appendix B for additional details.",
"Table 10 summarizes our Round 2 dataset, and Table 9 provides train examples from Round 2 sampled using the same criteria we used for Table 4.",
"Overall, workers' success rate in fooling Model 1 SST-3 Yelp Amazon Round 1 Round 2 Dev Test Dev Test Dev Test Dev Test Dev Test Positive 84.6 88.6 80.0 83.1 83.3 83.3 81.0 80.4 33.3 33.3 Negative 82.7 84.4 79.5 79.6 78.7 78.8 80.5 80.2 33.3 33.3 Neutral 40.0 45.2 56.7 56.6 55.5 55.4 83.1 83.5 33.3 33.3 Macro avg 69.1 72.7 72.1 73.1 72.5 72.5 81.5 81.4 33.3 33.3 Table 8: Model 1 performance (F1 scores) on external assessment datasets (Table 1), as well as our Round 1 and Round 2 datasets.",
"is about 19%, which is much lower than the comparable value for Round 1 (47%).",
"There seem to be three central reasons for this.",
"First, Model 1 is hard to fool, so many workers reach the maximum number of attempts.",
"We retain the examples they enter, as many of them are interesting in their own right.",
"Second, some workers seem to get confused about the true goal and enter sentences that the model in fact handles correctly.",
"Some non-trivial rate of confusion here seems inevitable given the cognitive demands of the task, but we have taken steps to improve the interface to minimize this factor.",
"Third, a common strategy is to create examples with mixed sentiment; the model does not predict this label, but it is chosen at a high rate in validation.",
"Despite these factors, we can construct splits that meet our core goals: (1) Model 1 performs at chance on the dev and test sets, and (2) the dev and test sets contain only examples where the majority label was chosen by at least four of the five workers.",
"In addition, (3) our dev and test sets contain only examples from the Prompt condition (the No Prompt cases are in the train set, and flagged as such), and (4) all the dev and test sentences are derived from unique prompts to avoid leakage between train and assessment sets and reduce unwanted correlations within the assessment sets.",
"Table 6b provides estimates of human F1 for Round 2 using the same methods as described in Section 3.5.",
"We again emphasize that these are conservative estimates.",
"A large percentage of workers (116 of 244) never disagreed with the gold label on the examples they rated, suggesting that human performance can approach perfection.",
"Nonetheless, the estimates we give here seem useful for helping us decide whether to continue hill-climbing on this round or begin creating new rounds.",
"We now address a range of issues that our methods raise but that we have so far deferred in the interest of succinctly reporting on the methods themselves.",
"As discussed in Section 4, we explored two methods for collecting original sentences on Dynabench: with and without a prompt sentence that workers could edit to achieve their goal.",
"We did small pilot rounds in each condition and assessed the results.",
"This led us to use the Prompt condition exclusively.",
"This section explains our reasoning more fully.",
"First, we note that workers did in fact make use of the prompts.",
"In Figure 2a, we plot the Leven-shtein edit distance between the prompts provided to annotators and the examples the annotators produced, normalized by the length of the prompt or the example, whichever is longer.",
"There is a roughly bimodal distribution in this plot, where the peak on the right represents examples generated by the annotator tweaking the prompt slightly and the peak on the left represents examples where they deviated significantly from the prompt.",
"Essentially no examples fall at the extreme ends (literal reuse of the prompt; complete disregard for the prompt).",
"Second, we observe that examples generated in the Prompt condition are generally longer than those in the No Prompt condition, and more like our Round 1 examples.",
"Figure 2b summarizes for string lengths; the picture is essentially the same for tokenized word counts.",
"In addition, the Prompt examples have a more diverse vocabulary overall.",
"Figure 2c provides evidence for this: we sampled 100 examples from each condition 500 times, sampled five words from each example, and calculated the vocabulary size (unique token count) for each sample.",
"(These measures are intended to control for the known correlation between token counts and vocabulary sizes; Baayen 2001.)",
"The Prompt-condition vocabularies are much larger, and again more similar to our Round 1 examples.",
"Third, a qualitative analysis further substantiates the above picture.",
"For example, many workers realized that they could fool the model by attributing a sentiment to another group and then denying it, as in They said it would be great, but they were wrong.",
"As a result, there are dozens of examples in the No Prompt condition that employ this strategy.",
"Individual workers hit upon more idiosyncratic strategies and repeatedly used them.",
"This is just the sort of behavior that we know can create persistent dataset artifacts.",
"For this reason, we include No Prompt examples in the training data only, and we make it easy to identify them in case one wants to handle them specially.",
"For both Model 0 and Model 1, there is consistently a large gap between performance on the Neutral category and performance on the other categories, but only for the external datasets we use for evaluation.",
"For our dataset, performance across all three categories is fairly consistent.",
"We hypothesized that this traces to semantic diversity in the Neutral categories for these external datasets.",
"In review corpora, three-star reviews can signal neutrality, but they are also likely to signal mixed sentiment or uncertain overall assessments.",
"Similarly, where the ratings are assigned by readers, as in the SST, it seems likely that the middle of the scale will also be used to register mixed and uncertain sentiment, along with a real lack of sentiment.",
"To further support this hypothesis, we ran the SST dev set through our validation pipeline.",
"This leads to a completely relabeled dataset (distributed with DynaSent) with five ratings for each example and a richer array of categories.",
"The new labels are closely aligned with SST's for Positive and Negative, but the SST-3 Neutral category has a large percentage of cases falling into Mixed and No Majority.",
"Appendix D provides the full comparison matrix and gives a random sample of cases where the two label sets differ with regard to the Neutral category.",
"It also provides all seven cases of sentiment confusion.",
"We think these comparisons favor our labels over SST's original labels.",
"Our Model 1 was trained from scratch (beginning with RoBERTa parameters)d.",
"An appealing alternative would be to begin with Model 0 and fine-tune it on our Round 1 data.",
"This would be more effi-cient, and it might naturally lead to the Round 1 data receiving the desired overall weight relative to the other datasets.",
"Unfortunately, our attempts at this led to worse models, and the problems traced to very low performance on the Neutral category.",
"To study the effect of our dataset on Model 1 performance, we employ the fine-tuning by in-oculation method of Liu et al. (2019a).",
"We first divide our Round 1 train set into small subsets via random sampling.",
"Then, we fine-tune our Model 0",
"using these subsets of Round 1 train with nondistributional labels.",
"We early-stop our fine-tuning process if performance on the Round 0 dev set of Model 0 (SST-3 dev) has not improved for five epochs.",
"Lastly, we measure model performance with Round 1 dev (SST-3 dev plus Round 1 dev) and our external evaluation sets (Table 1).",
"Figure 3 presents F1 scores for our three class labels using this method.",
"Model performance on Round 1 dev increases for all three labels given more training examples.",
"The F1 scores for the Positive and Negative classes remain high, but they begin to drop slightly with larger samples.",
"The F1 scores on SST-3 dev show larger perturbations.",
"The most striking trends are for the Neutral category, where the F1 score on Round 1 dev increases steadily while the F1 scores on the three original development sets for Model 0 decrease drastically.",
"This is the pattern that Liu et al. (2019a) associate with dataset artifacts or label distribution shifts.",
"Our current hypothesis is that the pattern we observe can be attributed, at least in large part, to label shift specifically, to the difference between our Neutral category and the other Neutral categories, as discussed in the preceding section.",
"Our strategy of training from scratch seems less susceptible to these issues, though the label shift is still arguably a factor in the lower performance we see on this category with our external validation sets.",
"We presented DynaSent, as the first stage in an ongoing effort to create a dynamic benchmark for sentiment analysis.",
"To date, the best future-looking Model 2 we have developed achieves 83.1 F1 on Round 1 and 70.8 F1 on Round 2 while maintaining good performance on our external benchmarks.",
"Appendix E provides details on this model and others, and the Dynabench platform offers a detailed and up-to-date leaderboard.",
"We hope and expect that the community will find models that solve both rounds.",
"That will be our cue to launch another round of data collection to fool those models and push the field of sentiment forward by another step.",
"Our thanks to the developers of the Dynabench Platform, and special thanks to our Amazon Mechanical Turk workers for their essential contributions to this project.",
"This research is supported in part by faculty research grants from Facebook and Google.",
"DynaSent is distributed with a detailed Datasheet (Gebru et al., 2018) that describes the data collection process and its motivations, and seeks to articulate known limitations of the resource.",
"The data distribution also includes a Model card (Mitchell et al., 2019) that seeks to provide similar disclosures concerning Model 0 and Model 1. Taken together, these documents further articulate our central goals for these resources and provide guidance on responsible use.",
"These documents will be upated appropropriately as DynaSent and our associated models evolve."
] |
[
"objective",
"abstain",
"objective",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"method",
"abstain",
"abstain",
"result",
"result",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain"
] |
[
"Conditional text generation often requires lexical constraints, i.e., which words should or shouldn't be included in the output text.",
"While the dominant recipe for conditional text generation has been large-scale pretrained language models that are finetuned on the task-specific training data, such models do not learn to follow the underlying constraints reliably, even when supervised with large amounts of task-specific examples.",
"We propose NEUROLOGICDECODING , a simple yet effective algorithm that enables neural language models supervised or not to generate fluent text while satisfying complex lexical constraints.",
"Our approach is powerful yet efficient.",
"It handles any set of lexical constraints that is expressible under predicate logic, while its asymptotic runtime is equivalent to conventional beam search.",
"Empirical results on four benchmarks show that NEUROLOGICDECODING outperforms previous approaches, including algorithms that handle a subset of our constraints.",
"Moreover, we find that unsupervised models with NEUROLOGICDECODING often outperform supervised models with conventional decoding, even when the latter is based on considerably larger networks.",
"Our results suggest the limit of large-scale neural networks for fine-grained controllable generation and the promise of inference-time algorithms.",
"Text generation applications often need to incorporate semantic constraints, i.e., what words should and shouldn't appear in the output generation.",
"Consider the task of generating a recipe from a set of ingredients (Kiddon et al., 2016), such as garlic,' steak', and soy sauce' (Figure 1).",
"A generated recipe should cover all of those ingredients, without hallucinating new ones (such as pork' or beans').",
"This restriction, like others in Figure 1 for other i npu t Scenario food | table | sit | front The man sat with his food at the front of the table The food is in front of you sit at the table.",
"constraints expressed as a predicate logic formula.",
"The dominant paradigm today for performing such constrained generation is to start with a pretrained language model, and then finetune it on a dataset of task-specific examples.",
"However, pretrained language models struggle at learning to follow these constraints, even when the finetuning dataset is large.",
"For example, for the aforementioned recipe generation task, a GPT2 model finetuned on hundreds of thousands of recipes still hallucinates extra ingredients.",
"In stark contrast, humans need to see only a few examples (or even none) to generate the desired output satisfying all the logical constraints, e.g., writing a recipe that mentions each ingredient (butter, steak, etc.) without using new ones.",
"We hypothesize that this mismatch is due to a fundamental under-specification of finetuning.",
"If we finetune one of today's state-of-the-art language models on a dataset, the likelihood of it generating sequences from the same distribution should increase.",
"Yet there is no guarantee that this improvement in likelihood will come from improvements on the fundamental task of constrained generation, as opposed to picking up on dataset-specific patterns such as language style.",
"In fact, we present analysis suggesting that worst-case' learning behavior is common in practice: when we increase the finetuning data fed to GPT2 by an order of magnitude, constraint-satisfaction with standard beam search shows only modest improvement.",
"To address this issue, we propose NEUROLOGICDECODING , which effectively enforces the satisfaction of given lexical constraints by controlling the decoding stage of sequence generation.",
"These constraints can be any predicate logic formula, which crucially includes both positive constraints (the word butter' must be generated somewhere) and negative constraints (bean' cannot be generated).",
"These simpler constraints can then be combined through logical connectives to handle more complex requirements such as inflection or synonyms (beef' or steak' both satisfy the constraint of referring to the steak).",
"While beam search aims to maximize the likelihood of the generated sequence, our method searches for optimal output sequences among the strings that also satisfy the given constraints.",
"It does so efficiently: we convert the hard logic constraints into a soft penalty term in the decoding objective, and use a beam-based search to find approximately-optimal solutions; constraint states are tracked to reuse computation.",
"NEUROLOGICDECODING thus effectively and efficiently controls text generation without requiring any mod-ification of the model structure or training pipeline.",
"reasoning (COMMONGEN ; Lin et al., 2020), recipe generation (Kiddon et al., 2016), data-grounded dialogue response generation (Wen et al., 2015), and reducing gender bias in machine translation (Stanovsky et al., 2019).",
"Empirical results demonstrate that NEUROLOGICDECODING ensures the satisfaction of given constraints while maintaining high generation quality, in turn leading to new SOTA results in both the supervised and zero-shot setting.",
"In this section, we first rigorously define predicate logic constraint, and then present in detail the NEUROLOGICDECODING algorithm.",
"Let us define a predicate D ( a , y ) to be a boolean function indicating the occurrence of key phrase a in a sequence y , where a can be either unigram or multi-gram.",
"D ( a , y ) will be true iff a occurs in y .",
"NEUROLOGIC accepts lexical constraints in Conjunctive Normal Form (CNF):",
"where each D i represents a single positive or negative constraint, D ( a i , y ) or D ( a i , y ) , restricting whether key phrase a i should be strictly included or omitted in y , respectively.",
"Any propositional logical formula can be converted to CNF, and thus handled by NEUROLOGIC .",
"Notationally, we will refer to each individual constraint D i as a literal , and the disjunction of literals as a clause , denoted as C j , with L being the total number of clauses.",
"Our method seeks optimal sequences in which all clauses are satisfied: y = arg max y Y P ( y | x ) where L (cid:88) i =1 C i = L (1) Past work on constrained optimization introduces penalties (Fiacco, 1976) to approximate the constrained optimization problem with an unconstrained problem.",
"Specifically, by adding a high-cost penalty term for violated constraints: y = arg max y Y P ( y | x ) (cid:48) L (cid:88) i =1 (1 C i ) (2) !",
"Intuitively, this objective balances sequence likelihood (term 1) and constraint satisfaction (term 2).",
"The aim is to find sequences that do well at both dimensions.",
"While exhaustive search is intractable, we use a beam-based search to find approximately-optimal solutions for this objective.",
"When considering whether a generation hypothesis satisfies some clause C i during generation, there are fundamentally 4 possible states (as in figure 2)",
"S1 reversible unsatisfaction : If an unsatisfied clause C i contains at least one positive literal, C i could be satisfied in the future by fulfilling one of its positive literal(s).",
"S2 irreversible unsatisfaction : If an unsatisfied clause C i contains negative literal(s) only, C i will maintain unsatisfied in the future since the violation of negative literals could not be overturned.",
"S3 reversible satisfaction : If all satisfied literal(s) in a satisfied clause C i are negative literal(s), C i could switch back to unsatisfied in the future by violating all of its satisfied negative literal(s).",
"S4 irreversible satisfaction : If satisfied literal(s) in a satisfied clause C i contains at least one positive literal, C i will maintain satisfied in the future since the fulfilment of positive literals is irreversible.",
"To track the states of literals and clauses efficiently, we maintain two prefix tries.",
"The first trie, T + , tracks unsatisfied positive literals from all clauses in states S1 and S3, while the other trie, T , tracks satisfied negative literals from all clauses in state S3.",
"We do not track anything from clauses in state S2 or S4, as those are already irreversible.",
"S1 or S3 is henceforth irreversibly satisfied (state S4), thus we remove all literals of that clause from both tries and stop tracking.",
"If a negative literal in state S3 is violated, we remote it from the trie T .",
"Once all negative literals of a clause in state S3 has been removed, the clause switches back to unsatisfied (state S1 or S2).",
"If it has unsatisfied positive literal(s) in the trie T + , it becomes reversibly unsatisfied (state S1); otherwise it shall stay irreversibly unsatisfied (state S2).",
"Since exhaustive search to optimize the CNF constraints is intractable, NEUROLOGIC uses a beam-based search to approximate.",
"The high-level intuition is that at each time step, NEUROLOGIC selects generation hypotheses in consideration of both the objective function and the diversity of the partially satisfied constraints.",
"We achieve such by 3 steps: pruning , grouping , and selecting (illustrated in figure 3, and detailed below).",
"At each time step, the decoding model generates a distribution over all vocabulary V for k hypotheses in the current beam, resulting in a candidate score matrix of size k | V | .",
"Along with generating score matrix, we produce a constraint state for each of the k | V | new candidates h , based on the next token considered.",
"Pruning step : We first discard any h with irreversible unsatisfied clause (state S2) to focus only on candidates that might satisfy all constraints.",
"Then, we filter candidates h to those in the top-tier of both satisfied constraints and sequence likelihood.",
"Specifically, we drop any candidates not in the top in terms of likelihood P ( y t | y <t ) , and not in the top in terms of number of satisfied clauses (cid:80) Li =1 C i .",
"These are adjustable parameters, corresponding to maximum tolerance to sequence fluency and constraint satisfaction.",
"Grouping step : Next, we select the beam from the pruned candidates.",
"Naively selecting k best candidates with respect to the objective function would not work well, since such greedy selection would bias toward sequences with high likelihood and easy-to-satisfy clauses at early timestep, which can lead to struggling with remaining hard-to-satisfy clauses later on.",
"Therefore, the key intuition is to consider diverse partial solutions early on with respect to the set of irreversibly satisfied clauses, i.e., { C i | C i state S4 } .",
"We group candidates based on this set and select (in the next step) the best ones Constraints ! cowbo \" dog ( # (play music) $ plays music ) ( % catch & catches ) # $ ! \" runs catches plays eats plays talks talks plays catches cowboy man dog The t = 0 t = 1 t = 2 search tree score 3 1 4 2 0.18 + 0.1 * 0 = 0.18 0.12 + 0.1 * 0 = 0.12 0.15 + 0.1 * 0 = 0.15 0.11 + 0.1 * !",
"from each group, we first rank candidates within a group by score function:",
"where a i is a i 's matched prefix with ongoing generation.",
"For example, for y = The boy climbs an apple and constraint a i = apple tree , we have a i = apple .",
"The second term denotes maximal percentage of matched prefix in partially satisfied positive literals.",
"Intuitively, this score function ranks candidaites by likelihood and gives a partial reward to candidates moving towards satisfying a positive literal in an unsatisfied clause (state S1).",
"is an adjustable parameter, controlling how much we favor candidates towards fulfilling another unsatisfied clause.",
"We then proceed in rounds of filling the beam, visiting each group and taking the best scoring ones in rotation, until we reach k candidates.",
"The group traversing order follows the descending order of the highest score in each group.",
"In the end, we take the hypothesis with highest likelihood from the ones with maximal satisfied clauses.",
"NEUROLOGIC distinguishes itself from past works in constrained decoding in 3 fundamental ways.",
"of CNF constraint, while previous works only allow a subset of this (typically conjunctions).",
"Second, NEUROLOGIC effectively optimizes objective function through efficient and diverse search over output space, while previous works suffer from either myopic and narrow or inefficient exploration of the search space.",
"Third, the asymptotic runtime of NEUROLOGIC is O ( Nk ) 1 , same with beam search, constant with respect to number of constraints C .",
"Some previous works suffer from exponential runtime, making applications infeasible.",
"A detailed comparison between NEUROLOGIC and previous methods is provided in table",
"1. 3.1 Previous Constrained Decoding Approach Anderson et al. (2017) propose constrained beam search ( CBS ), where constraint satisfaction is tracked by a finite-state machine with 2 C states (all possible satisfaction status for C constraints).",
"Beam search is done over all states with k candidates per state.",
"This method has an exponential complexity O ( Nk 2 C ) , making many applications infeasible.",
"Hokamp and Liu (2017) propose grid beam search ( GBS ), which groups together hypotheses by number of constraints satisfied, giving C + 1 1 N denotes sequence length and k denotes beam size.",
"In this paper, we the asymptotic runtimes is in terms of the number of calls to a deep generator that scores P ( y t | y <t ) ; this is because calling the generator is the most expensive part of decoding (as opposed to auxiliary bookkeeping).",
"groups altogether.",
"Each group stores at most k candidates that are expanded at each timestep.",
"GBS has a faster runtime of O ( Nk C ) , but this approach biases towards sequences satisfying constraints greedily, and collapses into very similar search paths that are often times globally sub-optimal, which results in dropped language quality.",
"Post and Vilar (2018) propose dynamic beam allocation to reduce GBS's explicit dependence on C .",
"Beam search is done over a single beam, with the k slots of this beam dynamically allocated over the C +1 groups explicitly used by GBS.",
"This approach was made GPU-efficient by Hu et al. (2019a).",
"Still, the language quality issue of GBS remains, and can be worse in practice as fewer hypotheses are considered at each step.",
"Miao et al. (2019) propose Constrained Generation by Metropolis-Hastings Sampling ( CGMH ).",
"This approach begins by inserting all positive-constraint keywords in random order.",
"Edits are randomly sampled to replace, insert, or delete words to make the sentence fluent; the probability of each action is computed on top of a language model.",
"Sha (2020) proposes using gradient of a objective function to guide where and how to edit instead of random sampling.",
"These approaches have runtime independent to number of constraints; yet they can involve repeated deletions and insertions, reducing efficiency.",
"Generation quality is also sensitive to initial keyword order and sampled edits.",
"Lexically constrained generation can be broadly applied to prior conditional text generation tasks.",
"Examples include incorporating pre-specified lexical constraints (Anderson et al., 2017; Post and Vilar, 2018), user-provided terminology constraints (Hasler et al., 2018; Dinu et al., 2019), noisy automatic constraints (Li et al., 2019) in translation output.",
"A major use case of lexical constrained decoding is paraphrase generation (Hu et al., 2019a; Kajiwara, 2019; Hu et al., 2019b; Miao et al., 2019), by negatively constraining words in the source to enforce paraphrasing.",
"Another use case is image captioning, with novel scenes or out-of-domain objects (Anderson et al., 2017), or requiring explicit grounding to objects in the scene (Ren et al., 2015; Krause et al., 2016).",
"In addition, Balakrishnan et al. (2019) leverage constrained decoding to improve semantic correctness for response generation.",
"COMMONGEN (Lin et al., 2020) is a benchmark dataset designed as a test of generative commonsense reasoning.",
"Given a set of common concepts (e.g., dog, frisbee, catch, throw); the task is to generate a coherent sentence describing an everyday scenario using these concepts (e.g., a man throws a frisbee and his dog catches it).",
"Problem Formulation The input is an unordered set of n concepts x = { a 1 , a 2 , . . . , a n } , where each concept a i is a common object (noun) or action (verb).",
"The expected output is a simple, grammatical sentence y Y that describes a common scenario using all given concepts in x with correct morphological inflections.",
"To apply NEUROLOGICDECODING , we impose that each a i must appear in output y under some morphological inflection.",
"Let a i = { a i 1 , . . . a i | a i | } denote all inflections of a i .",
"y covers concept a i , if at least one of { a i 1 , . . . a i | a i | } appears.",
"Formally, a i x , a ij a i , D ( a ij , y ) where D ( a ij , y ) is a boolean-value function indicating whether y contains a ij or not, as defined above.",
"2 2 This gets converted into ni =1 (cid:0) | a i | j =1 D ( a ij , y ) (cid:1) .",
"Dataset The COMMONGEN dataset consists of 35,141 concept-sets (32,651 in train , 993 in val , 1,497 in test ) associated with 77,449 sentences.",
"The average size of the concept-sets in the test set is 4 .",
"04 , with an average of four sentences per concept-set and an average sentence length of 13 .",
"34 words.",
"Approach and Baseline The standard pipeline of approaching this problem is to consider it as a conditional sentence generation task.",
"We experiment with several recent pre-trained language models, including GPT-2 (Radford et al., 2019), UniLM (Dong et al., 2019), UniLM-v2 (Bao et al., 2020), BERT-Gen (Bao et al., 2020), BART (Lewis et al., 2020), and T5 (Raffel et al., 2019).",
"All models are finetuned with their default hyperparameters.",
"We compare with commonly used decoding methods, including beam search, sampling, and also previously proposed constrained decoding methods.",
"We use several widely-used automatic metrics to automatically assess the performance, such as BLEU, ROUGE, METEOR, which mainly focus on measuring surface similarities.",
"We also include metrics specially designed for captioning task, such as CIDEr, and SPICE.",
"Following Lin et al. (2020), we report the concept Coverage, which is the average percentage of input concepts that are present in lemmatizatized outputs.",
"In Table 4, we first present comparisons across different decoding methods based on a supervised sequence-to-sequence model, GPT-2.",
"The key observations are:",
"1. NEUROLOGIC outperforms all other previous decoding methods, both constrained and unconstrained, with respect to all metrics and often with a significant margin.",
"2. NEUROLOGIC not only attains high constraint satisfaction ( COVERAGE ), it also improves the generation quality as quantified over ROUGE , BLEU , METEOR , CIDE r, and SPICE",
".",
"3. In comparison, all previous constrained decoding methods (Hokamp and Liu, 2017; Post and Vilar, 2018; Hu et al., 2019a) attain high constraint satisfaction at the cost of generation quality; being outperformed here by conventional beam search with a large margin.",
"The second and the third points above demonstrate that the improved logical expressiveness of NEUROLOGIC together with the effective search strat-1 Figure 4: Performance (y-axis) of supervised GPT2-L on COMMONGEN , with a varying amount of training data for supervision (x-axis).",
"Table 2 presents experiments across various state-of-the-art pre-trained language models.",
"In this experiment, all models are supervised on the COMMONGEN training dataset.",
"Under each column, shows the performance using the conventional beam search ( ) compared to the enhanced performance using NEUROLOGICDECODING ( ).",
"As before, NEUROLOGIC always improves the performance across all models and all metrics with no exception both in terms of constraint satisfaction as well as generation quality.",
"The improvement is especially substantial when the generation quality is relatively low due to smaller model capability or less efficient model architecture or pre-training.",
"In this experiment, we test how well NEUROLOGIC works with unsupervised pre-trained language models, with and without domain adaptation.",
"Table 3 presents experimental results of zero-shot (i.e., unsupervised) constrained generation.",
"With unconstrained decoding, we have zero controllability over the unsupervised language models, as they ignore the problem input and generate irrelevant text.",
"With NEUROLOGIC , on the other hand, we can dramatically improve the performance on all metrics.",
"Fig 6 demonstrates some generated examples.",
"In zero-shot setting without any finetuning, the language style of pre-trained LMs might differ from that of COMMONGEN .",
"To further improve the performance, we conduct language domain adaption by fine-tuning the language models on the training-set COMMONGEN language ignoring all concept sets.",
"We observe that after domain adaption, NEUROLOGIC in zero-shot setting outperforms unconstrained generation with supervised finetuned LMs, which suggests that inference-time algorithms can provide a more compute-efficient avenue to draw better from neural models.",
"The amount of training data Figure 4 compares the performance (y-axis) of supervised GPT-2 with NEUROLOGIC ( orange line ) and with conventional beam search ( blue line ) as a function of the increasing amount of training data (x-axis).",
"Notably, even after being supervised on 100% of the training data, the supervised GPT-2 does not successfully learn the COMMONGEN constraints (Coverage') and is even outperformed by the zero-shot GPT-2 (i.e., using 0% training data) with NEUROLOGIC .",
"The model size Figure 5 compares the performance (y-axis) of GPT-2 with varying model sizes (x-axis).",
"Regardless of the model size, NEUROLOGIC ( purple line and black line ) boosts performance considerably over conventional beam search ( blue line ).",
"More over, if using NEUROLOGIC , the performance of unsupervised models ( black line ) becomes comparable to that of supervised models ( purple line ).",
"Remarkably, unsupervised models with NEUROLOGIC based on smaller networks ( black line ) often outperform supervised models with conventional beam search based on considerably larger networks ( blue line ).",
"We next study cooking recipe generation, a paragraph-level generation task.",
"Given a dish name and a list of ingredients, the task is to generate cooking instructions for the given recipe.",
"Problem Formulation The input is the recipe title, an unordered set of ingredients E = { e 1 , ..., e | E | } where e i can be a singleor multiword ingredient phrase (e.g., onions', black pep-per').",
"Let G denote the set of all ingredients.",
"The expected output is a paragraph y Y that describes multi-step cooking instructions.",
"To apply NEUROLOGICDECODING , we constrain output y to contain all given ingredients e i in E , and no other ingredients, i.e. no ingredients in G \\ E .",
"Ingredients can be referred to with generic terms (e.g., vegetables' may refer to onions', or carrots') and we denote the generic name for ingredient e i as e Ti .",
"Formally, the constraint is (cid:16) e i E, D ( e i , y ) D ( e Ti , y ) (cid:17) (cid:16) e i G \\ E, D ( e i , y ) (cid:17) Dataset, Approach and Baseline We use Recipe1M+, a large-scale, structured corpus of over one million cooking recipes.",
"On average each recipe has 118 words and 9 ingredients.",
"RecipeGPT (Lee et al., 2020) is a GPT-2 model fine-tuned on Recipe1M+, for generating recipes.",
"Its default decoding algorithms are beam search and sampling, which serve as the baselines for evaluating our method.",
"In addition, we compare against previously proposed constrained decoding methods with RecipeGPT.",
"Besides common evaluation metrics for generation task, we introduce explicit measures of given-ingredient coverage and usage of extra/hallucinated ingredients.",
"Result Table 5 presents the experimental results.",
"We can see that NEUROLOGIC outperforms all baselines in all metrics.",
"The delta is quite remarkable on coverage of given ingredients and usage of extra ingredients.",
"With NEUROLOGIC , we are able Supervised?",
"to cover almost all ingredients in generated instructions and guarantee not to use any other ingredients, which leads to more accurately controlled generation.",
"By plugging NEUROLOGIC into existing generation system, we can get immediate boosts in controllability and generation quality with no extra computational cost.",
"In dialogue response generation for hotel and restaurant information systems (Wen et al., 2016), we generate a natural language response given a query type (e.g., informing or querying) and a list of facts to convey (e.g., a hotel's name and address).",
"Problem Formulation The input is a query type, an unordered set of facts F = { f 1 , ..., f | F | } , where each f i contains attribute and value (i.e. accepts_credit_cards=yes, name=red victorian bed breakfast).",
"The expected output is a dialogue responses y Y containing given information.",
"The constraint here is that all given facts f i must be included in responses y in proper natural language form f Ni .",
"We use a very simple template to turn f i to natural language form f Ni .",
"(i.e. the natural language form for accepts_credit_cards=no is doesn't accept credit cards).",
"Formally, f i F, D ( f Ni , y ) Dataset, Approach and Baseline We use the hotel and restaurant dialogue system corpus and the same train-dev-test split from (Wen et al., 2016).",
"There are 8 query types and 12 attribute types.",
"The standard paradigm for dialogue generation is to consider it as a conditional sentence generation task and finetune a seq2seq model.",
"While this pipeline works effectively with existing data, once we have user queries with new query types or new attribute types, the seq2seq model would not be able to generate plausible responses.",
"The Model Accuracy (%; ) S (F1; ) E n -D e Google Translate 59.4 12.5 Microsoft Translator 74.1 30.2 Junczys-Dowmunt et al. 60.5 91.0 13.3 4.3 Junczys-Dowmunt et",
"situation can happen frequently with a dialogue generation system in application.",
"Thus, we are interested in zero-shot dialogue generation.",
"We give a hand-crafted initial prompt to a pre-trained LM based on the query type and apply NEUROLOGICDECODING to force given facts to include in generation.",
"The pre-trained LM we use here is GPT-2 (Radford et al., 2019).",
"The baseline we compare against is seq2seq finetuned LMs with vanilla beam search, including GPT-2 (Radford et al., 2019), BART (Lewis et al., 2020) and T5 (Raffel et al., 2019).",
"We also compare with previous SOTA (Kiddon et al., 2016) on dialogue response generation.",
"Result Table 6 presents the experimental results.",
"We can see that zero-shot generation with NEUROLOGIC outperforms or matches supervised baselines.",
"This suggests that plugging NEUROLOGICDECODING into pretrained LMs can lead to a powerful dialogue generation system, we do not actually need massive finetuning with extra computational cost to do that.",
"Problem Formulation We adopt the task setup and dataset of Stanovsky et al. (2019).",
"The input x is an English sentence describing a scenario with human entities N = { n 1 , . . . , n | N | } who are iden-tified by roles.",
"The desired output is a translation y which uses the correct gender inflections in the target language (here, German or French).",
"through coreference resolution, linking each entity with their gendered pronoun.",
"3 We then constrain the correctly-gendered human entities to appear in output y .",
"For a human entity n i , let n Fi denote its female inflection in the target language, and n Mi denotes its male inflection.",
"Let F denotes the set of human entities associated with female characters, and M denotes the set of entities associated with male.",
"Formally, the constraint is (cid:16) n i F, D ( n Fi , y ) D ( n Mi , y ) (cid:17) (cid:16) n i M, D ( n Mi , y ) D ( n Fi , y ) (cid:17) Dataset We use Stanovsky et al. (2019)'s dataset, which is built over the English-only coreference gender-bias studies: Winogender (Rudinger et al., 2018) and Wino-Bias (Zhao et al., 2018).",
"Result Our results are shown in Table 7.",
"When provided gender markers given by a coreference model, NEUROLOGIC increases the accuracy of handling gender correctly by 30.5 percentage for German, and 28.0 percentage for French.",
"This even outperforms commercial translation systems the best result, over any language or system, is Microsoft Translator for German with 74.1% accuracy, whereas NEUROLOGIC enables the baseline model to get 91% accuracy.",
"The performance increases again by an additional 4% (German) and 8.9% (French) when ground-truth gender markers are used during constrained decoding.",
"Last, the diagnostic results also show that NEUROLOGIC is particularly effective at reducing (over)reliance on stereotypical gender roles, with a significant decrease in performance difference S between stereotypical and non-stereotypical gender roles.",
"These results suggest that NEUROLOGICDECODING is a plug-and-play approach for reducing gender bias in existing translation systems.",
"We propose NEUROLOGICDECODING , an efficient and general method for generating with arbitrary predicate logic constraints.",
"We demonstrate its intuitive application to 4 different tasks as an extension to existing models, showing broad and consistent improvement to decoding quality.",
"3 We could use any off-the-shelf coreference resolution model for this.",
"However, since the English examples in Stanovsky et al. (2019) follow the Winograd schemas format, we use a RoBERTa model finetuned on Winograd Schema Challenge for this, with 78.4% accuracy.",
"We thank the anonymous reviewers and meta-reviewers for their helpful feedback.",
"This research was supported in part by DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031) and the Allen Institute for AI (AI2)."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"objective",
"method",
"method",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"method",
"other",
"other"
] |
[
"In this paper, we present a new data set, named FreebaseQA , for open-domain factoid question answering (QA) tasks over structured knowledge bases, like Freebase.",
"The data set is generated by matching trivia-type question-answer pairs with subject-predicate-object triples in Freebase.",
"For each collected question-answer pair, we first tag all entities in each question and search for relevant predicates that bridge a tagged entity with the answer in Freebase.",
"Finally, human annotation is used to remove false positives in these matched triples.",
"Using this method, we are able to efficiently generate over 54K matches from about 28K unique questions with minimal cost.",
"Our analysis shows that this data set is suitable for model training in factoid QA tasks since FreebaseQA provides more linguistically sophisticated questions than other existing data sets.",
"The data set is available for free download at http://github.com/ infinitecold/FreebaseQA .",
"Within the field of natural language processing (NLP), there has been an increase in developments towards various real-world applications, such as factoid question answering (QA): the process of obtaining the answer(s) to a factual question similar to trivia game settings.",
"For this task to be successfully completed, there are several steps that need to occur.",
"Notably, we need to interpret and parse the question, determine the domain of relevance, eliminate ambiguities if existent, and pinpoint the exact answer to the question asked.",
"Fortunately, this task has been simplified with the emergence of large knowledge graphs, including Freebase (Bollacker et al., 2008), from where we Equal contribution.",
"can retrieve information.",
"Knowledge graphs are colossal networks of data that describe concepts, entities, and their relations.",
"In fact, Freebase is the largest publicly-available knowledge graph, consisting of 4 million nodes and approximately 3 billion edges (Google, 2017).",
"Each node represents an entity existing in the physical world, such as a person, a location, or an organization.",
"Each edge represents a relation between two entities, in a directed manner from a subject node to an object node.",
"In Freebase, these edges are referred to as predicates , and a collection of a subject-predicate-object is referred to as a triple .",
"An example triple in Freebase is the subject Clarissa , predicate book.written work.author and object Samuel Richardson , explaining that the book Clarissa is written by author Samuel Richardson.",
"Specifically, we take advantage of these relations between entities, which describe facts, to help with factoid question answering.",
"We believe open-domain factoid QA over structured knowledge graphs like Freebase is a very interesting NLP task since it opens up many interesting real-world applications, such as natural language based query and search.",
"Finally, once questions are formulated using a variety of rich and sophisticated representations in natural languages, such factoid QA tasks may serve as an excellent testbed to study many natural language understanding problems, e.g., examine the recently emerging research efforts to combine neural models with the traditional symbolic processing methods (Liang et al., 2017; Mou et al., 2017).",
"On the other hand, machine learning approaches for NLP are data hungry since they require large amounts of real-world data to train the models for the best possible performance.",
"Existing data sets for the factoid QA task over structured knowledge bases are either too small in scale to train neural networks effectively, or contain questions that are too simple in linguistic structure to amply cover real-world scenarios.",
"In this paper, we introduce a new data set for open-domain QA over Freebase, called FreebaseQA , which is created by matching trivia-type question-answer pairs with Freebase triples that reflect the semantic meaning of the questions.",
"FreebaseQA contains over 54K matches from about 28K unique questions that can be used to train machine learning (ML) models and help the development of factoid QA systems for more realistic applications.",
"Particularly, these matches may be used to train ML models to align natural language questions with Freebase predicates to search for the correct answers in Freebase.",
"Our analysis shows that FreebaseQA provides an advantage over all pre-existing data sets with similar objectives, which are either too small or only contain questions that are too simple in linguistic structure.",
"These results will be explained in detail in Section 4.",
"Factoid QA data sets involving question-answer pairs as well as their corresponding Freebase matches have been created in the past.",
"In (Be-rant et al., 2013), factoid QA over knowledge graphs are formulated as semantic parsing problems, where each natural language question is first converted into a logic form to retrieve the answer with traditional symbolic approaches.",
"In (Berant et al., 2013), a small-scale data set of several thousands of question-answer pairs, called WebQuestions, is created by human annotators.",
"In (Yih et al., 2016), the WebQuestions set is further refined by providing human-annotated semantic parses for some questions that are answerable using Freebase, which is called WebQuestionsSP (WebQSP).",
"Recently, deep learning approaches have become popular in the field of NLP.",
"Neural networks require far more training data than a small data set of several thousands of samples.",
"In (Bordes et al., 2015), a much larger QA data set of about 100K question/answer pairs, called SimpleQuestions, is created.",
"In this work, some randomly chosen Freebase triples are shown to human annotators.",
"For each given triple, an annotator is asked to manually compose a question to reflect the relation in the triple.",
"The issues with SimpleQuestions lie in that most constructed questions are quite simple in linguistic structure and many questions even directly use the keywords in the Freebase predicates since human annotators may be greatly limited in composition when a particular triple is shown.",
"According to (Petrochuk and Zettlemoyer, 2018), SimpleQuestions is nearly solved with only standard neural network methods if its linguistic ambiguity is taken into account.",
"In (Vlad Serban et al., 2016), a large QA data set is automatically generated by neural networks but it obviously lacks rich linguistic variations.",
"Additionally, many similar factoid QA data sets are also released for other non-English languages, e.g. WebQA in (Li et al., 2016).",
"Meanwhile, another direction of data collection efforts involve QA in various reading comprehension tasks, e.g. SQuAD in (Rajpurkar et al., 2016), MS-MARCO in (Nguyen et al., 2016), TriviaQA in (Joshi et al., 2017).",
"However, we believe question answering over structured knowledge graphs remains a viable NLP task for the promising research direction to combine neural computing methods with the traditional symbolic processing approaches.",
"In this section, we outline the construction procedure of the FreebaseQA data set, which consists of about 54K matches in the form of two examples shown in Table",
"1. 3.1 Preparation of Question-Answer Pairs In FreebaseQA, we have not generated any new question-answer pairs but we have instead collected pre-composed trivia-type factoid questions from a number of sources.",
"Unlike SimpleQuestions, these questions are independently composed for human contestants in various trivia-like competitions.",
"As a result, these questions show much richer linguistic variation and complexity than almost all existing KB-QA data sets.",
"In particular, we use the TriviaQA (Joshi et al., 2017) data set as the primary source of our QA pairs, while also including questions scraped from the trivia websites, KnowQuiz ( http://www. knowquiz.com ), QuizBalls ( http://www. quizballs.com ), and QuizZone ( https:// www.quiz-zone.co.uk ).",
"We remove duplicate entries and the remaining pairs are consolidated into a single source.",
"Each question is then run through two named entity recognition (NER) systems: TAGME (Fer-ragina and Scaiella, 2010) and FOFE NER (Xu et al., 2017).",
"By combining the results of both Components Example 1 Example 2 Question [Answer] Which 18th century author What is the correct name of the wrote Clarissa (or The character voiced by Angela History of a Young Lady), Lansbury in Beauty and The Beast?",
"systems, we create a list of possible subjects for each question.",
"We use confidence thresholds of 0.2 and 0.9 for the respective systems to ensure that an adequate amount of entities are produced while avoiding the production of irrelevant results.",
"The matching starts by searching for all Freebase nodes with a name or alias matching each subject name.",
"For each matched Freebase node (called a subject node), we search through all object nodes that are directly linked with the subject node.",
"Then for each object node, we search through all of its names and aliases to see if one matches the answer to the question.",
"Once a match is found, the subject node's Freebase ID, the predicate name, and the object node's Freebase ID are saved as a triple representing the question-answer pair.",
"Note that one question-answer pair may generate several matched triples when multiple related predicates are found since each question may contain multiple entities and each subject node may link to an object node through different predicates.",
"However, this procedure becomes inefficient since there is an enormous number of object nodes to process for some popular subject nodes, such as United States (m.09c7w0) , leading to a tremendous number of Freebase queries.",
"Since we know the end point of the search, the answer to the question, this procedure is optimized by also starting from the answer and searching for all object nodes with a name or alias matching it.",
"Then, the search concludes when the same object node is found from both starting points of the search.",
"By using this two-way search method, we have accelerated the Freebase matching algorithm more than ten-fold.",
"Freebase has been constructed with some special nodes called mediator nodes.",
"A mediator is an intermediate node that connects a subject node with an object node.",
"Since it itself is also considered a node, there are predicates from the subject to the mediator and from the mediator to the object.",
"These mediator nodes are special as they do not have a name or alias associated with it, and only occurring in Freebase when there are multiple subjects and objects that are related through the mediator.",
"When constructing the FreebaseQA data set, mediators are also accounted for.",
"If the above search procedure reaches a mediator, a 2-hop matching strategy is conducted to search all nodes linked to this mediator.",
"This captures a secondary predicate that bridges the subject to the answer through a mediator node.",
"An example involving a mediator is described as Example 2 in Table",
"1. 3.4 Human Annotation Since the matches found through the previously-explained algorithm are not guaranteed to be completely relevant to the question, human verification of the produced results is required to remove all possible false postitive matches.",
"A group of 10 native English speakers are hired to label all of the collected matches.",
"Each match is rated by the individuals as either Completely Relevant, Some-what Relevant, or Not Relevant.",
"The choice of rating is dependent on the relevancy of the predicate to the question.",
"If the predicate completely reflects the main idea asked by the question, the match is rated Completely Relevant.",
"If the predicate reflects part of the main idea of the question or is only somewhat related to it, the match is rated Partially Relevant.",
"Otherwise, the match is rated Not Relevant.",
"Compared with other Figure 1: Human annotators use this website interface to label all automatically-generated matches, rating either Completely Relevant, Partially Relevant, or Not Relevant.",
"QA data collection tasks, human involvement in FreebaseQA is relatively light since each person only needs to make a one-out-of-three choice instead of composing a question or sentence from scratch.",
"Therefore, using this method, we may significantly reduce the cost of QA data collection.",
"As an illustration, the user interface for this data annotation procedure is shown in Figure",
"1. In order to facilitate model training, the matches rated Completely Relevant are randomly chosen to populate the training, evaluation, and development sets of FreebaseQA.",
"These sets are separated so that if there are multiple matches for a single question-answer pair, all of those matches will exist in only one of the sets.",
"Moreover, the matches rated Partially Relevant are provided as a separate set, which may be useful for model training as well.",
"The FreebaseQA data set is available for public use at http://github.com/ infinitecold/FreebaseQA .",
"We report the preliminary results of our statistical analysis on the collected FreebaseQA data set.",
"The statistics of the originally collected question-answer pairs and the corresponding Freebase matches are summarized in Table",
"2. We see that with the exception of KnowQuiz, the number of matches in Freebase roughly equate the number of questions in each source.",
"Among all the generated matches, 54,611 matches in total are kept as true positives by human annotators.",
"The size of the FreebaseQA data set is compared to two similar QA data sets, WebQuestionsSP (WebQSP) (Yih et al., 2016) and SimpleQuestions (Bordes et al., 2015), in Table",
"3. Data Set train dev eval Total FreebaseQA 20,358 3,994 3,996 28,348 SimpleQuestions 75,910 10,845 21,687 108,442 WebQSP 3,098 -1,639 4,737 Table 3: Total numbers of unique questions found in the subsets of each data set.",
"We see that FreebaseQA has a significantly larger size than WebQSP in number of unique questions, but it is about one quarter of SimpleQuestions in number of unique questions.",
"Among these matches, FreebaseQA contains 28,348 unique questions in total, with 20,358, 3,994 and 3,996 in the train, dev and eval sets respectively.",
"However, another important factor to consider Figure 2: A histogram showing the spread of the length of the questions in each data set.",
"is the linguistic sophistication of the data.",
"The sophistication of the linguistic structure of the questions in the FreebaseQA data set is compared to other similar data sets based on the average length, in number of words, of the questions.",
"The histogram of question lengths of three data sets is shown in Figure",
"2. From the histogram, we see that the length of the questions in FreebaseQA extend much longer than the questions in SimpleQuestions or WebQSP (Yih et al., 2016).",
"In fact, SimpleQuestions has an average length of 7.65 words per question and WebQSP has an average length of 6.62 words per question, while FreebaseQA has an average length of 13.35 words per question: approximately double the length of either data set.",
"Finally, we use FOFE-net (Zhang et al., 2015; Xu et al., 2017) to build a baseline KBQA system on FreebaseQA, which consists of subject detection, entity linking and relation detection in the pipeline.",
"Our FOFE-net models are first compared with the popular hierarchical residual BiLSTM in (Yu et al., 2017) on two public data sets, such as SimpleQuestions and WebQSP.",
"See (Wu et al., 2019) for more details on experimental settings and results.",
"The comparison results are listed in Table 4.",
"As shown in Table 4, our baseline has achieved strong performance on the two public data sets but its final question answering accuracy has dropped significantly down to 37.0% on FreebaseQA.",
"Obviously, FreebaseQA is a much more challenging KBQA task than both SimpleQuestions and We-Data Set BiLSTM FOFE-net (Yu et al., 2017) (this work) SimpleQuestions 77.0% 77.3% WebQSP 63.0% 67.6% FreebaseQA -37.0% Table 4: Comparison of end-to-end QA accuracies on several KBQA data sets.",
"bQSP due to the fact that the questions in FreebaseQA are more complex in linguistic structure.",
"Therefore, FreebaseQA may serve as an excellent testbed for more advanced KBQA techniques.",
"To facilitate the evaluation of the end-to-end question-answering pipeline on FreebaseQA, we have extracted a subset of Freebase, which contains all nodes and their corresponding predicates matching any entities in the FreebaseQA data set.",
"This Freebase subset, also available at http://github.com/infinitecold/ FreebaseQA , may be used to conduct end-to-end QA experiments to compare with our performance results in Table 4.",
"This paper presents a new data set, FreebaseQA , for open-domain factoid QA over structured knowledge bases.",
"FreebaseQA has a size of over 54K matches, significantly larger than WebQSP and linguistically more sophisticated than SimpleQuestions.",
"Our baseline QA results have also shown that FreebaseQA is a much more diffi-cult KBQA task than either WebQSP or SimpleQuestions.",
"Therefore, FreebaseQA may be an invaluable asset to the investigation of more advanced machine learning methods for factoid KBQA problems.",
"Furthermore, the use of this data set is not only limited to factoid question answering, but several other applications can also be approached with this data set, including reading comprehension, natural language-based search, and the quantification of natural language understanding.",
"This work is partially supported by a research donation from iFLYTEK Co., Ltd., Hefei, China, and a discovery grant from Natural Sciences and Engineering Research Council (NSERC) of Canada."
] |
[
"objective",
"abstain",
"objective",
"abstain",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"other"
] |
[
"Neural machine translation (NMT) encodes the source sentence in a universal way to generate the target sentence word-by-word.",
"However, NMT does not consider the importance of word in the sentence meaning, for example, some words (i.e., content words) express more important meaning than others (i.e., function words).",
"To address this limitation, we first utilize word frequency information to distinguish between content and function words in a sentence, and then design a content word-aware NMT to improve translation performance.",
"Empirical results on the WMT14 English-to-German, WMT14 English-to-French, and WMT17 Chinese-to-English translation tasks show that the proposed methods can significantly improve the performance of Transformer-based NMT.",
"Neural machine translation ( NMT ) models (Sutskever et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017) often utilize the global neural networks to encode all words for learning the sentence representation and the context vector, and computes the accuracy of each generated target word in a universal manner.",
"Meanwhile, each generated target word makes the same contribution to the optimization of the NMT model, regardless of its importance.",
"Actually, there lacks a mechanism to guarantee that NMT captures the information related to word importance when predicting translations.",
"Intuitively, content words express more important meanings than function words, which indicates their comparative significance.",
"To evaluate this, we randomly masked content or function words with UNK in a source sentence.",
"Figure 1 shows that the BLEU scores of the test set decreased much Corresponding author 0 1 2 3 4 5 6 7 8 9 10 10 20 30 Number BLEU Transformer (base) -Function Words -Content Words Figure 1: Number denotes the number of content or function words that were randomly masked in each sentence of the WMT14 English-to-German translation task.",
"more substantially when parts of content words were randomly replaced with UNK on the WMT14 English-to-German task, which is in line with the findings in He et al. (2019)'s work.",
"To address this limitation, we propose a content word-aware NMT model that exploits the results of translation using a sequence of content words learned by a simple content word recognition method.",
"Inspired by the works of (Setiawan et al., 2007, 2009; Zhang and Zhao, 2013), we first divide words in a sentence into content words and other function words depending on term frequency-inverse document frequency ( TF-IDF ) constraints.",
"Two methods are designed to utilize the sequence of content word on the source and target sides: 1) We encode the content words of the source sentence as a new source representation, and learn an additional content word context vector based on it to improve translation performance; 2) A specific loss for content words of the target sentence is introduced to compensate for the original training objection, to obtain a content word-aware NMT model.",
"Empirical results on the WMT14 English-to-German, WMT14 English-to-French, and WMT17 Chinese-to-English tasks show the effectiveness of the proposed method.",
"In Transformer-based NMT (Vaswani et al., 2017), the encoder is composed of a stack of L identical layers, each of which contains two sub-layers.",
"The first sub-layer is a self-attention module (ATT), and the second sub-layer is a position-wise fully connected feed-forward network (FNN).",
"A residual connection (He et al., 2016) is applied between the sub-layers, and layer normalization (LN) (Ba et al., 2016) is performed.",
"Formally, the l -th identical layer of this stack is as follows: H l = LN ( ATT le ( Q l 1 e , K l 1 e , V l 1 e ) + H l 1 ) H l = LN ( FFN le ( H l ) + H l ) .",
"{ Q l 1 e , K l 1 e , V l 1 e } are query, key, and value vectors that are transformed from the ( l -1)-th layer H l 1 .",
"For example, { Q 0 , K 0 , V 0 } are packed from the H 0 learned by the positional encoding mechanism (Gehring et al., 2017).",
"Similarly, the decoder is composed of a stack of L identical layers.",
"Compared with the stacked encoder, it contains an additional attention sublayer to compute alignment weights for the output of the encoder stack HL : S li = LN ( ATT ld ( Q l 1 i , K l 1 i , V l 1 i ) + S l 1 i ) , C li = LN ( ATT lc ( S li , K Le , V Le ) + S li ) , S li = LN ( FFN ld ( C li ) + C li ) , (2) where Q l 1 d , K l 1 d , and V l 1 d are query, key, and value vectors, respectively, that are transformed from the ( l -1)-th layer S l 1 in time-step i .",
"{ K Le , V Le } are transformed from the L -th layer of the encoder.",
"The top layer of the decoder S Li is used to generate the next target word y i by a linear, potentially multi-layered function: P ( y i | y <i , x ) exp ( W o tanh ( W w S Li ) , (3) where W o and W w are projection matrices.",
"To obtain the translation model, the training objection maximizes the conditional translation probabilities over the training data set { [ X , Y ] } : J ( ) = arg max { P ( Y | X ; ) } .",
"content word recognition method based on the TF-IDF (Chen et al., 2019; Zhang et al., 2020).",
"An input sentence of length J m is treated as a document D m , and the TF-IDF T I j for each word d j in D m is computed: T I j = k j,m J m log | M | 1 + | m : d j D m | , (5) where k j,m represents the number of occurrences of the j -th word in the input sentence d t ; | M | is the total number of sentences in the monolingual data; and | m : d j D m | is the number of sentences including word d j in the monolingual data.",
"We then select a fixed percentage N (30% in the experiment) of word with high TF-IDF scores in the sentence as content words.",
"Note that we focus on statistics related to word frequency here, instead of the linguistic criteria; this method of approximation eliminates the need for additional language-specific resources.",
"In this section, we propose two ways to make use of the information on content words, designing three content word-aware NMT models.",
"The proposed method of content word recognition is first added as an additional module to the encoder to learn the sequence of source content words X from the input source sentence.",
"X is mapped and fed into the shared encoder (Li et al., 2020) in",
"Eq.(1) to learn an additional source representation of content words HL .",
"An multi-head attention module is then introduced to the decoder to learn the context vector C li based content words at time-step i , and C li is used to enhance the output S li : S li = LN ( ATT ld ( Q l 1 i , K l 1 i , V l 1 i ) + S l 1 i ) , C li = LN ( ATT lc ( S l i , K Le , V Le ) + S l i ) , C li = LN ( ATT ly ( S li , K Le , V Le ) + S li ) , S li = LN ( FFN ld ( C li + C li ) + C li ) , (6) where K Le and V Le of the content words are transformed from the L -th layer of the encoder.",
"Finally, the top layer of the decoder S Li , which is enhanced by the contextual vector of the content words C ld , is used as input to the Eq.",
"(3) to compute the probabilities of the next target word y i at time-step i : P ( y i | y <i , x ) exp ( W o tanh ( W w S Li ) .",
"Note that both the original source representation HL and proposed content word based representation HL are learned by a shared encoder using our content word recognition module.",
"Like the source sentence, the target sentence also contains content words.",
"We thus first identify a sequence of content words b from the target reference translation y according to the proposed content word recognition method (see Section 3).",
"We then introduce an addition loss term as a measure of the content words, which encourages the translation model to attend to the translation of the content words.",
"Formally, the training objective is revised as: J ( ) = arg max { P ( y | x ; )+ P ( b | x ; ) } , (8) where is a hyper-parameter empirically set to 0.4 in this paper.",
"Note that the introduced content word-aware loss works without any new parameters and influences only the computation of loss during the training of the standard NMT model.",
"Based on the above two strategies, we design three NMT models: 1) SCWAContext : The source content words are used to learn an additional",
"context vector to improve the prediction of target word (see Figure",
"2(a)); 2) TCWALoss : The target content words are used to compute an additional loss to guide the training of the translation model (see Figure",
"2(b)); 3) BCWAContLoss : It combines SCWAContext and TCWALoss to capture the content words of both the source and the target sentence to further improve translation performance.",
"The proposed methods were evaluated on the WMT14 English-to-German (EN-DE), WMT14 English-to-French (EN-FR), and WMT17 Chinese-to-English (ZH-EN) tasks.",
"The EN-DE corpus consists of 4M sentence pairs, the ZH-EN corpus of 22M sentence pairs, and the EN-FR corpus of 36M sentence pairs.",
"We used the case-sensitive 4-gram BLEU score as evaluation metric.",
"The results of the newstest2014 test sets are reported for the EN-DE and EN-FR tasks, and the newstest2017 test set is reported for the ZH-EN task.",
"The byte pair encoding algorithm (Sennrich et al., 2016) was applied to encode all sentences to limit the size of the vocabulary to 40K.",
"The other configurations were identical to those in (Vaswani et al., 2017).",
"The poposed models were implemented by using Systems EN-DE ZH-EN EN-FR BLEU #Speed #Param BLEU #Param BLEU #Param Existing NMT systems Trans.base (Vaswani et al., 2017) 27.3 N/A 65.0M N/A N/A 38.1 N/A +Context-Aware SANs (Yang et al., 2019a) 28.26 N/A 106.9M 24.67 126.8M N/A N/A +Convolutional SANs (Yang et al., 2019b) 28.18 N/A 88.0M 24.80 N/A N/A N/A +BIARN (Hao et al., 2019) 28.21 N/A 97.4M 24.70 107.3M N/A N/A Trans.big (Vaswani et al., 2017) 28.4 N/A 213.0M N/A N/A 41.0 N/A +Context-Aware SANs (Yang et al., 2019a) 28.89 N/A 339.6M 24.56 379.4M N/A N/A +Convolutional SANs (Yang et al., 2019b) 28.74 N/A 339.6M 25.01 N/A N/A N/A +BIARN (Hao et al., 2019) 28.98 N/A 333.5M 25.10 373.3M N/A N/A Our NMT systems Trans.base 27.48 13.2K 66.5M 24.28 74.7M 38.32 66.9M +SCWAContext 28.28+ 12.1K 72.8M 24.79+ 81.0M 39.41+ 73.2M +TCWALoss 27.94+ 14.3K 66.5M 24.65 74.7M 38.89+ 66.9M +BCWAContLoss 28.51+ 13.1K 72.8M 24.94+ 81.0M 39.56+ 73.2M Trans.big 28.45 11.2K 221.1M 24.55 237.5M 41.21 222.9M +BCWAContLoss 29.14+ 10.1K 246.3M 25.12+ 262.7M 42.57+ 247.1M Table 1: Results of the EN-DE, EN-FR, and ZH-EN tasks.",
"Table 1 shows results of the proposed method over our implemented Trans.base/big models which have similar BLEU scores with the original Transformer for the EN-DE and EN-FR tasks.",
"We then make the following observations: 1) All proposed three word-aware NMT models outperformed the baseline Transformer model.",
"This indicates that using information on the importance of words to enhance the translation of content words is helpful for the NMT model.",
"2) +SCWAContext performed better than +TCWALoss.",
"The NMT model was more sensitive to information on source content words than target content words.",
"+BCWAContLoss outperformed +SCWAContext and +TCWALoss, especially is superior to the existing +Context-Aware, +CSANs, and +BIARN.",
"This suggests that the sequences of content words of both source and the target can be used together to further improve translation performance.",
"3) The parameters of the proposed models only slightly increased.",
"In addition, Trans.base+BCWAContLoss delivered an comparable performance to Trans.big, which contained many more parameters than Trans.base+BCWAContLoss.",
"This indicates that the improvement in performance did not occur owing to a greater number of parameters.",
"The training speeds of our models were slightly lower than those of Trans.base.",
"Figure 3 shows the results of the Trans.base+SCWACont based different percentage N of content words in a sentence on the EN-DE and ZH-EN test sets.",
"On both test sets, the highest BLEU scores were obtained with N = 30%.",
"With increasing values of N , the trend of their BLEU scores were similar on both test sets.",
"The percent of N in the content word recognition method Figure 3: Results of Trans.base+SCWAContext model on the EN-DE and ZH-EN test set.",
"The dashed line denotes the Trans.base model.",
"We apply the proposed content word recognition method to the generated translation and the reference translation of test set, and thus extract two short sequences of including 30% of content words.",
"We compute the accuracy of unigram content word between the extracted two short sequences, as shown in Table 2.",
"The proposed methods outperformed the Trans.base in translating the content words, which is in line with the BLEU.",
"This means that the proposed NMT model improved the generation of target content words.",
"Figure 4 shows the results of +TCWALoss model on the EN-DE and ZH-EN test sets with different hyper-parameter .",
"When increased from 0 to 0.4, the BLEU scores of +TCWALoss model improved by +0.8 points over Trans.base model.",
"This means that the proposed content word-aware loss is useful for training NMT model.",
"Subsequently, larger values of reduced the BLEU scores, suggesting that excessive biased content word translation may be weak at translating function words.",
"Therefore, we set the hyper-parameter to 0.4 to control the loss of target content words in our experiments (Table 1).",
"Instead of directly identify content words, we identify the function words as the T most frequent words in the corpus.",
"Furthermore, after we remove the function words in a sentence x = { x 1 , , x J } , all the remaining words will be treated as a sequence (maintain the original order) of content words X according to the (Setiawan et al., 2007, 2009; Zhang and Zhao, 2013)'s work.",
"Figure 5 shows the results of Trans.base+SCWAContLoss on the EN-DE and ZH-EN test sets with different number of the top T function words.",
"Trans.base+SCWAContLoss obtained the highest BLEU scores on the both test sets over the Trans.base on modeling T = 256.",
"The number of function words TBLEUEN-DE Figure 5: BLEU scores of Trans.base+SCWAContLoss on the EN-DE and ZH-EN test sets with different number of function words T .",
"This paper explored the importance of word for NMT.",
"We divided words of one sentence into content and function words through word frequency-related information.",
"Our proposed NMT models, that are easy to implement and not much time and space cost, are introduced to the training and inference, and can improve the representation and translation of content words.",
"In future work, we will investigate the impact of fine-grained word categories (such as nouns, verbs, and adjectives) on the translation performance and design specific methods according to these categories.",
"We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions.",
"Masao Utiyama is partly supported by JSPS KAKENHI Grant Number 19H05660.",
"Rui Wang was partially supported by JSPS grant-in-aid for early-career scientists (19K20354): Unsupervised Neural Machine Translation in Universal Scenarios and NICT tenure-track researcher startup fund Toward Intelligent Machine Translation."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other"
] |
[
"In a corpus of data, outliers are either errors : mistakes in the data that are counterproductive, or are unique : informative samples that improve model robustness.",
"Identifying outliers can lead to better datasets by (1) removing noise in datasets and (2) guiding collection of additional data to fill gaps.",
"However, the problem of detecting both outlier types has received relatively little attention in NLP, particularly for dialog systems.",
"We introduce a simple and effective technique for detecting both erroneous and unique samples in a corpus of short texts using neural sentence embeddings combined with distance-based outlier detection.",
"We also present a novel data collection pipeline built atop our detection technique to automatically and iteratively mine unique data samples while discarding erroneous samples.",
"Experiments show that our outlier detection technique is effective at finding errors while our data collection pipeline yields highly diverse corpora that in turn produce more robust intent classification and slot-filling models.",
"High-quality annotated data is one of the fundamental drivers of progress in Natural Language Processing (e.g. Marcus et al., 1993; Koehn, 2005).",
"In order to be effective at producing an accurate and robust model, a dataset needs to be correct while also diverse enough to cover the full range of ways in which the phenomena it targets occur.",
"Substantial research effort has considered dataset correctness (Eskin, 2000; Dickinson and Meurers, 2003; Rehbein and Ruppenhofer, 2017), particularly for crowdsourcing (Snow et al., 2008; Jiang et al., 2017), but addressing diversity in data has received less attention, with the exception of using data from diverse domains (Hovy et al., 2006).",
"Outlier detection, the task of finding examples in a dataset that are atypical, provides a means of approaching the questions of correctness and diversity, but has mainly been studied at the document level (Guthrie et al., 2008; Zhuang et al., 2017), whereas texts in dialog systems are often no more than a few sentences in length.",
"We propose a novel approach that uses sentence embeddings to detect outliers in a corpus of short texts.",
"We rank samples based on their distance from the mean embedding of the corpus and consider samples farthest from the mean outliers.",
"Outliers come in two varieties: (1) errors , sentences that have been mislabeled whose inclusion in the dataset would be detrimental to model performance, and (2) unique samples, sentences that differ in structure or content from most in the data and whose inclusion would be helpful for model robustness.",
"Building upon this approach, we propose a novel crowdsourcing pipeline that distinguishes errors from unique samples and uses the unique samples to guide workers to give more diverse samples.",
"Experimentally, we find that our outlier detection technique leads to efficient detection of both artificial and real errors in our datasets.",
"We also use the proposed crowdsourcing pipeline to collect new datasets and build models for the dialog system tasks of intent classification and slot-filling.",
"We find that the proposed pipeline produces more diverse data, which in turn results in models that are more robust.",
"Outlier detection (Rousseeuw and Leroy, 1987), also called outlier analysis (Aggarwal, 2015) or anomaly detection (Chandola et al., 2009), is the task of identifying examples in a dataset that differ",
"differ substantially from the rest of the data.",
"For almost two decades, a body of work in NLP has investigated applying these ideas to data in order to identify annotation errors (Abney et al., 1999).",
"Approaches have included the use of scores from trained models for POS tagging (Ab-ney et al., 1999; Eskin, 2000; van Halteren, 2000; Dligach and Palmer, 2011; Fukumoto and Suzuki, 2004), count-based methods that compare examples from across the corpus (Nakagawa and Matsumoto, 2002; Hollenstein et al., 2016), characterizing data based on feature vectors projected down into a low-dimensional space (Guthrie et al., 2008), and tracking the difficulty of learning each example during training (Amiri et al., 2018).",
"One particularly effective approach has been to find n -grams that match but have different labels, as shown for annotations including POS tags (Dick-inson and Meurers, 2003), syntactic parses (Dick-inson and Meurers, 2005; Dickinson, 2010; Dickinson and Smith, 2011), and predicate-argument relations (Dickinson and Lee, 2008).",
"Our work instead uses continuous representations of text derived from neural networks.",
"While finding errors is an extremely useful application of outlier detection, also of interest are examples that are correct even though they are outliers, as these can be the most interesting and informative examples in a dataset.",
"We term these examples unique .",
"The problems of detecting and leveraging the unique examples in a dataset has received less attention, and the work that does exist focuses on identifying complete documents or segments of documents that are outliers out of a broader set of documents: Guthrie et al. (2007) used manually defined feature vectors to identify segments of documents with anomalous style, topic, or tone, and Kumaraswamy et al. (2015) and Zhuang et al. (2017) construct statistical models, identifying complete documents that are outliers within a set based on semantic variation.",
"Finally, a related but distinct topic is novelty detection (Soboroff and Harman, 2005; Lee, 2015; Ghosal et al., 2018), in which two sets of documents are provided, one that is assumed to be known, and one that may contain new content.",
"The task is to identify novel content in the second set.",
"While outlier detection methods are often applied to this problem, the inclusion of the known document set makes the task fundamentally different from the problem we consider in this work.",
"We build on prior work employing online crowd workers to create data by paraphrasing.",
"In particular, we refine the idea of iteratively asking for paraphrases, where each round prompts workers with sentences from the previous round, leading to more diverse data (Negri et al., 2012; Jiang et al., 2017; Kang et al., 2018).",
"We also apply the idea of a multi-stage process, in which a second set of workers check paraphrases to ensure they are correct (Buzek et al., 2010; Burrows et al., 2013; Coucke et al., 2018).",
"Most notably, by incorporating our outlier detection method, we are able to automate detecting detrimental data points while also prompting workers in subsequent rounds to paraphrase more unique examples.",
"We propose a new outlier detection approach using continuous representations of sentences.",
"Using that approach, we explored two applications: (1) identifying errors in crowdsourced data, and (2) guiding data collection in an iterative pipeline.",
"We detect outliers in a dataset as follows:",
"1. Generate a vector representation of each instance.",
"2. Average vectors to get a mean representation.",
"3. Calculate the distance of each instance from the mean.",
"4. Rank by distance in ascending order.",
"5. (Cut off the list, keeping only the top k % as outliers.) The final step is parenthesized as in practice we use a dynamic threshold approach, allowing the user to go through as much or as little of the list as they like.",
"The intuition behind this approach is that we expect our representations to capture the semantic structure of the space for each class.",
"An example that is far away from other examples in the set is therefore less semantically similar in some sense, making it an outlier.",
"Importantly, it may be an outlier for two distinct reasons: (1) it is not a valid instance of this class (i.e., an error ), or (2) it is an unusual example of the class (i.e., unique ).",
"two dialog system tasks: intent classification and slot-filling.",
"For classification, data for each possible intent label is considered separately, meaning we find outliers in the data by considering one intent class at a time.",
"For slot-filling, we group the data into classes based on combinations of slots.",
"This outlier detection method is rather simple as it relies only on a sentence embedding method, a distance metric, and a threshold k ; no other hyper-parameters are involved.",
"Moreover, the method requires no training.",
"We shall see in Section 4 that this method performs well compared to baseline methods, no matter what sentence embedding method is used.",
"We use Euclidean distance as our distance metric.",
"1 3.1.1 Sentence Representations Vector representation of sentences is an active area of research and we leverage the following approaches, each of which has been shown to have state of the art results in different use cases: Universal Sentence Encoder (USE; Cer et al., 2018) A Deep Averaging Network method, which averages word embeddings and passes the result through a feedforward network.",
"The USE is trained using a range of supervised and unsupervised tasks.",
"Smooth Inverse Frequency (SIF; Arora et al., 2017) A weighted average of word embeddings, with weights determined by word frequency within a corpus.",
"We consider word embeddings from GloVe (Pennington et al., 2014) and ELMo (Peters et al., 2018).",
"Average An unweighted average of word embeddings.",
"While simple, this approach has been shown to be effective for classification (Zhang and Wallace, 2017) and other downstream tasks (Zhu et al., 2018).",
"Again, we consider GloVe and ELMo word embeddings as inputs.",
"In addition to ranked lists produced by using these core sentence embeddings, we also investigated aggregating the ranked lists using the Borda count, a rank aggregation technique that has previously been used for combining web search results (Dwork et al., 2001).",
"The Borda count aggregates multiple ranked lists of the same set of items into a single ranked 1 We found similar results in experiments with a density-based metric, Local Outlier Factor (Breunig et al., 2000).",
"list.",
"First, points are assigned to each item in each list, with an item in position i in a ranked list of length N receiving N i points.",
"Next, points for items are summed across all of the lists.",
"Finally, the items are ranked by their total number of points, producing a final ranking.",
"Our proposed use of outlier detection to identify errors requires no further processing.",
"When used in practice, a user looks through the sorted list of examples, either stopping at a given fraction, or when errors become infrequent enough.",
"One core insight in this work is that outlier detection can be used for more than just finding errors.",
"The outliers that are not errors are likely to be the most interesting and informative examples in our dataset.",
"We propose to use these examples to guide data collection in an iterative process, with the goal of yielding more diverse data.",
"To demonstrate this idea, we developed a novel crowdsourcing pipeline for data collection.",
"Following prior work in crowdsourcing for dialog (Kang et al., 2018; Jiang et al., 2017), we ask crowd workers to write paraphrases of seed sentences with known intents and slot values.",
"This provides linguistic diversity in our data in a way that is easily explained to workers.",
"For instance, given the seed sentence What is my savings account balance? a worker might write How much money do I have in my savings account? .",
"Figure 1a shows a common crowdsourcing pipeline.",
"The task designer writes seed sentences that target an intent (for classification) or a slot (for slot-filling).",
"Crowd workers read the seed and write paraphrases.",
"These paraphrases are then passed to another set of workers who validate if they are in fact accurate paraphrases.",
"There are two major downsides to this standard pipeline.",
"First, the validation step increases the cost-per-example.",
"Second, the diversity of paraphrases depends on the given seed sentences (Jiang et al., 2017), creating a challenge for the task designer to think creatively.",
"We introduce a new pipeline, shown in Figure 1b that uses outlier detection to (1) reduce the number of sentences being checked, and (2) collect more diverse examples.",
"Our new approach initial designer-provided seed sentences paraphrase generation n raw samples curated samples paraphrase validation discard verified samples",
"uses outlier detection to select only a subset of sentences to be checked: namely, the ones ranked as most likely to be outliers.",
"This reduces effort by focusing on the sentences most likely to be incorrect.",
"To try to increase diversity, we also introduce a process with several rounds of data collection.",
"Outlier paraphrases collected in one round are used to seed the next round of data collection.",
"We could directly use the sentences labeled as correct in the validation step, but while these sentences are correct, they may have diverged from the desired semantics (e.g. diverged from the desired intent class).",
"To avoid confusion in the next round, we add a step in which workers are shown the most similar sentence from another intent (based on sentence embedding distance) and asked if the new seed is more similar to its intended intent or the alternative example.",
"Only seeds judged as closer to their intended intent are retained.",
"This iterative process is intended to collect more diverse data by priming workers to think about ways of phrasing the intent that are not well covered in the current data.",
"At the same time, we avoid the correctness issues Jiang et al. (2017) observed by incorporating the validation step.",
"First, we consider error detection, comparing various ranking methods in artificial and real data scenarios.",
"Second, we use our uniqueness-driven pipeline to collect data, measuring the impact on data diversity and model robustness.",
"All experiments were conducted on English language data.",
"We measure error detection effectiveness in two settings, one artificial and the other more realistic.",
"Artificial First, following prior work, we consider an artificial dataset in which we inject noise by mixing data from different intents (Guthrie et al., 2007, 2008; Zhuang et al., 2017; Amiri et al., 2018).",
"This provides an easy way to control the amount and type of anomalous data, but does lead to an easier task as the incorrect examples are generally more different than naturally collected errors would be.",
"The specific data we consider is a set of 20 intents from an in-production dialog system.",
"To generate outliers for a given intent class X i , we randomly sample p | X i | sentences from other intents (e.g. p = 0 . 04 , or 4%).",
"Real We collected a new set of sentences for ten intents.",
"Workers were given three seed sentences per intent and asked to write five paraphrases per seed.",
"2 Each seed was given to 15 crowd workers, leading to 2250 samples overall, after which 2 Crowd workers were paid 20 per HIT.",
"duplicates were discarded.",
"To identify errors, the authors independently checked each sentence and discussed any disagreements to determine a consensus label (either inlier or error ).",
"Examples of seed sentences, along with inliers and errors is shown in Table 2. 4.1.1 Evaluation Metrics Since our core outlier detection method produces a ranked list, we are interested in evaluating how effective it is at ranking errors near the top.",
"We use Mean Average Precision (MAP) as an overall measure of list quality.",
"where e is the position of an error in the list.",
"While this gives an overall qualitative measure for comparison, we are also interested in understanding the precisionrecall tradeoff when choosing a threshold k on the ranked lists.",
"We consider defining the cutoff as a percentage k of the list and measure the percentage of errors that are covered for each possible cutoff.",
"This measure is equivalent to Recall@ k , that is, Recall@ k = | errors above k | | errors | .",
"We average these values across intents to get an overall value for each cutoff percentage k .",
"For comparison, we consider four simple baselines: randomly ordering the samples (Random), sorting from shortest to longest (Short), sorting from longest to shortest (Long), and calculating distances in the vector space defined by a bag of words (BoW).",
"Table 1 presents MAP and Recall@ k for error detection in the two settings (Artificial and Real).",
"The neural methods outperform the baselines in both settings, demonstrating the effectiveness of our proposed approach.",
"However, the relative performance of the neural methods differs substantially between the two settings.",
"Specifically, (1) SIF performs better than an unweighted average on artificial data, but on real data we see the opposite trend, (2) combining rankings with Borda appears to help on the artificial data, but not on the real data, (3) ranking by length is surprisingly effective on the real data, and (4) results tend to be lower on the real data than the artificial (even at lower values of p ).",
"This last point suggests that the commonly used artificial setting does not perfectly capture the types of errors that occur in practice.",
"show",
"........average exchange rate from ten usd to cad (cid:58)(cid:58) last (cid:58)(cid:58)(cid:58)(cid:58) year Figure 3: Example annotated sentence for the slot-filling task.",
"The slot names are (in order of appearance)",
".......metric, amount, currency, and (cid:58)(cid:58)(cid:58)(cid:58) date.",
"performs particularly well vis-`a-vis other baselines on the real data, but not comparatively well on the artificial data.",
"This can be explained by observing that the length of the average error in the real data is roughly 6 tokens, while the average inlier length is 8 tokens.",
"Lengths of errors and inliers are roughly the same (roughly 8 tokens) in the artificial dataset, due to the outlier selection scheme.",
"While the values in Table 1 allow an overall comparison of the methods, they do not provide a clear qualitative sense of the distribution of errors in the lists.",
"Figure 2 shows the distribution for each method in the two settings.",
"The effective-same 1 unique 2 random 2 same 2 unique 3 random 3 same 3 Round 2 Round 1 Round 3 Figure 4: Data collection rounds.",
"ness of the neural methods, and USE in particular, is again clear.",
"In the real data, when considering just the first 20% of the list, USE covers over 85% of the errors on average.",
"One easy example was No checkbox, more? when the intent was to order more checks.",
"This is clearly an error, which would at the very least need to have checkbox replaced by checkbook .",
"In contrast, one hard example for USE was How much money do my banks when the intent was to request the user's balance.",
"Until the last word, this example looks like it will be a valid balance request.",
"These examples show that the system is qualitatively fitting our expectations for error detection.",
"The second set of experiments evaluates our proposed uniqueness-driven data collection pipeline.",
"We consider collecting data for two tasks used by dialog systems: intent classification and slot-filling.",
"In each case, we calculate intrinsic measures of data diversity and the robustness of models trained on the data.",
"Tasks We consider intent classification with 10 intents related to banking, and slot-filling for foreign exchange rate requests with four slots: amount , currency , date , and metric .",
"Figure 3 shows an example query with annotated slots.",
"Approaches As well as our proposed data collection pipeline ( unique ), we consider a variant where the next seed is chosen randomly ( random ), and one where the seeds are the same in every round ( same ).",
"The third case is equivalent to the standard pipeline from Figure 1a.",
"All three pipelines start from the same first round and then vary in the subsequent rounds, as depicted in Figure 4. Each pipeline collected data for three rounds.",
"The final dataset for each approach combines data collected from all three rounds.",
"In both tasks, we asked workers to rephrase each seed sentence 5 times and showed each seed sentence to 15 workers.",
"For classification there were 3 seed sentences per intent.",
"For slot-filling D ( a, b ) = 1 1 NN (cid:88) n =1 | n -grams a n -grams b | | n -grams a n -grams b | Diversity ( X ) = 1 | I | | I | (cid:88) i =1 1 | X i | 2 (cid:34) (cid:88) a X i (cid:88) b X i D ( a, b ) (cid:35) Coverage ( X, Y ) = 1 | I | | I | (cid:88) i =1 1 | Y i | (cid:88) b Y i max a X i (1 D ( a, b )) Figure 5: Metrics for diversity and coverage from Kang et al. (2018).",
"we defined 4 example scenarios, each corresponding to a specific combination of slots.",
"We used Borda USE+SG with k set to 10% for the outlier detection model.",
"We consider several different metrics to probe how effectively our proposed pipeline improves data quality.",
"In all cases, higher values are better.",
"Intrinsic We measure the diversity and coverage of each dataset using the metrics introduced in (Kang et al., 2018) and shown in Figure 5. Extrinsic The main reason to increase dataset diversity is to construct more robust models.",
"To directly evaluate that objective, we randomly divided the datasets collected by each pipeline into training and test sets (85-15 split).",
"Our intuition is that a robust model should perform fairly well across all test sets.",
"Training on a dataset that is not diverse will lead to a brittle model that only does well on data collected with the same seed sentences.",
"For intent classification, we measure accuracy of two models: an SVM (Cortes and Vapnik, 1995) using bag of words feature representation, and FastText (Joulin et al., 2017), a neural network that averages across sentence embeddings and passes the result through feedforward layers.",
"For slot-filling, we measure the F 1 score of a bi-directional LSTM with word vectors that are trained, but initialized with GloVe 300-dimensional embeddings.",
"For all models, we average results across 10 runs.",
"Classification Table 3 presents the number of examples and diversity of data collected in each round with each approach.",
"Diversity is consistently higher with seeds chosen using our proposed unique approach.",
"Dataset sizes vary because of the removal of duplicates.",
"The unique approach produces a larger final set as there is less duplication across rounds.",
"expected, the highest scores are on the diagonal training and testing on the same source data.",
"More importantly however, training on the unique data produces a model that is robust, performing well across all three test sets.",
"In contrast, training on the same or random data produces models that perform substantially worse on the unique test set.",
"This trend is also present in the coverage scores in the bottom section of the table.",
"Table 7 shows some of the seed sentences produced by the unique and random approaches.",
"These examples illustrate the trends in our metrics, with the seeds for the random approach often being very similar.",
"Meanwhile, the unique approach produces seeds with grammatical variation and the introduction of quite different expressions, such as ABA instead of routing number.",
"Slot-filling Table 5 shows the number of samples collected per round for each of the data collection pipelines and the diversity of the sets.",
"As in the classifier experiment, we observe that data produced by the unique pipeline is of higher diversity than the other two pipelines.",
"Table 6 displays F 1 -scores and coverage for each traintest combination.",
"Again, we see the same trends, with training on same or random leading to low results on the unique dataset, but not the reverse, and similarly for coverage, though the gaps are smaller than for classification.",
"Outliers are often the most interesting parts of our data, but outlier detection has received relatively little attention in NLP beyond its application to finding annotation errors.",
"This paper introduces the first neural outlier detection method for short text and demonstrates its effectiveness across multiple metrics in multiple experiments.",
"We also propose a way to integrate outlier detection into data collection, developing and evaluating a novel crowdsourcing pipeline.",
"This pipeline supports the creation of higher quality datasets to yield higher quality models by both reducing the number of errors and increasing the diversity of collected data.",
"While the experiments discussed herein are concerned with components of dialog systems, we believe that similar data collection strategies could yield benefits to other areas of NLP as well.",
"The authors thank Yiping Kang, Yunqi Zhang, Joseph Peper, and the anonymous reviewers for their helpful comments and feedback."
] |
[
"abstain",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"objective",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"objective",
"objective",
"abstain",
"method",
"other"
] |
[
"Neural machine translation requires large amounts of parallel training text to learn a reasonable-quality translation model.",
"This is particularly inconvenient for language pairs for which enough parallel text is not available.",
"In this paper, we use monolingual linguistic resources in the source side to address this challenging problem based on a multi-task learning approach.",
"More specifically, we scaffold the machine translation task on auxiliary tasks including semantic parsing, syntactic parsing, and named-entity recognition.",
"This effectively injects semantic and/or syntactic knowledge into the translation model, which would otherwise require a large amount of training bitext.",
"We empirically evaluate and show the effectiveness of our multi-task learning approach on three translation tasks: English-to-French, English-to-Farsi, and English-to-Vietnamese.",
"Neural Machine Translation (NMT) with attentional encoder-decoder architectures (Luong et al., 2015; Bahdanau et al., 2015) has revolutionised machine translation, and achieved state-of-the-art for several language pairs.",
"However, NMT is notorious for its need for large amounts of bilingual data (Koehn and Knowles, 2017) to achieve reasonable translation quality.",
"Leveraging existing monolingual resources is a potential approach for compensating this requirement in bilingually scarce scenarios.",
"Ideally, semantic and syntactic knowledge learned from existing linguistic resources provides NMT with proper inductive biases, leading to increased generalisation and better translation quality.",
"Multi-task learning (MTL) is an effective approach to inject knowledge into a task, which is learned from other related tasks.",
"Various recent works have attempted to improve NMT with an MTL approach (Peng et al., 2017; Liu et al., 2017; Zhang and Zong, 2016); however, they either do not make use of curated linguistic resources (Domhan and Hieber, 2017; Zhang and Zong, 2016), or their MTL architectures are restrictive yielding mediocre improvements (Niehues and Cho, 2017).",
"The current research leaves open how to best leverage curated linguistic resources in a suitable MTL framework to improve NMT.",
"In this paper, we make use of curated monolingual linguistic resources in the source side to improve NMT in bilingually scarce scenarios.",
"More specifically, we scaffold the machine translation task on auxiliary tasks including semantic parsing, syntactic parsing, and named-entity recognition.",
"This is achieved by casting the auxiliary tasks as sequence-to-sequence (SEQ 2S EQ ) transduction tasks, and tie the parameters of their encoders and/or decoders with those of the main translation task.",
"Our MTL architectures makes use of deep stacked encoders and decoders, where the parameters of the top layers are shared across the tasks.",
"We further make use of adversarial training to prevent contamination of common knowledge with task-specific information.",
"We present empirical results on translating from English into French, Vietnamese, and Farsi; three target languages with varying degree of divergence compared to English.",
"Our extensive empirical results demonstrate the effectiveness of our MTL approach in substantially improving the translation quality for these three translation tasks in bilingually scarce scenarios.",
"Our MTL is based on the attentional encoder-decoder architecture for SEQ 2S EQ transduction.",
"It contains an encoder to read the input sentence, and an attentional decoder to generate the output.",
"Encoder The encoder is a bi-directional RNN whose hidden states represent tokens of the input sequence.",
"These representations capture information not only of the corresponding token, but also other tokens in the sequence to leverage the context.",
"The bi-directional RNN consists of two RNNs running in the left-to-right and right-to-left directions over the input sequence: h i = RNN( h i 1 ,EEE S [ x i ]) h i = RNN( h i +1 ,EEE S [ x i ]) where EEES [ x i ] is the embedding of the token x i from the embedding table EEES of the input (source) space, and h i and h i are the hidden states of the forward and backward RNNs which can be based on the LSTM (long-short term mem-ory) (Hochreiter and Schmidhuber, 1997) or GRU (gated-recurrent unit) (Chung et al., 2014) units.",
"Each source token is then represented by the concatenation of the corresponding bidirectional hidden states, h i = [ h i ; h i ] .",
"Decoder.",
"The backbone of the decoder is a unidirectional RNN which generates the token of the output one-by-one from left to right.",
"The generation of each token y j is conditioned on all of the previously generated tokens y <j via the state of the RNN decoder s j , and the input sequence via a dynamic context vector c j (explained shortly): y j softmax( W y r j + b r ) (1) r j = tanh( s j + W rc c j + W rj EEET [ y j 1 ]) (2) s j = tanh( W s s j 1 + W sj EEET [ y j 1 ] + W sc c j ) where EEET [ y j ] is the embedding of the token y j from the embedding table EEET of the output (tar-get) space, and the W matrices and b r vector are the parameters.",
"A crucial element of the decoder is the attention mechanism which dynamically attends to relevant parts of the input sequence necessary for generating the next token in the output sequence.",
"Before generating the next token t j , the decoder computes the attention vector j over the input token: j = softmax( aaa j ) a ji = v tanh( W ae h i + W at s j 1 ) which intuitively is similar to the notion of alignment in word/phrase-based statistical MT (Brown et al., 1993).",
"The attention vector is then used to compute a fixed-length dynamic representation of the source sentence c j = X i ji h i .",
"(3) which is conditioned upon in the RNN decoder when computing the next state or generating the output word (as mentioned above).",
"Training and Decoding.",
"The model parameters are trained end-to-end by maximising the (regu-larised) log-likelihood of the training data arg max X ( x , y ) D | y | X j =1 log P ( y j | y <j , x ) where the above conditional probability is defined according to eqn (1).",
"Usually drop-out is employed to prevent over-fitting on the training data.",
"In the decoding time, the best output sequence for a given input sequence is produced by arg max y P ( y | x ) = Y j P ( y j | y <j x ) .",
"Usually greedy decoding or beam search algorithms are employed to find an approximate solution, since solving the above optimisation problem exactly is computationally hard.",
"We consider an extension of the basic SEQ 2S EQ model where the encoder and decoder are equipped with deep stacked layers.",
"Presumably, deeper layers capture more abstract information about a task, hence they can be used as a mechanism to share useful generalisable information among multiple tasks.",
"Deep Stacked Encoder.",
"The deep encoder consists of multiple layers, where the hidden states in layer 1 are the inputs to the hidden states at the next layer . That is, h i = RNN ,enc ( h i 1 , h 1 i ) h i = RNN ,enc ( h i 1 , h 1 i ) where h i = [ h i ; h i ] is the hidden state of the 'th layer RNN encoder for the i 'th source sentence word. The inputs to the first layer for-ward/backward RNNs are the source word embed-dings EEES [ x i ] . The representation of the source sentence is then the concatenation of the hidden states for all layers h i = [ h 1 i ; . . . ; h Li ] which is then used by the decoder. 1357 Deep Stacked Decoder. Similar to the multilayer RNN encoder, the decoder RNN has multiple layers: s j = RNN ,dec ( s j 1 , s 1 j ) where the inputs to the first layer RNNs are W sj EEET [ y j 1 ] + W sc c j in which c j is the dynamic source context, as defined in eqn 3. The state of the decoder is then the concatenation of the hidden states for all layers: s j = [ s 1 j ; . . . ; s Lj ] which is then used in eqn 2 as part of the output generation module. Shared Layer MTL. We share the deep layer RNNs in the encoders and/or decoders across the tasks, as a mechanism to share abstract knowledge and increase model generalisation. Suppose we have a total of M + 1 tasks, consisting of the main task plus M auxiliary tasks. Let menc = { m,enc } L =1 and mdec = { m 0 ,dec } L 0 0 =1 be the parameters of multi-layer encoder and decoder for the task m . Let { menc , mdec } Mm =1 and { 0 enc , 0 dec } be the RNN parameters for the auxiliary tasks and the main task, respectively. We share the parameters of the deep-level encoders and decoders of the auxiliary tasks with those of the main task. That is, m [1 , .., M ] [ L menc , .., L ] : m,enc = 0 ,enc m [1 , .., M ] 0 [ L 0 mdec , .., L 0 ] : m 0 ,dec = 0 0 ,dec where L menc and L 0 mdec specify the deep-layer RNNs need to be shared parameters. Other parameters to share across the tasks include those of the attention module, the source/target embedding tables, and the output generation module. As an extreme case, we can share all the parameters of SEQ 2S EQ architectures across the tasks. Training Objective. Suppose we are given a collection of M +1 SEQ 2S EQ transductions tasks, each of which is associated with a training set D m := { ( x i , y i ) } N m i =1 . The parameters are learned by maximising the MTL training objective: L mtl ( mtl ) := MX m =0 m |D m | X ( x , y ) D m log P m ( y | x ) (4) where mtl denotes all the parameters of the MTL architecture, |D m | denotes the size of the training set for the task m , and m balances out its influ-ence in the training objective. Training Schedule. Variants of stochastic gradient descent (SGD) can be used to optimise the objective in order to learn the parameters. Making the best use of tasks with different objective geometries is challenging, e.g. due to the scale of their gradients. One strategy for making an SGD update is to select the tasks from which the next data items should be chosen. In our training schedule, we randomly select a training data item from the main task, and pair it with a data item selected from a randomly selected auxiliary task for making the next SGD update. This ensures the presence of training signal from the main task in all SGD updates, and avoids the training signal being washed out by the auxiliary tasks. 4 Adversarial Training The learned shared knowledge can be contaminated by task-specific information. We address this issue by adding an adversarial objective. 
"4 Adversarial Training",
"The learned shared knowledge can be contaminated by task-specific information; we address this issue by adding an adversarial objective.",
"The basic idea is to augment the MTL training objective with additional terms, so that the identity of a task cannot be predicted from its data items by the representations resulting from the shared encoder/decoder RNN layers.",
"Task Discriminator.",
"The goal of the task discriminator is to predict the identity of a task for a data item based on the representations of the shared layers.",
"More specifically, our task discriminator consists of two RNNs with LSTM units, which encode the sequences of hidden states in the shared layers of the encoder and the decoder, respectively.",
"The last hidden states of these two RNNs are then concatenated, giving rise to a fixed-dimensional vector summarising the representations in the shared layers.",
"The summary vector is passed through a fully connected layer followed by a softmax to predict the probability distribution over the tasks: $P_d(\text{task id} \mid h_d) \sim \text{softmax}(W_d h_d + b_d)$ with $h_d := \text{disLSTMs}(\text{shrRep}_{mtl}(x, y))$, where disLSTMs denotes the discriminator LSTMs, $\text{shrRep}_{mtl}(x, y)$ denotes the representations in the shared layers of the deep encoders and decoders in the MTL architecture, and $\theta_d$ includes the disLSTMs parameters as well as $\{W_d, b_d\}$.",
"When multiple layers are shared, we concatenate their hidden states at each time step, and the result is input to the task discriminator's LSTMs.",
"Adversarial Objective.",
"Inspired by (Chen et al., 2017), we add two additional terms to the MTL training objective in eqn 4. The first term is L adv 1 ( d ) defined as: MX m =0 X ( x , y ) D m log P d ( m | disLSTMs(shrRep mtl ( x , y ))) .",
"Maximising the above objective over d ensures proper training of the discriminator to predict the identity of the task.",
"The second term ensures that the parameters of the shared layers are trained so that they confuse the discriminator by maximising the entropy of its predicted distribution over the task identities.",
"That is, we add the term L adv 2 ( mtl ) to the training objective defined as: MX m =0 X ( x , y ) D m H (cid:2) P d ( .",
"We maximise the above objective by SGD, and update the parameters by alternating between optimising L mtl ( mtl ) + L adv 2 ( mtl ) and L adv 1 ( d ) .",
"We use three language-pairs, translating from English to French, Farsi, and Vietnamese.",
"We have chosen these languages to analyse the effect of multi-task learning on languages with different underlying linguistic structures.",
"The sentences are segmented using BPE (Sennrich et al., 2016) on the union of source and target vocabularies for English-French and English-Vietnamese.",
"For English-Farsi, BPE is performed using separate vocabularies due to the disjoint alphabets.",
"We use a special < UNK > token to replace unknown BPE units in the test and development sets.",
"Table 1 show some statistics about the bilingual corpora.",
"Further details about the corpora and their pre-processing is as follows: The English-French corpus is a random subset of EuroParlv7 as distributed to WMT2014.",
"Train Dev Test En Fr 98,846 5,357 5,357 En Fa 98,158 3,000 4,000 En vi 133,290 1,553 1,268",
"or the target has length more than 80 (be-fore applying BPE) have been removed.",
"The BPE is performed with a 30k total vocabulary size.",
"The news-test2012 and news-test-2013 portions are used for validation and test sets, respectively.",
"The English-Farsi corpus is assembled from all the parallel news text in LDC2016E93 Farsi Representative Language Pack from the Linguistic Data Consortium, combined with English-Farsi parallel subtitles from the TED corpus (Tiedemann, 2012).",
"Since the TED subtitles are user-contributed, this text contained considerable variation in the encoding of its Perso-Arabic characters.",
"To address this issue, we have normalized the corpus using the Hazm toolkit 2 .",
"Sentence pairs in which one of the sentences has more than 80 (before applying BPE) are removed, and BPE is performed with a 30k vocabulary size.",
"Random subsets of this corpus (3k and 4k sentences each) are held out as validation and test sets, respectively.",
"The English-Vietnamese is from the translation task in IWSLT 2015, and we use the preprocessed version provided by (Luong and Manning, 2015).",
"The sentence pairs in which at least one of their sentences had more than 300 units (after applying BPE) are removed.",
"tst2012 and tst2013 parts are used for validation and test sets, respectively.",
"We have chosen the following auxiliary tasks to provide the NMT model with syntactic and/or semantic knowledge, in order to enhance the quality of translation:",
"Named-Entity Recognition (NER).",
"With a small bilingual training corpus, it would be hard for the NMT model to learn how to translate rarely occurring named-entities.",
"Through the NER task, 2 www.sobhe.ir/hazm 1359 the model hopefully learns the skill to recognize named entities.",
"Speculatively, it would then enables leaning translation patterns by masking out named entities.",
"The NER data comes from the CONLL shared task.",
"3 Syntactic Parsing.",
"This task enables NMT to learn the phrase structure of the input sentence, which would then be useful in better re-orderings.",
"This would be most useful for language pairs with high syntactic divergence.",
"The parsing data comes from the Penn Tree Bank with the standard split for training, development, and test (Marcus et al., 1993).",
"We linearise the constituency trees, in order to turn syntactic parsing as a SEQ 2S EQ transduction (Vinyals et al., 2015).",
"Semantic Parsing.",
"A good translation should preserve the meaning.",
"Learning from the semantic parsing task enables the NMT model to pay attention to a meaning abstraction of the source sentence, in order to convey it to the target translation.",
"We have made use of the Abstract Meaning Representation (AMR) corpus Release 2.0 (LDC2017T10), which pairs English sentences AMR meaning graphs.",
"We linearise the AMR graphs, in order to convert semantic parsing as a SEQ 2S EQ transduction problem (Konstas et al., 2017).",
"We have implemented the proposed multi-task learning architecture in C++ using DyNet (Neu-big et al., 2017), on top of Mantis (Cohn et al., 2016) which is an implementation of the attentional SEQ 2S EQNMT model in ( ? ).",
"In our multitask architecture, we do partial sharing of parameters, where the parameters of the top 2 stacked layers are shared among the encoders of the tasks.",
"Moreover, we share the parameters of the top layer stacked decoder among the tasks.",
"Source and target embedding tables are shared among the tasks, while the attention component is task-specific.",
"4 We compare against the following baselines: Baseline 1: The vanila SEQ 2S EQ model without any multi-tasking.",
"special case of our approach where the parameters of all 3 stacked layers are shared among the tasks.",
"5 They have not used deep stacked layers in encoder and decoder as we do, so we extend their work to make it comparable with ours.",
"The configuration of models is as follows.",
"The encoders and decoders make use of GRU units with 400 hidden dimensions, and the attention component has 200 dimensions.",
"For training, we used Adam algorithm (Kingma and Ba, 2014) with the initial learning rate of 0.003 for all of the tasks.",
"Learning rates are halved when the performance on the corresponding dev set decreased.",
"In order to speed-up the training, we use mini-batching with the size of 32.",
"Dropout rates for both encoder and decoder are set to 0.5, and models are trained for 50 epochs where the best models is selected based on the perplexity on the dev set.",
"for the adversarial training is set to 0.5.",
"Once trained, the NMT model translates using the greedy search.",
"We use BLEU (Papineni et al., 2002) to measure translation quality.",
"6 5.4 Results Table 2 reports the BLEU scores and perplexities for the baseline and our proposed method on the three aforementioned translation tasks.",
"It can be seen that the performance of multi-task learning models are better than Baseline 1 (only MT task).",
"This confirms that adding auxiliary tasks helps to increase the performance of the machine translation task.",
"As expected, the effect of different tasks are not similar across the language pairs, possibly due to the following reasons:",
"(i) these translation tasks datasets come from different domains so they have various degree of domain relatedness to the auxiliary tasks, and",
"(ii) the BLEU scores of the Baseline 1 show that the three translation models are on different quality levels which may entail that they benefit from auxiliary knowledge on different levels.",
"In order to improve a model with low quality translations due to language divergence, syntactic knowledge can be more helpful as they help better reorderings.",
"In a higher-quality model, however, semantic knowledge can be more useful as 5 We have used their best performing architecture and changed the training schedule to ours.",
"6 With multi-bleu.perl script from Moses (Koehn et al., 2007).",
"Table 2 : BLEU scores and perplexities of the baseline vs our MTL architecture with various auxiliary tasks on the full bilingual datasets.",
"Table 3 against Baseline 2 (full parameter sharing).",
"a higher-level linguistic knowledge.",
"This pattern can be seen in the reported results: syntactic parsing leads to more improvement on Farsi translation which has a low BLEU score and high language divergence to English, and semantic parsing yields more improvement on the Vietnamese translation task which already has a high BLEU score.",
"The NER task has led to a steady improvement in all the translation tasks, as it leads to better handling of named entities.",
"We have further added adversarial training to ensure the shared representation learned by the encoder is not contaminated by the task-specific information.",
"The results are in the last row of Table 2.",
"The experiments show that adversarial training leads to further gains in MTL translation quality, except when translating into Farsi.",
"We speculate this is due to the low quality of NMT for Farsi, where updating shared parameters with respect to the entropy of discriminator's predicted distribution may negatively affect the model.",
"Table 3 compares our multi-task learning approach to Baseline 2.",
"As Table 3, our partial parameter sharing mechanism is more effective than fully sharing the parameters (Baseline 2), due to its flexibility in allowing access to private task-specific knowledge.",
"We also applied the adaptation technique (Niehues and Cho, 2017) as follows.",
"Upon finishing MTL training, we continue to train only on the MT task for another 20 epochs, and choose the best model based on perplexity on dev set.",
"Adaptation has led to consistent gains in the performance of our MTL architecture and Baseline 2.",
"How many layers of encoder/decoder to share?",
"Figure 2 show the results of changing the number of shared layers in encoder and decoder based on the En Vi translation task.",
"The results confirm that partial sharing of stacked layers is better than full sharing.",
"Intuitively, partial sharing provides the model with an opportunity to learn task specific skills via the private layers, while leveraging the knowledge learned from other tasks via shared layers.",
"Statistics of gold n -grams in MTL translations.",
"Generating high order gold n -grams is hard.",
"We analyse the effect of syntactic and semantic knowledge on generating gold n -grams in translations.",
"For each sentence, we first extract n -grams in the gold translation, and then compute the number of n -grams which are common with the generated translations.",
"Finally, after aggregating the results over the entire test set, we compute the percentage of additional gold n -grams generated by each MTL model compared to the ones in single-task MT model.",
"The results are depicted in Figure 1.",
"Interestingly, the MTL models generate more correct n -grams relative to the vanilla NMT model, as n increases.",
"Effect of the NER task.",
"The NMT model has difficulty translating rarely occurring named-entities, particularly when the bilingual parallel data is scarce.",
"We expect learning from the NER task leads the MTL model to recognize named-entities and learn underlying patterns for translating them.",
"The top part in Table 4 shows an example of such situation.",
"As seen, the MTL is able to recognize all of the named-entities in the sentence and translate the while the single-task model 1361 Figure 1 : Percentage of more correct n -grams generated by the deep MTL models compared to the single-task model (only MT).",
"Table 4 : Example of translations on Farsi test set.",
"In this examples each Farsi word is replaced with its English translation, and the order of words is reversed (Farsi is written right-to-left).",
"The structure of Farsi is Subject-Object-Verb (SOV), leading to different word orders in English and Reference sentences.",
"Figure 2 : BLEU scores for different numbers of shared layers in (top) encoder while no layer is shared in decoder, and (bottom) decoder while no layer is shared in encoder",
"tagger (Feely et al., 2014) to gold translations.",
"Then, we extracted n -grams with at least one noun in them, and report the statistics of correct such n grams, similar to what reported in Figure 1.",
"The resulting statistics is depicted in Figure 3. As seen, the MTL model trained on MT and NER tasks leads to generation of more correct unigram noun phrases relative to the vanilla NMT, as n increases.",
"Effect of the semantic parsing task.",
"Semantic parsing encourages a precise understanding of the source text, which would then be useful for conveying the correct meaning to the translation.",
"The middle part in Table 4 is an example translation, showing that semantic parsing has helped NMT by understanding that the subject sees the object via subject's screens.",
"Effect of the syntactic parsing task.",
"Recognizing the syntactic structure of the source sentence helps NMT to better translate phrases.",
"The bottom part of Table 4 shows an example translation demonstrating such case.",
"The source sentence is talking about a method for controlling the 1362 0 10 20 30 40 1 g r a m 2 g r a m 3 g r a m 4 g r a m 5 g r a m 6 g r a m 7 g r a m Figure 3 : Percentage of more corrected n-grams with at least one noun generated by MT+NER model compared with the only MT model (only MT). traffic, which is correctly translated by the MTL model while vanilla NMT has mistakenly translated it to controlled traffic.",
"Multi-task learning has attracted attention to improve NMT in recent work.",
"(Zhang and Zong, 2016) has made use of monolingual data in the source language in a multitask learning framework by sharing encoder in the attentional encoder-decoder model.",
"Their auxiliary task is to reorder the source text to make it close to the target language word order.",
"(Domhan and Hieber, 2017) proposed a two-layer stacked decoder, which the bottom layer is trained on language modelling on the target language text.",
"The next word is jointly predicted by the bottom layer language model and the top layer attentional RNN decoder.",
"They reported only moderate improvements over the baseline and fall short against using synthetic parallel data.",
"(Dalvi et al., 2017) investigated the amount of learned morphology and how it can be injected using MTL.",
"Our method is related to what they call joint data-learning, where they share all of the SEQ 2S EQ components among the tasks.",
"(Belinkov et al., 2017a; Shi et al., 2016; Be-linkov et al., 2017b) investigate syntax/semantics phenomena learned as a byproduct of SEQ 2S EQNMT training.",
"We, in turn, investigate the effect of injecting syntax/semantic on learning NMT using MTL.",
"The closet work to ours is (Niehues and Cho, 2017), which has made use of part-of-speech tagging and named-entity recognition tasks to improve NMT.",
"They have used the attentional encoder-decoder with a shallow architecture, and share different parts eg the encoder, decoder, and attention.",
"They report the best performance with fully sharing the encoder.",
"In contrast, our architecture uses partial sharing on deep stacked encoder and decoder components, and the results show that it is critical for NMT improvement in MTL.",
"Furthermore, we propose adversarial training to prevent contamination of shared knowledge with task specific details.",
"Taking another approach to MTL, (Sgaard and Goldberg, 2016) and (Hashimoto et al., 2017) have proposed architectures by stacking up tasks on top of each other according to their linguistic level, eg from lower level tasks (POS tagging) to higher level tasks (parsing).",
"In this approach, each task uses predicted annotations and hidden states of the lower-level tasks for making a better prediction.",
"This is contrast to the approach taken in this paper where models with shared parameters are trained jointly on multiple tasks.",
"More broadly, deep multitask learning has been used for various NLP problems, including graph-based parsing (Chen and Ye, 2011) and keyphrase boundary classification (Augenstein and Sgaard, 2017) .",
"(Chen et al., 2017) has applied multi-task learning for Chinese word segmentation, and (Liu et al., 2017) applied it for text classification problem.",
"Both of these works have used adversarial training to make sure the shared layer extract only common knowledge.",
"MTL has been used effectively to learn from multimodal data.",
"(Luong et al., 2016) has proposed MTL architectures for neural SEQ 2S EQ transduction for tasks including MT, image caption generation, and parsing.",
"They fully share the encoders (many-to-one), the decoders (one-to-many), or some of the encoders and decoders (many-to-many).",
"(Pasunuru and Bansal, 2017) have made use of an MTL approach to improve video captioning with auxiliary tasks including video prediction and logical language entailment based on a many-to-many architecture.",
"We have presented an approach to improve NMT in bilingually scarce scenarios, by leveraging curated linguistic resources in the source, including semantic parsing, syntactic parsing, and named entity recognition.",
"This is achieved via an effective MTL architecture, based on deep stacked en-1363 coders and decoders, to share common knowledge among the MT and auxiliary tasks.",
"Our experimental results show substantial improvements in the translation quality, when translating from English to French, Vietnamese, and Farsi in bilingually scarce scenarios.",
"For future work, we would like to investigate architectures which allow automatic parameter tying among the tasks (Ruder et al., 2017).",
"We are very grateful to the members of the JSALT2017 workshop at CMU, particularly George Foster, Colin Cherry, Patrick Littell, David Mortensen, Graham Neubig, Ji Xin, Daniel Beck, Anna Currey, Vu Hoang, and Gaurav Kumar for the insightful discussions and data pre-processing.",
"This work was supported by computational resources from the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE) at Monash University, Amazon, Extreme Science and Engineering Discovery Environment (supported by the NSF grant number OCI-1053575), and the Bridges system (supported by the NSF award number ACI-1445606) at the Pittsburgh Supercomputing Center.",
"This work was supported by the Australian Research Council via DP160102686."
] |
[
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"objective",
"objective",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"abstain",
"result",
"abstain",
"other",
"other",
"other"
] |
[
"Is there a principle to guide transfer learning across tasks in natural language processing (NLP)?",
"Taxonomy (Zamir et al., 2018) finds that a structure exists among visual tasks, as a principle underlying transfer learning for them.",
"In this paper, we propose a cognitively inspired framework, CogTaskonomy, to learn taxonomy for NLP tasks.",
"The framework consists of Cognitive Representation Analytics (CRA) and Cognitive-Neural Mapping (CNM).",
"The former employs Representational Similarity Analysis, which is commonly used in computational neuroscience to find a correlation between brain-activity measurement and computational modeling, to estimate task similarity with task-specific sentence representations.",
"The latter learns to detect task relations by projecting neural representations from NLP models to cognitive signals (i.e., fMRI voxels).",
"Experiments on 12 NLP tasks, where BERT/TinyBERT are used as the underlying models for transfer learning, demonstrate that the proposed CogTaskonomy is able to guide transfer learning, achieving performance competitive to the Analytic Hierarchy Process (Saaty, 1987) used in visual Taskonomy (Zamir et al., 2018) but without requiring exhaustive pairwise O ( m 2 ) task transferring.",
"Analyses further discover that CNM is capable of learning model-agnostic task taxonomy.",
"The source code is available at https://github.com/ tjunlp-lab/CogTaskonomy.git .",
"Transfer learning (TL) has attracted extensive research interests in natural language processing with a wide range of forms, e.g., TL from pretrained language models (PLM) to downstream tasks (De-vlin et al., 2018; Radford et al., 2018), from a task with rich labeled data to a task with low resource",
"(Chu and Wang, 2018; Yu et al., 2021), from high-resource languages to low-resource languages (Gu et al., 2018; Ko et al., 2021), etc. 1 A high-level concept or question on cross-task transfer learning is how these involved tasks are related to each other.",
"Is sentiment analysis related to paraphrasing?",
"Is textual entailment more related to question answering than named entity recognition?",
"All these sub-questions resolve themselves into whether a structure exists among NLP tasks.",
"Such task taxonomy is of notable values to transfer learning in NLP in that it has the potential to guide TL and reduce redundancies across tasks (Zamir et al., 2018).",
"In this paper, partially inspired by the task taxonomy in visual tasks (Zamir et al., 2018), we study the hierarchical task structure for NLP tasks.",
"But significantly different from the visual Taskonomy (Zamir et al., 2018), we construct NLP taskonomy from a cognitively inspired perspective.",
"Cognitively inspired NLP is the intersection of NLP and cognitive neuroscience that aims at uncovering cognitive processes in the brain, including cognition in language comprehension.",
"With the increasing availability of cognitively annotated data, on the one hand, cognitive processing signals (e.g., eye-tracking, EEG, fMRI) have been explored to enhance neural models for a wide range of NLP tasks (Barrett and Sgaard, 2015; Bingel et al., 2016; Hollenstein and Zhang, 2019; Hollenstein et al., 2019a).",
"On the other hand, representations learned in NLP models are used to predict brain activation patterns recorded in cognitive processing data (Mitchell et al., 2008; Pereira et al., 2018; Hale et al., 2018; Hollenstein et al., 2019b).",
"These studies on the bidirectional association between the two areas demonstrate that information underlying cognitive processing data is closely related to tasks and representations in NLP.",
"Hence we want to know whether it is feasible to isolate task repre-1 In this paper, we focus on cross-task transfer learning in the same language.",
"sentations from cognitive processing data and use them to learn task taxonomy in NLP.",
"To examine this hypothesis, we propose CogTaskonomy, a Cog nitively Inspired Task Tax onomy framework, as illustrated in Figure 1, to learn a task structure for NLP tasks.",
"CogTaskonomy consists of two main cognitively inspired components: Cognitive Representation Analytics (CRA) and Cognitive-Neural Mapping (CNM).",
"CRA extracts task representations from NLP models and employs Representational Similarity Analysis (RSA) (Kriegeskorte et al., 2008), which is commonly used to measure the correlation between brain activity and computational model, to estimate NLP task similarity.",
"CNM trains fully connected neural networks to build the mapping from sentence representations of pretrained models fine-tuned on specific tasks to fMRI signals recorded when human subjects read those sentences.",
"It then uses mapping correlation coefficients as task representations to compute task similarity.",
"Both methods require sentence representations to compute task representations.",
"We use pretrained language models fine-tuned on specific tasks, particularly BERT (Devlin et al., 2018) and TinyBERT (Jiao et al., 2020), to obtain sentence representations.",
"We compare the proposed CogTaskonomy against the Analytic Hierarchy Process (AHP) used in Taskonomy (Zamir et al., 2018).",
"We guide TL across tasks with the learned task structure and evaluate the effectiveness of these methods by estimating TL performance from various source to target tasks.",
"Contributions Our main contributions include: We propose CogTaskonomy, a cognitively inspired framework to measure task similarity and to build the task taxonomy in NLP.",
"This is the first attempt to study NLP task structures with cognitive processing data.",
"We present two cognitively inspired methods, CRA and CNM, and compare them against AHP.",
"Different from AHP, the two methods do not require O ( m 2 ) exhaustive pairwise transfer learning for task similarity estimation.",
"We build a taxonomy tree for 12 NLP tasks, including sentiment analysis, question answering, natural language inference, semantic textual similarity, passage ranking, etc., to guide transfer learning across them.",
"TL experiments and analyses validate the effectiveness of the proposed CogTaskonomy and find that CNM is able to learn stable task relations that are general to different underlying models.",
"Our work is related to cognitively inspired NLP and a variety of learning formalisms that involve knowledge transfer across different tasks.",
"We briefly review these topics within the scope of NLP and the constraint of space.",
"Using NLP Representations for Brain Activity Prediction Since the pioneering work (Mitchell et al., 2008), connecting statistical NLP representations with cognition has attracted widespread at-905",
"tention.",
"Chang et al. (2009) explore adjective-noun composition in fMRI based on co-occurrence statistics.",
"Huth et al. (2016) use distributed word representations to map fMRI data to activated brain regions, revealing a semantic map of how words are distributed in the human cerebral cortex.",
"A great deal of research (Murphy et al., 2012; Anderson et al., 2016; Sgaard, 2016; Bulat et al., 2017) has been devoted to word decoding.",
"Pereira et al. (2018) extend brain decoding to sentence stimuli, suggesting that neural network language models can be used to interpret sentences in a long-term context.",
"Ren and Xiong (2022) investigate the relationship between linguistic features and cognitive processing signals by developing a unified attentional network to bridge them.",
"Augmenting NLP Models with Cognitive Processing Signals Recent years have witnessed that many efforts have been devoted to exploring cognitive processing signals (e.g., eye-tracking, EEG, fMRI) in neural NLP models.",
"Muttenthaler et al. (2020) use cognitive data to regularize attention weights in NLP models.",
"Hollenstein et al. (2019a) evaluate word embeddings using cognitive data.",
"Toneva and Wehbe (2019) utilize fMRI scans to interpret and improve BERT.",
"Many other works use cognitive processing signals to improve NLP models (Barrett and Sgaard, 2015; Bingel et al., 2016; Gauthier and Levy, 2019; Hollenstein and Zhang, 2019; Ren and Xiong, 2021), just to name a few.",
"A very important trend in recent NLP is that models, algorithms, and solutions are not developed for only a single task, but for multiple tasks or across tasks (Devlin et al., 2018; Radford et al., 2018; McCann et al., 2018; Worsham and Kalita, 2020).",
"Learning methods that are capable of handling a set of tasks simultaneously or sequentially, e.g., multi-task learning, transfer learning, meta learning, have attracted growing research interests in NLP.",
"Beyond learning methods, yet another important dimension to this research trend is task relation learning, which is the topic of this work.",
"2 Multi-task Learning is to jointly train all tasks of interests with task linkages, e.g., in the form of regularization or sharing parameters across tasks 2 Task taxonomy learned by our methods could be applicable to other learning formalisms beyond transfer learning.",
"We leave this to our future work.",
"(Collobert et al., 2011).",
"It is important in multitask learning to find related tasks for target tasks as auxiliary tasks (Ruder, 2017).",
"Transfer Learning targets at transferring knowledge from a source task to a target task.",
"According to the task and domain difference in the source and target, TL is divided into transductive TL (same task, different domain, a.k.a. domain adaptation), inductive TL (same domain, different task) and unsupervised TL (both different) (Eaton and des-Jardins, 2011; Ghifary et al., 2014; Wang et al., 2019; Yuan and Wen, 2021).",
"If the source and target are dissimilar, negative transfer may hurt TL (Niu et al., 2020).",
"Meta Learning aims to gain experience over a set of related tasks for improving the learning algorithm itself (Hospedales et al., 2020).",
"Existing meta learning methods implicitly assume that tasks are similar to each other, but it is often unclear how to quantify task similarities and their roles in learning (Venkitaraman and Wahlberg, 2020).",
"Lifelong Learning is to learn continuously and accumulate knowledge along a sequence of tasks and uses it for future learning (Chen and Liu, 2018).",
"The system is tuned to be able to select the most related prior knowledge to bias the learning towards a new task favourably (Silver et al., 2013).",
"As task relatedness is important for cross-task learning formalisms mentioned in Section 2.2, efforts have also been made to learn task relations.",
"Craw-shaw (2020) groups previous methods on task relationship learning into three categories.",
"The first is task grouping or clustering, which divides a set of tasks into clusters so that tasks in the same cluster can be jointly trained (Bingel and Sgaard, 2017; Standley et al., 2019).",
"The second is learning transfer relationships, which analyzes whether transfer between tasks is beneficial to learning, regardless of whether tasks are related or not (Zamir et al., 2018; Dwivedi and Roig, 2019; Song et al., 2019).",
"The third is task embedding, which learns a specific representation space for tasks (James et al., 2018; Lan et al., 2019).",
"Our research can be considered as a mix of these categories.",
"CNM learns cognition-based task representations while both CNM and CRA learn task relations aiming at transfer learning.",
"Additionally, significantly different from previous studies, we 906 learn task structures from a cognitive perspective 3 , which is expected to estimate task relatedness in a cognitively tuned space.",
"As will be demonstrated below, our cognitively motivated methods incur a low computation cost and exhibit generalization across underlying models to some extend.",
"Figure 1 illustrates the basic framework of CogTaskonomy.",
"First, we obtain task-specific sentence representations of text stimuli from cognitive data by feeding them into fine-tuned or distilled pretrained language models on 12 downstream tasks (Section 3.1).",
"Subsequently, task-specific representations are fed into two cognitively inspired components, cognitive representation analytics (Section 3.2) and cognitive-neural mapping (Section 3.3), for estimating task similarity and inducing task taxonomy.",
"Fine-tuning a pretrained language model for an end task is a widely used strategy for quickly and efficiently building a model for that task with limited labeled data.",
"Zhou and Srikumar (2021) find that fine-tuning reconfigures underlying semantic space to adjust pretrained representations to downstream tasks.",
"In view of this, we take sentence-level textual stimuli of cognitive data as input data for a specific fine-tuned model to obtain representations that contain information specific to that task.",
"4 Additionally, Cheng et al. (2020) suggest that knowledge distillation (KD) helps models to be more focused on task-relevant concepts.",
"Therefore, without loss of generality, we use BERT and TinyBERT (performing KD) to obtain task-specific sentence representations.",
"BERT Following Devlin et al. (2018), we prepend a special classification token [CLS] to each input sentence in order to extract the contextualized representation of the corresponding sentence.",
"Merchant et al. (2020) find that fine-tuning primarily affects top layers of BERT.",
"Hence, we take the hidden state of the prepended token of each sequence in the last layer as the sentence representation.",
"3 Dwivedi and Roig (2019) also use RSA to learn task taxonomy, in some way similar to our CRA.",
"But they learn relations for visual tasks and use different correlation functions from our CRA.",
"4 Sentence-level textual stimuli of cognitive data refer to natural textual stimuli, i.e., sentences presented to subjects for collecting cognitive processing signals.",
"TinyBERT TinyBERT (Jiao et al., 2020) performs knowledge distillation at both the pretraining and fine-tuning stage.",
"By leveraging KD, TinyBERT learns to transfer knowledge encoded in the large teacher BERT (Devlin et al., 2018) to itself.",
"As a result, TinyBERT can capture both general and task-specific knowledge.",
"Similarly, we use the hidden state of [CLS] token in the last layer as the contextualized representation for a given sentence.",
"With task-specific representations learned by feeding text stimuli of cognitive data into a fine-tuned model, we can estimate pairwise task similarity for any two tasks in a given task list T = { t 1 , t 2 , ..., t m } .",
"The first cognitively inspired method is the cognitive representation analytics that adapts a common method in computational neuroscience to our scenario.",
"We first briefly introduce the common method, representational similarity analysis, and then elaborate the adaptation.",
"Representational Similarity Analysis is widely applied in cognitive neuroscience, which can not only realize cross-modal cognitive data comparison but also quantitatively relate brain activity measurements to computational models.",
"It first calculates a representation dissimilarity matrix (RDM) of different modal data, and then estimates the correlation between RDMs.",
"In this way, it successfully captures cross-modal data relationships (Kriegeskorte et al., 2008).",
"RSA can be also applied for the comparison between computational models and cognitive data.",
"The RDM of a computational model is obtained by comparing the dissimilarity of data representations obtained from the computational model in pairs.",
"It is then compared with the RDM of brain activity measurements.",
"5 We take all sentence representations R i generated by a task-specific model PLMFT i (a pretrained language model (either BERT or TinyBERT) fine-tuned on the i th task) as the base to simulate cognitive representations required by RSA.",
"For each pair of sentence representations ( R ij , R ij ) for the j th and j th sentence of the i th task, we compute a dissimilarity score in three metrics ( e ): Euclidean distance ( euclidean ), Canberra distance ( canberra ) and Pearson correlation coefficient ( ).",
"Among 5 In our CRA, only RDMs from computational models are used.",
"This is because we don't have cognitive data that are curated for specific NLP tasks.",
"In our preliminary experiments, we have created pseudo cognitive data for different NLP tasks by predicting cognitive signals with a mapping model similar to that used in CNM.",
"But it performs poorly.",
"them, the first two distance metrics can naturally represent dissimilarity (Dis), while the last needs to be converted to 1 to indicate dissimilarity, as follows: Dis ijj = (cid:40) e ( R ij , R ij ) e is not 1 e ( R ij , R ij ) e is (1) RDM for the i th task consists of the dissimilarity scores of all sentence pairs.",
"We formulate it as follows: RDM i = [ Dis i 12 , Dis i 13 , . . . , Dis i 1 n , . . . Dis ijj , . . . , Dis i ( n 1) n ] , j = j (2) where n is the number of sentences.",
"RDMs computed in this way are then used for estimating similarity between NLP tasks.",
"The pairwise similarity Sim ii of the i th and i th task is computed as follows: Sim ii = Similarity ( RDM i RDM i ) (3) Similarity ( ) is a similarity function, which can be Spearman rank correlation ( r s ), and cosine ( cos , by default).",
"In summary, we calculate the similarity between each RDM pair and finally obtain a similarity matrix for a set of tasks, as shown in Figure 2. 3.3 Cognitive-Neural Mapping The idea behind cognitive-neural mapping is to project sentence representations of NLP models fine-tuned in a specific task to cognitive signals (i.e., fMRI voxels in this paper) recorded when humans read those sentences with a neural network.",
"The connections between the specific task and cognitive signals learned in this way could be transformed into cognitively inspired task representations for further task similarity estimation.",
"The mapping can be considered as a way to isolate brain activity related to the specific task from fMRI cognitive signals.",
"Particularly, for the i th task and s th subject, we use a fully connected 3-layer feed-forward neural network to project sentence representation R ij specific to this task to fMRI y isj of the s th subject reading the j th sentence as follows: y isj = W i 2 ( ReLU ( W i 1 ( R ij )) (4) To optimize the mapping model, we use the mean squared error (MSE) as loss function.",
"5-fold cross-validation is performed for each mapping model.",
"Before training, grid search is conducted, and the optimal number of hidden layer units in the mapping network is obtained by three times of cross-validation on the verification set accounting for 20% training data.",
"Each mapping is run 5 times.",
"We average models over all subjects and 5 runs and then evaluate mapping model performance in all voxels.",
"Particularly, we compute the cognitively inspired task representation CogR i for the i th task, which consists of the correlation coefficients on all voxels between predicted values and ground-truth values, defined as follows : CogR i =[ c ( y i 0 , y 0 ) , . . . , c ( y ik , y k ) , . . . , c ( y iv , y v )] , 0 k v (5) where y ik is a vector of all predicted values for the k th voxel from all input sentences by the mapping model tuned for the i th task, y k is a vector of the ground-truth values for the k th voxel from all sentence-level signals of text stimuli in fMRI data, v is the number of voxels used, and c ( ) is a function for comparing two input vectors.",
"We instantiate c in two functions: the coefficient of determination ( R 2 ) and .",
"6 We then use cosine similarity to calculate pairwise task similarity as follows: Sim ii = cos( CogR i CogR i ) (6) 4 Experiments We conducted experiments with widely-used NLP benchmark datasets and cognitive data to evaluate the effectiveness of CogTaskonomy.",
"The brain fMRI dataset in our experiments is from Pereira et al. (2018), which is recorded on a whole-body 3-Tesla Siemens Trio scanner with a 32-channel head coil by showing 627 natural language sentences to 5 adult subjects.",
"7 Since voxels were 6 R 2 is a statistical measure that examines how much a model is able to predict or explain an outcome, usually defined as the square of the correlation between predicted values and actual values.",
"According to the results in Appendix A.1, we set R 2 as c in CNM by default.",
"7 This dataset is publicly available at https://osf.",
"randomly selected, Z-Score standardization was carried out for voxels obtained from different stimuli at each location on the basis of the original data set to avoid the influence of outliers.",
"Subjects are asked to read each encyclopedic statement carefully, while the fMRI scanner records brain signals at this point.",
"As a result, each fMRI scan covers multiple words at a time, subject to continuous stimulation.",
"Each fMRI recording contains a number of voxels.",
"We flattened 3d fMRI images into 1d vectors.",
"v voxels were randomly selected, yielding matrices I s R 627 v for each subject s .",
"We selected 8 NLP tasks from the GLUE benchmark (Wang et al., 2018), including CoLA, MNLI, MRPC, QNLI, QQP, RTE, SST-2, STS-B.",
"These tasks are considered important for generalizable natural language understanding, exhibiting diversity in domains, dataset sizes, and difficulties (Wang et al., 2018).",
"To cover the spectrum of NLP tasks as much as possible, we also included Extractive Question Answering (QA), Relation Extraction (RE), Named Entity Recognition (NER), and Passage Reranking (PR).",
"The datasets of these four tasks are SQuAD 2.0 (Rajpurkar et al., 2018), Semeval-2010 task 8 (Hendrickx et al., 2010), CoNLL 2003 (Sang and Meulder, 2003), MS MARCO (Nguyen et al., 2016; Craswell et al., 2020), respectively.",
"We mainly used two methods as our baselines, including Direct Similarity Estimation (DSE), Analytic Hierarchy Process (AHP) (Zamir et al., 2018).",
"Detailed experimental settings are shown in Appendix A.2.",
"Direct Similarity Estimation (DSE) A straightforward way to estimate pairwise task similarity is to calculate sentence-level similarities based on task-specific sentence representations and then average them.",
"Concretely, let R ij be the task-specific representation for the j th sentence in the i th task.",
"The task similarity Sim ii for a task pair ( i, i ) is computed as follows: R ij = PLMFT i ( x j ) (7) Sim ii = (cid:80) j Similarity ( R ij R i j ) n (8) ipated in experiments 2 and 3 were chosen in this paper.",
"where PLMFT i is the pretrained language model fine-tuned on the i th task.",
"PLM can be instantiated as TinyBERT or BERT.",
"Analytic Hierarchy Process (AHP) The main idea is to construct a matrix W t for each target task t , where the element at ( i , i ) in the matrix shows how many times the i th source task is better than i th source task in terms of the transferability to the target task on a held-out set.",
"The principal eigenvector of W t is then taken as the task representation for the corresponding task, and all task representations are stacked up to obtain an affinity matrix.",
"8 The affinity matrix is then viewed as the task similarity matrix.",
"Task Transferring To assess the similarity between tasks, all models fine-tuned on non-target tasks will be used as source models, and continue to be fine-tuned in the same way to transfer on the target task.",
"In task transferring, all parameters of source models are fine-tuned (i.e., not fixed).",
"We used the same learning rate and a number of training steps for all task transferring.",
"This allows a fair comparison between different source tasks.",
"Oracle Task Ranking The final similarity ranking of source tasks to a given target task is based on the results obtained from the task transferring experiments.",
"Generally speaking, the better the source-to-target transfer performance is, the more similar the two tasks are, since the essence of TL is to apply knowledge learned in the source task to the target task.",
"Based on this concept, we rank tasks in terms of transfer learning performance, for more details please see Appendix A.3.",
"Task Ranking Score Based on similarity results computed by each task estimation method, we can obtain the most similar task for each target task.",
"We then check the ranking position of the most similar task in the oracle task ranking.",
"We average ranking positions of all target tasks as the final task ranking score for the corresponding task estimation method.",
"Note that we exclude the transfer to the target task itself in computing task ranking scores.",
"9 8 For more details about AHP, please refer to (Zamir et al., 2018).",
"Since the test sets of our NLP tasks are not publicly available, we obtain AHP results based on the validation set of each task except the NER task of which the test set is publicly available.",
"In all experiments, the hyper-parameters are the same for all tasks.",
"9 Generally, the lower the task ranking score, the better the task similarity estimation method.",
"Task ranking scores (using the ranking of task transferring as the oracle ranking) of different task similarity estimation methods are shown in Table 1. From these results, we have the following observations:",
"observations: Both CRA and CNM are better than random ranking and DSE, suggesting that cognitively inspired task similarity estimation is able to capture relations of NLP tasks.",
"When TinyBERT is used, DSE is even worse than random ranking.",
"This suggests that simply using task-specific sentence representations cannot well detect task relations and distinguish different tasks.",
"TinyBERT performs better than BERT across three task estimation methods (i.e., CRA, CNM and AHP) although the number of parameters in the former is only half of that in the latter.",
"We conjecture that TinyBERT uses knowledge distillation, making sentence representations more relevant to individual tasks and hence resulting in better task similarity estimation.",
"We can also combine CRA and CNM (CRA+ CNM) by averaging task similarity scores estimated by them.",
"Such combination is better than both methods alone.",
"Although AHP is better than our methods, it directly uses the results of transfer learning to measure similarities between different tasks, which is very time-consuming.",
"if we have m tasks, we have similarity method would yield a task ranking score of 1 on each target task.",
"A random method would yield a ranking score of 0 .",
"5(1 + 11) = 6 in our experiments theoretically.",
"We have also conducted random sampling 5000 times on TinyBERT and BERT, and obtained mean task ranking scores of 6.05 and 6.04 respectively.",
"Hence, we take the 6 as the task ranking score for random ranking.",
"to perform O ( m 2 ) transfer learning to obtain the task similarity matrix across all task pairs.",
"In contrast, our methods do not require any costly transfer learning between tasks.",
"It is hence easier to perform and able to guide transfer learning across tasks.",
"We further evaluated the actual transfer learning performance of each target task from the most similar source task according to different task similarity estimation methods.",
"Results are shown in Appendix A.4, which further validate the effectiveness of our methods and show that CRA+CNM is very close to that of AHP.",
"In later experiments and analyses, we will show more advantages of our methods over AHP.",
"CRA adopts RSA to transform the dissimilarity of task-specific sentence representations into the similarity of tasks.",
"We have different options for dissimilarity measurement (e.g., euclidean , canberra ) in sentences and for similarity measurement (e.g., cos , ) in tasks.",
"Hence we want to know the im-pact of the combinations of different measurements in sentence dissimilarity and task similarity on final performance.",
"Results are provided in Table 2. Again, we have several interesting observations.",
"First, with different combinations of these measurements, our CRA significantly outperforms random ranking in almost all cases.",
"This suggests that RSA is able to be adapted to NLP task structure detection.",
"Second, in comparison to the combination of and r s in the original RSA (Kriegeskorte et al., 2008), in our case, the combination of and cos is better than other combinations in the majority of cases.",
"Third, TinyBERT is more robust to these 910 0 5000 10000 15000 20000 25000 30000 The Number of Voxels 4.5 5.0 5.5 6.0 6.5 T a s k R a nk i n g S c o re Random BERT TinyBERT Figure 3: Task ranking scores of CNM with TinyBERT and BERT predicting different numbers of voxels.",
"different combinations than BERT.",
"Since CNM bridges pretrained language models on the input side and voxels in fMRI images on the output side, we further evaluated CNM by varying the selection of PLMs (either BERT or TinyBERT) and the numbers of voxels.",
"Results are displayed in Figure 3. It is interesting to find that with a small number of cognitive signals (voxels), TinyBERT for CNM can achieve a good task ranking score.",
"By contrast, without sufficient cognitive signals, BERT for CNM fails in task similarity estimation, obtaining a task ranking score worse than random ranking.",
"This is consistent with our previous finding in the main results that TinyBERT (with KD) captures more task-relevant knowledge than BERT for task relation detection.",
"We conducted experiments to take a deep look into the feed-forward neural mapping model in CNM.",
"The number of voxels was set to 30K.",
"Pretrained Language Models We compared the prediction performance (measured by MSE between predicted results and ground-truth voxels) across different tasks using BERT vs. TinyBERT as the pretrained language model to obtain task-specific sentence representations.",
"Results are shown in Figure",
"4(a).",
"We can clearly see that both BERT and TinyBERT are better than the random baseline across all tasks.",
"And TinyBERT is better than BERT on all tasks, which resonates with the main results shown in Section 4.5.",
"Subjects We analyzed prediction performance across different subjects, as shown in Figure",
"4(b).",
"Although the prediction performance varies across different tasks, the shapes of the prediction",
"perfor-(a) MSEs (averaged over 5 subjects and 30K voxels) for different tasks with BERT vs. TinyBERT being used as the pretrained language model.",
"mance curve over 12 tasks for different subjects are similar to each other, indicating that similar brain activities are activated for these tasks across different subjects.",
"Models underlying our cross-task transfer learning are different pretrained language models, which is a widely acknowledged practice for transfer learning in NLP.",
"We therefore want to investigate how general our task similarity estimation methods (e.g., CNM, CRA, AHP) are to the underlying models.",
"This is important as we want to find a task taxonomy method that is not sensitive to underlying models.",
"That is, the learned task taxonomy can be used to guide transfer learning for any model.",
"For this, we first computed the Pearson correlation coefficient ( ) and the Spearman rank correlation ( r s ) between task similarities obtained with TinyBERT and those with BERT using the same similarity estimation method.",
"The correlation coefficients 911 Method TB B B TB k 3 4 5 3 4 5 CRA 0.39 0.40 0.42 0.61 0.54 0.58 CNM 0.64 0.58 0.62 0.75 0.79 0.73 AHP 0.53 0.52 0.52 0.69 0.65 0.67 Table 3: Probabilities that transferability learned with TinyBERT (TB) can be used for BERT (B) or vice versa.",
"between BERT-based and TinyBERT-based task similarity matrices obtained by the CRA, CNM and AHP are ( = 0.23, r s = 0.11), ( = 0.85, r s = 0.76) and ( = 0.36, r s = 0.34) respectively.",
"Both AHP and CRA show pool correlations between task similarity matrices using BERT and TinyBERT.",
"On the contrary, CNM is very robust to the variations of underlying models.",
"We speculate that both CRA and AHP capture task relations specific to underlying models while CNM could remove such bias by building the task taxonomy based on the cognitive data.",
"In other words, CNM is able to detect model-agnostic task relations, yet another desirable advantage over AHP with exhaustive computation cost.",
"To further examine this hypothesis, we used the task ranking estimated with another underlying PLM x to guide transfer learning with an underlying PLM y .",
"In our work, this would be using TinyBERT to guide BERT (TB B) or vice versa.",
"For each target task, we used the top k source tasks according to the task ranking with the guiding PLM x for transfer learning with the PLM y .",
"The results were compared to the actual performance of transfer learning to the target task from the top 6 source tasks according to the task ranking with the PLM y itself.",
"The probability of the top k source tasks occurring in the real top 6 tasks shows how much transferability learned with the PLM x can be used for the PLM y .",
"Results are shown in Table 3, which again suggests the superiority of CNM over AHP.",
"We further analyzed the generality of CNM to different subjects of cognitive data used in CNM, which can be found in Appendix A.5.",
"The experimental results show that the CNM is also robust to different subjects.",
"We visualize all pairwise task similarities for 12 tasks learned by CNM (averaged over 5 subjects) as a heatmap, shown in Figure",
"5(a).",
"It is clear to see from the heatmap that 6 GLUE tasks (i.e., CoLA, QNLI, RTE, MNLI, SST-2, and MRPC)",
"form a cluster.",
"These tasks are all related to sentence understanding.",
"We further perform hierarchical clustering over the 12 tasks according to their similarities to create a taxonomy tree, which is illustrated in Figure",
"5(b).",
"In this paper, we have presented a cognitively inspired framework, termed CogTaxonomy, to learn relation and structure for NLP tasks.",
"Experiments demonstrate that the task taxonomy detected by CogTaxonomy can be used to guide transfer learning across 12 different NLP tasks.",
"Both CRA and CNM, the two essential components of CogTaxonomy, do not require exhaustive transfer learning across all source-target task pairs.",
"The former is robust to different combinations of dissimilar-ity/similarity measurements.",
"The latter resorts to cognitive signals to learn model-agnostic task relations.",
"The present research was supported by Zhejiang Lab (No. 2022KH0AB01) and the Natural Science Foundation of Tianjin (No. 19JCZDJC31400).",
"We would like to thank the anonymous reviewers for their insightful comments."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"objective",
"objective",
"method",
"objective",
"method",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Users participate in online discussion forums to learn from others and share their knowledge with the community.",
"They often start a thread with a question or by sharing their new findings on a certain topic.",
"Unlike in Community Question Answering, where questions are mostly factoid based, we find that the threads in a forum are often open-ended ( e.g. , asking for recommendations from others) without a definitive correct answer.",
"We thus address the task of identifying helpful posts in a forum thread to help users comprehend long-running discussion threads, which often contain repetitive or irrelevant posts.",
"We propose a recurrent neural network based architecture to model",
"(i) the relevance of a post regarding the original post starting the thread, and",
"(ii) the novelty it brings to the discussion, compared to the previous posts in the thread.",
"Experimental results on five different types of online forum datasets show that our model significantly outperforms the state-of-the-art neural network models for text classification.",
"Online discussion forums are widely used in many domains such as in generic web content 1 , e-health 2 , Massive Open Online Courses (MOOCs) 3 , and e-commerce, among others.",
"Users participate in these forums to gain knowledge from the collective wisdom of the community.",
"Typically, users start a discussion thread by posting a question or asking others for opinions on a topic.",
"Others then reply to threads relevant to their interests.",
"Importantly, as these forums are indexed by search engines, they need to be discoverable by a wider audience apart from just 1 https://www.reddit.com/ 2 https://www.healthboards.com/boards 3 https://www.coursera.org registered users by enabling threads to be found in response to queries.",
"Due to the open nature of the forums and the various expertise level of users, the posts in the discussion threads vary in helpfulness.",
"To address this, some websites provide actions for users to signal this, as in Upvote ( reddit , stackoverflow ) and Highlight ( coursera ).",
"Such feedback is helpful for identifying important posts among the many.",
"Such feedback rarely comes immediately following new post creation, affecting their visibility to the users (Singh et al., 2017).",
"We can devise technology to proactively identify such helpful posts as they arrive, in a helpfulness prediction task , enabling users to efficiently assess relevance.",
"We observe that there is a key structural difference between online discussion forums and Community Question Answering (CQA) websites.",
"Figure 1 shows the distribution of normalized helpful votes for the top5 posts across a popular discussion forum ( reddit ), and a CQA website ( stackoverflow 4 ).",
"In CQA, the vote distribution decays exponentially, indicating that usually there is a single correct answer with the largest number of votes (Omari et al., 2016).",
"In contrast, votes for less helpful posts in discussion forums decay at a much lower rate, suggesting that discussion forum threads are more open-ended.",
"Table 1 shows a sample thread from reddit to understand the dynamics of online discussion.",
"We observe the following two major differences compared to threads in CQA domain: (1) The first post (hereafter, original post ) is not necessarily a question, but can be personal anecdotes or new findings on a certain topic, attracting more discussion.",
"(2) Instead of searching for a single relevant answer as in CQA, discussion forum users find a post helpful 4 https://www.kaggle.com/stackoverflow/ stacksample/data Figure 1: The helpful vote distribution for the top-5 posts across an online discussion forum ( reddit ), and the stackoverflow CQA website.",
"when it introduces some relevant (with respect to the original post) and novel ( i.e. , not presented in the earlier posts within the same thread) information.",
"Motivated by these observations, we address helpfulness prediction by considering both the target post and its preceding posts.",
"We propose a novel neural architecture to predict the helpfulness of a post in a discussion thread.",
"Our approach consists of two components: (1) modeling the relevance of a post and (2) determining the novelty with respect to the sequence of preceding posts.",
"It combines the output from both components to predict the overall post helpfulness.",
"As recurrent neural networks (RNNs) have shown good performance in sequence modeling tasks (Chung et al., 2014; Sutskever et al., 2014), we apply it to our architecture to model the",
"(i) sequence of words in the post text, and the",
"(ii) sequence of posts in a thread.",
"Our model significantly outperforms other state-of-the-art models across experiments on five varied and large forum datasets.",
"Our main contributions are: We reveal the key differences between posts in CQA and online discussion forums; We analyze the confounding factors behind the perceived helpfulness of posts in discussion forums.",
"We observe that both relevance and novelty play important roles in determining the helpfulness of a post; We propose a novel neural network architecture to predict the helpfulness by using textual content of a target post as well as sequence of posts preceding it in the thread; We compare our model with current neural network classifiers and analyze the factors that influence our model's performance.",
"To the best of our knowledge, predicting helpful posts in generic open-ended discussion forums has not been studied before.",
"However, there is significant amounts of related work on similar directions; where researchers evaluate the quality (which may not correlate with perceived helpfulness by the community users) of posts in specific domains such as health (Oh et al., 2012; Oh and Worrall, 2013; Beloborodov et al., 2014) and online education (Chandrasekaran et al., 2015; Chandrasekaran and Kan, 2019; Jenders et al., 2016).",
"External medical resources and thesauri such as UMLS 5 have been used to identify patterns of helpfulness in health (Asghar et al., 2014).",
"In MOOC platforms, apart from the textual content of the forums, additional signals such as user reputation ( e.g. , average homework scores, number of courses taken) have been used to estimate post quality (Jenders et al., 2016).",
"However, these techniques are tightly coupled with the target domain, and may not be generalizable to new domains.",
"CQA Answer Quality: Past work has also addressed the evaluation of answer quality in CQA sites (Jeon et al., 2006; Hong and Davison, 2009; Shah and Pomerantz, 2010; Yao et al., 2015; Omari et al., 2016; Li et al., 2015).",
"Typically posed as a classification problem, these use both textual and non-textual feature-based approaches.",
"Since it is quite common for popular questions to attract many potential answers, answer ranking based on perceived quality is another line of approach (Surdeanu et al., 2008; Bian et al., 2008; Wang et al., 2009).",
"Closer to our approach, Omari 5 https://www.nlm.nih.gov/research/ umls/ et al. (2016) proposed a novelty-based greedy ranking algorithm that depends on a pre-trained parser to identify different propositions, useful for predicting helpfulness.",
"Li et al. (2015) propose a few features for answer quality detection from academic QA sites such as ResearchGate 6 .",
"However this approach does not generalize well since the method uses many website-specific signals such as reputation scores for users and their institutions.",
"Additionally, their approach relies on human annotations to identify a few key conversational characteristics in the answers, keeping it from being applied to use cases where scalability and automation are key.",
"In the CQA answer quality evaluation literature, quality is often measured through the human eval-uators' annotations during experimentation (Shah and Pomerantz, 2010; Oh et al., 2012; Omari et al., 2016).",
"However, we are interested in modeling the helpfulness for actual discussion forum users (in term of Upvotes) and not annotators following guidelines to mark answer quality, which might present other forms of bias.",
"Modeling Novelty in IR, such as search result diversification (Carbonell and Goldstein, 1998; Soboroff and Harman, 2005; Ziegler et al., 2005; Clarke et al., 2008), also constitutes prior art.",
"Carbonell and Goldstein (1998) proposed maximal marginal relevance (MMR) to diversify the set of documents returned for a search query.",
"Similar approaches were also used later in Multi-Document Summarization (MDS) tasks (Nallap-ati et al., 2017).",
"These approaches address the problem either as a ranking task (ordering search results) or as a subset selection problem (MDS), where all documents are simultaneously made available.",
"In contrast, in our discussion thread scenario, we need to model the discussion posts' sequential nature to understand the context of a later post and, in turn, determine its helpfulness.",
"Neural Network Based Models have also recently outperformed existing classifiers in many text classification tasks.",
"They have been widely adopted as they induce useful features on their own, given sufficient data.",
"Although there are differences, the problem of answer selection is relevant: the goal is to rank the potential answers to a target question from multiple candidate answers in order of their similarity (Yu et al., 2014; Wang 6 https://www.researchgate.net/ and Nyberg, 2015; Severyn and Moschitti, 2015).",
"However in our case, all posts in a thread are similar to the original post to an extent.",
"Helpful posts are thus more difficult to identify; computing similarity is not viable as a single source solution.",
"Inspired by all these previous works, we propose a neural architecture to predict the helpfulness of posts in open-ended discussion forums.",
"To make it generic and easily adaptable to multiple domains, we study the problem from a linguistic viewpoint, where we consider only the textual contents of the discussion threads.",
"We propose a neural network architecture to model post helpfulness ( cf Figure 2a).",
"Our architecture is end-to-end trainable, adaptable to different domains.",
"The model comprises two components to analyze a target post's thread relevance and novelty with respect to its past k posts.",
"This component takes a post text p which consists of words ( w 1 , w 2 , . . . , w n ) as input and encodes it to a tensor ( h p ) in two steps.",
"We first use a word embedding initialized with GloVe 7 to transform all the words from the post text into finite d -dimensional vectors, i.e. , w i (cid:55) R d .",
"Our experimental results show that the coverage of GloVe varies between 68 76% on our datasets.",
"To estimate the embeddings for the out-of-vocabulary words and reflect the domain dependence, we keep the embedding vectors trainable.",
"In the second step, the sequence of words are provided to a gated recurrent unit (GRU) layer (Chung et al., 2014) to obtain a sequence of hidden vectors ( h 1 , h 2 , ..., h n ), where h i R g , and g is the output dimension of the GRU encoded tensor.",
"The latent vector is defined as follows: h i = GRU text ( h i 1 , w i ) .",
"The last vector in the sequence, h n , is considered as the encoded representation of a post text ( cf. Figure 2c).",
"For a post p , the GRU text encoded representation is denoted as h p .",
"We use a dropout layer after the GRU to prevent overfitting.",
"In our model, note that there is only a single text encoder; all textual inputs the target post, original post, 7 http://nlp.stanford.edu/data/glove.",
"and each of the past posts in the thread are encoded using a single text encoder, since as all of them are essentially text posts of similar nature.",
"Alternative Architectures.",
"We also tried stacking additional GRUs in our experiments, but we did not observe accuracy improvements.",
"We also tried to replace GRU with LSTM (Long-Short Term Memory) (Hochreiter and Schmidhu-ber, 1997), resulting in similar performance at the cost of much longer training time due to the larger number of parameters.",
"The left component of Figure 2a captures the relevance of a target post with respect to the original post.",
"It takes as input two GRU encoded tensors: one for the target post h t , the other for the original post h o .",
"It computes their similarity defined as: r t = h t h o , where denotes the element-wise multiplication.",
"We also experimented with element-wise difference and cosine similarity, but found that multiplication works best.",
"Our relevance modeling component is inspired from the architecture for answer sentence selection model (Yu et al., 2014).",
"In Figure 2a, the right component models the target post's novelty compared to the past k posts",
"from the same thread.",
"It takes the encoded tensors for the target post h t as input, as well as the past k posts ( h t k , h t k +1 , ..., h t 1 ) .",
"We first encode the context of the discussion by modeling the sequence of the past k posts.",
"In order to achieve this, we use another GRU (labeled as Sequence Encoder in Figure 2a) to transform the sequence of k post tensors to a single context tensor c t of equal dimension g .",
"Each timestep i of this is defined as follows: c ti = GRU context ( c ti 1 , h t i ) .",
"Similar to GRU text , c tt 1 , the last vector in the sequence, is considered as the context representation c t (as shown in Figure 2b).",
"To determine the novelty of the target post, we compute its similarity n t with the discussion thread context represented by its context tensor: n t = h t c t .",
"Importantly, instead of considering all the previous posts in the thread, we limit the context to the past k posts for two reasons:",
"1. Users may not recall the entire context of discussion while reading a post appearing much later in a long-running thread.",
"2. Users often arrive at a discussion thread through search engine queries.",
"Since long threads are paginated, a user may arrive on a page in the middle of the discussion thread, thus also missing the previous context.",
"We find empirical evidence for these assumptions later in our experiments (see Section 5).",
"In tuning our model, we observed that increasing the context length beyond a threshold does not yield improvements.",
"We combine the relevance tensor ( r t ) and novelty tensor ( n t ) and feed through a fully connected layer to make the final post helpfulness prediction:",
"where denotes concatenation; x t is the concatenated tensor; y is the output label ( 0 or 1 ); W and b are the weight matrix and bias vector, respectively, learned for the fully connected layer.",
"We use binary cross-entropy loss to train the model, optimizing with Adam (Kingma and Ba, 2014).",
"Alternative Architectures.",
"We also investigated ensemble architectures.",
"We fed the relevance and novelty tensors through two separate fully connected layers to obtain the binary predictions from both components concurrently, then merged the two predictions via a final fully connected layer for obtaining prediction.",
"This approach fared worse compared to our concatenation-based model, possibly as our final concatenation model can exploit non-linear interactions between both components.",
"The actual post content is never presented to the fully connected layer so that it generalizes well.",
"The final layer only gets to see the relevance, and novelty vectors, which we believe ameliorates the creation of overfitted (post-based or thread-based) features for the helpfulness prediction task.",
"We first describe the datasets, evaluation metrics, and baseline models before our main results.",
"We also conducted additional experiments to answer specific research questions about our model.",
"We experiment with five real-world online discussion forums (Table 2) to validate model effectiveness.",
"Typical of other research work, we also remove threads that have less than two posts.",
"12.",
"Reddit is a popular platform for discussions on a wide-variety of topics on the web.",
"We Dataset # Posts # Threads Avg # Posts / Thread Avg # words / Post",
"use a large number of discussion threads from a reddit data dump 8 .",
"To diversify the datasets in terms of average thread length, we set different thresholds, and created two datasets: Reddit 10+ ( 10 posts) and Reddit 3+ ( 3 posts).",
"Along with a chronologically ordered set of posts, reddit also has Upvote counts for every post.",
"34.",
"Coursera is a large MOOC platform, providing a discussion forum for the course participants.",
"We select two courses with the largest number of posts: Matrix-001 and Android Apps 101-001 from a MOOC dataset (Chandrasekaran et al., 2015).",
"Course participants can vote for a post if they find it helpful.",
"We refer to these datasets as Matrix and Android Apps , hereafter.",
"5. Travel Stack Exchange is one of many QA websites in the Stack Exchange community.",
"We use a data dump 9 of the website and refer to it as Travel dataset.",
"In Travel Stack Exchange, a user can Upvote a post if she deems it helpful.",
"Although not strictly a discussion forum, the threads in this forum appear to be less objective (by our vote distribution analysis, similar to Figure 1), compared to other CQA sites like stackoverflow .",
"We use the user-provided feedback in form of mark as helpful, like, upvote actions as a proxy of the actual helpfulness of a post.",
"Vote counts vary widely across posts and threads, ( i.e. , 0 to 3,100 for the reddit dataset), making it infeasible to formulate the task as a regression problem.",
"Following by prior published research (Cheng et al., 2014; Lo et al., 2017), we model it as a binary classification problem, and use the 80 th percentile expected value of helpful vote count across all the posts as the boundary between the two classes.",
"We assume that a post is 8 https://files.pushshift.io/reddit/ comments/ 9 https://archive.org/download/ stackexchange/travel.stackexchange.com.7z helpful if it has received more helpful votes than the 80 th percentile, and not helpful otherwise.",
"Since our goal is to predict the helpful posts and the class distribution is inherently skewed from our definition, we evaluate the model performance in terms of prediction accuracy for only the positive, helpful class.",
"We evaluate using standard precision, recall, and F 1 score across all datasets.",
"Code for our model is publicly available 10 to aid the reproduction of our results.",
"We experiment with the following state-of-the-art neural text classification methods:",
"1. BiLSTM (Sun et al., 2017): a stack of two layers of Bidirectional LSTM encoders on post text.",
"2. Stacked LSTM (Liu et al., 2016): a stack of two layers of LSTM encoders on the post text.",
"3. LSTM with Attention (Rocktaschel et al., 2016): an LSTM layer with hierarchical attention.",
"4. Answer Sentence Selection (Yu et al., 2014): a CNN model pioneered in a TREC QA 11 task.",
"5. Our Model (Relevance based) : only the relevance component of our model.",
"6. Our Model (Novelty based) : only the novelty component of our model.",
"We do not include traditional feature-based models as part of our reported baseline portfolio, as in our study, neural models have outperformed them as well, which is corroborated in recent studies (Kim, 2014).",
"Additionally, such approaches are fragile, as we experiment with datasets from multiple domains with various discussion styles, and extracting hand crafted features for each is non-trivial and labour intensive.",
"As a preliminary experiment, we tried with a traditional bag-of-words based model.",
"However, we do not include it in the baseline portfolio given its poor performance on our datasets.",
"We used the Keras 12 library with TensorFlow as the backend for model implementation.",
"We split the dataset 80:10:10 for train, validation, and test, respectively, and perform 5-fold cross validation.",
"We tuned the hyper-parameters via grid search on the validation set for all the models.",
"The rest of the parameters used follow standard values from the recent literature.",
"We set word embedding dimension ( d ) to 100, vocabulary size to 100K, hidden dimension of GRU ( g ) to 128, batch size to 512, the dimension of the final fully connected layer to 128, and use 70% dropout.",
"For the CNN-based Answer Sentence Selection baseline, we tuned the number and size of filters (128 and 3, respectively).",
"The maximum length of post text was set according to average post length (in the training split) for each dataset.",
"Table 3 shows the comparison of model performance over the five datasets.",
"We observe that our full model consistently outperforms others in terms of F 1 across all datasets.",
"Our novelty-based model gives the second best score in all datasets except for Android Apps .",
"Comparing our novelty-based model against answer selection model, we observe that the helpfulness of a post depends on both its relevance to the original post and the novelty with respect to earlier posts in the same thread.",
"The evaluation scores obtained by the state-of-the-art neural text classification models strongly support this observation.",
"They consistently make less accurate prediction compared to the relevanceand/or novelty-based models.",
"Among them, BiLSTM or LSTM with Attention model achieves the best performance, dependent on the dataset.",
"We discuss the confounding factor affecting performance in Section",
"5. We also observe that the prediction is more accurate when there is sufficient context to learn the dynamics of the discussion forums.",
"In Reddit 10+ and Reddit 3+ , where both datasets average about 20 and 7 posts per thread respectively, we obtain an F 1 score of 0.40 to 0.51.",
"In the other datasets, where the average thread length is much shorter ( 3 to 5), we obtain relatively low F 1 scores of 0.34 to 0.38.",
"Our model is more accurate in reddit datasets where threads are longer on average, indicative of more open-ended discussion centered on the original post.",
"We now highlight a few corner cases successfully",
"handled by our model.",
"Table 4 shows three target posts along with the original posts and their previous posts from dif-Model",
"ferent datasets.",
"In the first case, we observe that the target post introduces some relevant and novel information into the thread, and thus our model predicts it as helpful.",
"In the second example, we find that the target post is quite similar to some of the previous posts.",
"Since it introduces less novelty in the discussion, our model predicts the target post as unhelpful, although relevant to the discussion topic.",
"In the third example, the target post seems to be novel compared to the previous posts but it deviates from discussion topic in the original post.",
"Hence, our model does not predict it as helpful.",
"These observations indicate that our model treats each of the two qualities of a target post, i.e. , relevance with the original post, and novelty compared to the previous discussion individually as necessary but not sufficient conditions.",
"A target post needs both relevance and novelty so that our model predicts it as helpful.",
"We now answer the following research questions (RQ) to further analyze prediction of helpful posts:",
"influence model performance?",
"The number of posts across threads varies widely, making it difficult to estimate the optimal value for past context length ( k in Section 3.3).",
"To understand the effect of k on model performance, we vary k ranging from 1 to 18 and report F 1 for the Reddit 10+ , and Reddit 3+ datasets in Figure",
"3. Interestingly, we observe that, the performance stops improving after a certain number of posts in both cases: k =11 and k =7 for Reddit 10+ , and Reddit 3+ , respectively.",
"Setting too low a k limits the number of past posts the model gets to see, underfitting the data.",
"Large k gives modest performance gains but incurs significant increase in training cost.",
"As discussed in Section 3.3, the entire context might be redundant to determine target posts' helpfulness in long threads.",
"We believe the context length analysis would be necessary to achieve optimal model performance while exploring other domains.",
"matter?",
"To investigate whether the order of the past posts matter in determining the helpfulness of a target post, instead of modeling the past posts by GRU context layer, we just use the average of the",
"past post tensors to get the context tensor.",
"Table 5 shows the F 1 achieved by this variation compared to our model.",
"We observe that the model performance significantly degrades when the order of the past posts are ignored and represented by an average.",
"Crucially, we find that the datasets with longer threads suffer more compared to the ones with shorter threads.",
"This observation indicates that the sequential nature of discussion is integral to model construction.",
"RQ3: What factors influence performance among the text classification models and our model?",
"Table 3 shows that BiLSTM achieved better scores compared to the other neural text clas-Figure 5: Thread objectivity score CDF.",
"The blue curve shows threads where our model is correct and BiLSTM is not; vice versa for the grey.",
"sification models.",
"To better understand the modeling differences between the BiLSTM and our models, we focus on the cases where one model is correct but not the other (as illustrated for Reddit 10+ in Figure 4).",
"While both models can predict the correct class in 25.4% cases (in yellow), in the other cases (blue and grey), they differ.",
"We study the objectivity of the posts where such differences were observed.",
"Without loss of generality, we define a metric called thread objectivity spread, in terms of the vote shares for the top-5 posts: objectivity = max ( vote ( x )) min ( vote ( x )) (cid:80) vote ( x ) , where x { top-5 posts } in the thread and vote ( x ) gives the helpfulness score of post x .",
"objectivity is unit bound [0 , 1] .",
"While a high objectivity score indicates skewed helpfulness distribution in a thread, a low score indicates that there are multiple helpful answers in a thread; in other words, the thread is less objective in nature.",
"functions (CDFs) of objectivity spread scores for all threads belonging to the grey or blue wedge of Figure 4 ( cf . Figure 5).",
"We observe that the CDF for our model (blue) gives lower objectivity scores with 80 th percentile score of 0.64 for our model and 0.72 for BiLSTM, respectively.",
"This indicates that our model performs better when the thread is more open-ended in nature.",
"We studied the problem of predicting helpfulness of posts in open-ended discussion forums.",
"We found key differences in discussion forums compared to traditional CQA platforms: we observe that forum threads are often non-factoid and subjective in nature with many helpful answers.",
"We hypothesize that post helpfulness crucially relies on two factors:",
"(i) its relevance to the discussion thread and",
"(ii) the novelty of the information introduced.",
"We propose a generic and novel neural architecture using GRU encoders to embody this intuition.",
"Our model outperforms state-of-the-art neural text classification baselines over a diverse set of forums representing three distinct domains.",
"Through deeper analysis, we demonstrate that our model is able to encode the sequential nature of contextual posts, and capture the open-ended nature of discussion threads, thus achieving superior performance over other neural approaches.",
"We plan to apply our work towards building a notification system for incoming helpful posts.",
"In the current work, we addressed the information need aspect present in the discussion forums in general.",
"However, helpfulness might be con-flated with other reasons such as humour, sentiment in certain domains.",
"We would like to investigate those aspects in the future.",
"We acknowledge the support of NVIDIA Corporation for their donation of the Titan X GPU that facilitated this research.",
"This research is supported by the Singapore National Research Foundation under its International Research Centre."
] |
[
"abstain",
"abstain",
"result",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"objective",
"abstain",
"abstain",
"objective",
"result",
"objective",
"method",
"abstain",
"abstain",
"objective",
"other",
"other"
] |
[
"Structured pruning has been extensively studied on monolingual pre-trained language models and is yet to be fully evaluated on their multilingual counterparts.",
"This work investigates three aspects of structured pruning on multilingual pre-trained language models: settings, algorithms, and efficiency.",
"Experiments on nine downstream tasks show several counterintuitive phenomena: for settings, individually pruning for each language does not induce a better result; for algorithms, the simplest method performs the best; for efficiency, a fast model does not imply that it is also small.",
"To facilitate the comparison on all sparsity levels, we present Dynamic Sparsification , a simple approach that allows training the model once and adapting to different model sizes at inference.",
"We hope this work fills the gap in the study of structured pruning on multilingual pre-trained models and sheds light on future research.",
"Large-scale pre-trained monolingual language models like BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) have shown promising results in various NLP tasks while suffering from their large model size and high latency.",
"Structured pruning has proven to be an effective approach to compressing and accelerating these large monolingual language models (Michel et al., 2019; Wang et al., 2020c; Prasanna et al., 2020; Liang et al., 2021), making them practical for real-world applications.",
"Similarly, multilingual pre-trained models (Con-neau and Lample, 2019; Conneau et al., 2020; Xue et al., 2021; Luo et al., 2021) are also powerful and even have more parameters.",
"However, little attention has been paid to evaluating the effectiveness of structured pruning on these multilingual models.",
"Applying pruning to multilingual pre-trained Collaborated work while doing an Alibaba DAMO Academy internship.",
"models is non-trivial, as it typically involves many languages and needs to carefully design the roles of modules within the network.",
"For example, most attention heads have little impact on the performance of monolingual pre-trained models (Michel et al., 2019; Voita et al., 2019), while it is the opposite for multilingual pre-trained models (See Section 5.3 and also Budhraja et al. (2021)).",
"This work intends to examine how structured pruning reacts to multilingual pre-trained models.",
"We take the most representative multilingual pre-trained model family, XLM-R (Conneau et al., 2020; Goyal et al., 2021) for our case study and evaluate the pruning performance on nine cross-lingual understanding tasks in XTREME (Hu et al., 2020).",
"We investigate three aspects of structured pruning: settings, algorithms, and efficiency.",
"Settings Traditional pruning produces a single small model, which is shared across languages ( shared setting).",
"Recent work on multilingual translation (Li et al., 2020; Lin et al., 2021; Xie et al., 2021; Gong et al., 2021) suggests that tailoring pruning to one language could achieve better results ( non-shared setting).",
"However, our comprehensive experiments show that neither of the two settings can consistently outperform the other one (See Section 5.2).",
"Algorithms There exists a broad spectrum of pruning algorithms (Hoefler et al., 2021), and it is impossible to test all of them considering the cost of pre-training.",
"We focus on two pruning algorithms that have been studied the most in monolingual pre-trained models: the regularization-based pruning (Louizos et al., 2018; Wang et al., 2020c) (and our improved version) and the gradient-based pruning (Michel et al., 2019; Prasanna et al., 2020; Liang et al., 2021) (See Section 4).",
"We experimentally find that the simplest gradient-based pruning is more effective for XLM-R (See Section 5.2).",
"speed of the pruned model vary with the sparsity (Hoefler et al., 2021).",
"However, most pruning algorithms, including those we study in this work, require training the model for each specific sparsity.",
"This limitation makes comparisons against a range of sparsity levels infeasible due to the prohibitive training cost.",
"To solve this issue, we propose the Dynamic Sparsification (DS for short), a simple method that parameterizes subnetworks at any sparsity level and shares their weights afterward (See Section 6.1).",
"DS only trains the model once but can obtain models at any sparsity level during inference.",
"Experiments on XNLI (Conneau et al., 2018) show that DS does not degrade the performance much while dramatically reducing the training cost.",
"Interestingly, we observe that the model size and inference speed are not strongly correlated in XLM-R.",
"This observation suggests that one could not obtain a fast model by simply making the model small by using vanilla pruning algorithms (See Section 6.2).",
"Settings Recent multilingual translation research suggests that adapting subnetworks for each language or language pair rather than for all of them gives better results.",
"Among them, Li et al. (2020) train a shared multilingual model, then select layers for each language pair.",
"Lin et al. (2021) also prune a shared multilingual model for each language pair, though on the level of entries in weight matrices.",
"Instead, Gong et al. (2021) prune attention heads and feedforward networks for each language.",
"Xie et al. (2021) first identify general and language-specific neurons in a shared multilingual network, then tune those neurons using the data of their corresponding language only.",
"These findings inspire us to extend from multilingual translation to see how non-shared pruning settings work on multilingual pre-training.",
"Algorithms There are many structured pruning techniques proposed for monolingual pre-trained language models recently.",
"Michel et al. (2019) propose a simple gradient-based importance score to prune attention heads.",
"Prasanna et al. (2020); Liang et al. (2021) extend to prune other components like the feedforward network of the Transformer (Vaswani et al., 2017).",
"Wang et al. (2020c) decompose the pre-trained model weights and apply L 0 regularization (Louizos et al., 2018) to regulate the ranks of decomposed weights.",
"Sajjad et al. (2020) study layer pruning and show that directly dropping N Word Embedding Multi-HeadAttention Add & Norm Feed-ForwardNetwork Add & Norm the ana Embedding Rank Head 4 Head 3 Head 2 Head 1 Attention Head Hidden Unit Figure 1: The left is the Transformer encoder, the right is the components that will be pruned at each layer.",
"the top layers performs the best in fine-tuning.",
"Peer et al. (2021) further show that by carefully choosing layers to drop, structured pruning can achieve a performance close to those trained by knowledge distillation (Hinton et al., 2015).",
"Efficiency The pruning algorithms mentioned above need to train one network for each sparsity level used at inference.",
"Hou et al. (2020) propose a dynamic structured pruning method based on Michel et al. (2019), which allows training the model once and making the inference with any size of the model.",
"Compared with our Dynamic Sparsification, Hou et al. (2020)'s method cannot be applied to the non-shared setting as it needs to rearrange the network, i.e., producing a new model, for each language.",
"Cascading methods (Schwartz et al., 2020; Xin et al., 2020) can even adapt the network size for each instance.",
"Since cascading methods cannot perform batch inference and are only available for sentence classification tasks, we do not consider them in this work.",
"In this section, we briefly review the structure of XLM-R (Conneau et al., 2020), a Transformer encoder (Vaswani et al., 2017) pre-trained by masked language modeling task (Devlin et al., 2019).",
"We also revisit how conventional structured pruning algorithms are applied to Transformers by introducing additional gating variables and setting appropriate values to them (See Figure 1 and also Prasanna et al. (2020); Liang et al. (2021)).",
"The XLM-R model consists of N layers.",
"Each layer is made of the multihead attention and feedforward 1853 networks, followed by the residual connection and layer normalization.",
"Attention Following Michel et al. (2019)'s formula, the multihead attention is written as: MHA( X ) = H (cid:88) i =1 G h,i head i (1) where H is the number of heads, head i is the output of i -th head and G h,i is the i -th entry of the gating variables G h RH .",
"G h,i indicates whether the head i will be pruned.",
"G h,i is set to 1 to retain that head and 0 if to drop it.",
"Different pruning algorithms will have their own ways to determine the values of G h .",
"Feedforward Network The feedforward network contains two linear projections with GeLU activation (Hendrycks and Gimpel, 2016) in between: FFN( X ) = (GeLU( XW 1 + b 1 ) (cid:12) G f ) W 2 + b 2 (2) where W 1 R d d f , b 1 R d f , W 2 R d f d and b 2 R d are weights of the feedforward network and d f is the hidden size.",
"(cid:12) denotes the Hadamard product and G f R d f is a gating vector with a value in the range of [0, 1].",
"G f functions similar to G h in multihead attention, except that G f controls the activation of hidden units.",
"Embedding To prune the large embedding matrix E (occupying 69% of all parameters), we decompose it via low-rank approximation as in Lan et al. (2020): E = E diag( G e ) P (3) where E R v d and P R d d are the decomposed matrices of E .",
"v is the vocabulary size.",
"G e R d , governing the rank of E , is a gating vector similar to G h and G f .",
"diag( G e ) converts G e to a diagonal matrix.",
"The right part of Figure 1 is an illustration of the components (such as hidden units, attention heads, and embeddings) that will be pruned.",
"This section will first introduce pruning algorithms that we study and then describe how to adapt them to two pruning settings.",
"The first is the shared setting that shares the pruned network across languages (default setting that all pruning algorithms could run on), and the second is the non-shared setting that prunes one subnetwork for each language (Xie et al., 2021; Gong et al., 2021).",
"Gradient-based pruning (Michel et al., 2019) computes the importance score of each component, e.g., heads in Eq.",
"1. Then it sets the gating variable of a component, e.g., G h,i in Eq.",
"1, to 1 if its importance score is larger than a threshold and 0 otherwise.",
"Taking an attention head i as an example, its importance score is defined as: I head i = EX X (cid:12)(cid:12)(cid:12)(cid:12) head Ti LMLM ( X ) head i (cid:12)(cid:12)(cid:12)(cid:12) (4) where X is the data distribution and we choose the validation set as X in practice, LMLM is the masked language modeling loss (Devlin et al., 2019).",
"The values of gating variables are set and frozen after pre-training.",
"An additional phase of pre-training is further employed to update network parameters to recover performance loss brought by pruning.",
"Extending gradient-based pruning to the non-shared setting is straightforward: to prune for one language, we use data of that language to compute a unique set of gating variables G = { G h , G f , G e } for it.",
"The L 0 norm has been widely used in many areas, including signal processing (Zhang, 2010; Xu et al., 2011) to induce sparsity.",
"In neural networks, regularization-based pruning, also referred to as L 0 regularization (Louizos et al., 2018), defines a differentiable L 0 norm on the gating variables G = { G h , G f , G e } .",
"It controls the network sparsity by learning the values of G during pre-training.",
"Taking a gating variable g G as an example, it is modeled as: u U (0 , 1) (5) s = sigmoid((log u/ (1 u ) + ) / ) (6) s = s ( r l ) + l (7) g = min(1 , max(0 , s )) (8) where U is the uniform distribution, l < 0 and r > 1 are two fixed constants, is the temperature and is a learnable parameter of g .",
"During training, u is sampled for each g separately.",
"At inference, 1854 Eq.",
"6 becomes s = sigmoid( ) .",
"Compared with gradient-based pruning, the importance score in L 0 regularization is the learnt and the threshold is fixed to sigmoid 1 (cid:16) l r l (cid:17) .",
"The L 0 regularization term of g is: || g || 0 = sigmoid ( log( l/r )) (9) and the overall L 0 regularization term is 1 : LL 0 = || G || 0 = (cid:88) g G || g || 0 (10) LL 0 will be multiplied by a hyper-parameter 1 and added to the pre-training loss LMLM .",
"Two issues of the previous native L 0 regularization emerge in practice: 1) The hyper-parameter 1 does not relate to the model sparsity.",
"It requires several expensive try-outs training runs to find an appropriate setup that can reach desired sparsity (Wang et al., 2020c).",
"2) If we extend L 0 regularization to non-shared setting as done in gradient-based pruning, it easily converges to an optimum where every language shares the network (Gong et al., 2021).",
"This falls back to the shared setting.",
"Thus, we propose two corresponding solutions as below: 1) Sparsity Constraint To address the first issue, we add a sparsity constraint to Eq.",
"where l is the number of languages and G i denotes the set of gating variables for language i .",
"This loss term will keep the subnetwork size of each language close to the targeted size t .",
"2 2) Diverse Subnetwork To address the second issue, we introduce a diversity loss term to encourage the model to find a distinct subnetwork for each language.",
"It is achieved by diagonalizing the gram matrix of gating variables G = [ G 1 ; ; G l ] : L diag = || P (cid:12) G GT (cid:12) ( 1 I ) || 1 (12) 1 In practice we weigh the L 0 regularization term of gating variables (See Appendix B).",
"where 1 is a matrix of ones and I is the identity matrix.",
"P R l l is used to introduce linguistic prior and is a matrix of ones by default.",
"Eq.",
"12 will penalize each language pair equally.",
"Intuitively, the subnetworks of two languages that are close, e.g., English and Spanish, should not be penalized.",
"Thus we add linguistic prior P ij = 0 when the i -th and j -th languages belong to the same language family (See Appendix C) and 1 otherwise.",
"To the end, the loss L we used in pre-training is: L = LMLM + 1 LL 0 + 2 L diag (13) Note that the parameter of the gating variable is randomly initialized.",
"We find that tuning only in the first few epochs is crucial to obtain better performance.",
"If no further notice, we will use this improved L 0 regularization for experiments with non-shared setting and the native L 0 regularization for shared setting.",
"Pre-training Our pruned models are trained on the CC-100 corpus (Wenzek et al., 2020).",
"We choose 100 languages with a total size of 2.2TB for training, which is consistent with those used in XLM-R (Conneau et al., 2020).",
"The development set we used to induce the importance score for pruning is 3K randomly selected samples from the CC-100 corpus per language.",
"Our model is a 12-layer Transformer with a 768 embedding size and a 3072 hidden size.",
"It is pruned and continually trained based on the publicly available XLM-R model for 150K steps with a batch size of 2048 and a learning rate of 0.0002.",
"Other hyper-parameters remain the same as in the original paper (Conneau et al., 2020).",
"We train our model on 32 Nvidia Tesla V100 32GB GPUs with mixed-precision training.",
"It takes roughly 7-10 days to pre-train one model.",
"For inference, we use 1 Nvidia Tesla V100 32GB GPU and Intel(R) Xeon(R) Platinum 8269CY CPU @ 2.50GHz to estimate the GPU and CPU throughput (with a batch size of 128 for GPU and 1 for CPU).",
"Fine-tuning We evaluate the pruned models on 9 downstream tasks from XTREME (Hu et al., 2020).",
"These tasks can be classified into four different categories: (1) sentence-pair classification: XNLI (Conneau et al., 2018), PAWS-X (Yang 1855 Task Sparsity XNLI PAWS-X POS NER XQuAD MLQA TyDiQA BUCC Tatoeba Avg Metrics Acc.",
"et al., 2019); (2) structured prediction: POS (Nivre et al., 2018), Wikiann NER (Pan et al., 2017); (3) question answering: XQuAD (Artetxe et al., 2020), MLQA (Lewis et al., 2020), TyDiQA (Clark et al., 2020); (4) sentence retrieval: BUCC2018 (Zweigenbaum et al., 2017), Tatoeba (Artetxe and Schwenk, 2019).",
"The hyper-parameter setup of fine-tuning could be found in Appendix A. Following previous work (Hu et al., 2020), we study the pruned models in two fine-tuning settings: Cross-lingual Transfer (a.k.a., zero-shot) and Translate-Train-All (a.k.a., multi-task).",
"Note that for the two sequence labelling tasks POS and NER, translation cannot give us the correct training labels.",
"We thus use human-annotated data for translate-train-all training on them.",
"Table 1 shows the fine-tuning results of using different methods to prune XLM-R to 50% sparsity (also the value of t in Eq. 11).",
"We follow the convention of Prasanna et al. (2020) to compute the sparsity of the encoder, which excludes the embeddings in the calculation.",
"For DistilBERT , we remove half of the original layers of XLM-R as done in Sanh et al. (2019).",
"Note that in Table 1 (the rows of L 0 (shared)), regularization-based pruning with shared setting has a lower sparsity (20%).",
"3 3 We have tried various hyper-parameters settings to pretrain models toward 50% sparsity (for a fair comparison with Methods Sparsity XNLI POS NER TyDiQA Avg L 0 20% 73.4 87.5 85.1 61.2/45.9 74.9 Impv.",
"regularization-based pruning.",
"Table 1 shows that vanilla L 0 in shared setting has more parameters (20% sparsity) but performs worse than gradient-based pruning with fewer parameters (50% sparsity).",
"Despite that our proposed improved L 0 works better ( non-shared setting), it still underperforms the gradient-based pruning counterpart.",
"This is because regularization-based pruning keeps modifying the subnetwork structure when weights are updating, which might introduce too much noise during training.",
"Gradient-based pruning, on the other hand, keeps the pruned network unchanged and adapts weights only.",
"Despite that some works (Hoefler et al., 2021) suggest that regularization-based pruning should be preferred, it might not be the same conclusion for XLM-R.",
"DistilBERT ) using vanilla L 0 , but the resulting sparsity is either too high ( 70%) or too low ( 20%).",
"This is in line with the trainability issue of L 0 as indicated in Section 4.2.",
"translation has suggested that non-shared setting provides consistent gains, as this way allows the pruned model to adapt for each language (Li et al., 2020; Lin et al., 2021; Xie et al., 2021; Gong et al., 2021).",
"However, this is not the case for XLM-R.",
"As shown in Table 1, regularization-based pruning ( L 0 ) works the best with the non-shared settings 4 , but for gradient-based pruning it is the shared setting.",
"We analyze that this is because XLM-R covers more low-resource languages (100 languages in XLM-R vs. 24 in most multilingual translation research), which makes sharing the subnetwork for a universal representation more preferable (Aharoni et al., 2019).",
"Simple distillation performs less effective than pruning.",
"For most tasks, distillation is not as effective as pruning.",
"5 This might be that distillation prunes a whole layer, while more fine-grained components are pruned in structured pruning.",
"But combining distillation with pruning could provide some gain, as shown in Table",
"2. Our improved L 0 regularization-based pruning can further boost the performance.",
"In Section 4.2.1, we propose an improved L 0 regularization to solve the drawbacks of standard L 0 .",
"Table 2 shows the results.",
"Through the sparsity constraint, we can control the model sparsity to be the desired value t = 50% instead of 20% (the closest we could have using vanilla L 0 ).",
"And along with diverse subnetwork, the improved L 0 can even consistently improve the fine-tuning results.",
"Appendix E visualizes how subnetworks differ between two languages after applying the diversity loss term.",
"4 Non-shared model with more parameters dropped (50% sparsity) is better than shared model with fewer parameters dropped (20% sparsity).",
"5 Although adopting advanced distillation techniques might improve the result, the pruning algorithm is also simple here.",
"Moreover, integrating with distillation (the last row of Table 2) can further improve the results.",
"Why does regularization-based pruning perform poorly?",
"Since regularization-based pruning learns the subnetwork from scratch, we believe its poor performance results from the low-resource languages.",
"We choose XNLI with the translate-train-all setting for empirical verification.",
"On the one hand, the translate-train-all setting ensures that each language has the same dataset for fine-tuning (except for NER and POS).",
"This way eliminates the difference in fine-tuning.",
"On the other hand, among all tasks except NER and POS, XNLI covers more languages.",
"Figure 2 supports our hypothesis.",
"It shows the accuracy loss and corpus size of each language in regularization-based and gradient-based pruning.",
"We observe that for regularization-based pruning accuracy loss strongly correlates with pre-training dataset size (a value of 0.83 for Pearson's ), while it is not for gradient-based pruning.",
"Where does pruning methods behave differently?",
"In Figure 3, we compare in which aspect different pruning algorithms behave differently.",
"Figure 3 shows the sparsity of each component (attention heads and hidden units) at each layer.",
"Interestingly, we see that gradient-based pruning preserves all attention heads and only a tiny number of hidden units, while regularization-based pruning prunes heads and hidden units more evenly.",
"Though previous works (Michel et al., 2019; Voita et al., 2019) have suggested that most attention heads have little impact on the final performance of monolingual models, our results show that this is not the case for XLM-R.",
"Besides, both pruning methods tend to drop more in the middle layers.",
"In practice, we may need models with different sparsities to fit various resource constraints or compare a set of methods.",
"Nevertheless, existing pruning techniques must train the model independently for each sparsity level, which is prohibitive for large models.",
"Here we propose Dynamic Sparsification ( DS for short), a method that trains the model once but allows inference with any level of sparsity.",
"Section 4 shows that both gradient-based and regularization-based pruning follow the same procedure: we first determine a threshold, then get the importance score for each component, and set the gating variable to 1 if its score is larger than that threshold and 0 otherwise.",
"By adjusting the threshold, one can obtain networks with any sparsity.",
"Based on this, we model a gating variable g as: g = f ( + t ) (14) where is a trainable importance score as in regularization-based pruning, t is the targeted network size (which is one minus the sparsity), t is the threshold with a learnable , f is a function with output ranging between 0 and",
"1. We choose f to be Eqs.",
"6 8 because it enables us to optimize and via L 0 regularization.",
"If and are set properly, Eq.",
"14 will automatically determine whether its corresponding component should be activated under the targeted network size t .",
"Then is how to find and using pruning algorithms.",
"We know that pruning algorithms could rank different components by their importance scores.",
"Based on this ranking, we identify the boundary network size that a specific component will be activated (denoted as t ) and will not.",
"These Methods XNLI POS NER TyDiQA Avg Grad (shared) 76.8 88.4 88.0 69.5/54.6 78.8 + DS 74.6 87.6 87.1 64.0/48.3 76.4 L 0 (non-shared) 76.3 87.9 86.8 67.8/52.5 77.8 + DS 76.2 87.9 86.7 67.9/52.4 77.7 Table 3: The results of gradient-based and regularization-based pruning with or without dynamic sparsification (Sparsity=50%).",
"where is the network size that one component contributes to, t is the boundary network size where the corresponding gating variable g should be 1 if t > t and 0 if t < t .",
"t equals the ranking divided by the total number of components.",
"Eq.",
"15 has a closed-form solution for and : 6 (cid:40) = (cid:0) 1 t/ (cid:1) f 1 (1) + ( t/ ) f 1 (0) = (cid:0) f 1 (1) f 1 (0) (cid:1) / (16) Before training, we use gradient-based pruning to initialize and via Eq.",
"16.",
"If only gradient-based pruning is adopted, and are then clamped and only the retained network parameters will be updated, otherwise they can be jointly optimized via regularization-based pruning.",
"During training, we sample different t s to train different sized subnetworks.",
"At inference, t is set to the targeted network size to prune the model.",
"If one wants to extend DS to non-shared setting, he can prune for each language once and compute a unique set of and for each language.",
"Table 3 ( + DS rows) shows the 50% sparsity results after applying DS to the two pruning algorithms under their best performing pruning settings (according to Table 1).",
"Surprisingly, we observe that gradient-based pruning with shared setting suffers from a significant loss, while regularization-based pruning with non-shared setting has almost no loss.",
"This is because DS shares the weights between subnetworks of different sparsities hurts the model capacity, and non-shared setting enlarges the subnetwork capacity by untying weights of different languages.",
"Due to the expensive cost of training models without DS, we only test the impact of DS on 50% sparsity, but we compare it with other systems with a smaller size (See Appendix F).",
"The leftmost part of Figure 4 shows more on how the two pruning methods trade accuracy for efficiency under various sparsities.",
"The second sub-figure from the left of Figure 4 shows a non-linear relationship between the number of parameters and sparsity, as embeddings are not included in sparsity calculation (Prasanna et al., 2020).",
"Since embeddings are more important than most parts of the model and are very large (69% of the overall parameters), the number of parameters remains high even when the encoder is quite sparse (Sparsity 50% ).",
"Pruning algorithms only start to prune these large embeddings when the encoder is very sparse (Sparsity > 50% ) and results in a great drop in the number of parameters, as shown in Figure 5.",
"The two rightmost panels of Figure 4 describe how the CPU and GPU throughput vary as the sparsity changes.",
"We observe a strong correlation between the CPU throughput and sparsity when the sparsity 50% .",
"However, there is no such trend observed when the sparsity < 50% .",
"This might be 90 70 50 30 10 0 50 100 Network Sparsity [%] L a y e r S p a r s it y [ % ] Layer 1 Layer 3 Layer 5 Layer 7 Layer 9 Layer 11 Figure 6: Sparsity of different layers pruned by regularization-based pruning vs. the sparsity.",
"due to the time consumption of irregular memory access out-weights the speed-up brought by the small tensor computation.",
"Interestingly, we see that sparse models show no acceleration on GPU even when the sparsity is high (e.g., 90%).",
"Although pruning algorithms here optimize the model size instead of inference efficiency, it is expected that the resulting sparse models still have speedup as shown in CPU and in other work (Wang et al., 2020c).",
"In Figure 6, we find that the highest sparsity of all layers is close to but not exactly 100%.",
"This implies that pruning tends to produce a deep and narrow model.",
"Previous studies (Sanh et al., 2019; Wang et al., 2020a; Li et al., 2021) show that GPU throughput is more sensitive to the model height instead of its width.",
"This explains why we did not observe any acceleration even for a model with 1 / 10 of the original size.",
"Though not shown in Table 1 and Figure 4, it is still possible to obtain actual speedup in GPU for sparse models.",
"Previous observations on GPU throughput only hold for inference with the same batch size .",
"In practice, the sparse models have a smaller memory footprint and we can use a larger batch size for higher parallelism.",
"For pruned models in Table 1, a nearly 2 speedup is observed when we double the inference batch size.",
"In summary, Figure 4 suggests that the correlation between the model size and throughput is very week for XLM-R : for model size, reducing the embedding size is important, but it has almost no impact on throughput (an O (1) complexity table lookup); for throughput, compressing parts other than embeddings is more effective as shown in Figure 4, but they have much fewer parameters than the embeddings (193M parameters for embeddings vs. 86M for the others).",
"This advocates special 1859 care needed to be taken if one wants to compress and accelerate XLM-R simultaneously.",
"Mikel Artetxe and Holger Schwenk.",
"2019.",
"Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond.",
"Transactions of the Association for Computational Linguistics , 7(0):597610.",
"Aakriti Budhraja, Madhura Pande, Pratyush Kumar, and Mitesh M. Khapra.",
"2021.",
"On the prunability of attention heads in multilingual BERT.",
"CoRR , abs/2109.12683.",
"Hongyu Gong, Xian Li, and Dmitriy Genzel.",
"2021.",
"Here we study what DS will prune under various sparsities.",
"Figure 5 shows which component (em-beddings, attention heads and hidden units) will be preferred during pruning.",
"In general, gradient-based pruning behaves similar to regularization-based pruning: they first prune hidden units, and only prune attention heads and embeddings when the sparsity is high.",
"The main difference between them is that gradient-based pruning starts to prune embeddings earlier (at 70% sparsity) than regularization-based pruning.",
"This explains why we observe a significant drop in performance for gradient-based pruning with 70% sparsity (See the left of Figure 4): the model already lost much information at the beginning and there is no way to recover.",
"Figure 6 shows how regularization-based pruning prunes each layer with DS.",
"Though we do not plot the curves of gradient-based pruning, its phenomenon is similar to regularization-basd pruning.",
"We find that regularization-based pruning behaves differently at low and high sparsity.",
"It first prunes bottom layers when the sparsity is low, then gradually shift to higher layers as the sparsity increases.",
"In the end, it retains more parameters in the bottom layers instead of the top layers.",
"This provides insight for future model design: a pyramid structure is better when the model size is very small .",
"In this work, we study three aspects of structured pruning on multilingual pre-trained models: settings, algorithms and efficiency.",
"Experiments show interesting phenomena: The best pruning setting depends on the choice of algorithms; The simplest pruning algorithm performs the best; A fast model does not mean it should be small.",
"We hope this work will give insight to future research."
] |
[
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"method",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Robert Wolfe University of Washington [email protected]",
"Aylin Caliskan University of Washington [email protected]",
"Abstract We examine the effects of contrastive visual semantic pretraining by comparing the geometry and semantic properties of contextualized English language representations formed by GPT-2 and CLIP, a zero-shot multimodal image classifier which adapts the GPT-2 architecture to encode image captions.",
"We find that contrastive visual semantic pretraining significantly mitigates the anisotropy found in contextualized word embeddings from GPT-2, such that the intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word embeddings is under .",
"25 in all layers, compared to greater than .",
"95 in the top layer of GPT-2.",
"CLIP word embeddings outperform GPT-2 on word-level semantic intrinsic evaluation tasks, and achieve a new corpus-based state of the art for the RG65 evaluation, at .",
"88 .",
"CLIP also forms fine-grained semantic representations of sentences, and obtains Spearman's = .",
"73 on the SemEval-2017 Semantic Textual Similarity Benchmark with no fine-tuning, compared to no greater than = .",
"45 in any layer of GPT-2.",
"Finally, intra-layer self-similarity of CLIP sentence embeddings decreases as the layer index increases, finishing at .",
"25 in the top layer, while the self-similarity of GPT-2 sentence embeddings formed using the EOS token increases layer-over-layer and never falls below .",
"97 .",
"Our results indicate that high anisotropy is not an inevitable consequence of contextualization, and that visual semantic pretraining is beneficial not only for ordering visual representations, but also for encoding useful semantic representations of language, both on the word level and the sentence level.",
"Large-scale \"natural language supervision\" using image captions collected from the internet has enabled the first \"zero-shot\" artificial intelligence (AI) image classifiers, which allow users to create their own image classes using natural language, yet outperform supervised models on common language-and-image",
"language-and-image tasks (Radford et al., 2021).",
"The image encoders of such models have been shown to form \"multimodal\" representations in the upper layers, such that the same neurons fire for photographic, symbolic, and textual depictions of a concept (Goh et al., 2021).",
"Research on these state of the art \"vi-sual semantic\" (joint language-and-image) models has focused primarily on their benefits for encoding semantically legible representations of images.",
"In this paper, we seek to answer a straightforward but as yet unexplored question: what benefits does contrastive visual semantic pretraining have for representations of natural language?",
"The CLIP (\"Contrastive Language Image Pre-training\") image classification model introduced by Radford et al. (2021) provides a unique opportunity to observe the effects of visual semantic pretraining on a contextualizing language model.",
"While most other visual semantic architectures combine language and image features in the inner layers of the model (Lu et al., 2019), CLIP separates the language model from the vision model until the end of the encoding process, at which point it projects a representation formed by each model into a joint language-image embedding space (Radford et al., 2021).",
"CLIP is trained to maximize the cosine similarity of a projected image with its projected natural language caption, while minimizing the cosine similarity of the projected caption with all of the other images in the batch (Radford et al., 2021), a training objective known as \"contrastive learning\" or \"contrastive representation distillation\" (Tian et al., 2019).",
"The separation of the language model from the vision model prior to projection allows us to consider the two models independently of each other, such that we can study representations of natural language trained for a visual semantic objective, rather than representations which combine language and image features in the inner layers of the model.",
"Moreover, because CLIP encodes natural language using GPT-2, a \"causal\" language 3050 0 1 2 3 4 5 6 7 8 9 10 11 12 0 0 .",
"CWEs, despite being trained with the same architecture, differences in contextualized representations which are not model architecture.",
"model trained solely on next-word prediction, we can directly compare representations formed using the same architecture, but for two very different objectives: one solely linguistic, the other visual semantic.",
"We observe differences between representations formed by GPT-2 and the CLIP language model (\"LM\") both on the word level and on the sentence level.",
"We outline our contributions:",
"1. As shown in Figure 1, contrastive visual semantic pretraining mitigates the angular uniformity (known as anisotropy, measured using cosine similarity) observed by Ethayarajh (2019) in GPT-2 and other contextualizing LMs.",
"The intra-layer self-similarity (mean pairwise cosine similarity, where 1 . 0 is maximally similar and 0 . 0 maximally dissimilar) of contextualized word embeddings (CWEs) is less than .25 in all layers of the CLIP LM, compared to greater than .",
"50 in all layers and greater than .95 in the top layer of GPT-2.",
"The five highest-magnitude neuron activations in a CWE from the CLIP LM make up 39 % of its length in the top layer, compared to more than 97 % of the length of a top layer GPT-2 CWE.",
"This indicates that high anisotropy is not an inescapable consequence of contextualization, nor of using a specific language modeling architecture, but is dependent on pretraining objective, and is significantly reduced by using an objective which is both contrastive and visual semantic.",
"2. Contrastive visual semantic pretraining results in CWEs which outperform other static and contextualized word embeddings on word-level intrinsic evaluation tasks.",
"CLIP word embeddings obtained in a \"decontextualized\" setting (wherein the model is given only the word with no other context) set new state of the art for a corpus-based method on the RG65 intrinsic evaluation task (Rubenstein and Goodenough, 1965), with Spearman's = .",
"88 in the eighth layer of the CLIP LM, and match state of the art for the ValNorm task, which evaluates the semantic quality of representations based on correspondence with pleasantness norms (Toney and Caliskan, 2021), with Pearson's = .",
"88 in layer 4.",
"CLIP CWEs outperform GPT-2 CWEs on every intrinsic evaluation in a decontextualized setting, and for all but one evaluation also outperform the GPT-2 embeddings of Bommasani et al. (2020), who encode 100 , 000 contexts and pool over the representations to form a static word embedding matrix.",
"3.",
"Contrastive visual semantic pretraining encodes semantically useful sentence representations which obtain Spearman's = .",
"73 on the SemEval-2017 Semantic Textual Similarity (STS) Benchmark using the cosine similarity between sentence pairs.",
"CLIP results on the STS benchmark outperform those of GPT-2, which never exceed = .",
"45 in any layer of the model.",
"Moreover, we find that while GPT-2 sentence embeddings formed using the end-of-sequence (EOS) token exhibit intra-layer self-similarity .",
"97 in all layers, the self-similarity of CLIP sentence embeddings steadily decreases over the layers of the model, from .",
"98 to .",
"25 in the top layer, indicating that the contrastive visual semantic pretraining objective of the model forces the formation of fine-grained semantic representations of sentences, such that they can be associated with encoded images.",
"We make our code and data available https://github.com/wolferobert3/clip_ contrastive_acl_2022 .",
"We review prior work on visual semantic AI, on the geometry and semantic properties of representations formed by language models, and on semantic intrinsic evaluation tasks.",
"We examine CLIP and GPT-2, both of which are \"foundation models,\" a term coined by Bommasani et al. (2021) to describe the group of architecturally similar state of the art AI systems which have seen wide adoption across domains including language (Raffel et al., 2020), vision (Dosovitskiy et al., 2020), medicine (Rasmy et al., 2021), and programming (Chen et al., 2021), and which exhibit unexpected emergent properties such as strong performance on tasks on which they were not explicitly trained (Brown et al., 2020).",
"GPT-2 and CLIP adapt the transformer neural network architecture, which uses an \"attention\" mechanism to draw information from the most relevant elements in the model's context window (Vaswani et al., 2017).",
"GPT-2 is a contextualizing language model, meaning that it forms word representations which incorporate information from surrounding words (\"con-text\") (Radford et al., 2019).",
"Such representations, referred to as \"contextualized word embeddings\" (Peters et al., 2018a), differ depending on the sense of the word used and the specific context in which the word occurs (Soler and Apidianaki, 2021), allowing such representations to overcome many of the limitations of static word embeddings, which use only one vector to represent each word (Col-lobert et al., 2011).",
"GPT-2 is an autoregressive \"causal\" language model, meaning that it is trained to predict the next word, and employs \"masked self-attention,\" such that the model can only draw information from words which precede the current word (Radford et al., 2019).",
"AICLIP is a \"multimodal\" model which combines language and image representations in a single joint visual semantic embedding space (Radford et al., 2021).",
"CLIP can be used with either a ResNet (He et al., 2016) or a Vision Transformer (ViT) (Doso-vitskiy et al., 2020) to encode images, and a language model (GPT-2) to encode captions (Radford et al., 2019).",
"CLIP projects the encoded images and captions into a joint embedding space, where the model maximizes the cosine similarity of the correct image-caption pair while minimizing the cosine similarity of each caption with every other image in the batch (Radford et al., 2021).",
"CLIP projects only a representation of the entire caption into the joint language-image space, and uses CWEs in order to produce this representation.",
"CLIP is not the first transformer-based model to form visual semantic representations: both Lu et al. (2019) and Li et al. (2019) adapt the BERT language model of Devlin et al. (2019) to produce visual semantic language-image representations, and Zhang et al. (2020) and Jia et al. (2021) use the same contrastive loss objective as CLIP.",
"What makes CLIP unique is that it is the first image classifier to generalize to zero-shot image classification, such that users can define image classes \"on-the-fly\" using natural language, and obtain performance competitive with supervised computer vision models, without ever fine-tuning on the data for a task (Radford et al., 2021).",
"CLIP improved the zero-shot state-of-the-art 1 on ImageNet (Deng et al., 2009) to 76 .",
"2 % (Radford et al., 2021), from a previous best of 11 .",
"5 % (Li et al., 2017).",
"Ethayarajh (2019) find that CWEs in ELMo (Pe-ters et al., 2018b), BERT (Devlin et al., 2019), and GPT-2 (Radford et al., 2019) are highly anisotropic (angularly uniform, based on measurements of cosine similarity).",
"The effect is most pronounced in GPT-2, such that randomly selected embeddings in the top layer of the model have \"nearly perfect\" ( i.e., close to 1 . 0 ) cosine similarity (Ethayarajh, 2019).",
"Cai et al. (2020) find that the inner layers of GPT and GPT-2 form contextualized word representations on a swiss-roll manifold, while BERT embeds words in clusters.",
"Mitigating anisotropy has been shown to be beneficial for semantic representations, as Mu and Viswanath (2018) find that increasing the isotropy (angular dispersion) of static word embeddings improves performance on semantic intrinsic evaluation tasks.",
"Voita et al. (2019) find that the pretraining objective of a contextualizing lan-1 Tiwary (2021) report that their Turing Bletchley model improves the zero-shot state of the art to 79 .",
"guage model affects what information is encoded in CWEs, and that embeddings in causal language models (like GPT-2) contain less mutual information with the input token and more mutual information with the next token in the sequence as the layer index increases.",
"Tenney et al. (2019) shows that layers of BERT are devoted primarily to certain natural language processing (NLP) tasks, and that task complexity increases with the layer index.",
"Intrinsic evaluation tasks assess the quality of word or sentence embeddings by measuring the correlation of the geometric properties of the embeddings with human-rated judgments of similarity (Tsvetkov et al., 2016) or psycholinguistic norms (Toney and Caliskan, 2021).",
"Bommasani et al. (2020) create static word embeddings by pooling over CWEs derived from tens of thousands of sentences from English Wikipedia, and study the performance of these embeddings on word-level intrinsic evaluation tasks.",
"They find that embeddings from the upper layers of BERT and GPT-2 perform poorly relative to embeddings from earlier layers, and that embeddings formed by pooling over a word's CWEs significantly outperform embeddings formed from \"decontextualized\" words, input to the model with no surrounding context (Bommasani et al., 2020).",
"We report results on the four intrinsic evaluation tasks analyzed by Bommasani et al. (2020), as well as the recently introduced ValNorm task (Toney and Caliskan, 2021), and a sentence-level intrinsic evaluation task, the Semantic Textual Similarity Benchmark (Cer et al., 2017).",
"For comparison of our results on CWE anisotropy with the prior work of Ethayarajh (2019), we encode the text of the SemEval Semantic Textual Similarity tasks from 2012 through 2016 (Agirre et al., 2012, 2013, 2014, 2015), who used these datasets because they include instances of the same words used in different contexts and reflecting different word senses.",
"We discard sentences too long to fit in the 77-token context window of the CLIP LM, which still leaves us with over 36,000 sentences.",
"We report results on five word-level tasks:",
"0 and 4 based on their semantic similarity, as judged by 51 human participants in a controlled psychological study intended to evaluate the relationship between \"similarity of context and similarity of meaning.\"",
"WordSim-353 , a word relatedness task consisting of 353 word pairs divided into two sets (Finkelstein et al., 2001).",
"WS-353 was introduced in the context of information retrieval for search engines but is now widespread as an evaluation of word relatedness.",
"SimLex-999 , a word similarity task consisting of 666 noun-noun word pairs, 222 verb-verb word pairs, and 111 adjective-adjective word pairs (Hill et al., 2015).",
"SimVerb-3500 , a set of 3 , 500 verb pairs rated on similarity by 843 study participants, and designed to remediate the lack of resources for evaluating verb semantics (Gerz et al., 2016).",
"ValNorm , which measures the quality of an embedding based on how well it reflects the valence norms of the language on which was trained (Toney and Caliskan, 2021).",
"ValNorm takes Pearson's correlation coefficient of human ratings in a valence lexicon with Single-Category Word Embedding Association Test (SC-WEAT) (Caliskan et al., 2017) pleasantness effect sizes for a word embedding.",
"task, the Semantic Textual Similarity (STS) Benchmark , a set of 8 , 628 sentence pairs derived from SemEval STS tasks between 2012 and 2017 and rated on similarity (Cer et al., 2017).",
"Sentences reflect three genres: news, forums, and captions.",
"The test set, on which we report results without use of the training set, includes 1 , 379 sentence pairs.",
"While the CLIP LM is based on the GPT-2 architecture, there are minor differences between the models we examine.",
"2 The CLIP LM is a 63-million parameter version of the GPT-2 architecture, and uses 12 layers to form 512-dimensional CWEs within a 77-token context window (Radford et al., 2021).",
"GPT-2 Small, the model studied by Ethayarajh (2019) and examined in this paper, forms 2 We use the PyTorch models available via the Transformers library of Wolf et al. (2020).",
"768-dimensional CWEs over a 1,024-token context window, and has a total parameter count of 124-million (Radford et al., 2019).",
"Though it consists only of image captions, the text component of the WebImageText corpus used to train CLIP has a \"similar\" word count to the WebText corpus used to train GPT-2, according to Radford et al. (2021).",
"We outline our experiments, and discuss our approach for extracting both CWEs and sentence embeddings, and for computing self-similarity.",
"We use the self-similarity formula of Ethayarajh (2019) to study whether the contrastive visual semantic pretraining objective of CLIP has affected the anisotropy of GPT-2 CWEs:",
"Note that cos in Equation 1 refers to cosine similarity, or the angular similarity of two vectors after normalization to unit length, a common method for measuring the semantic similarity of word embeddings.",
"n refers to the number of word embeddings w used in the self-similarity measurement.",
"Following Guo and Caliskan (2021), who report consistent results on semantic bias analyses by randomly sampling 10 , 000 CWEs, we measure the self-similarity of 10 , 000 randomly selected CWEs in contexts from the STS 2012-2016 tasks for every layer of CLIP and GPT-2.",
"We collect CWEs for the same 10 , 000 word indices from all layers, rather than randomly selecting new words at every layer.",
"Because Mu and Viswanath (2018) find that a few high-magnitude dimensions cause anisotropy and distort the semantics of static word embeddings, we also examine whether CLIP embeddings encode less of their magnitude in a few high-value dimensions.",
"Mu and Viswanath (2018) find that there are usually n/ 100 such distorting dimensions in static word embeddings, where n refers to the embedding's dimensionality.",
"Because GPT-2 small forms 768-dimensional embeddings, and CLIP forms 512-dimensional embeddings, we report the mean proportion of magnitude contained in the top 8 and the top 5 neuron activations for each model at each layer across 10 , 000 embeddings.",
"We examine the layerwise performance of CWEs extracted from the CLIP LM and from GPT-2 on the five word-level intrinsic evaluation tasks described in Section 3.1.",
"For these tasks, we extract the vector corresponding to the last subtoken of every word, as prior work finds that the last subtoken in a causal language model fully encodes the semantics of words which a causal language model breaks into subwords (Guo and Caliskan, 2021).",
"For each task, we input words in the \"decontex-tualized\" setting described by Bommasani et al. (2020) ( i.e., with no surrounding context).",
"Unlike Bommasani et al. (2020), we also extract the BOS token and EOS token from the GPT-2 tokenizer, and add them to either side of the decontextualized word.",
"We do this to keep the experiment consistent between the models, as adding the tokens is default behavior for the CLIP LM, but not for GPT-2.",
"Because it is common to omit the BOS and EOS tokens when using GPT-2, we report results for GPT-2 both with the tokens and without them.",
"To observe whether CLIP sentence embeddings have unique properties, since they are the only linguistic representations projected to the joint language-image space, we also report results on these tasks using the EOS token for the CLIP LM and GPT-2.",
"We report layerwise performance using sentence representations obtained from CLIP and GPT-2 on the STS benchmark (Cer et al., 2017).",
"For this task, we use the EOS token in both CLIP and in GPT-2.",
"For GPT-2, we also use the last subtoken of the sentence, with no EOS token added.",
"Finally, we analyze the self-similarity of sentence embeddings from each model using Equation",
"1. In this case, w refers not to a word embedding, but to a sentence embedding.",
"For this analysis, we use embeddings of all of the unique sentences in the test set of STS Benchmark (Cer et al., 2017).",
"CLIP CWEs are less anisotropic than GPT-2 embeddings, and CLIP outperforms GPT-2 on word-level and sentence-level semantic evaluations.",
"As illustrated in Figure 1, the self-similarity of CWEs is lower in every layer of the CLIP LM than in GPT-2.",
"Self-similarity in both models is at its 3054 Performance by Intrinsic Evaluation Task Task RG65 WS-353 SL-999 ValNorm SV-3500 Layer Best Top Best Top Best Top Best Top Best Top GPT-2 no BOS .09 (1) .01 .14 (1) .12 .05 (5) .02 .43 (7) .25 .01 (8) .00 GPT-2 w/ BOS .44 (7) .23 .44 (9) .25 .25 (8) .11 .76 (7) .33 .21 (8) .07 CLIP .88 (8) .70 .72 (6) .51 .48 (9) .39 .88 (4) .72 .30 (4) .17 GPT-2 EOS .32 (12) .32 .31 (3) .10 .16 (4) .05 .61 (6) .17 .10 (4) -.01",
"highest in the top layer, at .",
"96 in GPT-2 and .",
"24 in the CLIP LM.",
"The self-similarity of CWEs in GPT-2 never falls below .",
"55 in any layer, whereas the self-similarity of CWEs in CLIP falls to .",
"06 in layer 4.",
"As shown in Figure 2, we also find that the five highest-magnitude neuron activations in the top layer of GPT-2 make up more than 97 % of the magnitude of GPT-2 CWEs, compared to only 39 % of the magnitude of CLIP CWEs.",
"For both models, there is a small increase (less than 3 percentage points in each layer) using the 8 highest neuron activations.",
"Given that Mu and Viswanath (2018) found that high-magnitude dimensions cause high anisotropy and distort semantics in static word embeddings, and that Ethayarajh (2019) suggests increasing isotropy to improve CWE representational quality, we would expect that CLIP CWEs would have more semantic geometry than GPT-2 CWEs.",
"As shown in Table 1, CLIP embeddings outperform GPT-2 embeddings on all five of the word-level intrinsic evaluation tasks we study, and non-trivially",
"improve the corpus-based state of the art for the RG65 intrinsic evaluation to Spearman's = .",
"88 .",
"3 As visualized in Figure 3, CLIP embeddings also match the state of the art for the ValNorm intrinsic evaluation task (Toney and Caliskan, 2021), previously achieved by the GloVe embeddings of Pennington et al. (2014).",
"For every task except SV-3500, CLIP embeddings outperform the results obtained for GPT-2 by Bommasani et al. (2020), who create static word embeddings by pooling over CWEs obtained from 100 , 000 encoded contexts, both in GPT-2 small and in GPT-2 medium, a 24-layer model which forms 1 , 024 -dimensional embeddings.",
"For SV-3500, Bommasani et al. (2020) obtain Spearman's = .",
"31 in layer 6 of GPT-2 small from embeddings formed using CWEs 100 , 000 from contexts.",
"Our results also indicate that adding the BOS token in GPT-2 significantly improves results on word-level semantic intrinsic evaluation tasks in the decontextualized setting.",
"ValNorm scores im-3 According to the ACL leaderboard at https: //aclweb.org/aclwiki/RG-65_Test_Collection_(State_of_the_art) .",
"prove from .59 to .76 in layer 7, and RG65 scores improve from .01 to .44 in the same layer.",
"On every test, simply adding the BOS token outperforms results reported by Bommasani et al. (2020) on embeddings obtained using the pooling methodology for 10 , 000 contexts, both in GPT-2 small and GPT-2 medium Bommasani et al. (2020).",
"While adding the BOS token does not match the results of applying the pooling method to 50,000 or 100,000 contexts, this marked improvement indicates that using the BOS token is a simple, computationally efficient, and easily replicated way of obtaining static reductions of CWEs, with better quality than representations requiring ten thousand contexts to form.",
"Finally, we find that CLIP EOS token embeddings outperform CWEs in the top layer on two of five word-level intrinsic evaluation tasks, and nearly equal the performance of CLIP CWEs on the other three tasks.",
"ValNorm scores fall to .72 for CLIP CWEs in the top layer, but increase to .",
"80 for CLIP EOS token embeddings in that layer; and RG65 scores fall to .",
"70 in the top layer for CLIP CWEs, but increase to .",
"73 for CLIP EOS token embeddings.",
"CWEs lose some of their mutual information with the input word as the model forms predictions about the next word in the sequence (Voita et al., 2019), but our findings indicate that the EOS token must maintain the semantic information of a context in the top layers, such that it can be projected to the joint language-image space and accurately associated with an image.",
"Additional visualizations of CLIP and GPT-2 performance on word-level intrinsic evaluation tasks are included in Appendix A. 0 1 2 3 4 5 6 7 8 9 10 11 12 0 0 .",
"As shown in Figure 4, sentence embeddings from the CLIP LM outperform GPT-2 sentence embeddings on the STS benchmark at every layer of the respective models, and the difference in performance grows in the upper layers.",
"CLIP sentence embeddings obtain Spearman's = .",
"73 in the top layer, compared to no greater than .",
"45 for GPT-2 embeddings.",
"Even using the EOS token, GPT-2 sentence embeddings exhibit properties similar to CWEs in the model, and lose semantic information in the upper layers, while CLIP sentence embeddings improve in semantic quality through the top layer.",
"As shown in Figure 5, CLIP sentence embeddings become increasingly dissimilar as the layer index increases.",
"This is in stark contrast to GPT-2, wherein sentence embeddings using the EOS token have self-similarity .",
"97 in every layer, and indicates that the contrastive visual semantic objective of CLIP forces fine-grained differentiation of sentence-level semantics.",
"Our findings are straightforward, but it is not obvious that they should occur.",
"The training objective of CLIP is not to produce high-quality CWEs, or even sentence embeddings.",
"Indeed, Radford et al. (2021) spend little time discussing the CLIP language model, noting that they did not see significant performance improvements by scaling up the size of the model.",
"However, in creating the first broadly accurate zero-shot image classifier, Radford et al. (2021) have also created a zero-shot sentence encoder which substantially outperforms the version of its underlying architecture trained on language modeling.",
"Moreover, without the need 3056 for computationally expensive pooling methodologies, and despite having less than half the parameter count of GPT-2 small, the CLIP LM produces CWEs which match or exceed the best performance ever realized with a corpus-based approach on two intrinsic evaluation tasks, and outperform embeddings formed from 100 , 000 encoded contexts in GPT-2 medium (Bommasani et al., 2020).",
"CLIP embeddings show that the high anisotropy observed by Ethayarajh (2019) is not the inevitable result of contextualization, nor even of a specific language modeling architecture, but is connected to the pretraining objective of the model.",
"When trained on a contrastive visual semantic objective, CWEs formed by CLIP have much lower self-similarity at every layer of the model in comparison with GPT-2.",
"This is remarkable because CLIP does not actually project CWEs into the joint language-image space.",
"While we might expect CLIP sentence embeddings, which are projected into the language-image space, to have different properties from the CWEs formed by GPT-2, it does not necessarily also follow that the CWEs formed by CLIP would also be so different from those in GPT-2.",
"Indeed, we still observe the increased self-similarity in the top layer reported by Ethayarajh (2019), and the loss of semantic information related to the input token in the upper layers, as reported by Voita et al. (2019).",
"However, these effects are much less pronounced in CLIP than they are in GPT-2, indicating that the contrastive visual semantic objective of the model has regularizing effects that shape more than just the projected sentence embedding.",
"Our findings suggest that language models trained on visual semantic objectives are likely to privilege the encoding of semantic information, which is essential to matching a caption to an image.",
"The more isotropic representations we observe reflect the objective of the model, which requires differentiating fine-grained semantic information.",
"That models trained on visual semantic objectives would form embeddings to reflect the semantics of a word or sentence more than would a causal language model makes intuitive sense.",
"Through the lens of the training objective, it is more problematic for a causal language model to predict a syntactically invalid continuation of a sentence, such as an incorrect part of speech, than to predict a somewhat unexpected but still syntactically valid continuation of a sentence.",
"When a language model is trained to encode and associate the correct text caption with a matching image, however, the semantic content of the text becomes at least as important as its syntactic properties.",
"Our work shows that a pretraining objective which is both visual semantic and contrastive in nature results in isotropic, highly semantic CWEs and sentence representations, in stark contrast to the representations formed by the same architecture when trained on a language modeling objective.",
"However, further work is needed to address to what extent the results we observe are the result of contrastive training, and to what extent they are the result of visual semantic training.",
"It is possible that a contrastive training objective, wherein the model must discriminate between correct and incorrect options, will result in isotropic and highly semantic embeddings even if both models produce linguistic representations.",
"On the other hand, encoding language for the purpose of performing visual semantic tasks may be particularly important for achieving the effects seen in CLIP, as images lack a grammatical structure and are primarily semantic in composition.",
"Future work might perform a direct assessment between representations obtained from the CLIP LM and representations learned by contrastive text-only models such as those recently introduced by Neelakantan et al. (2022).",
"This work examines semantics in contextualized representations without postprocessing, using cosine similarity as the similarity metric.",
"While this is a common experimental design evaluated frequently in prior work, it is not the only way of assessing semantics in contextualized word embeddings.",
"For example, recent work indicates that semantics can be better isolated in language models like GPT-2 by postprocessing and transforming the embedding space using methods such as removing high-magnitude directions with principal component analysis (Wolfe and Caliskan, 2022; Timkey and van Schijndel, 2021).",
"4 Future work might assess whether these postprocessing techniques, or methods which assess semantics using mutual information (Voita et al., 2019) or linear probes (Tenney et al., 2019), also indicate that contrastive multimodal pretraining magnifies semantics in the embedding space.",
"4 CLIP still outperforms GPT-2 in nearly every case over intrinsic evaluation results reported after postprocessing, and CLIP embeddings may also exhibit improvements from comparable manipulations of the embedding space.",
"Finally, Radford et al. (2021) note that CLIP was first intended to be a zero-shot caption generator, a design which has since been realized using the SimVLM architecture of (Wang et al., 2021b).",
"Analysis of such models, which are not yet available to the research community in a way which would allow analysis of the underlying architecture, may help to answer questions of whether the contrastive objective or the visual semantic setting is more important for regularizing anisotropy and representing semantics.",
"We find that contrastive visual semantic pretraining produces isotropic CWEs which outperform a language model based on the same architecture on semantic evaluations on both the word level and the sentence level.",
"Our findings indicate that incorporating visual semantic objectives with language models may be useful both to regularize the anisotropy in CWEs and to improve the semantic quality of both word and sentence representations.",
"While the contrastive visual semantic objective of CLIP produces semantically rich representations of natural language, we caution that the model is also known to encode harmful societal biases.",
"Goh et al. (2021) find that the CLIP image encoder forms representations which reflect biases against communities marginalized based on religion and on immigration status, and Wang et al. (2021a) and Agarwal et al. (2021) report biases of underrepresentation and stereotypical associations which disproportionately affect women.",
"Moreover, Radford et al. (2021) state that they use frequency-based heuristics to construct the WebImageText corpus on which CLIP trains.",
"Other research on language models has shown that similar techniques can exacerbate biases against marginalized groups, who are often underrepresented in such datasets (Wolfe and Caliskan, 2021).",
"Thus, while our findings are promising for the future of visual semantic AI systems, models like CLIP must be studied further to understand how they represent people, and what the ramifications of such representations are for society.",
"This material is based on research partially supported by the U.S. National Institute of Standards",
"and Technology (NIST) Grant 60NANB20D212.",
"Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of NIST."
] |
[
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"result",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"result",
"objective",
"objective",
"objective",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"method",
"result",
"abstain",
"method",
"method",
"result",
"method",
"method",
"method",
"result",
"result",
"result",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"other"
] |
[
"An important risk that children face today is online grooming , where a so-called sexual predator establishes an emotional connection with a minor online with the objective of sexual abuse.",
"Prior work has sought to automatically identify grooming chats, but only after an incidence has already happened in the context of legal prosecution.",
"In this work, we instead investigate this problem from the point of view of prevention.",
"We define and study the task of early sexual predator detection (eSPD) in chats, where the goal is to analyze a running chat from its beginning and predict grooming attempts as early and as accurately as possible.",
"We survey existing datasets and their limitations regarding eSPD, and create a new dataset called PANC for more realistic evaluations.",
"We present strong baselines built on BERT that also reach state-of-the-art results for conventional SPD.",
"Finally, we consider coping with limited computational resources, as real-life applications require eSPD on mobile devices.",
"Online grooming denotes the process where a so-called sexual predator establishes an emotional connection with a minor online to systematically solicit and exploit them for sexual purposes (Wachs et al., 2012).",
"Online grooming is a major concern of public safety that, sadly, is rapidly growing.",
"For instance, in England and Wales in the year to mid-2020, police recorded 5,083 offenses of Sexual Communication with a Child [1], an average of 14 offenses per day.",
"In Germany, there were 2,632 recorded cases in 2020 where a child was sexually abused through internet communication technologies [2], an increase of 50 % to the previous year.",
"As such crimes often go unreported or undetected, police-recorded incidents certainly do ... ...",
"The problem of detecting whether or not a child is being groomed by a predator is called sexual predator detection (SPD).",
"Most previous approaches to SPD have cast this as the problem of identifying predatory authors in a corpus of segments of chats (Villatoro-Tello et al., 2012; Cardei and Rebedea, 2017).",
"Other approaches interpreted it as a binary classification problem over segments of a chat (Ebrahimi et al., 2016), or the entire chat (Bours and Kulsrud, 2019).",
"Approaches were evaluated mostly using data from the PAN shared task on sexual predator detection (Inches and Crestani, 2012).",
"However, most prior work has viewed SPD from the point of view of forensics: they focused on identifying completed grooming chats in preparation for legal prosecution.",
"We believe that it is also important to study approaches that may prevent online grooming as early as possible, i.e., during an ongoing chat.",
"Ideally, the grooming process should be disrupted before it succeeds to protect children from harm.",
"This task is non-trivial as the content of grooming chats changes over time: chats often start with the exchange of personal information and building of trust, a phase in which they are difficult to detect.",
"In a second stage, predators further develop trust with their victims in a cycle of entrapment.",
"They try to desensitize their victims to sexual topics, isolate them from others, and arrange meetings (Olson et al., 2007, p. 236).",
"Even in this second stage, it is difficult to distinguish between grooming and consensual conversations between minors or adults.",
"For this, a model needs to be able to detect discriminative features like a user talking about age difference, checking on the victim's relationship with their parents, isolating them from their support network, reframing sexual actions as appropriate and more (see Olson et al. (2007), pp. 234ff).",
"An example of arranging a meeting is shown in Figure 1. Here, an alert is triggered only late in the grooming process, when an in-person meeting is already explicitly being discussed.",
"Ideally, such chats should be detected far sooner.",
"However, the real-world consequences of a triggered eSPD alert can be considerable and may involve police actions.",
"This means that false alerts should be avoided as much as possible.",
"At the same time, false negatives must be avoided by all means as these could lead to a sexual assault.",
"It is therefore as important as ethically difficult to find the best balance between the earliness of an alert and the certainty that an alert is justified.",
"We introduce the task of early sexual predator detection (eSPD) in chats.",
"We cast eSPD as an early risk detection problem in which chats are analyzed from the start and message by message, with the goal of raising warnings for chats early and accurately.",
"Specifically, we make the following contributions: We introduce the problem of eSPD and formally define it.",
"limitations, and build a new combined dataset called PANC as a best-effort for evaluating eSPD.",
"We propose a task setup to evaluate eSPD, focusing on the trade-off between earliness and accuracy.",
"We present strong baselines for eSPD using a two-tier approach.",
"Our method (1) analyzes sliding windows of messages from an ongoing chat using BERT and (2) continuously classifies the sequence of the window classifications.",
"We evaluate three different BERT language models, two of which work on mobile.",
"We compare our models to previous research in conventional (i.e. non-early) SPD settings and find that two of them outperform the current state of the art.",
"We provide an extensive discussion of the limitations of our models and the available data.",
"We see our work as an important step to encourage more research into eSPD.",
"To this end, we make our experimental setup, our baseline models, scripts for corpus processing, and the visualization tool for inspecting analyzed chats (used to generate Figure 1) publicly available 1 .",
"We emphasize that we do not consider our models to be ready for use in real scenarios, which we discuss in depth in our Ethics Statement (see below).",
"Due to privacy and legal reasons, grooming chats are extremely difficult to obtain.",
"We introduce the (few) known corpora of this kind and discuss their limitations, motivating the assembly of the PANC dataset we discuss in Section 3. 2.1 Original data sources The main source of grooming chats used in SPD literature is the Perverted Justice Foundation (PJ) [10].",
"This organization used trained volunteers (decoys) posing as children in public chat rooms to help authorities convict sexual predators.",
"They provide their chats with convicted predators for download but ceased their decoy operations in 2019.",
"Nearly all prior work evaluates on datasets derived from PJ (McGhee et al., 2011; Gupta et al., 2012; Bogdanova et al., 2014; Meyer, 2015; Ebrahimi et al., 2016; Cardei and Rebedea, 2017; Pastor Lopez-Monroy et al., 2018).",
"To our knowledge, the only work using real grooming chats is Cheong et al. (2015) who used chats extracted from MovieStarPlanet , a massively multiplayer online game for children.",
"Unfortunately, this corpus is not publicly available.",
"The PAN Lab at the 2012 CLEF conference introduced a shared task on sexual predator identification [7].",
"The organizers created a large dataset which we call PAN12 using data from PJ.",
"As nongrooming chats, they sampled from logs of IRC channels and of the chatting site Omegle [11].",
"These chats also include cybersex between consenting adults among non-predatory conversations, which makes distinguishing grooming chats especially difficult.",
"They divided chats into segments whenever a conversation was interrupted for more than 25 minutes and filtered all segments with more than 150 messages.",
"This results in a total of 222k segments, of which 2.58 % are grooming chats, through which the organizers try to mimic the distribution of grooming in actual online conversations.",
"They are partitioned into train and test splits of a 30:70 ratio.",
"PAN12 has several limitations.",
"All grooming chats stem from decoy operations and are not with actual victims, and the non-grooming chats are not with decoys.",
"real.",
"Most problematic for eSPD is the separation into relatively short, unordered segments, thus completely blurring the true timeline of a chat.",
"This makes the data unsuitable for eSPD since we aim to detect predators as early as possible in potentially long-running chats.",
"Villatoro-Tello et al. (2012) found that filtering the PAN12 segments to only focus on the most important samples can lead to better model performance.",
"They created a new dataset ( VTPAN ) by removing from PAN12 segments that have only one participant, less than 6 interactions per user, or long sequences of special characters (often depicting ASCII art).",
"Many short segments which stem from predatory chats actually contain no predatory language, so a benefit of VTPAN is that many of these segments are filtered.",
"The dataset is only 10% of the size of PAN12 , and is also used in recent work on SPD (Escalante et al., 2016, 2017; Pastor Lopez-Monroy et al., 2018).",
"Regarding eSPD, this dataset suffers from the same limitation as PAN12 .",
"The ChatCoder2 ( CC2 ) corpus was created by McGhee et al. in 2011 and was later also used by other researchers (Basave et al., 2014).",
"It contains 497 complete predator chats from PJ and was built mainly for studying the semantic segmentation of grooming chats.",
"Accordingly, messages in 155 chats are also labeled as belonging to one of three phases: (1) exchange of Personal Information , (2) Grooming , and (3) Approach of the victim.",
"In summary, we find that existing datasets suffer from limitations that make them difficult to use for training and evaluating eSPD.",
"The commonly used datasets PAN12 and VTPAN only contain short, disjointed, and unordered chat segments.",
"For eSPD, however, one needs to detect grooming in a continuous message stream, which is ordered and theoretically unbounded in length.",
"Classifying segments only, we have no information about how early in the complete chat grooming is detected.",
"Moreover, evaluating earliness within single segments would not be interesting as it is not interpretable and because they are so short.",
"While CC2 does have full chat logs, it does not contain any negative samples.",
"Our analysis thus motivates the assembly of the new PANC dataset as explained in the next section.",
"In this section, we propose an evaluation setup for eSPD.",
"We give a formal definition of the task followed by suitable evaluation metrics.",
"Finally, we discuss how we use and combine existing SPD datasets to create PANC for the evaluation of eSPD.",
"We interpret eSPD as an early risk detection problem (Losada et al., 2020).",
"This means that we need to consider the earliness and the accuracy of warnings, continuously analyzing a chat after each new message.",
"Formally: Definition 1 (Message).",
"A message is a string with a time and an author .",
"Definition 2 (Chat).",
"A chat C = ( m 1 , m 2 , . . . ) is a sequence of messages m i where the time of messages is monotonically increasing.",
"A finite chat is of the form b C = ( m 1 , . . . , m n ) , where we say b C has a length of n .",
"We call grooming chats positive and other chats negative .",
"This is the class of a chat.",
"The length of real chats is potentially unbounded and keeps increasing, so regarding real chats as infinite is handy.",
"We analyze chats after each new message, thus considering only finite prefixes for classification.",
"Definition 4 (eSPD).",
"Let X Test be a dataset of finite chats.",
"For C = ( m 1 , . . . , m n ) X Test and l = 1 , . . . , n increasing over time, an eSPD system decides for each l whether a warning for C should be raised or not by classifying C ( l ) .",
"It stops as soon as a warning is raised, classifying C as grooming.",
"If no warning is raised for all l = 1 , . . . , n , it classifies C as non-grooming.",
"Finally, eSPD is the problem of classifying all C X Test as early and accurately as possible.",
"Note that this definition deliberately states that an eSPD system never classifies a chat as nongrooming as long as there are messages left (or the chat did not end, in a real-life setting), as it cannot know the future after the current prefix C ( l ) .",
"In eSPD, there are two desiderata between which a trade-off exists:",
"(a) Raising alerts as early as possible, and",
"(b) raising alerts as accurately as possible.",
"Raising warnings early is good for",
"(a), but hampers",
"(b) as less data is available.",
"Waiting longer with warning hurts",
"(a), but most likely improves",
"(b), as later decisions are based on more messages.",
"Accuracy metrics are most prominent in related work on detecting sexual predators (Pastor Lopez-Monroy et al., 2018; Escalante et al., 2017), i.e. non-early SPD.",
"We report the established metrics of precision, recall, and F 1 for the grooming class.",
"We call the number of messages that have been exchanged before a warning is raised the warning latency .",
"We use latency-weighted F 1 (Sadeque et al., 2018) as a measure that accounts for both warning accuracy and warning latency.",
"To calculate it, we first define a penalty for each warning latency l 1 given by penalty( l ) := 1 + 2 1 + exp( p ( l 1)) where p determines how quickly the penalty should increase as latency increases.",
"A warning after the first message receives 0 penalty and for increasing warning latency, the penalty approaches 1 .",
"Now assume an eSPD system to produce a list latencies of warning latencies for all chats C X Test where (1) C is positive, and (2) the system raises a warning for C .",
"We define the overall speed of correct warnings as speed := 1 median { penalty( l ) | l latencies } .",
"This metric is more interpretable than just using the mean or median warning latency, as it depends on the problem and the dataset at hand how good a median warning latency actually is.",
"Finally, the latency-weighted F 1 is given by F latency := F 1 speed .",
"We generally consider an eSPD system A better than an eSPD system B when it reaches, for a given dataset, a higher F latency ; comparisons focusing more on speed or more on accuracy or searching for pareto-optimal solutions are also possible.",
"Note that we, following Losada et al. (2019), compute the speed of warnings only for grooming chats classified as such.",
"All other cases (false positives, false negatives, true negatives) are accounted for through the F 1 value.",
"Evaluating an eSPD system needs a corpus of chats, where each entire chat is annotated as grooming or not.",
"Note that we do not require this annotation number of positive negative % positive Full-length pos.",
"Furthermore, eSPD based on supervised learning requires an annotated training corpus.",
"Existing datasets cannot be directly used for this purpose, because they either consist only of unordered segments ( VTPAN , PAN12 ), which hinders measuring speed, or only contain positive chats ( CC2 ), which makes measuring F 1 impossible.",
"Furthermore, the existing corpora all use PJ grooming chats and partly overlap.",
"To address these issues, we assembled PANC , an evaluation dataset for eSPD, by carefully combining selected parts from PAN12 and from CC2 .",
"The process is illustrated in Figure 2: The final corpus consists of (1) all positive full length chats from CC2 and (2) the negative segments of PAN12 .",
"We randomly split the corpus on this level at proportions 60:40 into train/test splits.",
"Through (1), we can evaluate earliness.",
"We cannot measure accuracy as defined above due to the lack of full-length negative chats.",
"Instead, in the experiments, we will compute accuracy based on segments as an estimate of (2), for which we split the full-length grooming chats into segments.",
"We filter all segments shorter than 6 messages, similar to VTPAN , and those longer than 150 messages (some of the latter were actually not filtered in PAN12 , contrary to its original specification).",
"Finally, we removed segments that are not between exactly two authors to make them comparable to CC2 chats.",
"Statistics on the resulting corpus are given in Table 1. Discussion.",
"We consider PANC to be the first corpus suitable for realistic eSPD evaluations.",
"Yet it still has limitations: First, the negative chats are not full-length chats but only segments.",
"While this does not impact our earliness evaluation, it prevents the computation of true eSPD accuracy.",
"Our proposed workaround is to replace chat accuracy with segment accuracy, although we do not know how well the latter approximates the former as we therein classify short segments which can stem from anywhere in a chat.",
"An alternative would be to use a difference source for the negative chats; however, we decided on those from PAN12 as they also include hard negative cases (i.e. sexual conversations between consenting adults), which we believe gives more realism to our evaluation.",
"Another limitation is that PANC only contains chats between exactly two authors, so our systems are not applicable in group chats.",
"However, grooming is very rare in group chats as predators depend on their actions staying unnoticed.",
"We present a straightforward eSPD approach to demonstrate the validity of our task setup and to establish baselines for future works.",
"It consists of two tiers of classification: (1) A local tier (Tier 1) that moves a sliding window over the messages of a chat and classifies them, and (2) a global tier (Tier 2) that decides after each window prediction whether to raise a warning or not based on the sequence of recent window predictions.",
"The purpose of this architecture is to balance earliness and accuracy and especially to prevent single suspicious windows from triggering warnings.",
"For Tier 1, we use a standard approach in which we add a linear classifier to a pre-trained transformer model and fine-tune the entire architecture.",
"It takes as input all messages in a given window and outputs a binary prediction.",
"We evaluated different BERT models: BERT large , BERT base (Devlin et al., 2018), and MobileBERT (Sun et al., 2020).",
"Model parameters can be found in Appendix A. MobileBERT is a version of BERT large with smaller model size and faster inference, optimized for use on mobile devices.",
"Hyperparameters.",
"Next to the choice of language model, the main hyperparameter of Tier 1 is the window size .",
"It controls the number of messages that are input into the classifier.",
"sification results over a series of windows.",
"After every window classification, we consider the count of positively classified windows within the last 10 windows.",
"If this value exceeds a pre-defined threshold called skepticism s { 1 , . . . , 10 } , the chat is classified as grooming.",
"Hyperparameters.",
"The only hyperparameter of Tier-2 is thus skepticism which controls the earli-ness/accuracy tradeoff.",
"We evaluate our baseline approach in our eSPD task setup using the proposed metrics for warning earliness, accuracy, and F latency .",
"We compare three different eSPD systems: S BERT-large , S BERT-base , and S MobileBERT , which use the respective transformer models as described above as the Tier-1 classifier.",
"We use a window size of 50 and a skepticism of 5; an evaluation of the impact of the skepticism parameter can also be found below.",
"We fine-tune each of our BERT models on PANC and VTPAN .",
"As the results of fine-tuning BERT models often vary heavily based on the random seed used (Dodge et al., 2020), we repeat this process three times.",
"In the evaluation, we always report the mean of the resulting measures together with standard deviation.",
"We fine-tune BERT base and MobileBERT using the TensorFlow Lite Model Maker [8] Library and BERT large using Flair [9] (Akbik et al., 2019).",
"An overview of evaluation results for our three model variants is given in Table 2. To compute the F latency of warnings, we measured their F 1 score for segments, while speed is based on full positive chats (see Section 3).",
"Evaluating earliness in isolation.",
"Figure 3 shows violin plots of the distribution of warning latencies for the three systems for all predator chats from PANC Test , based on the means over three runs.",
"The systems S BERT-large and S MobileBERT have similar performance while S BERT-base outperforms both.",
"Its median warning latency is roughly 30 messages lower compared to the other systems.",
"Moreover, S BERT-base exhibits much less variance in warning 0 50 100 150 200 250 300 350 400 450 500 BERTlargeMedian: 54 Std.",
"latency than the other two models.",
"An explanation of the somewhat surprising scores of S BERT-large is that one of the three runs of this model led to significantly worse results than the other runs.",
"As a consequence, the standard deviation of this model is also much higher than for the other two models.",
"Interpreting and penalizing warning latency.",
"To calculate F latency , we need to set the parameter p which controls the penalty that is assigned to a given warning latency.",
"However, when inspecting the full-length predator chats, we noticed that the number of messages before a chat gets suspicious varies heavily, and there is no typical value for this, which makes setting p difficult.",
"We believe that it would be better to not set p globally but on a chat by chat basis, which could be done in future work.",
"Conventionally (Sadeque et al., 2018; Losada et al., 2019), p is set such that the penalty is 0.5 at the median length of chats.",
"But for our full-length predator chats, this would be 1,055 messages which we think is way too late to raise a warning.",
"Ultimately, we decided to set p with help from the message labels from CC2 .",
"We set p such that the penalty is 0.5 when about 20 grooming messages are exchanged.",
"In median for the labeled CC2 chats, this is 90 messages, so we set p = ln(3) / (90 1) 0 .",
"0123 .",
"However, the standard deviation for this is about 200 messages.",
"Best baseline approach.",
"As Table 2 shows, overall results differ whether one considers only F 1 or F latency : Considering F 1 , S BERT-large and S BERT-base have similar performance and both outperform S MobileBERT .",
"However, when considering speed, BERT base significantly outperforms the other models.",
"One of the BERT large runs only scored a speed of 0.55 , which is why the mean speed is unexpectedly low and the standard deviation is high.",
"In F latency , S BERT-base outperforms S BERT-large by 0.14 which again outperforms S MobileBERT by 0.09 .",
"Impact of skepticism.",
"The skepticism hyperparameter s controls the propensity of the Tier-2 classifier to raise warnings and can thus be seen as the central knob to tune the earliness/accuracy trade-off for our approach.",
"We would expect that being more skeptical leads to a lower recall, higher precision, and higher latency of warnings.",
"To confirm this, we evaluate each of our eSPD systems on PANC for each skepticism s = 1 , . . . , 10 and note precision, recall, and speed of warnings depending on skepticism.",
"Here, the speed of warnings is calculated as explained in Section 3.2.2.",
"In Figure 4, we plot the concrete accuracy and speed metrics of our eSPD systems, depending on the skepticism of the Tier-2 classifiers.",
"For all of our systems, we indeed find that as skepticism increases, precision increases as well, while recall and speed are decreasing.",
"Moreover, the F latency of our detectors does not significantly change as long as s is in a medium range of { 3 , 4 , 5 , 6 , 7 } , except for S BERT-large , but here the standard deviation of F latency is so high that no clear correlation exists.",
"To get a better understanding of the accuracy of our proposed baseline approach, we also employ it in a conventional SPD setting.",
"This allows us to compare against the state-of-the-art approaches by Escalante et al. (2017) and Pastor Lopez-Monroy et al. (2018).",
"Evaluation setup.",
"For this comparison, we replicate their evaluation setting in which they classify segments on VTPAN by considering increasing fractions of each segment as measured by the number of characters.",
"They evaluate their SPD accuracy after 10%, 20%, . . . , 100% of all characters of a segment where only whole words are included.",
"As classification is not message-by-message, we only use our Tier-1 classifiers in this setting.",
"Note that evaluating accuracy as a function of fraction 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 6 7 8 9 10 Skepticism E v a l ua t i on M e t r i cs BERTl a r ge PrecisionRecallF1SpeedF_latency 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 6 7 8 9 10 Skepticism E v a l ua t i on M e t r i cs BERTba s e PrecisionRecall F1SpeedF_latency 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 1 2 3 4 5 6 7 8 9 10 Skepticism E v a l ua t i on M e t r i cs M ob il e BERT PrecisionRecallF1SpeedF_latency Figure 4: Impact of master classifiers skepticism s for our eSPD systems S BERT-large , S BERT-base , and S MobileBERT .",
"New state of the art on SPD.",
"Figure 5 summarizes the results of this comparison.",
"Notably, even the MobileBERT model is competitive with previous works in spite of being much less resource hungry.",
"Both other models outperform previous works for all settings.",
"The difference in performance is especially large for small segment prefixes and decreases with increasing availability of information.",
"For 10 % of information, BERT large outperforms the SOTA by as much as 8 % in F 1 .",
"A complete list of the F 1 values is given in Appendix B. Discussion.",
"We believe that improvements primarily stem from our usage of BERT, which previously had not been applied to SPD.",
"The implementations of previous approaches are not openly available, so we cannot directly compare example inputs.",
"But prior work uses document representations where words are considered irrespective of their context.",
"Thus, we believe that these approaches are mostly able to detect grooming attempts that use specific words, for instance those with a sexual connotation.",
"A BERT-style transformer model on the other hand may be able to better distinguish whether the overall context in which words are used is a grooming context and identify attempts that use more indirect language such as innuendo.",
"We discuss several issues that must be considered before planning to apply an algorithm like the ones presented in this work in practice.",
"A critical question is how representative PANC is of real grooming chats.",
"Chiang and Grant (2019, p. 693) and Schneevogt et al. (2018), suggest that the PJ chats created by adult decoy volunteers instead of actual child victims (see Section 2.1) may not truly represent real grooming chats.",
"Specifically, they found that they are missing themes of forceful persuasion or extortion of victims, which is present in real grooming chats.",
"Furthermore, youth language changes very fast over the years; as our corpus is from 2012, it is questionable how well it would represent current chats.",
"For instance, it does not contain any emojis.",
"Another issue is the lack of deep relationships in our non-grooming chats.",
"Among those, the only chats with personal or intimate conversations are from Omegle.",
"This is a platform that invites cybersex, for example, but users do not have a strong personal relationship as they randomly meet (only) online.",
"An example of how the lack of such chats might lead to false positives is shown in Appendix C. 6.2 Lack of complete negative chats Due to the lack of publicly available datasets, we could not test our models on complete negative chats.",
"This has implications: We had to resort 0.6 0.7 0.8 0.9 1 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Percentage of characters F 1 BERT-largeBERT-baseMobileBERTPastor opez-Monroy et al. (2018) Escalante et al. (2017) Figure 5: Our BERT models vs. SOTA on VTPAN classifying 10% , 20% , . . . of characters of segments.",
"to measuring accuracy at the segment level, and we cannot provide concrete estimates on warning accuracy for such chats.",
"However, we consider our results on negative segments to be promising.",
"Our Tier-1 classifiers are trained on segments of a chat, created by a specific partitioning of the sequence of messages.",
"However, during eSPD we apply them to windows of the last 50 messages, which may exhibit different properties than the predefined segments.",
"For instance, as segments are separated by lengthy breaks in the conversation, they often begin with greetings which is not the case for our windows.",
"Such differences may confuse our models and lead to sequences of wrong window classifications, an effect we counteract through the Tier-2 classifier.",
"While we consider only chat messages as information to detect grooming attempts, real-world applications might also have additional data available.",
"For instance, in social media, users are often required to state their age when they create their profile.",
"Such data could be very helpful for eSPD.",
"However, we caution that profile information may not be reliable as it is typically not verified and therefore easy to fake and it is common for predators to use fake information.",
"Online grooming is a real and pressing problem faced by any chat system open to children.",
"Accordingly, social media sites and games often use automated grooming detection systems (Bowles and Keller, 2019).",
"For example, YouTube applies NLP to detect predatory messages in video comments and livestream chats followed by human verification (IICSA and Canegallo, 2019, p. 63, ll. 1025).",
"Microsoft uses a similar approach for XBOX Live and Skype chat [6] and also licenses their software to other service providers free of charge (Patel, 2020).",
"Their obvious advantage over academic research is the access to much larger datasets.",
"However, these solutions are server-based and cannot be applied for end-to-end encrypted chats.",
"Many parents also resort to using parental control apps, some of which send children's chats to external servers for analysis, which is a privacy concern.",
"Because of these reasons, there is a need for eSPD systems even on mobile devices.",
"In academia, eSPD so far has seen comparably little research despite its high societal importance, probably due to the difficulties of obtaining appropriate datasets.",
"Villatoro-Tello et al. (2012) was the winning team of the first problem of the PAN12 competition, which was the identification of the predatory authors of the PAN12 segments.",
"They approached the problem by first predicting segments as grooming or not and then distinguishing victim from predator.",
"This two-step method was refined by Cardei and Rebedea (2017) who additionally used behavioral features, such as the number of questions asked, achieving an F 0 .",
"5 of 0.934 for segment classification on a subset of PAN12 Test .",
"Bours and Kulsrud (2019) studied the same problem and included an analysis of early segment classification, i.e., an attempt to find predators early within a segment.",
"They explored their method also by applying it to 10 full-length PJ chats, which could be seen as the first instance of eSPD we are aware of.",
"Early text classification.",
"To our knowledge, Escalante et al. (2016) was the first work to approach SPD from an early text classification perspective, but restricted their analysis to the segment level.",
"Their results were improved in Escalante et al. (2017) using profile-based representations, where documents are represented as normalized sums of vector representations of words.",
"The best results so far for early segment classification were achieved by Pastor Lopez-Monroy et al. (2018) using a Multi-Resolution Representation (MulR) for documents to cope equally well with longer and shorter segments.",
"We compared to the results of the latter two works in Section 5.2 and found that our approach outperforms both.",
"Note that we are not aware of any previous work employing transformers for SPD.",
"Early time series classification.",
"An interesting perspective on our Tier 2 is that it actually solves an early time series classification (eTSC) problem, for which there exist several mature approaches, e.g. TEASER (Schafer and Leser, 2020) or ECTS (Xing et al., 2012).",
"However, there exists a key difference that prevents us from using such methods directly: An eSPD System never classifies a chat as nongrooming as long as there are still messages left (or expected), while an eTSC system at some stage might decide that it is safe to stop controlling the chat (Loyola et al., 2018).",
"This opens the door to malicious attacks by using long and harmless openings in grooming attempts.",
"We nevertheless believe exploring ways to adapt eTSC to eSPD to be an interesting avenue for future research.",
"We defined the problem of early sexual predator detection (eSPD) in online chats and proposed an evaluation setup for this task.",
"To this end, we assembled the PANC dataset, which, albeit having clear limitations, in our mind is the currently best effort possible with the data available.",
"We also showed that a baseline built on current BERT-based language models achieves strong results on this dataset, and beats previous methods in related settings.",
"Notably, results are only modestly impacted for models that can run on mobile devices.",
"We discussed open issues in our data and evaluation setup that must be studied carefully in future work before eSPD systems could go live (and expand on this discussion in Appendix D).",
"We hope that making our task setup accessible to the research community will encourage more research into the highly important topic of early sexual predator detection.",
"We would like to thank Dr. Hugo Jair Escalante and Dr. Esau Villatoro-Tello for providing us with VTPAN and allowing us to publish the means to recreate it, as well as Professor April Edwards for providing us with CC2 .",
"We thank the Institute of Sexology and Sexual Medicine at Charite Uni-versitatsmedizin Berlin for introducing us to the problem of online grooming and the need for automatic solutions, and fruitful discussions.",
"Finally, we thank the TensorFlow Lite Support Team and specifically Chen Cen for creating a workaround that enables our BERT models to work on mobile.",
"Early sexual predator detection is a highly sensitive topic which calls for a proper discussion of potential implications of such research, the datasets being used, and the readiness of eSPD models.",
"There are potentially high stakes for any subject whose chats are analyzed by eSPD systems.",
"Any application of eSPD in running chat systems would incur interaction with vulnerable populations (minors) which must be firmly protected.",
"False-negative, as well as false-positive predictions, may have severe implications for the falsely alleged chat partner or the erroneously unprotected child, respectively.",
"Online grooming is forbidden by law in many countries, as are the establishment of sexual relationships of any kind to children.",
"In many countries, including Germany, already obtaining logs of chat content with sexual content involving children is forbidden, which makes acquisition or usage of real data impossible outside criminal investigations.",
"At the same time, online grooming does happen now, and in many instances, making research into ways to prevent or at least diminish it important.",
"Datasets.",
"For this study, we did not create any new data or perform any experiments with human beings.",
"According to European regulations, such research does not require an ethics vote from an institutional review board.",
"Instead, we performed specific filtering and combination of data from the two datasets PAN12 and ChatCoder2 ( CC2 ), which are available on request to their authors, and have been extensively used in the literature.",
"The creators of PAN12 anonymized the data by removing usernames and email addresses to avoid the identification of users.",
"This makes PAN12 compatible with European regulations that permit the exchange of carefully anonymized data.",
"The CC2 chats stem from PJ and are with offenders who were prosecuted in court and adult decoys posing as children.",
"Thus, they contain no conversations with minors or victims, which makes CC2 compatible with the above-mentioned regulations against possession and usage of any real chat logs involving sexual content with children.",
"Readiness of eSPD models.",
"Real-world applications already use automatic systems to support detection of grooming in chats (Patel, 2020; Bowles and Keller, 2019), yet no details about their measured performance and internal functioning are known to us.",
"However, we do not consider the models and methods presented in this paper as ready for production systems.",
"We already discussed some of their technical limitations in Section 6.",
"On top of these, we believe that any eSPD system must be carefully adapted to any concrete chat system and continuously retrained and monitored to be able to pick up specific styles of communication and how they change over time.",
"Additionally, any system applying eSPD must take an ethically highly difficult decision regarding the trade-off between the two immanent desiderata for eSPD systems: the earliness of warnings and their accuracy.",
"Perfectly achieving both, i.e., performing only correct classi-fications after the very first message, is impossible.",
"In this research paper, we studied the impact of our skepticism factor which controls this trade-off.",
"The concrete setting of this (or a similar) parameter in a real application must depend on an independent and careful assessment of consequences of false positive and false negative alarms.",
"This decision must take the respective circumstances into account and requires an application-specific ethical assessment of its own, including options of monitoring by human professionals as discussed in Appendix D. References Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf."
] |
[
"abstain",
"abstain",
"objective",
"objective",
"objective",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"method",
"method",
"method",
"result",
"method",
"result",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"abstain",
"other",
"other",
"abstain",
"method",
"other",
"abstain",
"method",
"other",
"objective",
"objective",
"abstain",
"result",
"abstain",
"method",
"method",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"Position encoding (PE), an essential part of self-attention networks (SANs), is used to preserve the word order information for natural language processing tasks, generating fixed position indices for input sequences.",
"However, in cross-lingual scenarios, e.g., machine translation, the PEs of source and target sentences are modeled independently.",
"Due to word order divergences in different languages, modeling the cross-lingual positional relationships might help SANs tackle this problem.",
"In this paper, we augment SANs with cross-lingual position representations to model the bilingually aware latent structure for the input sentence.",
"Specifically, we utilize bracketing transduction grammar (BTG)-based reordering information to encourage SANs to learn bilingual diagonal alignments.",
"Experimental results on WMT'14 English German, WAT'17 Japanese English, and WMT'17 Chinese English translation tasks demonstrate that our approach significantly and consistently improves translation quality over strong baselines.",
"Extensive analyses confirm that the performance gains come from the cross-lingual information.",
"Although self-attention networks (SANs) (Lin et al., 2017) have achieved the state-of-the-art performance on several natural language processing (NLP) tasks (Vaswani et al., 2017; Devlin et al., 2019; Radford et al., 2018), they possess the innate disadvantage of sequential modeling due to the lack of positional information.",
"Therefore, absolute position encoding (APE) (Vaswani et al., 2017) and relative position encoding (RPE) (Shaw et al., 2018) were introduced to better capture the sequential dependencies.",
"However, either absolute or relative PE is language-independent and its embedding Bush with Sharon held a talk Bush held a talk with Sharon [ source ] [ re-ordered ] 0 1 2 3 4 5 [ abs POS] 0 3 4 5 1 2 [ XL POS] [ target ] Bush held a talk with Sharon",
"remains fixed.",
"This inhibits the capacity of SANs when modelling multiple languages, which have diverse word orders and structures (Gell-Mann and Ruhlen, 2011).",
"Recent work have shown that modeling cross-lingual information ( e.g., alignment or reordering) at encoder or attention level improves translation performance for different language pairs (Cohn et al., 2016; Du and Way, 2017; Zhao et al., 2018; Kawara et al., 2018).",
"Inspired by their work, we propose to augment SANs with cross-lingual representations , by encoding reordering indices at embedding level.",
"Taking English Chinese translation task for example, we first reorder the English sentence by deriving a latent bracketing transduction grammar (BTG) tree (Wu, 1997) (Fig. 1a).",
"Similar to absolute position, the reordering information can be represented as cross-lingual position (Fig. 1b).",
"In addition, we propose two strategies to incorporate cross-lingual position encoding into SANs.",
"We conducted experiments on three commonly-cited datasets of machine translation.",
"Results show that exploiting cross-lingual PE consistently improves translation quality .",
"Further analysis reveals that our method improves the alignment quality (Sec. 4.3) and context-free Transformer (Tang et al., 2019) (Sec. 4.4).",
"Furthermore, contrastive evaluation demonstrates that NMT models benefits from the cross-lingual information rather than denoising ability (Sec. 4.5).",
"To tackle the position unaware problem, absolute position information is injected into the SANs:",
"where pos abs denotes the numerical position indices, i is the dimension of the position indices and d model means hidden size.",
"f ( ) alternately employs sin ( ) and cos ( ) for even and odd dimensions.",
"Accordingly, the position matrix PE can be obtained given the input X = { x 1 , . . . , x T } RT d model .",
"Then, the position aware output Z is calculated by: Z = X + PE abs RT d model (2) Self-Attention The SANs compute the attention of each pair of elements in parallel.",
"It first converts the input into three matrices Q , K , V , representing queries, keys, and values, respectively: { Q , K , V } = { ZWQ , ZWK , ZWV } (3) where WQ , WK , WV R d model d model are parameter matrices.",
"The output is then computed as a weighted sum of values by ATT ( Q , K , V ) .",
"SANs can be implemented with multi-head attention mechanism, which requires extra splitting and concatenation operations.",
"Specifically, WQ , WK , WV and Q , K , V in Eq.",
"(3) is split into H sub-matrices, yielding H heads.",
"For the h -th head, the output is computed by: O h = ATT ( Q h , K h , V h ) RT d v (4) Where subspace parameters are W hQ , W hK R d model d k and W hV R d model d v , where d k , d v + Nonlinear Fusion",
"refer to the dimensions of keys and values in the subspace, and normally d k = d v = d model / H. Finally, these subspaces are combined with concatenation operation: O = CONCAT ( O 1 , . . . , OH ) WO (5) where WO R Hd v d model and O RT d model are the parameter matrix and output, respectively.",
"First, we built a BTG-based reordering model (Neu-big et al., 2012) to generate a reordered source sentence according to the word order of its corresponding target sentence.",
"Second, we obtained the reordered word indices pos XL that correspond with the input sentence X .",
"To output the cross-lingual position matrix PEXL , we inherit the sinusoidal function in Eq.",
"(1).",
"Formally, the process is: PEXL = f ( BTG ( X )) (6) 3.2 Integration Strategy As shown in Fig. 2, we propose two strategies to integrate the cross-lingual position encoding (XLPE) into SANs: inputting-level XL ( InXL ) SANs and head-level ( HeadXL ) SANs.",
"Inputting-level XL SANs As illustrated in Fig. 2a, we employ a non-linear function TANH ( ) to fuse PE abs and PEXL : PEIN-XL = TANH ( PE abs U + PEXLV ) (7) where U , V are trainable parameters.",
"In our preliminary experiments, the non-linear function performs better than element-wise addition.",
"This might because complex non-linear one have better fitting capabilities, thereby avoiding exceptional reordering to some extent.",
"Next, we perform Eq.",
"(2) to obtain the output representations: ZIN-XL = X + PEIN-XL (8) Similarly, we use Eq.",
"(3) (5) to calculate multiple heads of SANs.",
"Head-level XL SANs Instead of projecting XL PE to all attention heads, we feed partial of them, such that some heads contain XL PE and others contain APE, namely HeadXL.",
"As shown in Fig. 2b, we fist add APE and XL PE for X , respectively: Z abs = X + PE abs ZXL = X + PEXL (9) We denote the number of XL PE equipped heads as { 0 , . . . , H } .",
"To perform the attention calculation, W i is divided into [ WXL i R d model d v ; W absi R d model ( H ) d v ] for each i Q , K , V , correspondingly generating two types of { Q , K , V } for XL PE heads and APE heads.",
"According to Eq.",
"(4), the output of each XL PE head is: OXL h = ATT ( QXL h , KXL h , VXL h ) RT d v (10) As a result, the final output of HeadXL is: HEADSAN ( X ) = CONCAT ( OXL 1 , . . . , OXL O abs +1 , . . . , O absH ) WO (11) In particular, = 0 refers to the original Transformer (Vaswani et al., 2017) and = H means that XL PE will propagate over all attention heads.",
"We conduct experiments on word order-diverse language pairs: WMT'14 English German (En-De), WAT'17 Japanese English (Ja-En), and WMT'17 Chinese English (Zh-En & En-Zh).",
"For English German, the training set consists of 4.5 million sentence pairs and newstest2013 & 2014 are used as the dev.",
"and test sets, respectively.",
"BPE with 32K merge operations is used to handle low-frequency words.",
"For Japanese English, we follow Morishita et al. (2017) to use the first two sections as training data, which consists of 2.0 million sentence pairs.",
"The dev.",
"and test sets contain 1790 and 1812 sentences.",
"For Chinese English, we follow Hassan et al. (2018) to get 20 million 28.3 28.6 28.8 0 2 4 6 8 10 12 14 BLEU (#heads with XL PE) Transformer Big Head XL SANs Figure 3: BLEU score on newstest2014 for different .",
"sentence pairs.",
"We develop on devtest2017 and test on newstest2017.",
"We use SacreBLEU (Post, 2018) as the evaluation metric with statistical significance test (Collins et al., 2005).",
"We evaluate the proposed XL PE strategies on Transformer.",
"The baseline systems include Relative PE (Shaw et al., 2018) and directional SAN (DiSAN, Shen et al. 2018).",
"We implement them on top of OpenNMT (Klein et al., 2017).",
"In addition, we report the results of previous studies (Hao et al., 2019; Wang et al., 2019; Chen et al., 2019b,a; Du and Way, 2017; Hassan et al., 2018).",
"The reordered source sentences are generated by BTG-based preordering model (Neubig et al., 2012) trained with above sub-word level 1 parallel corpus.",
"At training phase, we first obtain word alignments from parallel data using GIZA++ or FastAlign, and then the training process is to find the optimal BTG tree for source sentence consistent with the order of the target sentence based on the word alignments and parallel data.",
"At decoding phase, we only provide source sentences as input and the model can output reordering indices, which will be fed into NMT model.",
"Thus, bilingual alignment information is only used to preprocess training data, but not necessary at decoding time.",
"For fair comparison, we keep the Transformer decoder unchanged and validate different position representation strategies on the encoder.",
"We conduct all experiments on the TRANSFORMER-BIG with four V100 GPUs.",
"Fig. 3 reports the results of different for Head XL SAN s.",
"With increasing of XL PE-informed heads, the best BLEU is achieved when #heads = 4, which is therefore left as the default setting for HeadXL.",
"Then, the BLEU score gradually decreases as the 1 Garg et al. (2019) show that sub-word units are beneficial for statistical model.",
"number of APE-informed heads decrease ( ), indicating that sequential position embedding is still essential for SANs.",
"Tab.",
"1 shows the results on En-De, inputting-level cross-lingual PE (+InXL PE) and head-level cross-lingual PE (+HeadXL PE) outperform Transformer BIG by 0.30 and 0.36 BLEU points, and combining these two strategies 2 achieves a 0.69 BLEU point increase.",
"For Ja-En, Zh-En, and En-Zh (Tab. 2), we observe a similar phenomenon, demonstrating that XL PE on SANs do improve the translation performance for several language pairs.",
"It is worth noting that our approach introduces nearly no additional parameters (+0.01M over 282.55M).",
"Our proposed XL PE intuitively encourages SANs to learn bilingual diagonal alignment, so has the",
"2 Replace PEXL in Eq.",
"(9) with PEIN-XL in Eq.",
"(8).",
"potential to induce better attention matrices.",
"We explore this hypothesis on the widely used Gold Alignment dataset 3 and follow Tang et al. (2019) to perform the alignment.",
"The only difference being that we average the attention matrices across all heads from the penultimate layer (Garg et al., 2019).",
"The alignment error rate (AER, Och and Ney 2003), precision (P) and recall (R) are reported as the evaluation metrics.",
"Tab.",
"3 summarizes the results.",
"We can see: 1) XL PE allows SANs to learn better attention matrices, thereby improving alignment performance (27.4 / 26.9 vs. 29.7); and 2) combining the two strategies delivers consistent improvements (24.7 vs. 29.7).",
"Tang et al. (2019) showed that context-free Transformer (directly propagating the source word em-beddings with PE to the decoder) achieved comparable results to the best RNN-based model.",
"We argue that XL PE could further enhance the context-free Transformer.",
"On English German dataset, 3 http://www-i6.informatik.rwth-aachen.",
"de/goldAlignment , the original dataset is German-English, we reverse it to English-German.",
"we compare LSTM-based model, Transformer BIG -noenc-nopos, +APE, +RPE and +InXL PE.",
"For fair comparison, we set the LSTM hidden size to 1024.",
"In Tab.",
"4, we can see: 1) position information is the most important component for the context-free model, bringing +14.45 average improvement; 2) InXL PE equipped context-free Transformer significantly outperforms the LSTM model while consuming less parameters; and 3) compared to the increment on standard Transformer (+0.30 over 28.36), InXL PE improves more for context-free Transformer (+0.57 over 24.11), where the improvements are +2.3% vs. +1.1%.",
"To demonstrate that our improvements come from cross-lingual position information rather than noisy position signals, we attack our model by adding noises 4 into reordered indices of training sentences.",
"As shown in Fig. 4, our method can tolerate partial reordering noises and maintain performance to some extent.",
"However, as noise increases, translation quality deteriorates, indicating that noises in reordering information do not work as regularization.",
"This contrastive evaluation also confirms that the model does not benefit from the noise as much as it benefits from the reordering information.",
"Augmenting SANs with position representation SANs ignore the position of each token due to its position-unaware bag-of-words assumption.",
"The most straightforward strategy is adding the position representations as part of the token representations (Vaswani et al., 2017; Shaw et al., 2018).",
"Besides above sequential PE approaches, Wang et al. (2019) enhanced SANs with structural positions extracted from the syntax dependencies.",
"However, none of them considered modeling the cross-4 We randomly swap two reordered positional indexes with different ratios.",
"Modeling cross-lingual divergence There has been many works modeling cross-lingual divergence ( e.g. , reordering) in statistical machine translation (Nagata et al., 2006; Durrani et al., 2011, 2013).",
"However, it is difficult to migrant them to neural machine translation.",
"Kawara et al. (2018) pre-reordered the source sentences with a recursive neural network model.",
"Chen et al. (2019a) learned the reordering embedding by considering the relationship between the position embedding of a word and SANS -calculated sentence representation.",
"Yang et al. (2019) showed that SANs in machine translation could learn word order mainly due to the PE, indicating that modeling cross-lingual information at position representation level may be informative.",
"Thus, we propose a novel cross-lingual PE method to improve SANs.",
"In this paper, we presented a novel cross-lingual position encoding to augment SANs by considering cross-lingual information ( i.e., reordering indices) for the input sentence.",
"We designed two strategies to integrate it into SANs.",
"Experiments indicated that the proposed strategies consistently improve the translation performance.",
"In the future, we plan to extend the cross-lingual position encoding to non-autoregressive MT (Gu et al., 2018) and unsupervised NMT (Lample et al., 2018).",
"This work was supported by Australian Research Council Projects FL-170100117.",
"We are grateful to the anonymous reviewers and the area chair for their insightful comments and suggestions."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"other",
"other"
] |
[
"Universal Semantic Tagging aims to provide lightweight unified analysis for all languages at the word level.",
"Though the proposed annotation scheme is conceptually promising, the feasibility is only examined in four Indo European languages.",
"This paper is concerned with extending the annotation scheme to handle Mandarin Chinese and empirically study the plausibility of unifying meaning representations for multiple languages.",
"We discuss a set of language-specific semantic phenomena, propose new annotation specifications and build a richly annotated corpus.",
"The corpus consists of 1100 EnglishChinese parallel sentences, where compositional semantic analysis is available for English, and another 1000 Chinese sentences which has enriched syntactic analysis.",
"By means of the new annotations, we also evaluate a series of neural tagging models to gauge how successful semantic tagging can be: accuracies of 92.7% and 94.6% are obtained for Chinese and English respectively.",
"The English tagging performance is remarkably better than the state-of-the-art by 7.7%.",
"Developing meaning representations across different languages plays a fundamental and essential role in multilingual natural language processing, and is attracting more and more research interests (Costa-juss et al., 2020).",
"Existing approaches can be roughly divided into three categories: the crosslingual 1 approach focuses on lending semantic annotation of a resource-rich language, such as English, to an under-resourced language (Wang et al., 2019; Blloshmi et al., 2020; Mohiuddin and Joty, 2020); the interlingual approach attempts to This author is now working in Tencent.",
"provide a unified semantic framework for all languages (Abend and Rappoport, 2013; White et al., 2016; Ranta et al., 2020); the multilingual approach aims at developing comparable but not necessarily identical annotation schemes shared by different languages (Bond and Foster, 2013; Baker and Ellsworth, 2017; Pires et al., 2019).",
"In line with the interlingual approach, Universal Semantic Tagging (UST; Bjerva et al., 2016) develops a set of language-neutral tags (hereafter referred to as sem-tag ) to annotate individual words, providing shallow yet effective semantic information.",
"Semantic analyses of different languages utilise a same core tag set, but may also employ a few language-specific tags.",
"Figure 1 presents an example.",
"English I / PRO had / PST repaired / EXT my / HAS watch / CON .",
"/ NIL German Ich / PRO hatte / PST meine / HAS Arm-banduhr / CON repariert / EXT .",
"/ NIL Italian Ho / NOW riparito / EXT il / DEF mio / HAS orologio / CON .",
"/ NIL Chinese / PRO / OBJ / PRO / MOD / CON / EXT / EXT / PFT",
"/ NIL Figure 1: An example of parallel sentences and their sem-tags.",
"The idea of sem-tag is first applied to the Parallel Meaning Bank (PMB; Abzianidze et al., 2017), where a multilingual corpus, including Dutch, German and Italian, is semi-automatically built by projecting semantic tags from English sentences to their translated counterparts.",
"However, it is insuf-ficient to prove the feasibility of UST only through some cases of inflectional and genetically related languages, because one main challenge in developing interlingual meaning representations is unifying annotations related to different characteristics of different languages.",
"We argue that two questions with regard to universality of UST are still unanswered.",
"Firstly, homologous words in PMB languages facilitate the application of UST, but it is not clear whether UST is equally applicable to languages sharing little cognates, although UST employs a delexicalised method.",
"Another concern is from typology: it still remains unknown whether word-level semantic tags are effective for annotating long sentence-words composing many morphemes which are common in agglutinative languages (e.g. Turkish and Japanese) and polysynthetic languages (e.g. Eskimo languages).",
"This paper takes Mandarin Chinese, a phylogenetically distant language from the IndoEuropean family, as an example to explore the effectiveness of UST as a universal annotation scheme.",
"Considering the balance of Chinese-specific linguistic properties and universality, we present a more comprehensive tag set where six new tags are added, indicating most sem-tags are applicable to Chinese (2).",
"Based on the new tag set, we establish a parallel corpus by manually translating WSJ into corresponding Chinese sentences and annotating sem-tags for 1100 sentence pairs.",
"It is a peer-reviewed corpus with 92.9% and 91.2% inter-annotator observed agreement of Chinese and English respectively (3).",
"This relatively successful practice of UST in Chinese suggests it keeps the balance between the depth of represented information and the breadth of its coverage of languages.",
"In other words, shallow semantics of UST enables it to be extended to annotate diversified languages.",
"By means of the newly created corpus, we evaluate a series of neural sequence labeling techniques (4).",
"The results demonstrate that the proposed scheme is promising with the accuracy of Chinese achieving 92.7% and the accuracy of English 94.6% (5).",
"The English tagging performance is remarkably better than the state-of-the-art (Abzianidze and Bos, 2017) by 7.7%, even though the sentences in our corpus are much longer than PMB on average, with 25 tokens per sentence compared with 6 in PMB.",
"In order to analyse the divergence between annotations of English and Chinese data and the plausibility of developing universal semantic representation in general, we manually annotate word alignment for 500 sentences.",
"By studying the aligned counterparts, we argue that universality is still threatened to some extent because there are 37.0% aligned tokens with mismatched sem-tags.",
"This phenomenon is mainly due to grammatical divergence, information loss of translation and difference of annotation strategies.",
"All the analyses based on word alignment suggest that even for a delexicalised, relatively shallow meaning representation scheme, it can still be problematic to ensure that semantic representations could be comparable in a word-to-word way.",
"Considering different linguistic ways to encode tense, aspect, prepositions, measure words, subordinate clauses and comparative expressions, we provide a tailored version of UST to handle Mandarin Chinese.",
"We present the complete tailored tag set in the Appendix.",
"Events and tense/aspect Different from English as well as many other IndoEuropean languages, there are no inflection-style tense-markers in Mandarin.",
"Therefore, the morphological tense-related labels, e.g. ENS and EPS , are removed.",
"Alternatively, temporal interpretation of Chinese can be conveyed through function words, adverbials or shared understanding of the context in Chinese (Smith and Erbaugh, 2005).",
"Apart from the last way, the previous two are encoded by sem-tags FUT and IST .",
"As for aspect in Chinese, there are only four commonly recognized aspect markers, denoting the preceding verbs are actualized or ongoing / are perfective ( PFT ) and / are progressive ( PRG ) (Liu, 2015).",
"Preposition Prepositions of English and Chinese vary in their history origins though they have similar syntactic function at present.",
"English prepositions are mainly created to replace the lost inflectional case markers (Mitchell, 1985).",
"On the other hand, Chinese prepositions can be traced to verbs.",
"Li and Thompson (1989) even go so far as to call them coverbs since some of them are like verbs and can be used as verbs that have similar meanings.",
"This term can avoid labeling them either verbs or prepositions.",
"In this regard, Chi-English EXS untensed simple: to walk, is eaten ENS present simple: we walk, he walks EPS past simple: ate, went EXG untensed progressive: is running EXT untensed perfect: has eaten Chinese EXS untensed simple: EXG untensed progressive: EXT untensed perfect: Table 1: EVE tags of English and Chinese.",
"nese prepositions should not follow the practice on English because REL emphasizes grammatical relations between verbs and nouns while in Chinese the degree of grammarization of prepositions is not so far.",
"Consequently, we design a separate set of sem-tags for Chinesee prepositions by borrowing existing sem-tags ( DXT / DXP / ALT ) and adding some new sem-tags ( MAN / RES / AIM / OBJ / COM ).",
"Classifier Classifier is a Chinese-specific word class which is inserted between numerals and nouns to denote quantity.",
"This category does not exist in English so we generalize UOM over the unit of measurement since its function is quite similar to classifiers (Li and Thompson, 1989).",
"Subordinate clause Whether subordinate clauses exist in Chinese is controversial since not all the clauses meet the standard in a lower position than the main clause .",
"Additionally, words corresponding to subordinate conjunctions of English such as (because), (al-though), etc, constitute a heterogeneous group and do not necessarily select a subordinating clausal complement (Paul, 2016).",
"Given these two reasons, SUB is (temporarily) removed to avoid controversy.",
"Comparative expression UST designs a detailed label set to annotate comparative expressions in English.",
"See Table 4.",
"In particular, though expressions labeled as MOR / TOP and LES / BOT utilize exactly the same syntactic constructions, they are separated according to their meaning, in a way that is more oriented by applications.",
"Different from English, Mandarin does not have morphological comparatives and superlatives.",
"To express comparative-related meaning, adverbs (roughly means more ) and (roughly means most ) are utilized and annotated as MOR and TOP respectively.",
"Accordingly, LES and BOT are deleted.",
"EQU equative: as tallasJohn , whales are mammals MOR comparative positive: smarter, more LES comparative negative: less, worse TOP superlative positive: smartest, most BOT superlative negative: worst, least ORD ordinal: 1st, 3rd, third Chinese",
"EQU equative: MOR comparative positive: TOP superlative positive: ORD ordinal:",
"We introduce a new moderate-sized corpus containing high-quality manual annotations for English and Chinese, which is now available at https://github.com/pkucoli/UST .",
"To support fine-grained cross-lingual comparisons, the corpus includes 1100 parallel sentence pairs.",
"We select 1100 sentences from the Wall Street Journal (WSJ) section of Penn TreeBank (PTB; Marcus et al., 1993).",
"We choose it because it contains detailed semantic annotations and the sentences are relatively long, thus potentially carrying more complex information.",
"It is noteworthy that various syntactic and semantic analyses of these English sentences have been built by multiple projects, e.g. DeepBank (Flickinger et al., 2012), PropBank (Palmer et al., 2005) and OntoNotes (Weischedel et al., 2013).",
"We then obtain Chinese counterparts of original English sentences by employing EnglishChinese bilinguals to do literal translation.",
"In addition, we also select 1000 sentences from Chinese TreeBank (CTB; Xue et al., 2005), where manual syntactic analyses are available.",
"One doctoral student and one undergraduate student, majoring in linguistics, annotate the pair sentences.",
"The guideline for English annotation is derived from the universal semantic tag set (Abzian-idze and Bos, 2017) with reference to data in PMB and Chinese is annotated based on the modified tag set in the appendix.",
"The annotation process consists of three steps: firstly, annotators independently annotate 100 Chinese WSJ sentences, and later compare and discuss disagreements between the annotations.",
"The conflicting cases are then analyzed to modify the specification.",
"After some iterations, the consistency between annotators is significantly improved.",
"Additionally, we find part-of-speech (POS) tags are quite useful to accelerate manual annotation.",
"Therefore, we apply the Stanford CoreNLP tool (Manning et al., 2014a) to get automatically predicted POS tags for the translated Chinese sentences.",
"Quality of the corpus The observed inter-annotator agreement in annotating Chinese and English sub-corpus data achieves 92.9% and 91.2% for Chinese and English sentences respectively.",
"A high consistency in the annotation of both sub-corpus is obtained, which, in our view, demonstrates that UST is feasible for Chinese and the adjustment of original tag set is relatively satisfactory.",
"Re-tagging In order to improve the quality of annotation, we leverage the re-tagging strategy (Ide and Pustejovsky, 2017).",
"Specifically, we investigate disagreements between initial model predictions and manual tagging, and correct manual annotation errors.",
"After a round of re-tagging and re-training, the disagreement between the gold and the output of the tagger reduces from 10.3% to 7.9% on Chinese and 6.7% to 5.2% for English.",
"As a multilingual annotation scheme, UST represents semantic information in an interlingual way.",
"Therefore, we want to answer after the modification of tag set, how the retained cross-lingual syntax and semantic divergence between distant languages still threatens its universality.",
"We leverage a token-level word alignment for 500 parallel sentence pairs and investigate sem-tag mismatching between aligned tokens.",
"Of the total 7295 pairs of tokens aligned, tokens in 3392 pairs share matched semantic tags with their counterparts, with a matching rate of 46.5% .",
"Note that punctuation and tokens tagged with NIL are excluded.",
"Figure 2 shows an example of word alignment and sem-tag matching.",
"Our divergence analysis based on alignment is under the assumption that, as both the tasks of alignment and sem-tagging are concerning token-level semantic representation, the matched token pairs are expected to share the same sem-tags.",
"Non-correspondence between aligned counterparts would therefore suggest divergence between the annotations in two languages, and further, may reveal problems caused by cross-lingual divergence.",
"Word alignment Word alignment between sentence pairs is firstly automatically acquired with Berkeley Aligner 2 and then manually corrected.",
"Matching rate and mismatches In general, aligned tokens are mostly entities or events, and among matches, the most frequent sem-tag is CON , followed by ORG and ROL .",
"Other tags whose proportions in all matches exceed 3% are EXS , QUC , IST , PER and GPE .",
"And the match per edge rates of these tags are also relatively high except for IST (see Table 5).",
"However, since the mismatch phenomenon in CON , ORG and EXS are also 2 https://code.google.com/archive/p/ berkeleyaligner/ Pacific+First+Financial+Corp.",
"not rare, annotation divergence could probably exist.",
"A linguistically-motivated analysis suggests the following important factors: Grammatical divergence: an example is EXS in Figure 2.",
"As illustrated in 2, it is used to tag Chinese verbs that are non-progressive or non-perfect, while only limited to untensed simple for English.",
"This grammatical difference leads to tag set modification and thus results in sem-tag mismatch.",
"Information loss caused by non-literal translation: In the example in Figure 2, approved its acquisition is translated as ... , which cause mismatch between acquisition (noun, CON ) and (verb, EXS ).",
"Different annotation strategy for MWE: Corp. is tagged ORG while in their Chinese counterparts are tagged CON .",
"widely used in various sequential tagging tasks (Huang et al., 2015; Ma and Hovy, 2016; Bohnet et al., 2018) and have achieved the state-of-the-art performance for many popular benchmark datasets.",
"In our paper, we use Bidirecational LSTM (BiLSTM) with and without a Conditional Random Field (CRF) inference layer to build baseline systems for our dataset.",
"In the rest part of this section, we will briefly formulate our baseline tagging models and introduce some widely used techniques that may enhance prediction for some tagging tasks.",
"Model For a word w i in an input sentence ( w 1 , w 2 , ..., w n ) , we use dynamically learned word embeddings e 1 summed with the feature vectors calculated by BERT/ELMo after a linear projection W e as the input of BiLSTM.",
"If the POS tag of word w i is used as additional input, we extend x i with the the embedding p i of the POS tag before passing it into the BiLSTM.",
"After obtaining the contextual representations f i and b i , we pass the concatenation of f i and b i to a multilayer perceptron (MLP) to calculate the scores vector s i over semantic tags.",
"Finally, we feed s i into a softmax layer to choose a tag with highest probability for each word independently, or a CRF layer which can select the tag sequence with highest probability for the whole sentence.",
"Subword/Character-level Models In order to solve the out-of-vocabulary (OOV) issues in sequence tagging tasks, many subword-level and character-level models are proposed (Akbik et al., 2018; Ling et al., 2015; Bohnet et al., 2018).",
"We do not use these models for experiments, instead we leverage pretrained language models to handle OOV issues, such as BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018).",
"These pretrained language models are trained on large corpus and use a subword/character-level vocabulary, which provide better contextual word representations.",
"POS features POS categories can provide low-level syntax infomation which is beneficial for sem-tagging.",
"In our experiments, we try to use POS tags as additional inputs for our baseline systems.",
"Multi-task Learning (MTL) Multi-task learning (MTL) is a widely discussed technique in the literature.",
"Previous work (Changpinyo et al., 2018) shows that MTL can improve sequence tagging tasks in some cases.",
"In our experiments, we try to jointly train a POS tagger and a semantic tagger which use a shared BiLSTM.",
"We conduct experiments on English and Chinese data separately.",
"Since there are only about 2100 Chinese sentences and 1100 English sentences which are annotated, in order to achieve more stable tagging accuary for future comparison, we randomly split the whole dataset into 5 folds.",
"One fold is a test set and the remaining serves as the training set where our model is trained on 85% instances and model selection is judged by the performance on the rest 15% instances.",
"And then the tagging accuracy will be calculated using the best model on the selected fold.",
"Finally, we report the average accuracy on these 5 folds.",
"Built on the top of PyTorch (Paszke et al., 2017), we employ BiLSTM as our baseline model and all the models are trained for 8000 mini-batches, with a size of 32 .",
"Using the Adam optimizer (Kingma and Ba, 2015) and a co-sine learning rate annealing method, we train the model with an initial learning rate chosen from { 0 .",
"0001 , 0 .",
"005 , 0 .",
"001 } .",
"The details of parameters setting in different models are as follow: 1) the dimension of the hidden states of LSTM is set to 128 for each direction and the number of layers is set to 1 ; 2) the embeddings of POS tags are randomly initialized and has a dimension of 32 while the embeddings of words have a dimension of 300 and are initialized by the GloVe vectors 3 (Penning-ton et al., 2014) and pre-trained word vectors 4 (Li et al., 2018) for English and Chinese respectively 5 ; 3) the parameters of BERT/ELMo are fixed during the training of our sequence tagging models; 4) for models with MTL, we directly optimize the sum of the losses for both POS tagging and universal semantic tagging.",
"Figure 3 shows the overall performance of different models.",
"Gold POS tags bring significant performance improvements, which is also verified by Huo and de Melo (2020).",
"However, MTL can only slightly improve the overall results.",
"When pre-trained contextualized word embeddings are utilized, the gap between different models becomes insignificant.",
"Additionally, the significant improvement of English accuracy over previous state-of-the art is also attributed to the use of pretraining models: with the help of BERT, a simple BiLSTM tagger can be close to 92.0%-accurate for Chinese and 94.6% for English while without it, tagging accuracy of English data is around 85%.",
"3 nlp.stanford.edu/projects/glove/ 4 github.com/Embedding/ Chinese-Word-Vectors 5 The embeddings missed in the pre-trained vectors are randomly initialized.",
"Empirical evaluation indicates competitive accuracy of our models.",
"However, the result varies among different sem-tag categories and some of them remain at an extremely low level (Table 6).",
"To further improve the model's performance and have a better understanding of cross-lingual semantic representation, this section provides a fine-grained error analysis towards each underper-forming sem-tag category.",
"Properties of Chinese adjectives The low predication accuracy of ATT is largely attributable to the difficulties in differentiating IST and SST , especially in the light of high frequencies of adjectives in Chinese, which are a more complicated case compared to English adjectives.",
"Usages of Chinese adjectives and corresponding sem-tags are shown in Table 7: Usage A A+N A+ de +N Narrow adjectives IST IST / SST IST / SST Distinct words n.a. IST IST Table 7: Usages and sem-tags of Chinese adjectives.",
"We propose practical strategies to improve the performance of our tagging model on differentiating IST and SST in Chinese.",
"The first method is to establish a lexicon, based on the fact that whether an adjective can be used as a predicate is an inherent property.",
"Thus it is possible to distinguish the use of IST and SST by simply referring to a lexicon.",
"Another strategy is rule-based: an adnominal adjective is tagged SST only when it obtains a gradable reading.",
"We stipulate the following rules: if tokens preceded by attribute adjectives are tagged INT , EQU , MOR and TOP , adjectives should be marked as SST .",
"After uploading the lexicon and rules, the tagging accuracy of IST and SST raise from 68.8% and 63.1% to 81.4% and 77.9%.",
"Overall accuracies after uploading adjective lexicon and rules are shown in Table 8.",
"Named entity Table 9 shows the accuracy of each of NAM (named entity) for English and Chinese.",
"Although named entities are regarded as one of the most frequently corresponding concepts shared by various languages (see 3), marked differences still exist: The accuracies of each sem-tag of English are generally higher than those of Chinese 6 .",
"English presents a lower diversity of performance (73.3%98.0%) compared with Chinese (58.6%97.9%).",
"We propose an explanation on why English and Chinese sem-taggers perform differently on NAM : named entities in English are identified by capitalization while Chinese not.",
"Therefore, it is harder for Chinese to calculate the scope of proper names than English, and the overall accuracy is thus influenced.",
"Moreover, it can also be inferred that Chinese is more sensitive to the length 6 HAP is not included and will be discussed in the next paragraph.",
"of named entities given its difficulties in judging scope: sem-tags ( PER , GPE and UOM ) whose accuracies are higher than the average level, are commonly used to annotate one-token units while other below-average tags ( GPO , GEO , ORG and ART ) annotate multi-word proper nouns.",
"On the contrary, English, with certain markers of named entites, shows that the decrease of accuracy with length is not as prominent as it of Chinese.",
"Sparse data input DXD of DXS , ITJ , HES and GRE of ACT , EQU of COM and HAP of NAM , whose presences are not enough for training and learning, need more diverse data as input in further research.",
"The high-quality manual annotation and automatic tagging both indicate the importance of POS tags in the USTthe inter-annotator agreement and tagging accuracies increase after applying POS tags.",
"Huo and de Melo (2020) believe this is because POS tags may facilitate semantic disambiguation though the extra syntactic information.",
"However, what is not revealed is the underlying mechanism under which a syntactic feature can contribute to semantic analysis.",
"To investigate the impact of POS tags, 50 new sentences of WSJ and their Chinese counterparts are selected for a pilot study.",
"Two annotators are asked to annotate them with or without the assistance of POS tags.",
"Table 10 shows that POS tags have an impact on the inter-annotator agreements.",
"This tendency is observed for both English and Chinese data.",
"After a detailed investigation, we summarize the influences of POS tags on inter-annotator agreements as two points:",
"(i) Some tokens have multidimensional semantic features and POS tags are likely to make annotators choose sem-tags related to POS features.",
"For instance, unable may be annotated as NOT (negation) or POS (possibility).",
"However, after the introduction of its POS tag, i.e. ADJ , two annotators are more likely to annotate it as IST , which is appropriate for most of adjectives, rather than NOT and POS ;",
"(ii) Gerunds which do not take arguments or are not modified by adverbs are more likely to bring challenges as it is difficult for annotators to determine whether event-related sem-tags or concept-related ones are more suitable for them.",
"It is even more difficult for Chinese annotation in which verbs do not have inflected forms.",
"All these can be easily solved by assigning POS tags.",
"In our view, the reason why POS contribute to semantic annotations can be traced to discussions of theoretical linguistics.",
"Generally speaking, POS is category of words, whose identification has been a controversial problem for a long time in this area.",
"Some linguists are in favor of a syntactic or distributional basis of POS (Harris, 1951; Edmonds, 1967) while others advocate a semantic or notional basis (Lyons, 1966).",
"From a notion-based perspective, assigning forms to concepts, or POS tags and sem-tags to tokens, are all a process of categorizing and classifying objects referred by these tokens, which helps explain why POS tags have a significant influence on semantic sorts.",
"In this regard, annotations are undoubtedly impacted by POS tags.",
"Nonetheless, some researchers rebate it, believing that the notional definitions of POS are not applicable because of its unclearness.",
"According to them, distribution, morphological features, grammatical functions are all useful criteria for the identification of POS.",
"In our view, contradiction between notion-based and distribution-based approach leads to some difficulties in annotation.",
"To avoid this, we applied POS tags which are automatically-generated by the Stanford CoreNLP tool (Manning et al., 2014b) to assist manual annotation.",
"However, though POS tags actually improve the inter-annotator agreement by regulating manual annotations of sem-tags in two ways, it is not clear whether they improve the quality of annotations the first one increases the possibility of one option while the second one directly makes choices for annotators.",
"To what extent more coarse-grained annotating standards contribute to annotations needs further research.",
"Building comparative semantic representations across languages has been an important topic in recent years as a strategy to both contribute to semantic parsing and syntactic analysis.",
"Existing approaches towards it can be roughly divided into three categories.",
"First, crosslingual approach is proposed, which lends semantic annotation of a resource-rich language to an under-resourced language; see e.g. Damonte and Cohen (2018).",
"However, crosslingual divergence between the lender and the borrower is likely to be retained to a considerable extent, especially for the languages which are phylogenetically distant.",
"Another widely-discussed multilingual approach aims to achieve the goal by developing a comparable scheme of annotations for different languages, such as multilingual FrameNet (Baker et al., 1998) and multilingual WordNet (Miller, 1995), whose main limitation is that the semantic information represented is at the risk of oversimplifying since many in-depth properties are language-specific.",
"The third one, the interlingual approach aims to find universal semantic frameworks for all languages.",
"Yet it can be fairly difficult to find such appropriate interlingual frameworks.",
"In our view, these strategies are employed by researchers to study the major challenge i.e., the divergence of languages, encountered in representing multilingual data.",
"And UST, which is in line with interlingual method, attempts to address it by a relatively shallow scheme.",
"Despite the high inter-annotator agreements and tagging accuracies, there are still some divergences, which requires more in-depth study of multilingual annotation.",
"UST is one of previous attempts of interlingua (Abzianidze and Bos, 2017), which is originally designed to provide necessary information for semantic parsing (Bjerva et al., 2016).",
"Primary automatic sem-taggers are built using convolutional neural networks and deep residual networks (Bjerva et al., 2016).",
"Later, in PMB project (Abzianidze et al., 2017), the authors propose a method of projecting automatically annotated semantic tags from a sentence to its sentenceand word-aligned counterparts.",
"Following previous works, an updated universal semantic tagset is later proposed (Abzianidze and Bos, 2017), with a modification of deriving the tagset in a data-driven manner to disambiguate categories.",
"In this work, a tri-gram based tagging model, TnT tagger (Brants, 2000), is also initially explored for bootstrapping utilization.",
"In a recent study built on Bjerva et al. (2016), employing sem-tag in multi-task learning is found to be beneficial to both sem-tag task and other NLP tasks including Universal Dependency POS tagging, Universal Dependency parsing, and Natural Language Inference (Abdou et al., 2018).",
"Overall, these studies indicate that sem-tags are effective in conducting various NLP tasks.",
"In this paper, we take Chinese into account to provide a more comprehensive tag set based on which we establish a reliable manually-annotated corpus, and show that promising performance of automatic semantic tagging is obtained after employing MTL as well as gold POS tag and leveraging pre-trained models.",
"The overall success of this approach prompts a reflection of universality of different languages and operability of multilingual meaning representation: 1) UST is plausible in general partly because it is delexicalised and can thus represent phylogenetically languages after some adaptions; 2) universality is threatened to some extent because there are aligned but mismatched tokens between English and Chinese, which are caused by grammatical divergence, information loss of translation and different annotation strategies for MWE; and 3) innate crosslingual divergences still exist even in NAM 's thought to be the most consistent pairs, which needs further exploration.",
"Though our work demonstrates the plausibility of developing a shared delexicalised and shallow annotation scheme to mitigate divergences across languages, it seems that more in-depth semantic analysis, especially lexicalised ones, may not be possible to be unified.",
"We think a wider range of languages can be annotated after some minor adaptions of scheme.",
"But it is still unknown how to get deeper processing information on this basis and thus develop an enhanced understanding of multilingual meaning representation.",
"We thank the three anonymous reviewers for their helpful comments."
] |
[
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"result",
"other",
"objective",
"abstain",
"abstain",
"other"
] |
[
"Text classification models are becoming increasingly complex and opaque, however for many applications it is essential that the models are interpretable.",
"Recently, a variety of approaches have been proposed for generating local explanations.",
"While robust evaluations are needed to drive further progress, so far it is unclear which evaluation approaches are suitable.",
"This paper is a first step towards more robust evaluations of local explanations.",
"We evaluate a variety of local explanation approaches using automatic measures based on word deletion.",
"Furthermore, we show that an evaluation using a crowdsourcing experiment correlates moderately with these automatic measures and that a variety of other factors also impact the human judgements.",
"While the impact of machine learning is increasing rapidly in society, machine learning systems have also become increasingly complex and opaque.",
"Classification models are usually evaluated based on prediction performance alone (e.g., by measuring the accuracy, recall, and precision) and the interpretability of these models has generally been undervalued.",
"However, the importance of interpretable models is increasingly being recognized (Doshi-Velez and Kim, 2017; Freitas, 2014).",
"First, higher interpretability could lead to more effective models by revealing incompleteness in the problem formalization (Doshi-Velez and Kim, 2017), by revealing confounding factors that could lead to biased models, and by supporting error analyses or feature discovery (Aubakirova and Bansal, 2016).",
"Second, with the increasing adoption of machine learning approaches for humanities and social science research, there is also an increasing need for systems that support exploratory analyses and theory development.",
"Various approaches have been explored to in-crease the interpretability of machine learning models (Lipton, 2016).",
"This paper focuses on local explanation, which aims to explain the prediction for an individual instance (e.g., Ribeiro et al. (2016)).",
"A study by Herlocker et al. (2000) found that providing local explanations could help improve the acceptance of movie recommendation systems.",
"Local explanations can come in different forms.",
"For example, Koh and Liang (2017) identify the most influential training documents for a particular prediction.",
"The most common type of local explanation involves identifying the important parts of the input for a prediction, such as the most predictive words in a document for a text classification model.",
"In this paper we focus on local explanations for text classification.",
"Below is a fragment of a movie review.",
"The words identified by a local explanation method to explain a neural network prediction are in bold.",
"The review is labeled with a negative sentiment, but the classifier incorrectly predicted a positive sentiment.",
"The highlighted words help us understand why.",
"steve martin is one of the funniest men alive.",
"if you can take that as a true statement, then your disappointment at this film will equal mine.",
"martin can be hilarious , creating some of the best laugh-out-loud experiences that have ever taken place in movie theaters.",
"you won't find any of them here.",
"[...] Words such as funniest and hilarious were important for the prediction.",
"Besides providing evidence for a predicted label, some local explanations can also provide evidence against a predicted label.",
"For example, in the above example, the word disappointment was one of the highest ranked words against the predicted label.",
"Ineffective approaches could generate misleading explanations (Lipton, 2016), but evaluating local explanations is challenging.",
"A variety of approaches has been used, including only visual inspection (Ding et al., 2017; Li et al., 2016a), intrinsic evaluation approaches such as measuring the impact of deleting the identified words on the classifier output (Arras et al., 2016), and user studies (Kulesza et al., 2015).",
"Contributions To further progress in this area, it is imperative to have a better understanding of how to evaluate local explanations.",
"This paper makes the following contributions: Comparison of local explanation methods for text classification.",
"We present an in-depth comparison between three local explanation approaches (and a random baseline) using two different automatic evaluation measures on two text classification tasks (Section 4).",
"Automatic versus human evaluation.",
"Automatic evaluations, such as those based on word deletions, are frequently used since they enable rapid iterations and are easy to reproduce.",
"However, it is unclear to what extent they correspond with human-based evaluations.",
"We show that the automatic measures correlate moderately with human judgements in a task setting and that other factors also impact human judgement.",
"(Section 5).",
"Research on interpretable machine learning models has so far mainly focused on computer vision systems (e.g., Simonyan et al. (2013)).",
"Topic modeling is one of the exceptions within NLP where the interpretability of models has been important, since topic models are often valued for their interpretability and are integrated in various user interfaces (Paul, 2016).",
"There has recently been an increasing interest in improving the interpretability of NLP models, perhaps driven by the increasing complexity of NLP models and the rise of deep learning (Manning, 2015).",
"Global approaches aim to provide a global view of the model.",
"One line of work involves making the machine learning model itself more interpretable, e.g., by enforcing sparsity or imposing monotonicity constraints (Freitas, 2014).",
"However, often there is a trade-off between accuracy and interpretability as adding constraints to the model could reduce the performance.",
"An alternative involves extracting a more interpretable model, such as a decision tree, from a model that is less interpretable, such as a neural network (Craven, 1996).",
"In this case, model performance is not sacrificed but it is essential that the proxy is faithful to the underlying model.",
"However, often a machine learning model is so complex that interpretable, trustworthy global explanations are difficult to attain.",
"Local explanations aim to explain the output for an individual instance.",
"For some models the local explanations are relatively easy to construct, e.g., displaying the word probabilities of a Naive Bayes model with respect to each label (Kulesza et al., 2015) or displaying the path of a decision tree (Lim et al., 2009).",
"However, these models may not be easily interpretable if they make use of many features.",
"For many machine learning models, extracting local explanations is even less straight-forward.",
"Proposed approaches so far include using the gradients to visualize neural networks (Aubakirova and Bansal, 2016; Li et al., 2016a; Simonyan et al., 2013), measuring the effect of removing individual words (or features) (Li et al., 2016b; Martens and Provost, 2014), decomposition approaches (Arras et al., 2016; Ding et al., 2017), and training an interpretable classifier (e.g., linear model) that approximates the neighborhood around a particular instance (Ribeiro et al., 2016).",
"Some approaches have only been evaluated using visual inspection (Ding et al., 2017; Li et al., 2016a).",
"Goyal et al. (2016) identified important words for a visual question answering system and informally evaluated their approach by analyzing the distribution among PoS tags (e.g., assuming that nouns are important).",
"However, quantitative evaluations are needed for more robust comparisons.",
"Such evaluations have included measuring the impact of the deletion of words identified by the explanation approaches on the classification output (Arras et al., 2016, 2017), or testing whether the explanation was consistent with an underlying gold model (Ribeiro et al., 2016).",
"These automatic evaluations are fast to carry out but act as a simplistic proxy for explanation quality.",
"While a few user studies have been performed to evaluate explanations (e.g., Ribeiro et al. (2016)), we are not aware of work that analyzes how automatic evaluation measures compare to human-based evaluation.",
"This section describes the datasets, the classification models and the local explanation approaches used in our experiments.",
"Twenty newsgroups (20news).",
"The Twenty Newsgroups dataset has been used in several studies on ML interpretability (Arras et al., 2016; Kapoor et al., 2010; Ribeiro et al., 2016).",
"Similar to Ribeiro et al. (2016), we only distinguish between Christianity and Atheism .",
"We use the 20news-bydate version, and randomly reserve 20% of the training data for development.",
"Movie reviews.",
"Movie reviews with polarity labels (positive versus negative sentiment) from Pang and Lee (2004).",
"We use the version from Zaidan et al. (2007).",
"The dataset is randomly split into a train (60%), development (20%) and test (20%) set.",
"We experiment with two different models.",
"Logistic Regression ( LR ) is implemented using Scikit-learn (Pedregosa et al., 2011) with Ridge regulari-sation, unigrams and a TF-IDF representation, resulting in a 0.797 accuracy on the movie dataset and a 0.921 accuracy on the 20news dataset.",
"We experiment with a LR model, because the contributions of individual features in a LR model are known.",
"We thus have a ground truth for feature importance to compare against for this model.",
"We also use a feedforward neural network ( MLP ) implemented using Keras (Chollet et al., 2015), with 512 hidden units, ReLU activation, dropout (0.5, not optimized) and Adam optimization, resulting in a 0.832 accuracy on the movie dataset and a 0.939 accuracy on the 20news dataset.",
"In this paper, we focus on local explanation approaches that identify the most influential parts of the input for a particular prediction.",
"In this paper we limit our focus to individual words for explaining the output of text classification models.",
"Other representations, e.g., explanations using phrases or higher-level concepts are left for future work.",
"We experiment with explanations for the predicted class, since in real-life settings usually no ground truth labels are available.",
"We experiment with the following local explanation approaches: Random.",
"LIME (Ribeiro et al., 2016) is a model-agnostic approach and involves training an interpretable model (in this paper, a linear model with Ridge regularisation) on samples created around the specific data point by perturbing the data.",
"We experiment with 500 5000 samples and use the implementation provided by the authors.",
"1 Word omission.",
"This approach aims to estimate the contribution of individual words by deleting them and measuring the effect, e.g., by the difference in probability (Robnik-Sikonja and Kononenko, 2008).",
"Within NLP, variations have been proposed by Kadar et al. (2016), Li et al. (2016b) and Martens and Provost (2014).",
"It is also similar to occlusion in the context of image classification, which involves occluding regions of the input image (Zeiler and Fergus, 2014).",
"For LR , this approach corresponds to ranking words according to the regression weights (and considering the frequency in the text) and is therefore optimal.",
"For MLP , we use the difference in probability for the predicted class ( y ) when removing word w from input x : p ( y | x ) p ( y | x \\ w ) .",
"This approach supports explanations based on interpretable features (e.g., words) even when the underlying representation may be less interpretable.",
"However note that in general, this omission approach might not be optimal, since it estimates the contribution of words independently.",
"This approach is also computationally expensive, especially when many features are used.",
"1 https://github.com/marcotcr/lime .",
"First derivative saliency.",
"This approach computes the gradient of the output with respect to the input (e.g., used in Aubakirova and Bansal (2016), Li et al. (2016a) and Simonyan et al. (2013)).",
"The obtained estimates are often referred to as saliency values.",
"Several variations exist, e.g., Li et al. (2016a) take the absolute value.",
"In this paper, the raw value is taken to identify the words important for and against a certain prediction.",
"In this section we explore automatic evaluation of local explanations.",
"Local explanations should exhibit high local fidelity , i.e. they should match the underlying model in the neighborhood of the instance (Ribeiro et al., 2016).",
"An explanation with low local fidelity could be misleading.",
"Because we generate explanations for the predicted class (rather than the ground truth), explanations with high local fidelity do not necessarily need to match human intuition, for example when the classifier is weak (Samek et al., 2017).",
"Ideally, the evaluation metrics are model agnostic and do not require information that may not always be available such as probability outputs.",
"This paper focuses on local fidelity, but other aspects might also be desired, such as sparsity (Samek et al., 2017; Ribeiro et al., 2016; Martens and Provost, 2014).",
"We measure local fidelity by deleting words in the order of their estimated importance for the prediction.",
"Arras et al. (2016) generated explanations with the correct class as target.",
"By deleting the identified words, accuracy increased for incorrect predictions and decreased for correct predictions.",
"However, their approach assumes knowledge of the ground-truth labels.",
"We take an alternative, but similar, approach.",
"Words are also deleted according to their estimated importance, e.g. w 1 ...w n with w 1 the word with the highest importance score, but for the predicted class instead.",
"For each document, we measure the number of words that need to be deleted before the prediction switches to another class (the switching point ), normalized by the number of words in the document.",
"For example, a value of 0.10 indicates that 10% of the words needed to be deleted before the prediction changed.",
"An advantage of this approach is that ground-truth labels are not needed and that it can be applied to black-box classifiers, we only need to know the predicted class.",
"Furthermore, the approach acts on the raw input.",
"It requires no knowledge of the underlying feature representation (e.g., the actual features might be on the character level).",
"We also experiment with the measure proposed by Samek et al. (2017), referred to as the area over the perturbation curve ( AOPC ): AOP C = 1 K + 1 h KX k =1 f ( x ) f ( x \\ 1",
"where f ( x \\ 1",
"..k ) is the probability for the predicted class when words 1",
"..",
"k are removed and hi p ( x ) denotes the average over the documents.",
"This approach is also based on deleting words, but it is more fine-grained since it uses probability values rather than predicted labels.",
"It also enables evaluating negative evidence.",
"A drawback is that AOPC requires access to probability estimates of a classifier.",
"In this paper, K is set to 10.",
"For LR , the exact contribution of individual features to a prediction is known and the words in the document that contributed most to the prediction can be computed directly.",
"For this classifier, the optimal approach corresponds to the omission approach.",
"Table 3 reports the results by measuring the effect of word deletions and reporting the average switching point.",
"Lower values indicate that the method was better capable of identifying the words that contributed most towards the predicted class, because on average fewer words needed to be deleted to change a prediction.",
"Table 2 shows the AOPC values with a cut-off at 10.",
"We measure AOPC in two settings: removing positive evidence (higher values indicate a more effective explanation) and negative evidence (lower values indicate a more effective explanation).",
"Comparison local explanation methods As expected, LIME improves consistently when more samples are used.",
"Furthermore, when comparing the scores of the omission approach for the LR model (which corresponds to the ground-truth) we observe that LIME with 5000 samples comes close to the optimal score.",
"We use the two-tailed paired permutation test to test for significance between all methods with both evaluation measures.",
"In al-1072 20news (topic) Movie (sentiment) LR MLP LR MLP pos.",
"most all cases, the differences are highly significant ( p < 0 . 001 ), except the difference in average switching point between the omission and salience approach on the movies dataset with the MLP classifier (n.s.) and the difference in average switching point between the omission and LIME-5000 approach on 20news with the MLP classifier (n.s.).",
"The difference in AOPC scores for evaluating negative evidence was not significant in many cases.",
"Metric sensitivity First, the results suggest that the values obtained depend strongly on the type of task and classifier.",
"The explanation approaches score better on the sentiment detection task in both Tables 2 and 3.",
"For example, fewer words need to be removed on average to change a prediction in the movie dataset (Table 3).",
"A possible explanation is that for sentiment detection, a few words can provide strong cues for the sentiment (e.g., terrific ), while for (fine-grained) topic detection (e.g., distinguishing between Christianity and atheism ) the evidence tends to be distributed among more words.",
"Better values are also obtained for the LR classifier (a linear model) than for MLP .",
"Second, as shown in Table 2, AOPC enables assessing negative evidence (i.e. the words that provide evidence for the opposite class).",
"The obtained absolute values are much smaller compared to the values obtained for the words identified as positive evidence.",
"This is expected, since the positive evidence in a document for the predicted class should be larger than the negative evidence.",
"Third, we analyze the relation between the word deletion evaluation measures and the prediction confidence of the classifiers, based on the probability of the output class.",
"Table 4 reports the Spearman correlations for the MLP classifier on the movie dataset (similar trends were observed with the LR classifier).",
"There is a strong correlation between the prediction confidence and the word deletion evaluation measures.",
"The higher the prediction confidence of a classifier, the more words need to be deleted before a prediction changes (e.g., see the switching points).",
"However, the strength of the correlations is lower for the more robust explanation methods (LIME-5000, omission and saliency).",
"In the previous section we evaluated the local explanation approaches using automatic measures.",
"However, the explanations are meant to be presented to humans .",
"We therefore turn to evaluating the explanations using crowdsourcing.",
"We analyze the usefulness of the generated explanations in a task setting and analyze to what extent the automatic measures correspond to the human-based evaluations.",
"The crowdsourcing experiments are run on CrowdFlower.",
"Only crowdworkers from Australia, Canada, Ireland, United Kingdom and the United States and with quality levels two or three were accepted.",
"One way to evaluate an explanation is by asking humans to guess the output of a model based on the explanation and the input.",
"Doshi-Velez and Kim (2017) refer to this as forward simula-tion/prediction.",
"As mentioned by Doshi-Velez and Kim (2017), this is a simplified task.",
"Evaluations using more specific application-oriented tasks or tailored towards specific user groups should be explored in future work.",
"We have chosen the forward prediction task as a first step since it is a general setup that could be used to evaluate explanations for a variety of tasks and models.",
"In this study, crowdworkers are shown the texts (e.g., a movie review), in which the top words identified by the local explanation approaches are highlighted.",
"Crowdworkers are then asked to guess the output of the system (e.g., a positive or negative sentiment).",
"The crowdworkers are also asked to state their confidence on a five-point Lik-ert scale ( I am confident in my answer ': strongly disagree . . . strongly agree).",
"Note that the workers need to guess the output of the model regardless of the true label (i.e. the model may be wrong).",
"The crowdworkers are therefore presented with documents with different prediction outcomes (true positive, true negative, false negative, and false positive).",
"We sample up to 50 documents for each prediction outcome.",
"A screenshot is shown in Figure 1.",
"A quiz and test questions are used to ensure the quality of the crowdworkers.",
"Instructions as well as the test questions included cases where the system made an incorrect prediction, so that workers understood that the task was different than standard labeling tasks.",
"See Appendix A for more details.",
"We experiment with the following parameters: methods (random baseline, LIME with 500 and 5000 samples, word omission, saliency) and the number of words (10, 20).",
"We experiment with both datasets.",
"Due to space constraints, we only experiment with the MLP classifier.",
"We collected the data in August and September 2017.",
"Each HIT (Human Intelligence Task) was carried out by five crowdworkers.",
"We paid $0.03 per judgement.",
"On the 20news dataset, we collected 7,200 judgements from 406 workers (mean nr of. judgements per worker: 17.73, std.: 7.21) and on the movie dataset we collected 8,100 judgements from 445 workers (mean nr of. judgements per worker 18.20, std: 7.24).",
"Confidence Most workers chose confidence values of three or four.",
"Table 6 reports the confidence scores by method.",
"On the movie dataset, the trends match the intrinsic evaluations closely.",
"The random method leads to the lowest confidence score, followed by LIME-500 and LIME-5000, and explanations from the omission and saliency approach both lead to the highest confidence scores.",
"On the 20news dataset, the trends are less clear.",
"We observe a small, significant negative correlation between confidence values and time spent (Spearman correlation: =-0.08, p < 0.0001 on the movie dataset, =-0.06, p < 0.0001 on 20news).",
"Accuracy Table 6 also reports the fraction of correct guesses per method.",
"Random explanations lead to the lowest accuracies, followed by LIME with 500 samples.",
"The differences between LIME-5000, omission and saliency are small and not consistent across datasets.",
"The crowd had a higher accuracy on the movie data, except when the explanations were randomly generated.",
"Table 5 separates the results by the different prediction outcomes.",
"The results suggest that false positive and false negative are the most revealing.",
"In these cases, crowdworkers are not able to rely on their intuition and a strong explanation should convince them that the system makes a mistake.",
"Otherwise, crowd workers might choose the label matching the document (and not necessarily the classifier output).",
"This is especially salient in the 20news dataset, where the random approach performs better than expected on the true positives and true negatives.",
"For example, compare the random approach with the omission approach on true positives with ten word explanations.",
"Our experiments also show that local explanations in the form of the most predictive words are sometimes not enough to simulate the output of a system.",
"For example, the best accuracy on true positive instances in the 20news data is only 0.752.",
"The movie dataset contains difficult instances as well.",
"For example, the omission method identifies the following words in a movie review to explain a false positive prediction: believes ', become ', hair ', unhappy ', quentin ', directed ', runs ', filled ', fiction ', clint '.",
"Due to the composition of the training data, the system has associated words like quentin ' and clint ' with a positive sentiment.",
"This may have confused the crowdworkers as most of them guessed incorrectly.",
"Expanding the explanation with for example influential documents (Koh and Liang, 2017) or a visualization of the class distributions of the most influential words could make the explanations more informative.",
"Correlation with automatic evaluation For each explanation, we compute the fraction of workers who correctly predicted the classifier output (the crowd accuracy') and correlate these with the automatic measures.",
"We expect a negative correlation with the switching points and a positive correlation with the AOPC.",
"The correlations are moderate (Table 8).",
"The correlations with AOPC on the movie data are the biggest on the false positives and false negatives, when workers are not able to rely on their intuition.",
"The correlations 1075 TP TN FP FN Noise AOPC Acc Conf n Acc Conf n Acc Conf n Acc Conf n 0 0.2627 0.940 3.87 250 0.872 3.78 250 0.819 3.50 155 0.729 3.59 155 0.2 0.2044 0.896 3.60 250 0.780 3.67 250 0.735 3.39 155 0.735 3.58 155 0.4 0.1485 0.824 3.62 250 0.776 3.68 250 0.723 3.37 155 0.645 3.31 155 0.6 0.0851 0.800 3.40 250 0.756 3.40 250 0.710 3.63 155 0.639 3.34 155 0.8 0.0411 0.736 3.29 250 0.640 3.35 250 0.632 3.25 155 0.523 3.25 155 Table 7: Forward prediction task with noisy explanations on the movie dataset and the saliency method Movie 20news SP AOPC SP AOPC tp 0 .",
"measured on the true positives in 20news are opposite of what we expect.",
"The 20news data is noisy and the classifier picks up on spurious features, possibly confusing the workers.",
"An example in the 20news data is an e-mail with the following words highlighted: thank ', mail ', discussions ', seminary ', before ', thanks ', question ', fill ', affected ', during ', proofs '.",
"The classifier was confident and the computed switchpoint was low.",
"The e-mail comes from the atheism newsgroup, which becomes clear from reading the text.",
"The highlighted words are all more likely to occur in the christianity newsgroup, but on their own they are not intuitive to lay people.",
"Consequently, workers guessed incorrectly that the predicted label was atheism.",
"Explanations that also show the negative evidence (in this case, words such as atheism ' and atheists ') and/or the word distributions across classes would likely have led to better crowd accuracy.",
"As shown in section 4, the automatic measures correlate strongly with the prediction confidence of the classifier.",
"More words need to be removed before a prediction changes (i.e. a higher switching point) when the classifier is more confident.",
"However, we also find that higher classifier confidence leads to higher crowd accuracies (e.g., = 0.236, p < 0 . 001 on the 20news dataset).",
"We therefore fit an Ordinary Least Squares (OLS) model to control for these different factors (Table 9), with crowd accuracy as the dependent variable.",
"A higher switching point significantly leads to a lower accuracy.",
"However, classifier confidence and prediction outcome also significantly impact the accuracy.",
"Similar trends are observed for the AOPC measure (Table 10).",
"We also find that the automatic evaluation measures significantly impact crowd accuracy on the 20news dataset, but the patterns are less strong.",
"Noise In our final experiment we analyze the effect of noise.",
"We focus on explanations based on saliency scores on the movie dataset.",
"We experiment with introducing noise to the top ten words (Table 7) and we collect additional judgements.",
"A noise level of 0.2 indicates that two out of the top ten words are randomly replaced by other words.",
"The results show that with increasing the noise, as expected, both the performance and average AOPC score decrease.",
"There has been an increasing interest in improving the interpretability of machine learning systems, but evaluating the quality of explanations has been challenging.",
"This paper focused on evaluating local explanations for text classification.",
"Local explanations were generated by identifying important words in a document for a prediction.",
"We compared automatic evaluation approaches, based on measuring the effect of word deletions, with human-based evaluations.",
"Explanations generated using word omissions and first derivatives both performed well.",
"LIME (Ribeiro et al., 2016) performed close to these methods when using enough samples.",
"Our analyses furthermore showed that the evaluation numbers depend on the task/dataset and the confidence of the classifiers.",
"Next, crowd workers were asked to predict the output of the classifiers based on the generated explanations.",
"We found moderate, but significant, correlations between the automatic measures and crowd accuracy.",
"In addition, the human judgements were impacted by the confidence of the classifier and the type of prediction outcome (e.g., a false negative versus a true positive).",
"Our results also suggest that only highlighting words is sometimes not enough.",
"An explanation can highlight the most important parts of an input and score well on automatic measures, but if the explanation is not intuitive (for example due to biases in the data), humans are still not able to predict the output.",
"For the classification tasks in this paper (topic classification and sentiment detection) individual words are often predictive.",
"As a result, local explanation approaches that select words independently worked well.",
"However, we expect that for tasks where individual words are not predictive, the current evaluation methods and local explanation approaches may not be sufficient.",
"Furthermore, in future work more fine-grained visualizations (e.g., Handler et al. (2016)) could be explored.",
"This work was supported by The Alan Turing Institute under the EPSRC grant EP/N510129/1.",
"The author is supported with an Alan Turing Institute Fellowship (TU/A/000006).",
"This work was supported with seed funding award SF023."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"The main barrier to progress in the task of Formality Style Transfer is the inadequacy of training data.",
"In this paper, we study how to augment parallel data and propose novel and simple data augmentation methods for this task to obtain useful sentence pairs with easily accessible models and systems.",
"Experiments demonstrate that our augmented parallel data largely helps improve formality style transfer when it is used to pre-train the model, leading to the state-of-the-art results in the GYAFC benchmark dataset 1 .",
"Formality style transfer (FST) is defined as the task of automatically transforming a piece of text in one particular formality style into another (Rao and Tetreault, 2018).",
"For example, given an informal sentence, FST aims to preserve the style-independent content and output a formal sentence.",
"Previous work tends to leverage neural networks (Xu et al., 2019; Niu et al., 2018; Wang et al., 2019) such as seq2seq models to address this challenge due to their powerful capability and large improvement over the traditional rule-based approaches (Rao and Tetreault, 2018).",
"However, the performance of the neural network approaches is still limited by the inadequacy of training data: the public parallel corpus for FST training GYAFC (Rao and Tetreault, 2018) contains only approximately 100K sentence pairs, which can hardly satiate the neural models with millions of parameters.",
"To tackle the data sparsity problem for FST, we propose to augment parallel data with three specific data augmentation methods to help improve the model's generalization ability and reduce the over-fitting risk.",
"Besides applying the widely used back Work done during the internship at Microsoft Research.",
"translation (BT) method (Sennrich et al., 2016a) in Machine Translation (MT) to FST, our data augmentation methods include formality discrimination (F-Dis) and multi-task transfer (M-Task).",
"They are both novel and effective in generating parallel data that introduces additional formality transfer knowledge that cannot be derived from the original training data.",
"Specifically, F-Dis identifies useful pairs from the paraphrased pairs generated by cross-lingual MT; while M-task leverages the training data of Grammatical Error Correction (GEC) task to improve formality, as shown in Figure",
"1. Experimental results show that our proposed data augmentation methods can harvest large amounts of augmented parallel data for FST.",
"The augmented parallel data proves helpful and significantly helps improve formality style transfer when it is used to pre-train the model, allowing the model to achieve the state-of-the-art results in the GYAFC benchmark dataset.",
"We study three data augmentation methods for formality style transfer: back translation, formality discrimination, and multi-task transfer.",
"We focus on informal formal style transfer since it is more practical in real application scenarios.",
"The original idea of back translation (BT) (Sen-nrich et al., 2016a) is to train a target-to-source seq2seq (Sutskever et al., 2014; Cho et al., 2014) model and use the model to generate source language sentences from target monolingual sentences, establishing synthetic parallel sentences.",
"We generalize it as our basic data augmentation method and use the original parallel data to train a seq2seq model in the formal-to-informal direction.",
"Then, we can feed formal sentences to this model that is supposed to be capable of generating their informal counterparts.",
"The formal input and the informal output sentences can be paired to establish augmented parallel data.",
"According to the observation that an informal sentence tends to become a formal sentence after a round-trip translation by MT models that are mainly trained with formal text like news, we propose a novel method called formality discrimination to generate formal rewrites of informal source sentences by means of cross-lingual MT models.",
"A typical example is shown in Figure",
"2. To this end, we collect a number of potentially informal English sentences (e.g., from online fo-rums).",
"Formally, we denote the collected sentences as S = { s i } |S| i =1 where s i represents the i -th sentence.",
"We first translate 2 them into a pivot language (e.g., French) and then translate them back into English, as Figure 2 shows.",
"In this way, we obtain a rewritten sentence s (cid:48) i for each sentence s i S .",
"To verify whether s (cid:48) i improves the formality compared to s i , we introduce a formality discriminator which in our case is a Convolutional Neural Network (CNN) to quantify the formality level of a sentence.",
"We trained the formality discriminator with the sentences and their formality labels in the FST corpus (e.g., GYAFC).",
"The pairs ( s i , s (cid:48) i ) where s (cid:48) i largely improves the formality of s i will 2 https://translate.google.com/ Input i'm gonna trust my gut feelings.",
"(0.12)",
"Output I will trust my instinct.",
"( 0.96 )",
"French je vais faire confiance mon instinct.",
"MT MT Figure 2: Formality discrimination for FST.",
"be selected as the augmented data.",
"The resulting data set T aug is such a set of pairs: T aug = { ( s i , s (cid:48) i ) | P + ( s (cid:48) i ) P + ( s i ) } (1) where P + ( x ) is the probability of sentence x being formal, predicted by the discriminator, and is the threshold 3 for augmented data selection.",
"In this way, we can obtain much helpful parallel data with valuable rewriting knowledge that is not covered by the original parallel data.",
"In addition to back translation and formality discrimination that use artificially generated sentence pairs for data augmentation, we introduce multitask transfer that uses annotated sentence pairs from other seq2seq tasks.",
"We observe that informal texts are usually ungrammatical while formal texts are almost grammatically correct.",
"Therefore, a desirable FST model should possess the ability to detect and rewrite ungrammatical texts, which has been verified by the previous empirical study (Ge et al., 2019) showing that using a state-of-the-art grammatical error correction (GEC) model to post-process the outputs of an FST model can improve the result.",
"Inspired by this observation, we propose to transfer the knowledge from GEC to FST by leveraging the GEC training data as the augmented parallel data to help improve formality.",
"An example is illustrated in Figure 1 in which the annotated data for GEC provides knowledge to help the model rewrite the ungrammatical informal sentence.",
"In general, massive augmented parallel data can help a seq2seq model to learn contextualized representations, sentence generation and source-target alignments better.",
"When the augmented parallel 3 = 0 .",
"data is available, previous studies (Sennrich et al., 2016a; Edunov et al., 2018; Karakanta et al., 2018; Wang et al., 2018) for seq2seq tasks are inclined to train a seq2seq model with original training data and augmented data simultaneously.",
"However, augmented data is usually noisier and less valuable than original training data.",
"In simultaneous training, the massive augmented data tends to overwhelm the original data and introduce unnecessary and even erroneous editing knowledge, which is undesirable for our task.",
"To better exploit the augmented data, we propose to first pre-train the model with augmented parallel data and then fine-tune the model with the original training data.",
"In our pre-training & fine-tuning (PT&FT) approach, the augmented data is not treated equally to the original data; instead it only serves as prior knowledge that can be updated and even overwritten during the fine-tuning phase.",
"In this way, the model can better learn from the original data without being overwhelmed or distracted by the augmented data.",
"Moreover, separating the augmented and original data into different training phases makes the model become more tolerant to noise in augmented data, which reduces the quality requirement for the augmented data and enables the model to use noisier augmented data and even training data from other tasks.",
"In this section, we present the experimental settings and related experimental results.",
"We focus on informal formal style transfer since it is more practical in real application scenarios.",
"We use GYAFC benchmark dataset (Rao and Tetreault, 2018) for training and evaluation.",
"GYAFC's training split contains a total of 110K annotated informal-formal parallel sentences, which are annotated via crowd-sourcing of two domains: Entertainment & Music (E&M) and Family & Relationships (F&R).",
"In its test split, there are 1,146 and 1,332 informal sentences in E&M and F&R domain respectively and each informal sentence has 4 referential formal rewrites.",
"We use all the three data augmentation methods we introduced and obtain a total of 4.9M augmented pairs.",
"Among them, 1.6M are generated by back-translating (BT) formal sentences identified (as formal) by the formality discriminator in E&M and F&R domain on Yahoo Model E&M F&R BLEU BLEU Original data 69.44 74.19 Augmented data 51.83 55.66 ST 59.93 63.16 ST (up-sampling) 68.43 73.04 ST (down-sampling) 68.54 73.69 PT&FT 72.63 77.01 Table 1: The comparison of simultaneous training (ST) and Pre-train & Fine-tuning (PT&FT).",
"Answers L6 corpus 4 , 1.5M are derived by formality discrimination (F-Dis) by using French, German and Chinese as pivot languages, and 1.8M are from multi-task transfer (M-task) from the public GEC data (Lang-8 (Mizumoto et al., 2011; Tajiri et al., 2012) and NUCLE (Dahlmeier et al., 2013)).",
"The informal sentences used in F-Dis strategy are also from Yahoo Answers L6 corpus.",
"We use the Transformer (base) (Vaswani et al., 2017) as the seq2seq model with a shared vocabulary of 20K BPE (Sennrich et al., 2016b) tokens.",
"We adopt the Adam optimizer to pre-train the model with the augmented parallel data and then fine-tune it with the original parallel data.",
"In pre-training, the dropout rate is set to 0.1 and the learning rate is set to 0.0005 with 8000 warmup steps and scheduled to an inverse square root decay after warmup; while during fine-tuning, the learning rate is set to 0.00025.",
"We pre-train the model for 80k steps and fine-tune the model for a total of 15k steps.",
"The CNN we use as the formality discriminator has filter sizes of 3, 4, 5 with 100 feature maps.",
"The dropout rate is set to 0.5.",
"It achieves an accuracy of 93.09% over the GYAFC test set.",
"Table 1 compares the results of the models trained with simultaneous training (ST) and pre-training & fine-tuning (PT&FT).",
"ST with the augmented and original data leads to a performance decline, because the noisy augmented data cannot achieve desirable performance by itself and may distract the model from exploiting the original data in simultaneous training.",
"In contrast, PT&FT only uses 4 https://webscope.sandbox.yahoo.com/catalog.php Model E&M F&R BLEU BLEU Original data 69.44 74.19 Pre-training & Fine-tuning + BT 71.18 75.34 + F-Dis 71.72 76.24 + M-Task 71.91 76.21 + BT + M-Task + F-Dis 72.63 77.01 Table 2: The comparison of different data augmentation methods for FST.",
"the augmented data in the pre-training phase and treats it as the prior knowledge supplementary to the original training data, reducing the negative effects of the augmented data and improving the results.",
"Table 2 compares the results of different data augmentation methods with PT&FT.",
"Pre-training with augmented data generated by BT enhances the generalization ability of the model, thus we observe an improvement over the baseline.",
"However, it does not introduce any new informal-to-formal transfer knowledge, leading to the least improvement among the three methods.",
"In contrast, both F-Dis and M-Task introduce abundant transfer knowledge for FST.",
"The augmented data of F-Dis includes various informal formal rewrite knowledge derived from the MT models, allowing the model to better handle the test instances whose patterns are never seen in the original training data; while M-Task introduces GEC knowledge that helps improve formality in terms of grammar.",
"We then combine all these beneficial augmented data for pre-training.",
"As expected, the combination strategy achieves further improvement as shown in Table 2 since the it enables the model to take advantage of all the data augmentation methods.",
"We compare our approach to the following previous approaches in the GYAFC benchmark:",
"Rule, PBMT, NMT, PBMT-NMT: Rule-based, phrase-based MT, NMT, PBMT-NMT hybrid model (Rao and Tetreault, 2018).",
"NMT-MTL: NMT model with multi-task learning (Niu et al., 2018).",
"GPT-CAT, GPT-Ensemble: fine-tuned encoder-decoder models (Wang et al., 2019) initialized by GPT (Radford et al., System E&M F&R BLEU BLEU No-edit 50.28 51.67 Rule 60.37 66.40 PBMT 66.88 72.40 NMT 58.27 68.26 NMT-PBMT 67.51 73.78 NMT-MTL 71.29 74.51 NMT-MTL-Ensemble* 72.01 75.33 GPT-CAT 72.70 77.26 GPT-Ensemble* 69.86 76.32 Our Approach 72.63 77.01 Our Approach* 74.24 77.97 Table 3: The comparison of our approach to the state-of-the-art results. * denotes the ensemble results. 2019).",
"Specifically, GPT-CAT concatenates the original input sentence and the input sentence preprocessed by rules as input, while GPT-Ensemble is the ensemble of two GPT-based encoder-decoder models: one takes the original input sentence as input, the other takes the preprocssed sentence as input.",
"Following Niu et al. (2018), we train 4 independent models with different initializations for ensemble decoding.",
"According to Table 3, our single model performs comparably to the state-of-the-art GPT-based encoder-decoder models (more than 200M parameters) with only 54M parameters.",
"Our ensemble model further advances the state-of-the-art result only with a comparable model size to the GPT-based single model (i.e., GPT-CAT).",
"We also conduct human evaluation.",
"Following Rao and Tetreault (2018), we assess the model output on three criteria: formality , fluency and meaning preservation .",
"We compare our baseline model trained with original data, our best performing model and the previous state-of-the-art models (NMT-MTL and GPT-CAT).",
"We randomly sample 300 items and each item includes an input and four outputs that shuffled to anonymize model identities.",
"Two annotators are asked to rate the outputs on a discrete scale of 0 to",
"2. More details can be found in the appendix.",
"The results are shown in Table 4 which demonstrates that our model is consistently well rated in human evaluation.",
"We also conduct an exploratory study of the pivot languages used in formality discrimination.",
"Among the three pivot languages (i.e. French, German and Chinese) in our experiments, it is interest-Model Formality Fluency Meaning Original data 1.31 1.77 1.80 NMT-MTL 1.34 1.78 1.92 * GPT-CAT 1.42 1.84* 1.90 Ours 1.45 * 1.85 * 1.92 * Table 4: Results of human evaluation of FST.",
"ing to observe a significant difference in the sizes of the obtained parallel data given the same source sentences and filter threshold, as shown in Table",
"5. Using Chinese as the pivot language results in the most data, probably due to the fact that Chinese and English belong to different language systems.",
"The formality of original informal English sentences may be lost during translation, which turns out to facilitate the MT system to translate Chinese back into formal English.",
"In contrast, French and German have much in common with English, especially for French in terms of the lexicon (Baugh and Cable, 1993).",
"The translated sentences are likely to maintain informal sense, which hinders the MT system from generating formal English translations.",
"We compare the performance with augmented data generated by three pivot languages separately in Table",
"6. Manual inspection reveals that a few pairs have the issue of meaning inconsistency in all the three sets, which mainly arises from the translation difficulties caused by omissions and poor grammaticality in informal sentences and the segmentation ambiguity in some pivot languages like Chinese.",
"Among the three languages, the Chinese-based augmented data introduces more noise due to the additional segmentation ambiguity problem but brings fair improvement because of its largest size.",
"In contrast, the German-based augmented data has relatively high quality and a moderate size, leading to the best result in our experiments.",
"Data augmentation has been much explored for seq2seq tasks like Machine Translation (He et al., 2016; Fadaee et al., 2017; Zhang et al., 2018b; Pon-Model",
"celas et al., 2018; Edunov et al., 2018; Li et al., 2019) and Grammatical Error Correction (Kiyono et al., 2019; Grundkiewicz et al., 2019; Zhao et al., 2019; Zhou et al., 2019; Ge et al., 2018a,b; Xie et al., 2018; Yuan et al., 2016; Rei et al., 2017).",
"For text style transfer, however, due to the lack of parallel data, many studies focus on unsupervised approaches (Luo et al., 2019; Wu et al., 2019; Zhang et al., 2018a) and there is little related work concerning data augmentation.",
"As a result, most recent work (Jhamtani et al., 2017; Xu et al., 2012) that models text style transfer as MT suffers from a lack of parallel data for training, which seriously limits the performance of powerful models.",
"To solve this pain point, we propose novel data augmentation methods and study the best way to utilize the augmented data, which not only achieves a success in formality style transfer, but also would be inspiring for other text style transfer tasks.",
"In this paper, we propose novel data augmentation methods for formality style transfer.",
"Our proposed data augmentation methods can effectively generate diverse augmented data with various formality style transfer knowledge.",
"The augmented data can significantly help improve the performance when it is used for pre-training the model and leads to the state-of-the-art results in the formality style transfer benchmark dataset.",
"We thank all the reviewers for providing the constructive suggestions.",
"This work is partly supported by Beijing Academy of Artificial Intelligence.",
"Xu Sun is the corresponding author of this paper."
] |
[
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"An enormous amount of conversation occurs online every day, such as on chat platforms where multiple conversations may take place concurrently.",
"Interleaved conversations lead to difficulties in not only following discussions but also retrieving relevant information from simultaneous messages.",
"Conversation disentanglement aims to separate intermingled messages into detached conversations.",
"In this paper, we propose to leverage representation learning for conversation disentanglement.",
"A Siamese hierarchical convolutional neural network (SHCNN), which integrates local and more global representations of a message, is first presented to estimate the conversation-level similarity between closely posted messages.",
"With the estimated similarity scores, our algorithm for conversation identification by similarity ranking (CISIR) then derives conversations based on high-confidence message pairs and pairwise redundancy.",
"Experiments were conducted with four publicly available datasets of conversations from Reddit and IRC channels.",
"The experimental results show that our approach significantly outperforms comparative baselines in both pairwise similarity estimation and conversation disentanglement.",
"With the growth of ubiquitous internet and mobile devices, people now commonly communicate in the virtual world.",
"Among the various methods of communication, text-based conversational media, such as internet relay chat (IRC) (Werry, 1996) and Facebook Messenger 1 , has been and remains one of the most popular choices.",
"In addition, many enterprises have started to use conversational chat platforms such as Slack 2 to enhance team collaboration.",
"However, multiple conversations may 1 Facebook Messenger: https://www.messenger.",
"Thread Message ... ...",
"T31 Malcolm: If running as root, I need to set up a global config rather than",
"/.fetchmailrc ?",
"T38 Elma: i'm sure i missed something but fonts rendering in my gimp works isn't at its best T39 Sena: is there anyway to see what the CPU temperature is?",
"T38 Elma: is it because of gimp or i missed some tuning or such?",
"T31 Rache: Specify a non-default name run control file.",
"T41 Denny: so how does one enforce a permission set and ownership set on a folder and all its children?",
"T31 Malcolm: in the man page it doesn't mention any global fetchmailrc file... that is what was confusing",
"me...",
"T42 Shenna: hi, are sata drives accessed as sda or hda?",
"T41 Elma: -R for recursive...",
"T42 Elma: sda ... ...",
"occur simultaneously when conversations involve three or more participants.",
"Aoki et al. (2006) found an average of 1.79 conversations among eight participants at a time.",
"Moreover, some platforms like chatrooms in Twitch may have more concurrent conversations (Hamilton et al., 2014).",
"Interleaved conversations can lead to difficulties in both grasping discussions and identifying messages related to a search result.",
"For example, Figure 1 shows a segment of conversations from the real-world IRC dataset as an example.",
"Five interleaved threads are involved in only ten messages.",
"Messages in the same thread may not have identical keywords.",
"Moreover, a user (i.e., Elma ) can participate in multiple threads.",
"Hence, a robust mechanism to disentangle interleaved conversations can improve a user's satisfaction with a chat system.",
"One solution for conversation disentanglement is to model the task as a topic detection and tracking (TDT) (Allan, 2002) task by deciding whether each incoming message starts a new topic or belongs to an existing conversation.",
"Messages in the same conversation may have higher similarity 1812 scores (Shen et al., 2006; Mayfield et al., 2012) or similar context messages (Wang and Oard, 2009).",
"However, similarity thresholds for determining new topics vary depending on context.",
"Embedding of earlier messages, resulting in duplication of parts of messages, can alter the similarity score.",
"More specifically, the similarity scores obtained in previous work cannot well represent conversation-level relationships between messages.",
"Several studies have examined the use of statistical (Du et al., 2017) and linguistic features (Elsner and Charniak, 2008, 2010, 2011; Mayfield et al., 2012) for predicting user annotations of paired message similarity.",
"These studies employed bag-of-words representations which do not capture term similarity and cannot distinguish word importance and relationships between words in a message.",
"Thus, better representations of messages and their relationships are needed.",
"Recent studies have demonstrated the effectiveness of deep learning methods in representation learning (Bengio et al., 2013), aiming to infer low-dimensional distributed representations for sparse data such as text (Hinton and Salakhut-dinov, 2006).",
"These representations can be derived not only for words (Mikolov et al., 2013) but also sentences and documents (Le and Mikolov, 2014).",
"In particular, convolutional neural networks (CNNs) have been shown to efficiently and effectively preserve important semantic and syntactic information from embedded text sequences (Blunsom et al., 2014).",
"It has been demonstrated that CNNs produce state-of-the-art results in many NLP tasks such as text classification (Kim, 2014; Lai et al., 2015; Zhang et al., 2015) and sentiment analysis (Tang et al., 2014; Poria et al., 2015).",
"Existing approaches, however, do not take advantage of deep learning techniques to model relationships between messages for disentangling conversations.",
"(Mehri and Carenini, 2017) defined many statistical features for use with a random forest for in-thread classification and used a recurrent neural network (RNN) only to model adjacent messages with an external dataset as a feature.",
"In this paper, we aim to leverage deep learning for conversation disentanglement.",
"Our proposed approach consists of two stages: (1) message pair similarity estimation and (2) conversation identification.",
"In the first stage, we propose the Siamese hierarchical convolutional neural network (SHCNN) to estimate conversation-level similarity between pairs of closely posted messages.",
"SHCNN is framed as a Siamese architecture (Mueller and Thyagarajan, 2016) concatenating the outputs of two hierarchical convolutional neural networks and additional features.",
"Compared to other conventional CNN-based Siamese networks (Severyn and Moschitti, 2015; Yin et al., 2016), SHCNN models not only local information in adjacent words but also more global semantic information in a message.",
"In the second stage, the algorithm of conversation identification by similarity ranking (CISIR) ranks messages within a time window paired with each message and constructs a message graph involving high-rank connections with strong confidence.",
"Although only high-confidence relations are represented in the constructed graph, the redundancy of pairwise relationships can capture the connectivity of messages within a conversation.",
"In summary, the main contributions of this paper are threefold: (1) Deep similarity estimation for conversation disentanglement: To the best of our knowledge, this is the first study applying deep learning to estimate similarities between messages for disentangling conversations.",
"SHCNN simultaneously captures and compares local and global characteristics of two messages to estimate their similarity.",
"Message representations are also optimized towards the task of conversation disentanglement.",
"(2) Efficient and effective method: The selection of message pairs posted closely in time and the proposed CISIR algorithm significantly reduces the computational time from O \u0000 | M | 2 \u0000 to O ( k | M | ) , where | M | is the number of messages, and k is the maximum number of messages posted within a fixed-length time window.",
"When many messages are posted over a long period, the computational time of our approach could be near-linear.",
"(3) Empirical improvements over previous work: Extensive experiments have been conducted on four publicly available datasets, including three synthetic conversation datasets and one real conversation dataset from Reddit 3 and IRC conversations.",
"Our approach outperforms all comparative baselines for both similarity estimation and conversation disentanglement.",
"Methods for conversation disentanglement can be simply categorized into unsupervised and supervised approaches.",
"Unsupervised approaches (Wang and Oard, 2009) estimate the relationship between messages through unsupervised similarity functions such cosine similarity, and assign messages to conversations based on a predefined 3 Reddit: https://www.reddit.com/ 1813 threshold.",
"In contrast, supervised methods exploit a set of user annotations (Elsner and Charniak, 2008; Mayfield et al., 2012; Shen et al., 2006; Du et al., 2017; Mehri and Carenini, 2017) to adapt to different datasets.",
"Our approach can be classi-fied as a supervised approach because a small set of user annotations is used to train the SHCNN.",
"In addition to conversations, some studies predict the partial structure of threaded data, especially for online forums (Aumayr et al., 2011; Wang et al., 2011b,a).",
"These studies merely classify parent-child relationships in disentangled, in-dependent threads.",
"Moreover, they focus only on comments to the same post.",
"Indeed, conversation disentanglement is a more difficult task.",
"Estimating the similarity of text pairs is an essential part in our approach.",
"Many studies also focus on similar tasks aside from conversation disentanglement, such as entailment prediction (Mueller and Thyagarajan, 2016; Wang and Jiang, 2017) and question-answering (Severyn and Mos-chitti, 2015; Amiri et al., 2016; Yin et al., 2016).",
"However, most of their models are complicated and require a larger amount of labeled training data; limited conversational data can lead to unsatisfactory performance as shown in Section 4.",
"In this section, we formally define the objective of this work and notations used.",
"A two-stage approach is then proposed to address the problem.",
"Given a set of speakers S , a message m is defined as a tuple m = ( w , s, t ) , where w = h w 1 , w 2 , , w n i is a word sequence posted by the speaker s 2 S at time t in seconds.",
"Each message m is associated with a conversation z ( m ) .",
"Messages in different conversations can be posted concurrently, i.e., conversations can be interleaved.",
"Following the settings of previous work (El-sner and Charniak, 2008, 2010, 2011; Mayfield et al., 2012), a set of pairwise annotations A = { ( m i , m j , y ) } , where y 2{ 0 , 1 } , is given for training the model.",
"More specifically, a Boolean value y indicates whether two messages m i and m j are in the same conversation, i.e., z ( m i ) and z ( m j ) are identical.",
"Given a set of messages M and the pairwise annotations A as training data, the goal is to learn a model that can identify whether messages are posted in the same conversation z ( m ) .",
"Note that the number of conversations | Z = { z ( m ) |8 m 2 M } | is always unknown to the system.",
"Figure 2 illustrates our two-stage framework.",
"The first stage aims to estimate pairwise similarity among messages.",
"Message pair selection is applied to focus on the similarity between messages that are posted closely in time and thus more likely to be in the same conversation.",
"The Siamese hierarchical CNN (SHCNN) is proposed for learning message representations and estimating pairwise similarity scores.",
"The overlapping hierarchical structure of SHCNN models a message at multiple semantic levels and obtains representations that are more comprehensive.",
"In the second stage, our conversation identification by similarity ranking (CISIR) algorithm exploits the redundancy and connectivity of pairwise relationships to identify conversations as connected components in a message graph.",
"Most of the previous work on conversation disentanglement focused on pairwise relationships between messages (Mayfield et al., 2012).",
"Especially for single-pass clustering approaches, all pairs of messages need to be enumerated during similarity computation (Wang and Oard, 2009).",
"However, if messages have been collected for a long time, the number of message pairs could be too mammoth to be processed in an acceptable amount of time.",
"More precisely, it leads to at least O ( n 2 ) computational time, where n is the number of messages.",
"As shown in Figure 3, the percentage of messages in the same conversation as a given message becomes significantly lower with a longer elapsed time between consecutive messages.",
"In light of this observation, an assumption is made as follows: Assumption 1 The elapsed time between two consecutive messages posted in the same conversation is not greater than T hours, where T is a small number.",
"More specifically, in our dataset every message m i is posted within T hours earlier or later than any other message m j in the same conversation, i.e., | t i \u0000 t j | 3600 < T for all pairs ( m i , m j ) , where t is in seconds.",
"For example, in the IRC dataset the average elapsed time between consecutive messages in a conversation is only 7 minutes.",
"If a conversation is ongoing, there may not be an extended silence before a new message; conversely, an extended silence could be treated as the start of a new 1814 postedtime Interleaved Conversations Stage 1: Similarity Estimation Stage 2: Conversation Identification Disentangled Conversations C 1 C 2 (1b) SHCNN for Similarity Estimation (1a) Message Pair Selection (2) CISIR for Identification m i m j P ( z ( m i ) = z ( m j )) =?",
"conversation.",
"With this assumption, the number of pairs can be reduced to O ( kn ) , where k is the maximum number of messages posted in a T -hour time window.",
"By default T is set to 1 hour in our experiments.",
"In addition, it is worth mentioning that it may be possible to include conversational structure, such as replied-to relations, into the model.",
"For example, after using CISIR to identify conversational threads, structure inference may be performed using methods such as described in (Aumayr et al., 2011) or (Wang et al., 2011b) and the structure used to refine the threads.",
"In this study, we focus on only conversation disentanglement.",
"Given a set of message pairs, we propose the Siamese hierarchical CNN (SHCNN) to estimate the similarity between a pair of messages.",
"The effectiveness of CNNs for representing text has already been addressed in previous studies.",
"However, single-layer CNNs (Kim, 2014; Severyn and Moschitti, 2015) may not represent high-level semantics while low-level information could be diluted with multiple-layer CNNs (Yin et al., 2016).",
"The hierarchical CNN (HCNN) is designed to simultaneously capture lowand high-level message meanings as shown in Figure 4.",
"A message m i is first represented by a d | w | message matrix W 2 R d | w | , where d is the dimension of a word embedding, and | w | is the num-1815 ber of words in a message.",
"For low-level information, we exploit single-layer CNNs (Kim, 2014; Severyn and Moschitti, 2015) with a set of d k L kernels, where L denotes Low, to extract n gram semantics of k L contiguous words.",
"In this paper, 64 d k L kernels, where k L = 5 , are applied to obtain 64 low-level features m L .",
"Note that the kernel row dimension is identical to the word embedding dimension to jointly consider the full embedding vector.",
"As a consequence, convolution with each kernel produces a vector c Li , which is then aggregated by max-over-time pooling (Col-lobert et al., 2011; Kim, 2014).",
"To acquire high-level semantics across a message, HCNN uses another multiple-layer CNN for feature extraction.",
"A 1 k C kernel is applied to W , thereby generating a convolutional message matrix WC .",
"Features covering broader contents are computed by applying a 1 2 kernel to a max-pooling layer with a stride of 2, producing a high-level message matrix WH .",
"The row sizes of the two kernels are set to 1 to capture relations within each embedding dimension, and convolution is performed on WH with 64 d k H kernels to capture relations across embedding dimensions.",
"The generated convolutional feature maps c Hi are subject to max-over-time pooling, resulting in 64 features m H .",
"Finally, a message representation m is constructed by concatenating m L and m H , i.e., creating a 128-dimensional feature vector, for characterizing both lowand high-level semantics of a message m .",
"In this paper, both k C and k H are set to 5 while computing high-level representations.",
"A Siamese structure with two identical subnetworks is useful to exploit the affinity between representations of two instances in the same hidden space (Severyn and Moschitti, 2015; Yin et al., 2016; Wang and Jiang, 2017).",
"For similarity estimation, we propose the Siamese hierarchical CNN (SHCNN) using a Siamese structure that blends the outputs from two HCNNs as well as some context features.",
"Figure 5 shows the structure of the SHCNN for estimating the similarity between two messages m i and m j where the message representations m i and m j are generated by two sub-networks HCNNs (See Figure 4).",
"There are many ways to deal with two sub-networks, such as using a similarity matrix (Severyn and Moschitti, 2015) or an attention matrix (Yin et al., 2016).",
"However, both methods lead to an enormous number of parame-message input !",
"ters for long messages.",
"We propose to independently compute the element-wise absolute differences (Mueller and Thyagarajan, 2016) between a pair of message representations m i and m j , each from a sub-network.",
"More formally, the absolute difference d is a vector where the k -th element is computed as | m i ( k ) \u0000 m j ( k ) | .",
"This approach provides not only fewer parameters but also the flex-ibility to observe interactions among different dimensions in representations.",
"Our experiments also show it outperforms the other two approaches in similarity estimation (See Section 4).",
"In addition to message contents, contexts such as temporal and user information were also usually considered in previous studies about conversation disentanglement (Wang and Oard, 2009; Elsner and Charniak, 2010, 2011).",
"In this paper, we focus on the performance of message content representations and only incorporate four context features: speaker identicality, absolute time difference and the number of duplicated words with and without weighting by inverse document frequency (Christopher et al., 2008).",
"SHCNN concatenates the context features x ( m i , m j ) with the absolute difference d as the input of a fully-connected layer of the same size.",
"The final output of SHCNN y ( m i , m j ) is normalized by a logistic sigmoid function (Han and Moraga, 1995), representing the probability P ( z ( m i ) = z ( m j )) .",
"All convolutional layers and the fully-connected layer require activation functions, and the choice affects the performance (Maas et al., 2013).",
"Popular functions include rectified linear units (Re-LUs) (LeCun et al., 2015), hyperbolic tangent 1816 units ( tanh ) and exponential linear units (ELUs) (Clevert et al., 2016).",
"In this study, we conducted informal comparison experiments and ELU was fi-nally chosen for all functions because it performed the best.",
"Given a set of annotated message pairs A = { ( m i , m j , y ) } , where y is a Boolean value indicating whether two messages are in the same conversation, SHCNN is optimized with binomial cross entropy (Goodfellow et al., 2016).",
"More formally, the objective function is as follows: X ( m i ,m j ,y ) 2 A [ y log( y + ) + (1 \u0000 y ) log(1 \u0000 y + )]+ \u0000 || || 2 where y simplifies y ( m i , m j ) , and is a small number, i.e., 10 \u0000 9 in our experiments, preventing underflow errors.",
"The term \u0000 serves as the weight for L2-regularization for the set of parameters .",
"In our experiments, SHCNN is implemented by TensorFlow (Abadi et al., 2016) and trained by the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 10 \u0000 3 .",
"The dropout technique (Srivastava et al., 2014) is utilized in the fully-connected layer with a dropout probability of 0 .",
"1 .",
"Word embeddings are initialized using the publicly available fastText 300-dimensional pre-trained embeddings from Facebook (Bojanowski et al., 2016).",
"The batch size is set to 512, and the maximum number of training epochs is 1,000.",
"The final model is determined by evaluating the mean average precision (MAP) on a validation dataset every 100 iterations.",
"In the second stage of conversation disentanglement, i.e., part (2) in Figure 2, we aim to separate conversations based on the identified message pairs and their estimated similarity.",
"It is intuitive to apply graph-based methods if pairwise relationships of messages are exploited (El-sner and Charniak, 2008).",
"Furthermore, methods based on single-pass clustering (Wang and Oard, 2009) can be also be treated as graph-based methods.",
"However, graph-based methods have a risky drawback: A single false positive connection between two messages can be propagated to several messages from different conversations.",
"As shown Algorithm 1: The algorithm of conversation disentanglement by similarity ranking (CISIR).",
"in Figure 3, a certain percentage of message pairs are in different conversations, which can lead to numerous false positive connections.",
"False alarms may be reduced by raising the threshold that determines whether two messages are connected (Wang and Oard, 2009).",
"However, a high threshold can make disentangled conversations fragmented and the best threshold for each pair could vary.",
"Instead of setting a high threshold, we propose the algorithm of Conversation Identification by SImilarity Ranking (CISIR).",
"CISIR focuses on the top messages ranked by similarity scores.",
"Based on Assumption 1, for each message, there exists at least one or more other messages in the same conversation posted closely in time.",
"With this redundancy, a few pairs with stronger confidence, i.e., the top-ranked pairs, can be enough to extend a correct connectivity to earlier or later messages, while the low-ranked pairs can be ignored to reduce the risk of error propagation.",
"Given a set of selected message pairs with estimated similarity scores D = { ( m i , m j , y ) } , Algorithm 1 shows the procedure of CISIR with two parameters r and h , where r is a high threshold 1817 of similarity ranks and h is a lower threshold of similarity scores.",
"Note that CISIR filters out pairs with low scores because a message can have more than r same-conversation pairs posted in its T hour time window.",
"For each message, CISIR ranks all of its associated pairs by the estimated similarity and only retrieves the topr pairs whose similarity scores are greater than h .",
"These retrieved high-confidence pairs are treated as the edges in a message graph G .",
"Finally, CISIR divides G into connected components, and the messages in each connected component are treated as a conversation.",
"In this paper, we use grid search to set r and h as 5 and 0.5, respectively.",
"The efficiency of Algorithm 1 can be further improved.",
"The topr qualified pairs for each message can be pre-processed by a scan of D with | M | min-heaps which always contain at most r +1 elements.",
"When r is a small constant number, it only takes O ( | D | ) = O ( k | M | ) for pre-processing, where k is the maximum number of messages posted in a T -hour time window.",
"With preprocessed top pairs, CISIR can do graph construction and find connected components in O ( k | M | ) , which compares favorably to conventional methods in O ( | M | 2 ) .",
"In this section, we conduct extensive experiments on four publicly available datasets to evaluate SHCNN and CISIR in two stages.",
"4.1.1 Datasets Three datasets from Reddit and one dataset of IRC are used as the experimental datasets.",
"Reddit Datasets 4 The Reddit dataset is comprised of all posts and corresponding comments in all sub-reddits (i.e., forums in Reddit.com) from June 2016 to May 2017.",
"Comments under a post can be treated as messages in one conversational thread.",
"Here we manually merge all comments in a sub-reddit to construct a synthetic dataset of interleaved conversations.",
"Note that although it is called a synthetic dataset, all messages are written by real users.",
"Three sub-reddits with different popularity levels as shown in Table 1 are selected to build three datasets: gadgets, iPhone and politics.",
"IRC Dataset.",
"An annotated IRC dataset used in (Elsner and Charniak, 2008) is also included in our experiments.",
"The IRC dataset consists of about 6 hours of messages in interleaved conversations.",
"Even though the IRC dataset is significantly smaller and shorter than the Reddit datasets, it consists of natural, interleaved conversations with ground truth annotations, including thread id. 4.1.2 Experimental Settings Humans may not participate in a large number of simultaneous conversations.",
"e.g., an average of 1.79 for eight people (Aoki et al., 2006), but there could be hundreds of concurrent posts in a subred-dit.",
"Hence, we adjusted the datasets to be more similar to real conversations.",
"Specifically we removed some conversations so that every dataset has at most ten conversations at any point in time.",
"Short messages with less than five words are also removed because even for humans they are frequently ambiguous.",
"Too short conversations with less than ten messages are also discarded as outliers (Ren et al., 2011).",
"Training and validation data are randomly chosen from only 10% of the selected message pairs, respectively, because in real situations obtaining labels could be very costly.",
"The remaining 80% of pairs are regarded as testing data.",
"As a result, Table 1 shows the statistics of the four datasets after pre-processing.",
"Message pair similarity estimation is treated as a ranking task and evaluated with three ranking evaluation metrics: precision at 1 (P@1), mean average precision (MAP) and mean reciprocal rank (MRR) (Christopher et al., 2008).",
"We compare the performance with six baseline methods, including the difference of posted time ( TimeDiff ), sameness of speakers ( Speaker ), cosine similarity of text ( Text-Sim ), the approach proposed by Elsner and Charniak (2008) (Elsner), DeepQA (Severyn and Moschitti, 2015) and ABCNN (Yin et al., 2016).",
"Note that DeepQA and ABCNN are neural network-based models for question-answering.",
"The approach of Mehri and Carenini 1818 Dataset Reddit Datasets IRC Dataset gadgets iPhone politics Metric P@1 MRR MAP P@1 MRR MAP P@1 MRR MAP P@1 MRR MAP TimeDiff 0.6916 0.8237 0.8170 0.6085 0.7651 0.7495 0.4412 0.6362 0.5644 0.3262 0.5180 0.4384 Speaker 0.5643 0.7046 0.7425 0.5364 0.6595 0.6590 0.4021 0.4620 0.3914 0.4356 0.6263 0.6891 Text-Sim 0.7913 0.8746 0.8440 0.7347 0.8318 0.7872 0.5245 0.6672 0.5326 0.3712 0.5269 0.3108 Elsner 0.7758 0.8651 0.8321 0.6809 0.7935 0.7471 0.4643 0.6132 0.4884 0.1094 0.1886 0.2063 DeepQA 0.8011 0.8755 0.8511 0.7156 0.8112 0.7766 0.5593 0.6759 0.5685 0.7811 0.8182 0.8050 ABCNN 0.8374 0.8511 0.8502 0.8112 0.8520 0.8118 0.7419 0.6221 0.6644 0.7008 0.4142 0.5858 SHCNN 0.8834 0.9281 0.9005 0.8375 0.8944 0.8497 0.7696 0.8392 0.6967 0.9785 0.9838 0.9819 SHCNN (L) 0.8470 0.9080 0.8702 0.8066 0.8792 0.8275 0.7225 0.8070 0.6438 0.9807 0.9834 0.9750 SHCNN (H) 0.8490 0.9105 0.8704 0.8158 0.8851 0.8313 0.7228 0.8110 0.6283 0.9635 0.9728 0.8632 Table 2: Performance of pairwise similarity estimation in four datasets.",
"(2017) was not compared in our experiments because the RNN requires additional message sequences; moreover, its performance was only mildly better than Elsner, which performed poorly on IRC in Table 2.",
"Table 2 shows the performance of similarity estimation.",
"Among all methods, neural network approaches (Severyn and Moschitti, 2015; Yin et al., 2016) perform better than other methods in most cases, indicating that message content representation has considerable impact on estimating pairwise similarity.",
"SHCNN outperforms most of the baselines even if only low-level (L) or high-level (H) representations are exploited.",
"When SHCNN captures both lowand high-level semantics, it significantly outperforms all baselines across the four datasets.",
"For example, ABCNN can outperform SHCNN using only either lowor high-level representations in the politics dataset; however, SHCNN turns the tables after using both representations.",
"An interesting observation is that ABCNN is the best baseline in every dataset except for IRC; this may be because the IRC data is too small to train complicated attention structures.",
"On the contrary, our SHCNN can precisely capture semantics even with few parameters and limited data.",
"results of the IRC data and demonstrate the capability of SHCNN to simultaneously preserve local and more global information.",
"Figure 6 presents an example to show how SHCNN is better than other methods in capturing more high-level topical information.",
"Even though the main sentences of two messages are clearly on different topics, the baseline method DeepQA (Severyn and Mos-chitti, 2015) still predicts a high similarity.",
"This could be attributed to the context of author mention (Wang and Oard, 2009) and a bias on the local information, i.e., the exact same term Arlie, in the Siamese network used in DeepQA.",
"On the contrary, SHCNN can capture more global information that differentiates the topics and correctly predicts a very low score.",
"Figure 7 illustrates another example of how SHCNN outperforms other methods in preserving the similarity of local information.",
"Both of the messages in the example have some segments related to software engineering.",
"A baseline method ABCNN (Yin et al., 2016) with multiple-layer CNNs, however, still predicts a low score.",
"This might be because both sentences are long so that the local information is diluted after processing by multiple CNN layers.",
"Differently, SHCNN is able to seize local information, correctly predicting a high score.",
"For conversation identification, three clustering metrics are adopted for evaluation: normalized mutual information (NMI), adjusted rand index (ARI) and F 1 score (F1).",
"Six methods are implemented as the baselines for conversation disentanglement, including Doc2Vec (Le and Mikolov, 2014), blocks of 10 messages ( Block-10 ), messages of respective speakers ( Speaker ) (Elsner and Charniak, 2011), context-based message expansion ( CBME ) (Wang and Oard, 2009) and a graph-theoretical model with chatand content-specific features (Elsner and Charniak, 2008) ( GTM ).",
"The embedding-based clustering method, i.e., Doc2Vec , applies affinity propagation (Frey and Dueck, 2007) to cluster messages embedded using Doc2Vec without being given the number of clusters, with the idea that messages in the same conversation would form a cluster.",
"Note that message pairs in the training and validation data are not utilized in prediction for a fair comparison to all methods.",
"Table 3 shows the performance of conversation disentanglement.",
"Note that Oracle represents the optimal performance for CISIR when all message pairs in identical conversations in D are correctly retrieved.",
"Because pairs in D may not have enough coverage to connect all messages in a coversation, the optimal performance could be lower than 1.0.",
"CISIR performs better than all baseline methods for all datasets, and achieves excellent performance in IRC, due in part to the high-performing similarity estimates from the first stage.",
"Among the baseline methods, GTM performs relatively well on all datasets except for IRC.",
"This is because messages are more frequently posted in the IRC dataset, thereby increasing the number of incorrect pairs in the constructed graph.",
"Examining the graph constructed by GTM, there are only two connected components, indicating that many conversations were incorrectly combined; in contrast, CISIR may be exempt from error propagation because it only relies on top-ranked pairs.",
"Doc2Vec is trained to predict words in a document in an unsupervised manner.",
"Its lowest performance in the experiments may point out a need for supervised learning in the specific task of conversation disentanglement to tackle the variation in semantic patterns.",
"Time and author contextual cues do help conversation disentanglement as seen in the results of Block-10 and Speaker.",
"Both of these contexts are integrated into our model.",
"In this paper, we propose a novel framework for disentangling conversations, including similarity estimation for message pairs and conversation identification.",
"In contrast to previous work, we assume that we do not need to select all message pairs in the first stage, thereby reducing computational time without sacrificing performance too much.",
"To estimate conversation-level similarity, a Siamese Hierarchical Convolutional Neural Network, SHCNN, is proposed to minimize the estimation error as well as preserve both the low-and high-level semantics of messages.",
"In the second stage, we developed the Conversation Identification by SImilarity Ranking, CISIR, algorithm, which exploits the assumption made in the first stage and identifies individual, entangled conversations with high-ranked message pairs.",
"Extensive experiments conducted on four publicly available datasets show that SHCNN and CISIR outperform several existing approaches in both similarity estimation and conversation identification.",
"We would like to thank the anonymous reviewers for their helpful comments.",
"The work was partially supported by NIH U01HG008488, NIH R01GM115833, NIH U54GM114833, and NSF IIS-1313606."
] |
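As referenced in the evaluation description above, the following is a minimal, hedged sketch of the three clustering metrics (NMI, ARI, F1) used for conversation identification. NMI and ARI come from scikit-learn; the pairwise-F1 definition and all variable names are assumptions for illustration, since the paper does not spell out its exact F1 variant.

```python
# Minimal sketch of the clustering metrics for conversation identification.
# NMI and ARI use scikit-learn; the pairwise-F1 below is an assumption.
from itertools import combinations
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

def pairwise_f1(gold, pred):
    """F1 over message pairs: a pair counts as positive iff both messages
    carry the same conversation label."""
    n = len(gold)
    gold_pairs = {(i, j) for i, j in combinations(range(n), 2) if gold[i] == gold[j]}
    pred_pairs = {(i, j) for i, j in combinations(range(n), 2) if pred[i] == pred[j]}
    tp = len(gold_pairs & pred_pairs)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_pairs)
    recall = tp / len(gold_pairs)
    return 2 * precision * recall / (precision + recall)

# Hypothetical gold and predicted conversation ids, one per message.
gold = [0, 0, 1, 1, 2]
pred = [0, 0, 1, 2, 2]
print(normalized_mutual_info_score(gold, pred))
print(adjusted_rand_score(gold, pred))
print(pairwise_f1(gold, pred))
```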
[
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other"
] |
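The similarity-estimation stage described above can be illustrated with a generic Siamese convolutional encoder. This is not the SHCNN architecture itself: the hierarchical low-/high-level feature aggregation is omitted, and all layer sizes and names are simplified assumptions.

```python
# Generic sketch of a Siamese convolutional similarity estimator for
# message pairs; a simplification, NOT the SHCNN architecture.
import torch
import torch.nn as nn

class SiameseEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool1d(1)

    def forward(self, tokens):                  # tokens: (B, T) word ids
        x = self.emb(tokens).transpose(1, 2)    # (B, emb_dim, T)
        h = torch.relu(self.conv(x))            # (B, hidden, T)
        return self.pool(h).squeeze(-1)         # (B, hidden)

def similarity(encoder, msg_a, msg_b):
    # Both messages share the same encoder (Siamese weights); cosine
    # similarity serves as the conversation-level similarity estimate.
    za, zb = encoder(msg_a), encoder(msg_b)
    return torch.cosine_similarity(za, zb, dim=-1)
```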
[
"Training Transformer-based models demands a large amount of data, while obtaining aligned and labelled data in multimodality is rather cost-demanding, especially for audio-visual speech recognition (AVSR).",
"Thus it makes a lot of sense to make use of unlabelled unimodal data.",
"On the other side, although the effectiveness of large-scale self-supervised learning is well established in both audio and visual modalities, how to integrate those pre-trained models into a multimodal scenario remains underexplored.",
"In this work, we successfully leverage unimodal self-supervised learning to promote the multimodal AVSR.",
"In particular, audio and visual front-ends are trained on large-scale unimodal datasets, then we integrate components of both front-ends into a larger multimodal framework which learns to recognize parallel audio-visual data into characters through a combination of CTC and seq2seq decoding.",
"We show that both components inherited from unimodal self-supervised learning cooperate well, resulting in that the multimodal framework yields competitive results through fine-tuning.",
"Our model is experimentally validated on both word-level and sentence-level tasks.",
"Especially, even without an external language model, our proposed model raises the state-of-the-art performances on the widely accepted Lip Reading Sentences 2 (LRS2) dataset by a large margin, with a relative improvement of 30%.",
"* 1 Introduction Audio-Visual Speech Recognition (AVSR) is a speech recognition task that leverages both an audio input of human voice and an aligned visual input of lip motions.",
"It has been one of the successful application fields that involve multiple modalities Corresponding author.",
"in recent years.",
"Due to the limited amount of labeled, multimodal aligned data and the difficulty of recognition from the visual inputs (i.e., lip reading), it is a challenging task to tackle.",
"Existing AVSR models tend to use extra data to increase the performance of the system, in a form of inserting an extra supervised learning stage in the training process.",
"For example, many existing methods rely on an extra sequence level classification to bootstrap its learning on visual features.",
"Petridis et al. (2018); Zhang et al. (2019) train their visual front-end on LRW (Chung and Zisserman, 2016) before learning on the AVSR task.",
"Afouras et al. (2018a,b) chunks the MV-LRS data (Chung and Zisserman, 2017) into pieces of words and pre-train the model through classification.",
"VoxCeleb (Chung et al., 2018) are also used in Afouras et al. (2020) for the same purpose.",
"Learning an effective visual front-end could still be notoriously hard, even with these extra supervised learning tasks.",
"Sometimes curriculum learning is required to adapt the learned visual front-end into AVSR task (Afouras et al., 2018a).",
"End-to-end learning of large-scale AVSR data hasn't been successful until recently (Ma et al., 2021).",
"Although self-supervised learning could enable leveraging unlabelled or even unaligned data, it hasn't been adequately explored on this task.",
"Shukla et al. (2020) is among the few attempts in this facet, in which it predicts lip motions from audio inputs.",
"Their proposed learning schemes yield strong emotion recognition results but are relatively weak in speech recognition.",
"Moreover, since in AVSR it is the lip shape and motions between frames rather than the objects in a single image that matters for recognizing speech contents, if pre-trained visual models tailored for tasks targeting at single frame images could work for AVSR remains unknown.",
"In another scenario, self-supervised learning in unimodality has been well established as a paradigm to learn general repre-4491 sentations from unlabelled examples, such as in natural language processing (Brown et al., 2020; Devlin et al., 2019), speech recognition (Baevski et al., 2020), and computer vision (He et al., 2019; Chen et al., 2020a; Grill et al., 2020).",
"In this work, we rely on a simple but effective approach, which is to utilize unlabelled unimodal data by using pre-trained models that are trained in single-modality through self-supervised learning.",
"Specifically, we use Baevski et al. (2020) pre-trained on the large LibriLight (Kahn et al., 2020) dataset as our audio front-end.",
"For visual front-end, we found that it is not as straight-forward for it to leverage pre-trained models, as we have to substitute the first convolutional layer in MoCo v2 (Chen et al., 2020b) by a 3-D convolutional layer and fine-tune it through LRW.",
"In total, our approach doesn't require a curriculum learning stage, and the overall training time has been decreased.",
"Experimental results show that our new front-ends significantly outperform previous ones by a big margin in both audio-only and visual-only settings, and a new state-of-the-art has been achieved in the final AVSR setting.",
"To our best knowledge, this is the first work that successfully applies unimodal pre-trained models in the multimodal setting of AVSR.",
"The earliest work on AVSR could be dated back to around two decades ago, when Dupont and Luet-tin (2000) showed hand-crafted visual feature improves HMM-based ASR systems.",
"The first mod-ern AVSR system is proposed in Afouras et al. (2018a) where deep neural networks are used.",
"The field has been rapidly developing since then.",
"Most of the works are devoted into the architectural improvements, for example, Zhang et al. (2019) proposed temporal focal block and spatio-temporal fusion, and Lee et al. (2020) explored to use cross-modality attentions with Transformer.",
"The other line of research focuses on a more diversified learning scheme to improve AVSR performance.",
"Li et al. (2019) uses a cross-modal student-teacher training scheme.",
"Paraskevopoulos et al. (2020) proposes a multi-task learning scheme by making the model to predict on both character and subword level.",
"Self-supervised learning has also been explored in Shukla et al. (2020), where the cross-modality setting is utilized by predicting frames of videos from audio inputs.",
"The end-to-end learning of AVSR systems are first seen in Tao and Busso (2020), albeit in a much simpler dataset than LRS2.",
"More recent work (Ma et al., 2021) has made end-to-end learning on LRS2 possible by using a Conformer acoustic model and a hybrid CTC/attention decoder.",
"Self-supervised learning has been chased in recent years since its ability to learn general representations of data through simple tasks that don't require labeling.",
"Contrastive learning (Hadsell et al., 2006) has become the most impactful learning scheme in this field.",
"In natural language processing, uni-or bi-directional language modelling (Brown et al., 2020; Devlin et al., 2019) have been used to significantly increase performances on various tasks.",
"In audio speech processing, contrastive predictive coding (Baevski et al., 2020) has been proven to be powerful in speech recognition.",
"In the visual domain, Earlier works create self-supervised tasks through image processing based methods, such as distortion (Gidaris et al., 2018),colorization (Zhang et al., 2016) and context prediction (Doersch et al., 2015).",
"More recently, contrastive learning emerged as a paradigm of self-supervised learning, which results in a group of more expressive general visual representations, such as MoCo (He et al., 2019; Chen et al., 2020b), SimCLR (Chen et al., 2020a), BYOL (Grill et al., 2020), etc. 3 Architecture The overall architecture of our model is shown in Fig.",
"1. The audio-visual model is comprised of four components, the front-ends and back-ends for both modalities, the fusion module, and the decoders.",
"Visual Front-end: Visual front-end serves as a component to capture the lip motion and reflect the lip position differences in its output representations.",
"A naive way to apply pre-trained models in the visual front-end is to directly feed the RGB channels of each frame as input.",
"However, since frames within a same clip in AVSR are largely similar in their contents while most pre-trained models in vision target at learning general representations reflecting the content of the whole image, this approach will result in similar outputs for all the frames, collapsing the informative lip position 4492 Conv3d MoCo Wav2Vec 2.0 C on c a t ena t e ModalityNorm seq2seqLoss Visual Frontend Audio Frontend Visual Backend TransformerEncoderLayer 6 Conv1d Audio Backend ModalityNorm Transformer EncoderLayer 6 Conv1d Transformer EncoderLayer 6 Conv1d Fusion Module Transformer DecoderLayer 6 Conv1d CTCLoss Figure 1: Overall architecture of our AVSR model.",
"To overcome the aforementioned problem while still being able to utilize the pre-trained model, we truncate the first convolutional layer in MoCo v2 (Chen et al., 2020b), which is pre-trained on ImageNet (Deng et al., 2009), and replace it with a layer of 3-D convolution.",
"The outputs of 3-D convolutional layer are intentionally made identical to the input of the first ResBlock in MoCo v2 (see Table 1), thus providing a compatible interface to transfer higher layers of MoCo v2 into this task.",
"On the other hand, we also adopt the common practice to convert the RGB input image to gray-scale before feeding it into the model, as it prevents the model from learning chromatic aberration information.",
"Audio Front-end: The audio front-end is rather straight-forward.",
"We use wav2vec 2.0 (Schnei-der et al., 2019) pre-trained on Libri-Light (Kahn et al., 2020), like it is normally used for ASR tasks, both the 1-D convolutional layers and the stacked Transformer encoder layers are transferred into our audio front-end.",
"The audio front-end takes as input raw audio wave of 16kHz, and produces one vector representation every 20ms.",
"The audio feature dimensions are shown in Table",
"2. 3.2 Back-ends Since the visual frames are in 25 FPS and the wav2vec 2.0 outputs are around 49 Hz, one should note that there is 2x difference in the frequency of frame-wise visual and audio representations at the output of their front-ends.",
"In the back-end, we use 1-D convolutional layers on the time dimension combined with Transformer encoder layers to provide single modality temporal modeling, as well as adjusting the features to have the same frequency.",
"Visual Back-end: The incoming MoCo v2 output to the visual back-end has a feature dimension of 2048, at a frequency of 25 vectors per second.",
"In the visual backend, we keep this frequency while reducing the feature size to 512.",
"See Table",
"1. For positional encodings of the Transformer, we use fixed positional encoding in the form of sinusoidal functions.",
"Audio Back-end: In the audio back-end, the incoming wav2vec 2.0 outputs have a feature size of 1024, at a frequency of 50 vectors per second.",
"We downscale the frequency by setting the stride of 1-D convolutional layer to",
"2. The Transformer encoder layers have the identical size to that of the visual back-end, while using a separate set of parameters.",
"Table 2 shows a clearer picture of audio frontand back-end dimensions.",
"The odds are due to the larger receptive fields of wav2vec 2.0 1-D convolutional layers, which we circumvent by properly prefixing and suffixing the audio sequence and truncate the trailing audio vector.",
"Thus a perfect 1:2 ratio of visual frames and audio front-end outputs are ensured.",
"Features from both the audio and visual modalities are fused together in this section, forming vector representation of 1024 dimensions at a relatively low rate of 25 Hz.",
"We use LayerNorm (Ba et al., 2016) separately on each of the modalities before concatenating them on the feature dimension.",
"The LayerNorm is required since it avoids one modality overtaking the whole representation with larger variance.",
"Similar 1-D convolutional layers and a subsequent Transformer encoder block of 6 layers take the fused representations as input, and encode them for the decoders.",
"Following the setting of Petridis et al. (2018), there are two decoders trained simultaneously based on the same output in the fusion module.",
"The first is a Transformer seq2seq decoder, a Transformer decoder with 6 layers is used, and we perform teacher forcing at character level by using ground truth characters as input during training.",
"The second one is arguably a decoder since it yields character probabilities for each timestep and relies on the CTC loss in training.",
"4 extra 1-D convolutional layers with ReLU activation are used on top of the last Transformer encoder layer output.",
"We also include LayerNorm between each of the layers.",
"In this work, we use a so called hybrid CTC/atten-tion loss (Watanabe et al., 2017) for our training process.",
"Let x = [ x 1 , , x T ] be the input frame sequence at the input of Transformer encoder in the fusion module and y = [ y 1 , , y L ] being the targets, where T and L denote the input and target lengths, respectively.",
"The CTC loss assumes conditional independence between each output prediction and has a form of p CTC ( y | x ) T (cid:89) t =1 p ( y t | x ) (1) On the other hand, an autoregressive decoder gets rid of this assumption by directly estimating the posterior on the basis of the chain rule, which has a form of p CE ( y | x ) = L (cid:89) l =1 p ( y l | y <l , x ) (2) The overall objective function is computed as follows: L = log p CTC ( y | x )+(1 ) log p CE ( y | x ) (3) where controls the relative weight between CTC loss and seq2seq loss in the hybrid CTC/atten-tion mechanisms.",
"The weight is needed not only when integrating the two losses into one training loss, but also fusing the two predictions during decoding, which we will revisit in the following subsections.",
"For audio modality, the audio front-end is first pre-trained through self-supervised learning, which is done by wav2vec 2.0.",
"Then the audio frontand back-end are trained through the audio-only (AO) setting, together with dedicated decoders.",
"For the visual modality, the visual front-end is first pre-trained through self-supervised learning, then modified and trained through sequence classification at word level video clips in LRW data.",
"After that, the visual front-end is inherited by the visual-only (VO) model, where visual back-end and dedicated decoders are used.",
"Due to computational constraints, we pre-compute the audio and visual back-end outputs, and only learn the parameters in the fusion module and decoders part in this final stage.",
"A detailed visualization of our training pipeline is depicted in Figure",
"2. 3.7 Decoding Decoding is performed using joint CTC/attention one-pass decoding (Watanabe et al., 2017) with beam search.",
"We apply shallow fusion to incorporate CTC and seq2seq predictions: y = arg max y Y { log p CTC ( y | x ) + (1 ) log p CE ( y | x ) } (4) where Y denotes predictions set of target symbols, while is the relative weight that tuned on validation set.",
"In this section, we will first introduce the datasets and various settings we used in each component of our model.",
"Then we will present results of audio-only, visual-only and audio-visual settings.",
"We also present a breakdown of the relative contribution of every component through ablation study.",
"We use the large-scale publicly AVSR dataset, the Lip Reading Sentences 2 (LRS2) (Chung et al., 2017) as our main testbed.",
"During training, we also use the Lip Reading in the Wild (LRW) (Chung and Zisserman, 2016) as a word-level video classification task to pre-train our visual front-end.",
"LRS2 consists of 224 hours of aligned audio and videos, with a total of 144K clips from BBC videos, the clips are at a length of sentence level.",
"The training data contains over 2M word instances and a vocabulary of over 40K.",
"The dataset is very challenging as there are large variations in head pose, lighting conditions, genres and the number of speakers.",
"LRW is a word-level dataset, consisting of 157 hours of aligned audio and videos, totalling 489K video clips from BBC videos, each containing the utterance of a single word out of a vocabulary of 500.",
"The videos have a fixed length of 29 frames, the target word occurring in the middle of the clip and surrounded by co-articulation.",
"All of the videos are either frontal or near-frontal.",
"In our experiment, we only use the visual modality from this dataset to train our visual front-end.",
"We use character level prediction with an output size of 40, consisting of the 26 characters in the alphabet, the 10 digits, the apostrophe, and special tokens for [space] , [blank] and [EOS/SOS] .",
"Since the transcriptions of the datasets do not contain other punctuations, we do not include them in the vocabulary.",
"Our implementation is based on the Pytorch library (Paszke et al., 2019) and trained on four NVIDIA A100 GPUs with a total of 160GB memory for 1 week.",
"The network is trained using the Adam optimizer (Kingma and Ba, 2015) with 1 = 0 .",
"9 , 2 = 0 .",
"999 and (cid:15) = 10 8 and an initial learning rate of 10 4 .",
"We use label smoothing with a weight set to 0.01, learning rate warm up and reduce on plateau scheduler.",
"The relative weight in CTC loss and seq2seq loss is set to 0.2.",
"When decoding, we set to 0.1.",
"The samples in the pre-train set are cropped by randomly sampling a continuous range of 1 / 3 words of the whole utterances, in order to match the length of clips in the train set.",
"Over-length samples are further truncated at 160 frames to reduce memory occupation.",
"Preprocessing: We detected and tracked 68 facial landmarks using dlib (King, 2009) for each video.",
"To remove differences related to face rotation and scale, the faces are aligned to a neural reference frame using a similarity transformation following Martnez et al. (2020).",
"Interpolation and frame smoothing with a window width of 12 frames are used to deal with the frames that dlib fails to detect.",
"Then a bounding box of 120 120 is used to crop the mouth ROIs.",
"The cropped frames are further converted to gray-scale and normalized with respect to the overall mean and variance of the train set.",
"Each raw audio waveform is normalized to zero mean and unit variance following Baevski et al. (2020).",
"Data Augmentation: Following Ma et al. (2021), random cropping with a size of 112 112 and horizontal flipping with a probability of 0.5 are performed consistently across all frames of a given image sequence when training visual-only and audiovisual models.",
"For each audio waveform, additive noise is performed in the time domain following Afouras et al. (2018a) during training audio-only 4495 Methods WER Visual-only LIBS (Zhao et al., 2020) 65.3 TM-CTC* (Afouras et al., 2018a) 54.7 Conv-seq2seq (Zhang et al., 2019) 51.7 TM-seq2seq* (Afouras et al., 2018a) 50.0 KD-TM (Ren et al., 2021) 49.2 LF-MMI TDNN* (Yu et al., 2020) 48.9 E2E Conformer* (Ma et al., 2021) 42.4 E2E Conformer** (Ma et al., 2021) 37.9 Our Model 43.2 Audio-only TM-CTC* (Afouras et al., 2018a) 10.1 TM-seq2seq* (Afouras et al., 2018a) 9.7 CTC/attention* (Petridis et al., 2018) 8.2 LF-MMI TDNN* (Yu et al., 2020) 6.7 E2E Conformer** (Ma et al., 2021) 3.9 Our Model 2.7 Audio-Visual TM-DCM (Lee et al., 2020) 8.6 TM-seq2seq* (Afouras et al., 2018a) 8.5 TM-CTC* (Afouras et al., 2018a) 8.2 LF-MMI TDNN* (Yu et al., 2020) 5.9 E2E Conformer** (Ma et al., 2021) 3.7 Our Model 2.6 Table 3: Audio-only, visual-only and audio-visual results of word error rate (WER) tested on LRS2.",
"and audio-visual models.",
"Babble noise are added to the audio stream with 5dB SNR and probability of p n = 0 .",
"25 .",
"The babble noise is synthesized by mixing 20 different audio samples from LRS2.",
"Evaluation: For all experiments, word error rate (WER) are reported which is defined as WER = ( S + D + I ) /N .",
"The S , D and I in the formula denotes the number of substitutions, deletions and insertions respectively from the reference to the hypothesis, and N is the number of words in the inference.",
"The babble noise added to the audio waveform during evaluation is generated using the same manner as training, while we set a different seed to avoid model fit to a specific generated noise.",
"Decoding is performed using joint CTC/attention one-pass decoding (Watanabe et al., 2017) with Modules Ours TM-CTC E2E Conformer Audio front-end 315.0M -3.9M Visual front-end 23.5M 11.2M(freezed) 11.2M Audio back-end 20.2M 20.2M 31.8M Visual back-end 20.2M 20.2M 31.8M Fusion module 19.7M 19.7M 0.8M Decoders 26.2M 20.5K 9.5M Table 4: The parameters comparison of ours, TM-CTC (Afouras et al., 2018a) and E2E Conformer (Ma et al., 2021) models.",
"beam width 5 (the values were determined on the held-out validation set of LRS2).",
"We don't use an external language model in our experiments.",
"We present results for all experiments in Table 3, reporting WERs on visual-only, audio-only and audio-visual models.",
"Note that many of the models listed here are also using extra training data in different stages of training pipeline, such as MV-LRS (Chung and Zisserman, 2017), LRS3 (Afouras et al., 2018b), LibriSpeech (Panayotov et al., 2015) and LRW.",
"We present the parameters of our model, TM-CTC model (Afouras et al., 2018a) and the current state-of-the-art model (Ma et al., 2021) in Table",
"4. Our model back-ends and fusion module configura-tions follow TM-CTC model, the hyper-parameters settings in the seq2seq decoder are the same as in the back-ends.",
"The most significant difference is that we utilize pre-trained front-ends, resulting in a larger model size.",
"Audio-visual Setting: In the main audio-visual setting, the pre-train and train sets in LRS2 are used as train set in the final training stage.",
"Our proposed audio-visual model achieves a WER of 2.6% without the help of an external language model, which improves by 1.1% over the current state-of-the-art (Ma et al., 2021).",
"This is rather a big improvement, with a relative improvement of around 30%.",
"Audio-only Setting: The training data used for training audio-only model consists of 224 hours labelled data from LRS2, as well as the 60K hours unlabelled data from LibriLight (Kahn et al., 2020) that are indirectly used through inheriting wav2vec 2.0 parameters.",
"Our model also achieves a WER of 2.7%, which reduces the WER of the current state-4496 of-the-art (Ma et al., 2021) by 1.2%, indicating a relative improvement of 31%.",
"Visual-only Setting: The visual-only model uses labelled LRS2 data in its pre-train and train sets, the LRW for supervised pre-training, and indirectly using the 1.28M unlabelled images from ImageNet through MoCo v2.",
"The visual-only model achieves a WER of 43.2%, lagging behind the current state-of-the-art E2E Conformer model (Ma et al., 2021) with 5.3%.",
"Compared to E2E Conformer, the main difference is that a large Transformer language model is used during decoding, which itself brings a 4.5% difference compared with a normal RNN language model in their ablation studies (Ma et al., 2021).",
"The gap between our visual-only model and the E2E Conformer model with a RNN language model is 0.8%, which resides in a quite reasonable range.",
"Additionally, we use a 6-layers Transformer encoder for temporal modelling instead of a 12-layers conformer encoder, which resulted in a smaller back-end size.",
"If we consider a fairer comparison by only looking at benchmarks without using an external language model, the best-reported benchmark is Ren et al. (2021), which achieved a WER of 49.2%, lagging behind our model by 6.0%.",
"In this section, we investigate the impact of every individual building block by testing them in LRW, audio-only and visual-only settings.",
"MoCo v2 Contribution in Visual Word Classification: Results of visual word classification on LRW are shown in Table",
"5. We first train a model by replacing the ResNet-18 front-end in Stafylakis and Tzimiropoulos (2017) with a ResNet-50 frontend, matching the size of MoCo v2 but with fresh weights.",
"This results in an absolute improvement of 2.1%.",
"Then we initialize the ResNet-50 frontend with MoCo v2 weights and a further absolute improvement of 2.3% is observed, which implies that self-supervised learning is actually functioning in better represent the lip movement.",
"Additionally, When Using 6 layers of Transformer encoder instead of TCN as back-end, we can observe another absolute improvement of 6.0%.",
"We also noticed that using MoCo v2 front-end could significantly reduce the training time.",
"Performance Breakdown in Audio-only Setting: Results of audio-only model on LRS2 are shown in Table",
"6. Starting from Afouras et al. (2018a), Method Acc Baseline(Stafylakis and Tzimiropoulos, 2017) 74.6% + ResNet-50 front-end 76.7% + MoCo v2 front-end 79.0% + Transformer encoder back-end 85.0 % Table 5: Ablation study on visual word classification performance on LRW.",
"we first train a model by replacing the STFT audio feature with a wav2vec 2.0 front-end pre-trained on LibriSpeech, resulting in an absolute improvement of 11.1%.",
"Then we use another pre-trained model learned on an even larger unlabelled single modality dataset Libri-Light, and a further absolute improvement of 0.6% is observed.",
"We further train the model with a hybrid CTC/attention decoder during the training stage, which results in another absolute improvement of 0.9%.",
"Performance Breakdown in Visual-only Setting: Results of the visual-only model on LRS2 are shown in Table",
"7. Starting from Afouras et al. (2018a), we first introduce end-to-end training by using a hybrid CTC/attention decoder (the frontend is still pre-trained through LRW), resulting in an absolute improvement of 16.0%.",
"Then we initialize the front-end with pre-trained MoCo v2 weights, a same end-to-end training manner results in a further absolute improvement of 5.8%.",
"Robustness under Noisy Inputs: To evaluate the model's tolerance to audio noise, we tested the performance of our model under babble noise with different SNR levels.",
"Our audio-only and audiovisual models reach WERs of 32.5% and 24.5% when the SNR level is 0dB, respectively, which 4497 Modality Model 0dB 5dB clean AO Afouras et al. (2018a) 58.0% -10.5% Our model 32.5 % 6.8% 2.7 % AV Afouras et al. (2018a) 33.5% -9.4% Our model 24.5 % 6.3% 2.6 % Table 8: Word error rate (WER) under different SNR levels.",
"reduce the reported result in Afouras et al. (2018a) by 25.5% and 9% .",
"When the SNR level rises to 5dB, our audio-only and audio-visual model obtain WERs of 6.8% and 6.3%.",
"Besides achieving significant improvement over the baseline model under babble noise environment, we further investigate the model performance under human noise environment.",
"The human noise is extremely challenging because the noise itself contains some words, while the model cannot easily distinguish which audio signal is the one to be recognized.",
"We synthesize the human noise by randomly crop many 1 second signals from different audio samples in the LRS2 dataset.",
"As shown in Fig. 3, we conduct experiments varying different levels of human noise, the models are trained using babble noise augmented audio.",
"The WER increases greatly after the SNR level drops down under 0db.",
"It is because the model may not be able to distinguish the two overlapped spoken words at a low SNR level.",
"And the overall performance under each SNR level is worse than babble noise, indicating that noise with specific information is harder than disorganized babble noise.",
"Recognition under Low Resource: A significant benefit of using self-supervised pre-trained models is that only a small amount of labelled data is needed for training a model.",
"To further investigate the models' performance in low resource environment, we use the 28 hours train set of LRS2 to train an audio-only and a visual-only model.",
"The results are shown in Table 9.",
"The audio-only model trained with 28 hours data achieves a WER of 3.4%, which is a little bit worse than the one trained with 224 hours data.",
"The result indicates that for the audio-only model, the self-supervised model pre-trained on a large-scale single modality dataset can significantly reduce the demands of data.",
"While Ma et al. (2021) also provides a performance under noisy inputs, however, we are not able to compare with them due to a lack of necessary details to generate the same noise.",
"the visual-only model trained with 28 hours data has a great gap with the one trained with 224 hours data, the reason can be that the visual-only model is harder to train and demands a larger amount of data.",
"In this work, we propose to utilize self-supervised learning for AVSR by simply incorporating the pre-trained model trained in massive unlabelled single modality data.",
"Although the visual pre-trained models are not straight-forward to be transplanted into visual front-end, we still manage to integrate pre-trained models in both modalities for the AVSR task.",
"Experimental results are impressive, resulting in a 30% relative improvement.",
"It's interesting to observe that self-supervised model in audio modality has an even larger improvement than that of the visual counterpart.",
"We believe the reasons can be listed as follows: The training data scale of audio modality is significantly larger than that of visual modality, with the Libri-Light dataset used for pretraining wav2vec 2.0 consists of 60K hours audio signals, the ImageNet dataset, on the con-4498 trary, has only 1.28M images, roughly equivalent to 14 hours silent video under 25 FPS.",
"The MoCo v2 model is pre-trained on images to better represent frame-level contents, while there are no pre-training steps to model the temporal correlation between frames.",
"In contrast, the wav2vec 2.0 model is pre-trained on consistent audios, thus having a better temporal modelling ability.",
"As there has not emerged a dominating cross-modality self-supervised learning approach in the field of AVSR, in future work, we are going to explore two more directions in the self-supervised learning scenario based on this work.",
"The first is utilizing the temporal correlations within the visual domain, while the other is the cross-modal correlations between the audio and visual modality.",
"We hope this work could pave the way towards multimodality self-supervised learning, especially for various aspects in AVSR.",
"This work will not pose ethical problems, the data resources we use are all from published works and do not involve privacy issues related to data collection.",
"The data is collected from BBC and contains thousands of diverse speakers, allowing the speech recognition models to generalize to all speakers.",
"In terms of computational experiments, we used publicly available pre-trained models, which makes the training more environmentally friendly and lowers the computational requirements to reproduce our work.",
"This work was sponsored by the National Natural Science Foundation of China (NSFC) grant (No. 62106143), and Shanghai Pujiang Program.",
"We would like to thank all the anonymous reviewers for their valuable and constructive comments."
] |
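As referenced in the loss and decoding subsections above, the following is a minimal PyTorch sketch of the hybrid CTC/attention objective (Eq. 3) and the shallow-fusion score used in joint decoding (Eq. 4). Tensor shapes, the blank index, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the hybrid CTC/attention training loss (Eq. 3) and the
# joint decoding score (Eq. 4). Shapes and the blank index are assumptions.
import torch
import torch.nn.functional as F

def hybrid_loss(ctc_logits, input_lens, s2s_logits, targets, target_lens, lam=0.2):
    # ctc_logits: (T, B, V) frame-level scores; s2s_logits: (B, L, V)
    # teacher-forced decoder scores; targets: (B, L) character ids.
    log_probs = F.log_softmax(ctc_logits, dim=-1)
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens, blank=0)
    ce = F.cross_entropy(s2s_logits.transpose(1, 2), targets)
    # L = lambda * L_CTC + (1 - lambda) * L_CE, with lambda = 0.2 in training.
    return lam * ctc + (1 - lam) * ce

def joint_score(ctc_logp, s2s_logp, lam=0.1):
    # Shallow fusion of hypothesis-level log-probabilities (Eq. 4); beam
    # search keeps the hypothesis maximizing this score (lambda = 0.1 here).
    return lam * ctc_logp + (1 - lam) * s2s_logp
```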
[
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other"
] |
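The babble-noise augmentation described in the AVSR paper above (additive noise at 5dB SNR with probability 0.25) can be sketched as follows. The scaling recipe and all names are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of SNR-controlled additive-noise augmentation on raw
# waveforms. The exact mixing recipe is an assumption for illustration.
import numpy as np

def add_noise_at_snr(speech, noise, snr_db=5.0, prob=0.25, rng=np.random):
    """Mix noise into speech so the speech-to-noise power ratio equals
    snr_db, applied with probability prob (p_n = 0.25 in the paper)."""
    if rng.random() > prob:
        return speech
    noise = np.resize(noise, speech.shape)
    p_speech = np.mean(speech ** 2) + 1e-12
    p_noise = np.mean(noise ** 2) + 1e-12
    # Choose scale so that 10 * log10(p_speech / (scale**2 * p_noise)) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

# Babble noise could be synthesized by averaging several samples, e.g.:
# babble = sum(np.resize(w, length) for w in waveforms[:20]) / 20
```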
[
"Meaning conflation deficiency is one of the main limiting factors of word representations which, given their widespread use at the core of many NLP systems, can lead to inaccurate semantic understanding of the input text and inevitably hamper the performance.",
"Sense representations target this problem.",
"However, their potential impact has rarely been investigated in downstream NLP applications.",
"Through a set of experiments on a state-of-the-art reverse dictionary system based on neural networks, we show that a simple adjustment aimed at addressing the meaning conflation deficiency can lead to substantial improvements.",
"Words are often the most fine-grained meaning bearing components of NLP systems.",
"As a standard practise, particularly for neural models, the input text is treated as a sequence of words and each word in the sequence is represent with a dense distributional representation (word embed-ding).",
"Importantly, this setting ignores the fact that a word can be polysemous, i.e., it can take multiple (possibly unrelated) meanings.",
"Representing a word with all its possible meanings as a single point (vector) in the embedding space, the so-called meaning conflation deficiency (Camacho-Collados and Pilehvar, 2018), can hinder system's semantic effectiveness.",
"To address this deficiency, many techniques have been put forward over the past few years, the most prominent of which is sense representation or multi-prototype embedding (Schutze, 1998; Reisinger and Mooney, 2010).",
"However, as a general trend, these representations are usually evaluated either on generic benchmarks, such as word similarity, or on sense-centered tasks such as Word Sense Disambiguation, leaving their potential impact on downstream word-based systems unknown.",
"In this paper, we provide an analysis to highlight the importance of addressing the meaning conflation deficiency.",
"Specifically, we show how distinguishing different meanings of a word can facilitate a more accurate semantic understanding of a state-of-the-art reverse dictionary system, reflected by substantial improvements in recall and generalisation power.",
"Reverse dictionary, conceptual dictionary, or concept lookup is the task of returning a word given its description or definition (Brown and McNeill, 1966; Zock and Bilac, 2004).",
"For example, given a crystal of snow, the system has to return the word snowflake .",
"The task is closely related to the tip of the tongue problem where an individual recalls some general features about a word but cannot retrieve that from memory.",
"Therefore, a reverse dictionary system can be particularly useful to writers and translators when they cannot recall a word in time or are unsure how to express an idea they want to convey.",
"Our experiments are based on the reverse dictionary model of Hill et al. (2016) which leverages a standard neural architecture in order to map dictionary definitions to representations of the words defined by those definitions.",
"Specifically, they proposed two neural architectures for mapping the definition of word t to its word embedding e t .",
"Let D t be the sequence of words in t 's definition, i.e., D t = { w 1 , w 2 , . . . , w n } , with their corresponding embeddings { v 1 , v 2 , . . . , v n } .",
"The two models differ in the way they process D t .",
"In the bag-of-words ( BoW ) model, D t is taken as a bag of words, i.e., the representation of the definition is encoded by adding the word embeddings of all its content words, i.e., (cid:80) ni =1 v i .",
"The model learns, using a fully-connected layer, a matrix for transforming the encoded representation to the target word's embedding e t .",
"The BoW model is not sensitive to the order of words in D t .",
"This might be crucial for an accurate semantic understanding.",
"The Recurrent Neural Network ( RNN ) model alleviates this issue by encoding the input sequence using an LSTM architecture (Hochreiter and Schmidhuber, 1997).",
"Similarly to the BoW model, a dense layer maps the encoded representation to the target word's embedding.",
"In both cases, the goal is to map a given definition to the corresponding target word's embedding e t , computed using Word2vec (Mikolov et al., 2013) and independently from the training of the main model.",
"Two cost functions were tested: (1) the cosine distance between the estimated point in the target space ( e t ) and e t , and (2) the rank loss which contrast the choice of e t with a random choice for a randomly-selected word from the vocabulary other than t .",
"The reverse dictionary system takes advantage of a standard architecture which has proven effective in various NLP tasks.",
"However, similarly to many other word-based models, the system ignores that the same word can have multiple (po-tentially unrelated) meanings.",
"In fact, it tries to map multiple definitions, with different semantics, to the same point in the target space.",
"For instance, the three semantically unrelated definitions of crane : lifts and moves heavy objects, large long-necked wading bird, and a small constellation in the southern hemisphere will have similar semantic interpretation by the system.",
"This word-level meaning conflation can hamper the ability of the system in learning an accurate mapping function.",
"In what follows in this paper, we will illustrate how a simple sense level distinction can facilitate a more accurate semantic understanding for the reverse dictionary system, hence leading to significant performance improvements.",
"Let t be an ambiguous word with three meanings; hence, three distinct definitions D t 1 , D t 2 , and D t 3 .",
"The original model of Hill et al. (2016) maps all these definitions to e t .",
"We mitigate the meaning conflation deficiency through a sense-specific mapping function that obtains distinct interpretations for individual definition, hence mapping them to different points in the target space: s t 1 , s t 2 , and s t 3 .",
"Specifically, in our experiments we leveraged DeConf (Pilehvar and Collier, 2016).",
"DeConf is a WordNet-based sense representation technique which receives a set of pre-trained word embeddings and generates embeddings for individual word senses in the same semantic space, hence generating a combined space of words and word senses.",
"DeConf performs a set of random walks on WordNet's semantic network and extracts for each sense a set of sense biasing words B s .",
"A sense biasing word for the i th meaning of a target word t is a semantically related word to that specific sense of the word ( s t i ).",
"For each word sense in WordNet we obtain the corresponding B s .",
"Then, the embedding for a specific word sense s is computed as: s = || e w + (cid:88) b B s exp( i ) e b || , (1) where is a decay parameter and e w is the embedding of corresponding lemma of sense s .",
"In our experiments, as for word embeddings we used the 300-dimensional Word2vec embeddings, trained on the Google News corpus.",
"1 The same set was used as input to DeConf.",
"As a result of this process, around 207K additional word senses were introduced in the space for the 155K unique words in WordNet 3.0.",
"It is widely acknowledged that sense distinctions in WordNet inventory are too fine-grained for most NLP applications (Hovy et al., 2013).",
"For instance, for the noun star , WordNet 3.0 lists eight senses, among which two celestial body senses (as an astronomical object and that visible, as a point of light, from the Earth), and three person senses (skillful person, lead actor, and per-forming artist).",
"This fine level of sense distinction is often more than that required by the target downstream application (Rud et al., 2011; Sev-eryn et al., 2013; Flekova and Gurevych, 2016).",
"In our experiments, we used WordNet's lexicographer files (lexnames 2 ) in order to reduce sense granularity.",
"Created by the curators of WordNet 1 https://code.google.com/archive/p/ word2vec/ 2 https://wordnet.princeton.edu/man/ lexnames.5WN.html WN-seen WN-unseen Concept Mapping top-10 top-100 top-10 top-100 top-10 top-100 Supersense RNN cosine 0.656 0.824 0.150 0.310 0.230 0.480 ranking 0.694 0.836 0.162 0.352 0.335 0.630 BoW cosine 0.642 0.820 0.250 0.416 0.280 0.590 ranking 0.706 0.872 0.310 0.474 0.390 0.735 Sense RNN cosine 0.742 0.854 0.164 0.336 0.275 0.505 ranking 0.668 0.840 0.180 0.372 0.325 0.615 BoW cosine 0.678 0.826 0.290 0.456 0.300 0.620 ranking 0.692 0.848 0.292 0.470 0.380 0.735 Word RNN cosine 0.462 0.652 0.056 0.162 0.215 0.400 ranking 0.534 0.728 0.086 0.188 0.190 0.475 BoW cosine 0.446 0.652 0.136 0.264 0.175 0.465 ranking 0.562 0.740 0.160 0.292 0.320 0.600 Baseline -0.104 0.346 0.054 0.158 0.065 0.300 Table 1: Accuracy performance (@10/100) of the original (word-based) reverse dictionary system and its sense-and supersense-based improvements on different datasets.",
"during its development, these files organize WordNet synsets into 45 groups (such as food, animal, event, and emotion) according to their syntactic and logical properties.",
"These groupings are usually referred to as supersenses .",
"Using supersenses, the celestial and person meanings of star are grouped into two main groups.",
"A supersense embedding e ss in our experiments is simply computed as a normalized average (centroid) of its contained sense embeddings, i.e., e ss = || (cid:80) s ss e s || .",
"This reduces the average number of senses for polysemous words in WordNet from 2.9 to 1.8.",
"We carried out evaluations on the three reverse dictionary datasets created by Hill et al. (2016): WordNet definitions and single-sentence descrip-tions written for a set of frequent words ( concept mapping ).",
"They proposed two different versions of the WordNet dataset: WN-seen , in which a test instance is already observed during training, and WN-unseen , in which test instances are excluded from the training data.",
"The former dataset is targeted at evaluating the ability of the system to recall a previously encoded information.",
"We experimented with three variants of the reverse dictionary system: the original word-based model and the two proposed sense-based variants, based on WordNet senses and supersenses.",
"Table 1 reports accuracy performance for four different configurations of the system (BoW and RNN definition composition and cosine and ranking loss; cf. Section 2.1) on the three datasets.",
"In the last row, we also report results for the unsupervised baseline of Hill et al. (2016) which adds the embedding of words in the input definition and finds the nearest embedding in the target space.",
"Results reported in the Table clearly highlight that addressing the meaning conflation deficiency in the system has led to significant performance improvements (word vs. sense and supersense set-tings).",
"This is observed consistently across all the three datasets and for both sense-based models.",
"The better semantic understanding of the system is reflected by its better recall of seen test instances (WN-seen) and better generalisation to unseen and out-of-domain data (WN-unseen and concept mapping).",
"The absolute top-10 accuracy improvements of the ranking-BoW supersense model over the best corresponding word-based configurations are: 14.4% (WN-seen), 15% (WN-unseen), and 7% (concept mapping).",
"Among the two proposed systems, supersenses prove to be more effective, suggesting that the fine-grained sense distinctions in WordNet might not be necessary for an accurate reverse dic-3 The experiments are based on the implementation available at https://github.com/fh295/DefGen2 .",
"tionary mapping, corroborating previous findings (Flekova and Gurevych, 2016).",
"Our results are also in line with the findings of Hill et al. (2016) that the reverse dictionary system performs best with the bag-of-words (BoW) input encoding and the ranking loss.",
"One of the fundamental differences between the two input encodings lies in their sensitiveness to order: RNNs are sensitive to the order of words in a given sequence whereas permuting words in the sequence does not alter BoW's encoding.",
"Hill et al. (2016) suggested that it is often possible to retrieve a concept even if the words in its corresponding definition are shuffled.",
"This can partly explain the strikingly good relative performance of the BoW model.",
"During our analysis of system outputs, we observed many examples in which the word-based model was unable to retrieve an ambiguous word since the definition was referring to one of its less frequent meanings.",
"For instance, the word dressing might refer to different concepts such as getting dressed or savory dressing for salads.",
"Having a conflated understanding of dressing , the word-based model was unable to retrieve the salad meaning.",
"Other similar examples include infrequent senses of party , defined as an organization to gain political power, and partition , defined as a vertical structure that divides or separates.",
"In both cases, the sense-based model improves the original word-based one, in which the system is unable to retrieve the intended word.",
"Numerous such examples were observed during our analysis of the results, highlighting the important limitation of word-based models for their inherent bias towards more frequent usages.",
"Moreover, as a side benefit, sense embeddings provide parts of speech distinction, unlike common pre-trained word embeddings which conflate all parts of speech to a single token.",
"For instance, the word-based model is unable to recall the nominal bear because it has a conflated understanding of the word which includes all its senses, particularly the dominant verb meaning.",
"4 bear massive plantigrade carnivorous or omnivorous mammals with long shaggy coats and strong claws word: critter, rabbit, squirrel, wolf sense: bear , mustelid, bruin baseline: carnivorous, omnivorous.",
"The same applies to the open land meaning of common , which is a less frequent (nominal) meaning of the word which is usually used as an adjective for concepts such as ordinary or usual.",
"Additionally, word embeddings are insensitive to fine-grained semantic distinctions, such as antonymy, due to their construction nature.",
"However, the sense representations used in our experiments (DeConf) were constructed by exploiting the knowledge encoded in WordNet.",
"Hence, they benefit from the rich semantic and ontological knowledge provided by the resource (such as relation types).",
"Some of the improvements can be attributed to this property of sense embeddings.",
"unanticipated not anticipated word: unavoidable, inevitable, plausible sense: unforeseen, unanticipated , unpredicted baseline: not, anticipated, expected However, there are cases in which the word-based model provided more accurate results.",
"For instance: service work done by one person or group that benefits another word: service , caring sense: organisation, dependant, programme Our analysis showed that most of these errors were due to fine-grained sense distinctions in WordNet or obscure meanings.",
"For instance, one of the senses 5 of organisation is semantically re-4 In our analysis, we found that improvements are mostly due to addressing semantic conflation rather than ambiguities in parts of speech.",
"5 The 6 th sense of organisation in WordNet 3.0, defined as the activity or result of distributing or disposing persons or things properly or methodically.",
"lated (also close in WordNet's graph) to the meaning of service in the example.",
"This would suggest the need for more accurate sense representations and highlight the fact that the fine-granularity of senses should be better adjusted to the underlying task.",
"Moreover, it corroborates our finding that the coarse-grained supersenses are more suitable in the task of reverse dictionary mapping.",
"We leave the experiments with other sense representation techniques to future work.",
"Sense representations address the meaning conflation deficiency of their word-based counterparts by computing distinct representations for individual meanings of words, usually referred to as word senses.",
"Sense distinctions might be given by an external sense inventory, such as WordNet (Fell-baum, 1998).",
"An inventory-based sense representation technique exploits the knowledge encoded in the resource to construct representations (Rothe and Schutze, 2015; Jauhar et al., 2015; Pilehvar and Collier, 2016).",
"Alternatively, senses can be automatically induced in an unsupervised manner by analyzing the diversity of contexts in which a word appears (Schutze, 1998; Reisinger and Mooney, 2010; Huang et al., 2012; Neelakantan et al., 2014; Guo et al., 2014; Suster et al., 2016).",
"Regardless of how senses are obtained, the integration of sense representations into NLP systems is not a straightforward process.",
"Hence, they have often been evaluated on artificial tasks such as word similarity.",
"This is also due to lack of suitable evaluation benchmarks for sense representation techniques.",
"Pilehvar and Camacho-Collados (2019) recently proposed a dataset, The Word-in-Context (WiC), which provides a challenging, yet reliable, benchmark for the purpose.",
"Few attempts have been made at the integration of sense representation into downstream applications.",
"Li and Jurafsky (2015) experimented with unsupervised sense representations in tasks such as part-of-speech tagging and named entity recognition, with mixed results.",
"Also related to our work are the proposals of Flekova and Gurevych (2016) and Pilehvar et al. (2017) to disambiguate the input text and replace word embeddings with sense embeddings for the intended senses.",
"Our results for supersenses corroborates the findings of Pilehvar et al. (2017) who found reducing fine-granularity of senses beneficial to some settings.",
"A more recent branch of research investigates the construction of dynamic word embeddings that can adapt according to the context in which they appear (Peters et al., 2018; Devlin et al., 2018).",
"One of the objectives of this research has been to bypass the integration difficulties of sense representations into downstream models.",
"These so-called contextualised word embeddings can easily be replaced with conventional static word embeddings in neural-based NLP systems.",
"This integration has proven beneficial to a wide range of NLP applications.",
"Pilehvar and Camacho-Collados (2019) carried out an analysis on the sense distinguishing capability of contextualised embeddings, showing that, despite their successful application to downstream applications, these embeddings are not very powerful in capturing distinct meanings of words.",
"We provided an analysis on the impact of addressing the meaning conflation deficiency of word embeddings on the performance of a downstream NLP application, i.e., reverse dictionary mapping.",
"Through a set of experiments we showed that a simple migration from words to senses can sig-nificantly improve the ability of the system in semantic understanding, leading to consistent performance boost.",
"In future work, we plan to evaluate sense integration in other NLP applications, such as Machine Translation, in the light of (Liu et al., 2018), and question answering."
] |
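As referenced at Eq. (1) above, the following NumPy sketch illustrates the sense-embedding construction and the supersense centroid. The decay form exp(-delta * i) over the rank i of each biasing word follows the reconstructed formula; this is an illustrative re-implementation under those assumptions, not DeConf's released code.

```python
# Minimal sketch of the DeConf-style sense embedding (Eq. 1) and the
# supersense centroid; names and the decay form are assumptions.
import numpy as np

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def sense_embedding(lemma_vec, bias_vecs, delta=1.0):
    # bias_vecs: ranked list of sense-biasing word embeddings B_s; earlier
    # (more related) words receive exponentially larger weights.
    s = lemma_vec.astype(float).copy()
    for i, b in enumerate(bias_vecs, start=1):
        s += np.exp(-delta * i) * b
    return normalize(s)

def supersense_embedding(sense_vecs):
    # Normalized centroid of the sense embeddings grouped by a supersense.
    return normalize(np.sum(sense_vecs, axis=0))
```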
[
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"method",
"result",
"method"
] |
[
"Knowledge-grounded conversation (KGC) shows great potential in building an engaging and knowledgeable chatbot, and knowledge selection is a key ingredient in it.",
"However, previous methods for knowledge selection only concentrate on the relevance between knowledge and dialogue context, ignoring the fact that age, hobby, education and life experience of an interlocutor have a major effect on his or her personal preference over external knowledge.",
"Without taking the personalization issue into account, it is difficult to select the proper knowledge and generate persona-consistent responses.",
"In this work, we introduce personal memory into knowledge selection in KGC to address the personalization issue.",
"We propose a variational method to model the underlying relationship between one's personal memory and his or her selection of knowledge, and devise a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping is included in a closed loop so that they could teach each other.",
"Experiment results show that our method outperforms existing KGC methods significantly on both automatic evaluation and human evaluation.",
"Open-domain dialogue system often suffers from safe response (Li et al., 2015; Zhang et al., 2019) problem as they could only refer to the context when generating a response.",
"To alleviate this, knowledge-grounded conversation (KGC) is proposed to introduce external fact and real-world commonsense as prior knowledge (Zhou et al., 2018a; Dinan et al., 2019; Zhao et al., 2020a), such that a dialogue system is able to ground the conversation with the provided knowledge and therefore The first two authors contribute equally.",
"Xueliang Zhao is responsible for the design of the methodology and algorithm.",
"Tingchen Fu is responsible for the implementation and experiment.",
"The order is decided by a coin flip.",
"* Corresponding author: Rui Yan ([email protected]) generate informative and engaging responses.",
"As external knowledge supplements the background to the inputs and decides what to say, knowledge selection is a key ingredient in KGC.",
"Numerous methods have been developed to tackle the knowledge selection problem by sequential latent variables (Kim et al., 2020; Meng et al., 2020), reinforcement learning (Zhao et al., 2020b), or expectation maximization algorithm (Li et al., 2020).",
"In spite of the progress in this task, knowledge selection remains an unsolved problem as the precision is still far from satisfactory in Wizard of Wikipedia (Dinan et al., 2019) and other benchmarks in KGC (Gopalakrishnan et al., 2019), which also hinders the optimization of subsequent response generation models.",
"A crucial point is, they often make assumption that the golden knowledge is distinguishable as long as the dialogue context is known, yet this is not always held true because there exists a one-to-many relationship in conversation and the past utterance history in a dialogue session is insufficient to decide the knowledge selection or the future trend of a dialogue.",
"As is shown in Figure 1, personalization is a key to success in the task because knowledge selection is a personal or subjective process in na-ture.",
"When people communicate with each other, their perception of dialogue context will evoke their past memory about relevant life experience, taste and values, which we refer to as personal memory .",
"The aroused fragment of personal memory further guides their interest and preference for different knowledge.",
"Motivated by this, we postulate a new task named personalized KGC, introducing personalization into knowledge-grounded dialogue to encourage more human-like knowledge selection.",
"Importing persona memory into knowledge selection is a non-trivial task.",
"One of the challenge is concretization of personal memory.",
"Personal memory is an abstract concept related to user-specific experience, which is difficult to depict or model.",
"Though it has been discussed in open-domain dialogue (Li et al., 2016; Zhang et al., 2018), no previous research sheds light on the personalization issue in KGC and there exists no dialogue dataset featured with external facts and personal memory at the same time.",
"Besides, there is no annotated label to indicate which knowledge candidate a person will choose based on his or her personal memory.",
"Namely, the mapping between personal memory and knowledge selection is highly unconstrained without golden label.",
"Intuitive resolution like treating personal memory as additional knowledge is sub-optimal because of dependency between knowledge and personal memory, as is shown in our experiments.",
"To address the above issue, we construct a KGC dataset featured with personalized memory repository, collecting user-specific utterance history under multiple types of context, which is a reflection of one's personal memory.",
"And to discover the underlying relationship between the dialogue context, personal memory and knowledge, we propose a variational method and introduce two latent variables Z p and Z k to indicate the fragment of personal memory to evoke and the knowledge candidate to select respectively.",
"And to model the mapping from Z p to Z k , we introduce an inverse mapping as a dual task and employ dual learning to allow the two mappings to teach each other.",
"The motivation behind this is intuitive: The reconstruc-tion of personal memory from selected knowledge candidate is natural and easy if the mapping from personal memory to knowledge is accurate.",
"Extensive experiment shows that our methods outperform competitive baselines in both automatic evaluation and human evaluation, justifying the importance of introducing personal memory and the effect of the dual learning mechanism empirically.",
"The contributions of this work are three-fold: (1) We explore the personalization issue of the knowledge selection task in KGC and construct a dataset featured with user-specific personal memory to benefit relevant research in the future.",
"We are the first to explore the possibility of introducing personal memory into KGC.",
"(2) We propose a novel variational method and introduce two latent variables to model the interdependency between the persona and knowledge.",
"Besides, we employ dual learning to optimize the relationship between the dialogue context, personal memory and knowledge in a unified framework.",
"(3) We conduct extensive experiments and verify the proposed methods empirically.",
"Both the automatic and human evaluation evidence the efficacy of our proposed method.",
"There is a substantial literature in the field of knowledge-grounded conversation.",
"With the grounding of external knowledge in format of knowledge graph (Zhou et al., 2018a; Wu et al., 2019), document (Ghazvininejad et al., 2018; Zhou et al., 2018b; Zhao et al., 2019) or visual background (Das et al., 2017), it is regarded as a critical method towards intelligent dialogue system.",
"Nowadays, existing methods in KGC often share a paradigm that decomposes the task into two related sub-problems, namely knowledge selection and utterance generation (Kim et al., 2020).",
"In this work, we mainly focus on the knowledge selection task.",
"To this end, a great deal of methods have been proposed to retrieve the most relevant knowledge by memory network (Ghazvininejad et al., 2018), sequential latent variables (Kim et al., 2020; Meng et al., 2020), reinforcement learning (Zhao et al., 2020b) and so on.",
"A recent work gives attention to the expression style of knowledge (Zhao et al., 2021).",
"However, they only focus on the decoding phase and no methods shed light on the personalization issue of knowledge selection, to our best knowledge.",
"(2016), dual learning is a semi-supervision learning scheme aiming at utilizing large-scale unlabeled data.",
"Together with its newly appeared variants in recent years (Xia et al., 2017, 2018; Wang et al., 2019), dual learning has been successfully applied in neural machine translation (Xia et al., 2017; He et al., 2017), image-image-translation (Yi et al., 2017; Lin et al., 2018), sentiment analysis (Xia et al., 2017), automatic speech recognition (Ren et al., 2019), question answering (Tang et al., 2017), and knowledge-grounded dialogue (Meng et al., 2020).",
"Our work is related to dual learning as well.",
"First proposed in neural machine translation by He et al. (2016), dual learning is a semi-supervision learning scheme aiming at utilizing the large scale unlabeled data.",
"In this work, we apply dual learning to model the inter-dependency relationship between one's personal memory and his or her choice of knowledge.",
"Suppose we have a KGC dataset D with N case, and every case is in format of ( C, K , R ) , where C = [ u 1 , u 2 , , u l C ] is the context of the dialogue with l C tokens in total, K = { K 1 , K 2 , , K |K| } is a set of |K| knowledge candidates.",
"And R = [ r 1 , r 2 , , r l R ] is a response in this conversation corresponding to a specific user with unique user id.",
"Different from the original KGC task, we have a memory repository M .",
"For every interlocutor corresponding to the response, a set of his or her personal memory P = { P 1 , P 2 , , P |P| } composed of |P| customized utterance history could be retrieved from the memory repository.",
"Our goal is to learn a probabilistic model p ( R | C, K , P ) that could generate a personalized and informative response based on personal memory and knowledge.",
"Figure 2 gives a graphical model of our methods.",
"As is shown, the core of our proposed method is five probabilistic models to calculate the prior and posterior distribution of Z p , Z k and an auxiliary distribution of Z p .",
"During training, we devise an unsupervised learning scheme, in which we optimize the distribution of two latent variables Z p and Z k by dual learning.",
"To be more specific, we first sample a Z p from the posterior distribution q ( Z p | C, R ) , and then calculate the forward map distill dual condition KL-div KL-div dual loop VAE , , ) ( |, , ) ( |) , , ) ( |, ) Figure 2: A graphical representation of our proposed method.",
"ping from memory to knowledge q ( Z k | C, R, Z p ) , from which we sample a Z k .",
"The reward is designed as the probability of reconstructing the selected memory fragment by the auxiliary distribution ( Z p = Z p | C, R, Z k ) .",
"By maximizing the reward, the primal task and the auxiliary task could benefit each other.",
"The gains of the auxiliary distribution is distilled to q ( Z p | C, R ) , such that the two posterior distribution and the auxiliary distribution form a closed loop.",
"Besides, the prior distribution is forced to get close to the posterior distribution via KL-divergence.",
"In the inference phase, the prior distribution of Z p is calculated at first, from which we sample and activate a personal memory fragment.",
"After that, the woken memory fragment is used to decide the prior knowledge distribution p ( Z k | C ) .",
"Finally, the knowledge sampled from Z k together with the memory fragment is sent into a generator to synthesize a response.",
"Note that the golden response is only involved in the training phase.",
", and are all learnable parameters.",
"To make the latent variables interpretable, we set the latent space of Z p and Z k as the number of memory fragments or knowledge candidates to choose from, and each sampling corresponds to a single piece of memory fragment or a knowledge candidate.",
"Furthermore, motivated by human cognitive process, the aroused personal memory fragment implies one's preference for different external knowledge, which influences the likelihood of choosing different knowledge.",
"In light of this, 3903 the prior distribution of ( Z p , Z k ) is factorized as: p ( Z p , Z k ) = p ( Z k | Z p ) p ( Z p ) (1) And to calculate their probability distribution, we adopt BERT (Devlin et al., 2018) as the backbone of our method to obtain a dense representation of dialogue context, response, candidate knowledge sentence or personal memory fragment.",
"Take the calculation of the prior distribution p ( Z k | C, Z p ) as an example.",
"We first concatenate the context C , the memory fragment P indicated by the sampled Z p , and the i -th candidate knowledge K i together as a long sequence.",
"A special [CLS] token is prepended at the beginning of the sequence and [SEP] is inserted to separate different utterances: I = u 1 , u 2 , u l C , p 1 , p 2 , , p l P , k 1 , k 2 , k l Ki , (2) where l C , l P and l K i are the number of tokens in the context, memory facet and knowledge candidate respectively.",
"Then the embedding layer will convert I into input representations, which is the sum of the corresponding token embedding and position embedding.",
"Thereafter, the BERT encoder performs multi-head attention on the input representation to obtain a dense representation.",
"There are n identical layers in the BERT encoder, and for each layer, the multi-head attention could be formulated as H l = FFN(MultiHead( Q l 1 , K l 1 , V l 1 )) , (3) where FFN( ) is a feed-forward network and we use Q l 1 , K l 1 , and V l 1 to denote the query matrix, key matrix and value matrix after the l 1 th layer respectively.",
"For self-attention, we have Q l 1 = K l 1 = V l 1 = H l 1 , (4) where H l means the hidden state at the l -th layer.",
"Specially, H 0 is the input embedding and H n is the final output of the BERT.",
"We use the vector corresponding to the position of the special [CLS] token in H n as the representation of the i -th knowledge candidate, which is referred to as h i .",
"Then the distribution of Z k is calculated as p ( Z k = i | C, Z p ) = exp( f ( h i )) |K| (cid:80) j exp( f ( h j )) , (5) where f ( ) is a multi-layer perceptron.",
"The prior and posterior distribution of Z k and Z p are calculated in a similar way.",
"The only difference lies in the constitution of input sequence I : For the prior distribution of Z p , I is the concatenation of dialogue context and a candidate personal memory facet: I = u 1 , u 2 , u l C , p 1 , p 2 , , p l P (6) And to calculate the posterior distribution, we insert the response tokens behind the dialogue context tokens as the response usually contains clue indicating the selected knowledge and memory.",
"Namely, to compute q ( Z p | C, R ) , the posterior of Z p , the input is: I = u 1 , u 2 , u l C , r 1 , r 2 , , r l R , p 1 , p 2 , , p l P (7) And for q ( Z k | C, R, Z p ) : I = u 1 , u 2 , u l C , r 1 , r 2 , , r l R , p 1 , p 2 , , p l P , k 1 , k 2 , , k l K (8) Normally, the generator g of our method could be specified as any large-scale pre-trained language model.",
"Here we define the generator as GPT-2 (Radford et al., 2019).",
"Previous methods often synthesize a response merely based on the dialogue context and the selected knowledge, taking no consideration of the persona of the interlocutor, which may lead to an inconsistency in persona.",
"Different from that, we input the sampled personal memory fragment and the sampled knowledge candidate into GPT-2 all together with the dialogue context.",
"Intuitively, personal memory fragment implies why the knowledge is paid attention to and underlying relevance between the persona of the interlocutor and the knowledge, which endows the generator to generate persona-consistent and knowledgeable responses: g ( R ) = g ( R | C, Z p , Z k ) = l R (cid:89) i =1 g ( r i | C, Z p , Z k , r <i ) (9) 3.4 Learning Details Directly maximizing the marginal log-likelihood of generating the correct response g ( R | C, Z p , Z k ) requires integrating over all possibilities of Z k and Z p , which is more than time-consuming.",
"Inspired by variational inference, we introduce a variational posterior as the true posterior is intractable.",
"Thereby, instead of directly optimizing 3904 Algorithm 1 The proposed learning algorithm.",
"1: Input: Training KGC dataset D , memory repository M 2: Warm up p ( Z p ) , p ( ZK | Z p ) , q ( Z p | R ) and q ( Z k | R, Z p ) on D .",
"3: while not converge do 4: Sample a mini-batch { ( C, K , R ) } from D .",
"5: Retrieve the user-specific personal memory P from the memory repository.",
"6: Calculate the prior personal memory distribution p ( Z p ) with C .",
"7: Sample a Z p and then calculate the prior distribution of knowledge p ( Z k | Z p ) .",
"8: Calculate the posterior memory distribution q ( Z p | R ) based on C and R , and then sample a Z p from that.",
"9: Calculate the posterior knowledge distribution q ( Z k | R, Z p ) , and then sample a Z k from that.",
"{The primal task} 10: Compute the reward Re 1 as the Reconstruct probability ( Z p = Z p | Z k ) .",
"11: Update according to Eq.",
"16.",
"12: Calculate the auxiliary memory distribution ( Z p | R, Z k ) based on the pseudo knowledge label Z k , and sample a Z p from",
"that.{The dual task} 13: Compute the reward Re 2 as q ( Z k = Z k | Z p ) .",
"14: Update according to Eq.",
"15.",
"15: Update according to Eq.",
"10.",
"16: Update according to Eq.",
"17.",
"17: end while 18: return The prior distribution p ( Z p ) and p ( ZK | Z p ) the marginal log-likelihood, we derive an evidence lower bound objective to maximize: LELBO = E q ( Z k | Z p ) q ( Z p ) g ( R | C, Z p , Z k ) E q ( Z p ) KL ( q ( Z k | Z p ) || p ( Z k | Z p )) KL ( q ( Z p ) || p ( Z p )) (10) where q ( Z k | Z p ) , q ( Z p ) , p ( Z p ) , p ( Z k | Z p ) are shorthand for q ( Z k | C, R, Z p ) , q ( Z p | C, R ) , p ( Z p ) and p ( Z k | C, Z p ) respectively.",
"A stepwise derivation could be found in the supplementary materials.",
"The forward mapping from personal memory to knowledge candidates is relatively implicit and obscure, partially because the customized utterance history contains unwanted noise.",
"As a result, there is a tendency that Z p is ignored and p ( Z k | Z p , C ) is degenerated into p ( Z k | C ) , which we refer to as the vanishing memory .",
"To address this issue, inspired by the idea of dual learning (He et al., 2016), we introduce an inverse mapping from knowledge candidate to personal memory as a dual task, which is depicted by the auxiliary distribution ( Z p | C, R, Z k ) .",
"Intuitively, there is a natural duality between the mapping from personal memory to knowledge and the inverse mapping.",
"Therefore, if the forward mapping makes a good inference about the knowledge to choose, the inverse mapping is able to map it back to personal memory, which means that the memory is not vanishing.",
"And before the dual learning procedure, the primal task and the dual task are warmed up to speed up convergence and alleviate error accumulation in the dual learning process, following the idea of He et al. (2016) and Meng et al. (2020).",
"Namely, we construct pseudo knowledge label P and persona label K based on their similarity to the response.",
"K = max K i K Sim ( K i , R ) P = max P i P Sim ( P i , R ) (11) Then, both the primal task and the dual task are warmed up with a traditional maximum likelihood estimation objective.",
"After the warm-up procedure, for each iteration, we first sample a Z p according to its posterior distribution q ( Z p | C, R ) .",
"Then the forward mapping calculates the probability distribution q ( Z k | C, R, Z p ) , from which we sample a Z k .",
"The reward for the forward mapping is defined as the probability that the auxiliary distribution recovers the Z p .",
"Mathematically, we have Re 1 = ( Z p = Z p | C, R, Z k ) (12) Symmetrically, the reward for the auxiliary distribution is the prediction probability of the golden knowledge by the forward mapping: Re 2 = q ( Z k = Z k | C, R, Z p ) , (13) where Z k is corresponding to the pseudo knowledge label.",
"And the objective of the dual learning is to maximize the reward: L dual = ED [ Re 1 + Re 2 ] (14) For reward maximization, we optimize the parameter through policy gradient method (Sutton et al., 2000): L dual = log ( Z p = Z p | C, R, Z k ) Re 2 .",
"(15) L dual = log q ( Z k = Z k | C, R, Z p ) Re 1 .",
"(16)",
"Finally, the gains of the dual task is distilled into the posterior distribution of Z p via a cross-entropy loss: L dis = KL ( T ( Z p | C, R, Z k ) || q T ( Z k | C, R, Z p )) + log q ( Z p = Z p | C, R, Z k ) , (17) 3905 where is a hyper-parameters to balance the weights of two parts and the superscript T means that the distribution is normalized at temperature T .",
"Thus, the three probabilistic models form a closed loop in which each component is trained alternatively.",
"The full procedure of our proposed learning algorithm is concluded in Algorithm",
"1. 4 Experiment 4.1 Dataset Since existing dataset like CMU_DoG (Zhou et al., 2018b) or Holl-E (Moghe et al., 2018) do not contain information about personal memory, we establish a new KGC dataset equipped with a memory repository.",
"The dataset is constructed based on Reddit (Baumgartner et al., 2020).",
"In detail, we download the conversational data on the PushShift dump of Reddit ranging from 2011 to the first half of 2015 and divide them into a training set, a validation set and a test set according to the date.",
"To construct a memory repository, we maintain a dictionary where the key is a long string hashed from the user account name and the value is a set of utterances of the user.",
"Since it is a repository for user-specific utterances, it may inevitably contain false beliefs or subjective opinions.",
"We shall leave this issue for future work.",
"Elaborated data filtering is conducted to ensure: (1) We only keep utterances from users that have at least 5 utterances in the memory repository; (2) Utterances that are too long or too short are filtered; (3) Paraphrase tool (Damodaran, 2021) is applied on every utterances to avoid tracing the utterances back to real reddit users.",
"The statistics of our dataset is shown in Table",
"1. And the code is available at https://github.",
"com/Lucasftc/PersonaKGC .",
"A few examples is shown in Appendix A.3.",
"To benefit future research and meanwhile avoid possible malicious abuse, the dataset is available upon request from the authors 1 .",
"To verify the effectiveness of the proposed methods, we compare our methods with baselines in KGC.",
"Meanwhile, since our proposed method makes use of personal memory to generate persona-consistency response, we also compare our methods with baselines in personalized dialogue.",
"Generative Profile Memory Network (GPMN) (Zhang et al., 2018) is a method in personalized dialogue which employs Memory Network along with persona information.",
"Transformer Memory Network (TMN) (Dinan et al., 2019) adopts the traditional Memory Network with transformer architecture and introduces the knowledge selection loss.",
"Transfertransfo (Wolf et al., 2019) is a combination of a transfer learning based training scheme and a high-capacity transformer model and achieves the best results in the Conversational Intelligence Challenge",
"2. Sequential Knowledge Transformer (SKT) (Kim et al., 2020) utilizes sequential latent variables for knowledge selection.",
"We use the pseudo knowledge labels for the golden knowledge label in implementation.",
"KnowledGPT (Zhao et al., 2020b) puts the knowledge selector and the response generator in a framework and employ reinforcement learning and curriculum learning to accomplish the state-of-the-art performance in KGC.",
"KnowledGPT+M , a variant of KnowledGPT where we treat personal memory as knowledge candidates as well and input them to the knowledge selector.",
"P 2 BOT (Liu et al., 2020) is a transmitter-receiver based framework explicitly modeling the perception between the interlocutors and achieves the state-of-the-art in personalized dialogue.",
"BoB (Song et al., 2021) is a newly published method that disentangles personalized dialogue into persona understanding and personalized generation.",
"We choose distinctness, BLEU(Papineni et al., 2002), ROUGE(Lin, 2004) 2 and ME-TEOR(Denkowski and Lavie, 2014) 3 to be our automatic metrics.",
"Focusing on the exact n-gram co-occurrence in hypothesis and reference, BLEU and ROUGE evaluate the appropriateness of the proposed model.",
"Distinctness is calculated as the ratio of unique unigrams and bigrams, paying more attention to the diversity of generated text.",
"METEOR measures the alignment, or the exact, stem, synonym, and paraphrase matches between the hypothesis and reference.",
"Apart from automatic evaluation, we conduct human evaluation.",
"Specifically, 200 examples are randomly sampled from the test set and well-educated native speakers are recruited to assess the quality of the generation from different models with their source hidden.",
"Each annotators are required to give a score in { 0 : bad , 1 : fair , 2 : good } for three independent aspects: (1) fluency : whether the reply is fluent; (2) coherence : whether the reply is coherent with the context; and (3) faithfulness : whether the reply is well-grounded and faithful to the selected knowledge sentence and memory fragment.",
"The agreement of annotators is measured via Fleiss' kappa (Fleiss, 1971).",
"We first report the experimental result in automatic evaluation.",
"As is shown in Table 2, our method outperforms the state-of-the-art baselines in KGC and personalized dialogue in most metrics, verifying the effectiveness of our model empirically.",
"Among non-pretrained methods, TMN and GPMN are low in diversity, since their generator is not pre-trained on large corpus before.",
"SKT improves distinctness but shows low appropriateness, possibly because that it highly relies on the golden knowledge label, which is costly and not always available.",
"In pre-trained based methods, Transfertransfo attains impressive results on distinctness.",
"It also achieves competitive appropriateness results, but not as good as ours.",
"We gauge the performance of the model to the large document-level training corpus, a critical choice for pre-trained language model, which may boost the diversity of generated text.",
"Besides, the performance of the BoB, a recently published baseline, is less satisfactory compared with others.",
"The premise of BoB is the disentanglement between contextual coherence and persona consistency, which is not always achievable especially when we use user-specific dialogue history for personal memory information.",
"And it is notable from the table that there is a significant gap between the baseline methods in KGC or personalized dialogue and ours, validating that neither simply projecting personal information into dialogue nor purely grounding on knowledge is an acceptable solution to the KGC task.",
"It is necessary to combine personal memory and external knowledge together.",
"The comprehensive improvement of KnowledGPT+M in contrast with the original KnowledGPT also reveals this viewpoint.",
"Additionally, the considerable advantage of our proposed method over KnowledGPT+M illustrates the fact 3907 Models BLEU ROUGE Distinct METEOR B-1 B-2 B-3 B-4 R-1 R-2 R-3 D-1 D-2 BoB 4.69 1.57 0.65 0.31 10.68 1.57 9.30 4.94 17.06 3.97 w/o.",
"that treating personal memory as knowledge is not enough.",
"The dependency between personal memory and the knowledge should not be ignored.",
"We also present the result of human evaluation since no automatic metric is perfect in this task (Di-nan et al., 2019).",
"Since human evaluation is time-consuming and expensive, only competitive baselines are involved.",
"As shown in Table 3, our proposed model outperforms the baseline methods and there is an evident improvement.",
"Apart from the main results, we are especially interested",
"interested in some research questions: (RQ1) How does each component contributes to the performance of our model?",
"(RQ2) How many knowledge sentences and memory fragments to select?",
"There is 3908 Recall@1 Recall@2 Recall@5 Recall@10 m=1 0.173 0.286 0.505 0.720 m=2 0.176 0.289 0.513 0.730 m=3 0.177 0.289 0.509 0.730 m=4 0.176 0.288 0.508 0.730 Table 5: The performance of p ( Z k | C, Z p ) under different m .",
"To answer the first question, we conduct ablation study and compare the full model with several vari-ants:(1) w/o.",
"know .",
"the external knowledge base to grounding the dialogue is removed; (2) w/o.",
"mem .",
"personal memory is removed and this variant is a standard KGC model essentially; (3) w/o.",
"dual .",
"the dual task is removed, so there is no dual learning and distillation in this variant; (4) w/o.",
"dep .",
"the dependency of the two latent variables is removed so Z p and Z k are calculated independently.",
"The ablation result is shown in Table 4, from which we could have the following observations: (1) w/o.",
"know and w/o.",
"mem exhibit a degeneration at a great extent, further justifying the necessity of introducing knowledge and personal memory into a dialogue system, respectively.",
"(2) w/o.",
"dep also shows an obvious deterioration.",
"This is in line with our expectation since w/o.",
"dep model Z k and Z p as two independent latent variables, ignoring the underlying dependence between them.",
"Comparatively speaking, w/o.",
"dual achieves a better result, but not as good as the full model due to the destroy of the closed dual loop.",
"And to have a intuitive perception about the effect of the closed dual loop, we examine the promotion brought to the q ( Z k | C, R, Z p ) , ( Z p | C, R, Z k ) and q ( Z p | C, R ) in terms of Re-call@1 of knowledge or personal memory.",
"The result is shown in Figure",
"3. From the figure we could see that there is an obvious improvement after trained with our proposed learning algorithm.",
"For the (RQ2) , we first explore it by varying the amount of selected personal memory fragments and observe how the knowledge selection procedure is influenced.",
"In detail, we vary the number of personal memory fragments m sampled by p ( Z p | C ) from 1 to 4 and evaluate the performance of p ( Z k | C, Z p ) in terms of Recall@n (n {1,2,5,10}).",
"a fluctuation or slight drop when m continues to increase possibly owing to the distraction mixed with the redundant personal memory.",
"Besides, we are also curious about the final generation performance under different numbers of knowledge and personal memory fragment.",
"It could be seen from Figure 4 that there appears a decline when we increase the number of knowledge and personal memory fragment, which we attribute to the unwanted noise mixed with personal memory and knowledge.",
"In this work, we explore personalized KGC by introducing personal memory into knowledge selection task.",
"Two latent variables are introduced to select knowledge and personal memory respectively.",
"Besides, dual learning scheme is employed to allow the two selection task to teach each other.",
"For future work, we would like to extend the personalized knowledge-grounded dialogue to personalized conversational recommendation system for application in online shopping.",
"Intended Use The chief purpose of our dataset is to examine a dialogue model's capacity in selecting proper knowledge with the help of personal memory.",
"The dataset is mainly for research propose and it is not supposed to be directly used to train a production system.",
"And researchers should be aware of the possible ethic issues before exploiting our dataset.",
"Data Collection All the examples in our dataset are in English and no human annotators are involved in the data collection process.",
"As mentioned in Sec.4.1, our dataset is built on the basis of the Reddit dumps from Pushshift (Baumgartner et al., 2020), which is a publicly available resource widely used in more than a hundred peer-reviewed publications.",
"Our data collection is in consistent with the term of use and the research is granted ethical approval by an external institutional review board.",
"To avoid potential abuse, the dataset is available upon request to the authors.",
"Contact the authors (by email) and clearly state your intended use if you believe the dataset might be helpful in your research.",
"User Privacy Although our dataset includes user-specific utterance history as personal memory, no user account names will be revealed or inferred from the dataset.",
"Besides, the utterance histories are paraphrased during our procession of the dataset such that they can not be traced back to the real users in Reddit.",
"In conclusion, There is no personally identifiable information in our dataset or underlying leakage of personal information.",
"Thanks for the reviewers for their valuable suggestions.",
"This work was supported by National Natural Science Foundation of China (NSFC Grant No. 62122089 & No. 61876196 & No. 61832017), Beijing Outstanding Young Scientist Program NO.",
"BJJWZYJH012019100020098, and Intelligent Social Governance Platform, Major Innovation & Planning Interdisciplinary Platform for the \"Double-First Class\" Initiative, Renmin University of China.",
"We also wish to acknowledge the supports provided and contributions made by Public Policy and Decision-making Research Lab of RUC, and the Public Computing Cloud, Renmin University of China.",
"Rui Yan is also supported by Beijing Academy of Artificial Intelligence (BAAI)."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"abstain",
"result",
"objective",
"objective",
"objective",
"method",
"objective",
"objective",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other"
] |
[
"In this paper, we show that neural machine translation (NMT) systems trained on large back-translated data overfit some of the characteristics of machine-translated texts.",
"Such NMT systems better translate human-produced translations, i.e., translationese, but may largely worsen the translation quality of original texts.",
"Our analysis reveals that adding a simple tag to back-translations prevents this quality degradation and improves on average the overall translation quality by helping the NMT system to distinguish back-translated data from original parallel data during training.",
"We also show that, in contrast to high-resource configurations, NMT systems trained in low-resource settings are much less vulnerable to overfit back-translations.",
"We conclude that the back-translations in the training data should always be tagged especially when the origin of the text to be translated is unknown.",
"During training, neural machine translation (NMT) can leverage a large amount of monolingual data in the target language.",
"Among existing ways of exploiting monolingual data in NMT, the so-called back-translation of monolingual data (Sennrich et al., 2016a) is undoubtedly the most prevalent one, as it remains widely used in state-of-the-art NMT systems (Barrault et al., 2019).",
"NMT systems trained on back-translated data can generate more fluent translations (Sennrich et al., 2016a) thanks to the use of much larger data in the target language to better train the decoder, especially for low-resource conditions where only a small quantity of parallel training data is available.",
"However, the impact of the noisiness of the synthetic source sentences generated by NMT largely remains unclear and understudied.",
"Edunov et al. (2018) even showed that introducing synthetic noise in back-translations actually improves translation quality and enables the use of a much larger quantity of back-translated data for further improvements in translation quality.",
"More recently, Caswell et al. (2019) empirically demonstrated that adding a unique token at the beginning of each back-translation acts as a tag that helps the system during training to differentiate back-translated data from the original parallel training data and is as effective as introducing synthetic noise for improving translation quality.",
"It is also much simpler since it requires only one editing operation, adding the tag, and non-parametric.",
"However, it is not fully understood why adding a tag has such a significant impact and to what extent it helps to distinguish back-translated data from the original parallel data.",
"In this paper, we report on the impact of tagging back-translations in NMT, focusing on the following research questions (see Section 2 for our motivation).",
"Q1.",
"Do NMT systems trained on large back-translated data capture some of the characteristics of human-produced translations, i.e., translationese ?",
"Q2.",
"Does a tag for back-translations really help differentiate translationese from original texts?",
"Q3.",
"Are NMT systems trained on back-translation for low-resource conditions as sensitive to translationese as in high-resource conditions?",
"During the training with back-translated data (Sen-nrich et al., 2016a), we can expect the NMT system to learn the characteristics of back-translations, i.e., translations generated by NMT, and such characteristics will be consequently exhibited at test time.",
"However, translating translations is a rather artifi-cial task, whereas users usually want to perform translation of original texts.",
"Nonetheless, many of the test sets used by the research community for evaluating MT systems actually contain a large portion of texts that are translations produced by humans, i.e., translationese .",
"Translationese texts are known to be much simpler, with a lower mean sentence length and more standardized than original texts (Laviosa-Braithwaite, 1998).",
"These characteristics overlap with those of translations generated by NMT systems that have been shown simpler, shorter, and to exhibit a less diverse vocabulary than original texts (Burlot and Yvon, 2018).",
"These similarities raise Q1 .",
"Caswell et al. (2019) hypothesized that tagging back-translations helps the NMT system during training to make some distinction between the back-translated data and the original parallel data.",
"Even though the effectiveness of a tag has been empirically demonstrated, the nature of this distinction remains unclear.",
"Thus, we pose Q2 .",
"The initial motivation for back-translation is to improve NMT for low-resource language pairs by augmenting the training data.",
"Therefore, we verify whether our answers to Q1 and Q2 for high-resource conditions are also valid in low-resource conditions, answering Q3 .",
"As parallel data for training our NMT systems, we used all the parallel data provided for the shared translation tasks of WMT19 1 for English German (en-de), excluding the Paracrawl corpus, and WMT15 2 for EnglishFrench (en-fr).",
"3 As monolingual data for each of English, German, and French to be used for back-translation, we concatenated all the News Crawl corpora provided by WMT, and randomly extracted 25M sentences.",
"For our simulation of low-resource conditions, we randomly sub-sampled 200k sentence pairs from the parallel data to train NMT systems and used these systems to back-translate 1M sentences randomly sub-sampled from the monolingual data.",
"For validation, i.e., selecting the best model after training, we chose newstest2016 for en-de and newstest2013 for en-fr, since they are rather balanced on their source side between translationese and original texts.",
"For 1 http://www.statmt.org/wmt19/ translation-task.html 2 http://www.statmt.org/wmt15/ translation-task.html 3 After pre-processing and cleaning, we obtained 5.2M and 32.8M sentence pairs for en-de and en-fr, respectively.",
"evaluation, since most of the WMT test sets are made of both original and translationese texts, we used all the newstest sets, from WMT10 to WMT19 for en-de, and from WMT08 to WMT15 for en-fr.",
"4 All our data were pre-processed in the same way: we performed tokenization and truecasing with Moses (Koehn et al., 2007).",
"For NMT, we used the Transformer (Vaswani et al., 2017) implemented in Marian (Junczys-Dowmunt et al., 2018) with standard hyper-parameters for training a Transformer base model.",
"5 To compress the vocabulary, we learned 32k byte-pair encoding (BPE) operations (Sennrich et al., 2016b) for each side of the parallel training data.",
"The back-translations were generated through decoding with Marian the sampled monolingual sentences using beam search with a beam size of 12 and a length normalization of 1.0.",
"The back-translated data were then concatenated to the original parallel data and a new NMT model was trained from scratch using the same hyper-parameters used to train the model that generated the back-translations.",
"We evaluated all systems with BLEU (Papineni et al., 2002) computed by sacreBLEU (Post, 2018).",
"To evaluate only on the part of the test set that have original text or translationese on the source side, we used the --origlang option of sacreBLEU with the value non-L1 for translationese texts and L1 for original texts, where L1 is the source language, and report on their respective BLEU scores.",
"6 3.3 Results in Resource-Rich Conditions Our results with back-translations (BT) and tagged back-translations (T-BT) are presented in Table 1.",
"When using BT, we consistently observed a drop of BLEU scores for original texts for all the translations tasks, with the largest drop of 12.1 BLEU points (en fr, 2014).",
"Conversely, BLEU scores for translationese texts were improved for most tasks, with the largest gain of 10.4 BLEU points 4 For WMT14, we used the full version instead of the default filtered version in sacreBLEU that does not contain information on the origin of the source sentences.",
"5 The full list of hyper-parameters is provided in the supplementary material (Appendix A).",
"6 sacreBLEU signatures where L1 and L2 respectively indicates a two-letter identifier for the source and target languages of either de-en, en-de, fr-en, or en-fr, and XXX the name of the test set: BLEU+case.mixed+lang.L1L2+numrefs.1+ { origlang.L1,origlang.non-L2 } +smooth.exp+test.XXX+tok.13a+version.1.4.2 System test set de en en de all o n-o all o n-o BT 2010 28.9 (+0.5) 33.2 (-0.9) 27.9 (+0.7) 21.8 (-2.3) 24.6 (-5.7) 21.0 (-1.2) 2011 25.3 (-0.3) 29.9 (-1.0) 24.2 (-0.2) 19.9 (-1.4) 23.8 (-1.9) 19.0 (-1.1) 2012 27.1 (+0.3) 27.9 (-1.6) 27.0 (+0.7) 20.4 (-1.2) 24.5 (-4.6) 19.3 (-0.2) 2013 30.3 (+0.3) 34.7 (-1.6) 29.2 (+0.6) 23.8 (-1.9) 25.1 (-2.8) 23.6 (-1.7) 2014 32.8 (+2.2) 27.4 (-2.5) 36.8 (+7.0) 25.4 (-0.5) 23.2 (-3.3) 27.9 (+2.7) 2015 33.8 (+2.4) 22.5 (-1.9) 39.5 (+5.5) 27.2 (-1.1) 28.1 (-2.9) 24.7 (+1.9) 2017 35.5 (+3.0) 27.2 (-1.1) 42.8 (+7.4) 26.4 (-0.1) 26.3 (-3.6) 25.5 (+3.3) 2018 43.9 (+4.6) 32.0 (-1.0) 53.8 (+10.4) 38.0 (-1.4) 38.9 (-5.9) 35.0 (+3.8) 2019 -33.1 (-1.5) -31.4 (-4.8) -T-BT 2010 29.5 (+1.1) 34.4 (+0.3) 28.4 (+1.2) 25.0 (+0.9) 30.5 (+0.2) 23.4 (+1.2) 2011 26.4 (+0.8) 31.7 (+0.8) 25.2 (+0.8) 22.1 (+0.8) 25.8 (+0.1) 21.0 (+0.9) 2012 28.1 (+1.3) 30.2 (+0.7) 27.7 (+1.4) 22.8 (+1.2) 30.0 (+0.9) 20.9 (+1.4) 2013 30.8 (+0.8) 36.0 (-0.3) 29.6 (+1.0) 26.4 (+0.7) 28.1 (+0.2) 26.1 (+0.8) 2014 32.4 (+1.8) 29.6 (-0.3) 33.8 (+4.0) 27.9 (+2.0) 26.7 (+0.2) 29.4 (+4.2) 2015 33.9 (+2.5) 24.9 (+0.5) 37.7 (+3.7) 29.9 (+1.6) 32.1 (+1.1) 25.6 (+2.8) 2017 35.5 (+3.0) 28.1 (-0.2) 41.2 (+5.8) 28.7 (+2.2) 30.7 (+0.8) 26.0 (+3.8) 2018 43.2 (+3.9) 33.0 (+0.0) 50.4 (+7.0) 41.8 (+2.4) 45.6 (+0.8) 35.5 (+4.3) 2019 -35.0 (+0.4) -37.6 (+1.4) System test set fr en en fr all o n-o all o n-o BT 2008 22.9 (-1.7) 27.9 (-2.6) 22.2 (-1.5) 23.2 (-0.2) 21.2 (-3.3) 23.6 (+0.5) 2009 26.5 (-2.3) 41.1 (-5.3) 23.9 (-1.6) 27.7 (+1.1) 22.7 (-2.0) 28.4 (+1.4) 2010 29.3 (-1.4) 27.4 (-7.8) 29.5 (+0.5) 28.2 (-0.5) 22.5 (-11.1) 29.8 (+2.5) 2011 29.4 (-1.9) 29.3 (-4.7) 29.4 (-1.1) 30.9 (+0.0) 36.7 (-8.2) 29.3 (+2.1) 2012 29.7 (-1.4) 34.3 (-4.3) 28.6 (-0.6) 28.4 (+1.1) 26.3 (-4.1) 29.0 (+2.5) 2014 36.6 (+0.6) 31.4 (-4.7) 40.3 (+5.6) 32.9 (-3.1) 26.1 (-12.1) 39.6 (+6.1) 2015 36.2 (+0.0) 40.9 (-3.1) 29.8 (+3.5) 35.7 (+1.7) 25.1 (-4.4) 44.9 (+6.5) T-BT 2008 24.5 (-0.1) 29.5 (-1.0) 23.7 (+0.0) 23.8 (+0.4) 25.1 (+0.6) 23.5 (+0.4) 2009 28.9 (+0.1) 46.4 (+0.0) 25.7 (+0.2) 27.3 (+0.7) 25.1 (+0.4) 27.7 (+0.7) 2010 31.2 (+0.5) 35.1 (-0.1) 29.6 (+0.6) 30.0 (+1.3) 34.1 (+0.5) 28.9 (+1.6) 2011 31.8 (+0.5) 33.3 (-0.7) 31.4 (+0.9) 31.6 (+0.7) 45.3 (+0.4) 28.0 (+0.8) 2012 31.8 (+0.7) 38.3 (-0.3) 30.1 (+0.9) 28.9 (+1.6) 31.9 (+1.5) 28.1 (+1.6) 2014 37.3 (+1.3) 36.1 (+0.0) 37.2 (+2.5) 38.2 (+2.2) 39.7 (+1.5) 36.5 (+3.0) 2015 36.6 (+0.4) 43.2 (-0.8) 27.9 (+1.6) 36.0 (+2.0) 30.7 (+1.2) 41.2 (+2.8) Table 1: BLEU scores for NMT systems trained with back-translations (BT) and tagged back-translations (T-BT) for each origin of the source text: original (o) or translationese (n-o).",
"(de en, 2018).",
"These results give an answer to Q1 : NMT overfits back-translations, potentially due to their much larger size than the original parallel data used for training.",
"Interestingly, using back-translations does not consistently improve translation quality.",
"We assume that newstest sets may manifest some different characteristics of translationese from one year to another.",
"Prepending a tag (T-BT) had a strong impact on the translation quality for original texts, recovering or even surpassing the quality obtained by the NMT system without back-translated data, always beating BT.",
"The large improvements of BLEU scores over BT show that a tag helps in identifying translationese (answer for Q2 ).",
"In the supplementary material (Appendix B), we present additional results obtained using more back-translations (up to 150M sentences) showing a similar impact of tags.",
"However, while a tag in such a configuration prevents an even larger drop of the BLEU scores, it is not sufficient to attain a BLEU score similar to the configurations that use less back-translations.",
"Interestingly, the best NMT system was not always the same depending on the translation direction and the origin of the test sets.",
"It is thus possible to select either of the models to obtain the best translation quality given the origin of the source sentences, according to the results on the validation set for instance.",
"7 7 Since this observation is rather secondary, we present results for best model selection in the supplementary material (Appendix C).",
"Note also that these BLEU scores can potentially be further increased by using a validation set whose source side is either original texts or translationese respectively to translate original texts or translationese at test time.",
"In low-resource conditions, as reported in Table 2, the translation quality can be notably improved by adding back-translations.",
"Using BT, we observed improvements of BLEU scores ranging from 0.7 (fr en, 2011) to 12.4 (de en, 2010) BLEU points for original texts and from 2.1 (en de, 2011) to 21.1 (de en, 2018) BLEU points for translationese texts.",
"These results remain in line with one of the initial motivations for using back-translation: improving translation quality in low-resource conditions.",
"In this setting without back-translated data, the data in the target language is too small for the NMT system to learn reasonably good representations for the target language.",
"Adding 5 times more data in the target language, through back-translation, clearly helps the systems without any negative impact of the noisiness of the back-translations that were generated by the initial system.",
"We assume here that since the quality of the back-translations is very low, their characteristics are quite different from the ones of translationese texts.",
"This is confirmed by our observation that adding the tag has only a negligible impact on the BLEU scores for all the tasks (answer to Q3 ).",
"A tag on back-translations helps identifying translationese during NMT training.",
"Thus, adding the same tag on the test sets should have a very different impact depending on the origin of the source sentences.",
"If we tag original sentences and decode them with a T-BT model, then we enforce the decoding of translationese.",
"Since we mislead the decoder, translation quality should drop.",
"On the other hand, by tagging translationese sentences, we help the decoder that can now rely on the tag to be very confident that the text to decode is translationese.",
"Our results presented in Table 3 confirm these System de en en de fr en en fr 2017 2018 2017 2018 2012 2015 2012 2015 tagged original -2.0 -2.6 -5.9 -9.6 -7.5 -4.9 -10.1 -11.1 tagged non-original +1.6 +3.4 +0.8 +1.6 -3.1 +1.4 -0.3 +3.6 Table 3: Results with tagged test sets, either original or non-original, decoded with the T-BT model in the high-resource condition.",
"assumptions.",
"We observed a drop of BLEU scores when decoding tagged original texts with the T-BT model, while we saw an improvement of translation quality for 6 out of 8 test sets when decoding tagged translationese texts.",
"The remaining 2 test sets for which we did not observed any improvements are newstest2012 for both translation directions of en-fr.",
"It potentially indicates a mismatch between the characteristics of translationese in newstest2012 and those exhibited by back-translations used for training the T-BT model.",
"We empirically demonstrated that training NMT on back-translated data overfits some of its characteristics that are partly similar to those of translationese.",
"Using back-translation improves translation quality for translationese texts but worsens it for original texts.",
"Previous work (Graham et al., 2019; Zhang and Toral, 2019) showed that state-of-the-art NMT systems are better in translating translationese than original texts.",
"Our results show that this is partly due to the use of back-translations which is also confirmed by concurrent and indepen-dent work (Bogoychev and Sennrich, 2019; Edunov et al., 2019).",
"Adding a tag to back-translations prevents a large drop of translation quality on original texts while improvements of translation quality for translationese texts remain and may be further boosted by tagging test sentences at decoding time.",
"Moreover, in low-resource conditions, we show that the overall tendency is significantly different from the high-resource conditions: back-translation improves translation quality for both translationese and original texts while adding a tag to back-translations has only a little impact.",
"We conclude from this study that training NMT on back-translated data, in high-resource conditions, remains reasonable when the user knows in advance that the system will be used to translate translationese texts.",
"If the user does not know it a priori, a tag should be added to back-translations during training to prevent a possible large drop of translation quality.",
"For future work, following the work on automatic identification of translationese (Rabinovich and Wintner, 2015; Rubino et al., 2016), we plan to investigate the impact of tagging translationese texts inside parallel training data, such as parallel sentences collected from the Web.",
"We would like to thank the reviewers for their useful comments and suggestions.",
"A part of this work was conducted under the program Re-search and Development of Enhanced Multilingual and Multipurpose Speech Translation System of the Ministry of Internal Affairs and Communications (MIC), Japan.",
"Benjamin Marie was partly supported by JSPS KAKENHI Grant Number 20K19879 and the tenure-track researcher start-up fund in NICT.",
"Atsushi Fujita was partly supported by JSPS KAKENHI Grant Number 19H05660."
] |
[
"result",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Natural language processing (NLP) applications are now more powerful and ubiquitous than ever before.",
"With rapidly developing (neural) models and ever-more available data, current NLP models have access to more information than any human speaker during their life.",
"Still, it would be hard to argue that NLP models have reached human-level capacity.",
"In this position paper, we argue that the reason for the current limitations is a focus on information content while ignoring language's social factors .",
"We show that current NLP systems systematically break down when faced with interpreting the social factors of language.",
"This limits applications to a subset of information-related tasks and prevents NLP from reaching human-level performance.",
"At the same time, systems that incorporate even a minimum of social factors already show remarkable improvements.",
"We formalize a taxonomy of seven social factors based on linguistic theory and exemplify current failures and emerging successes for each of them.",
"We suggest that the NLP community address social factors to get closer to the goal of humanlike language understanding.",
"[T]he common misconception [is] that language use has primarily to do with words and what they mean. It doesn't. It has primarily to do with people and what they mean.",
"Clark and Schober (1992) Until the 1970s, economics assumed that individuals, markets, and firms always acted rationally, based on all the available information.",
"This assumption allowed researchers to use linear models and worked well for several applications.",
"However, it came at the cost of ignoring essential aspects of human decision making, which oversim-plified an inherently complex matter in a way that limited possible insights and applications.",
"The seminal work by Tversky and Kahneman (1973) showed that people would make irrational decisions, time and again, even with full information, and that simple models could not account for this behavior.",
"By introducing the human factor into the equation, they opened up a new research field: behavioral economics.",
"Like economics in the mid-twentieth century, Natural Language Processing (NLP) still makes a limiting assumption: language is only about information, i.e., message content alone.",
"This assumption makes it possible to model language statistically and works for several applications.",
"However, it completely ignores the fact that people use language to achieve (social) goals; like economists before 1973, NLP researchers are oversimplifying an inherently complex matter in a way that limits possible insights and applications.",
"And like introducing behavior transformed economics, introducing social factors into NLP will similarly transform the field: it will open up new avenues of research, enable new insights and applications, and provide more performant, equitable tools.",
"The focus on information content is rooted in early research on quantifying text and making it usable for information retrieval.",
"While it over-simplifies its subject matter, this focus has enabled many NLP applications, with increasing commercial success over the last few decades.",
"The statistical revolution and introduction of machine learning in the late 1980s and deep learning in the last five years (Manning, 2015) has dramatically improved robustness and performance, and produced industrial-strength everyday applications like machine translation (Wu et al., 2016), search (Shen et al., 2014), and personal assistants (Serban et al., 2016; Radford et al., 2019).",
"Recently, BERT (De-vlin et al., 2019) and GPT-3 (Brown et al., 2020) seemingly picked up enough language behavior to produce natural-looking sentences that show pragmatic constraints and interact in dialogues.",
"However, recent work has pointed out (Bender and Koller, 2020; Bisk et al., 2020) that language is more than just words strung together: it has a social function and relates to non-linguistic context.",
"Nonetheless, current NLP systems still largely ignore the social aspect of language.",
"Instead, they only pay attention to what is said, not to who says it, in what context , and for which goals .",
"We go further to argue that the simplifying focus on information content has effectively limited NLP to a narrow range of information-based applications.",
"Consequently, NLP systems struggle with applications related to pragmatics and interaction, or when what is said is not what is meant, e.g., sarcasm, irony, deception, and any other situation that requires a social interpretation (Aber-crombie and Hovy, 2016).",
"This approach is especially crucial for any system related to pragmatics, such as dialogue systems, machine translation (Mirkin and Meunier, 2015), text-to-speech, and mental healthcare tools (Benton et al., 2017).",
"Examples include conversational agents' inconsistent personality in conducting dialogues with humans (Cercas Curry et al., 2020), the failure of machine translation systems in generating culturally appropriate and polite outputs (Jones and Irvine, 2013; Matusov, 2019; Vanmassenhove et al., 2019), or the general struggles of current systems with social intelligence (Cercas Curry and Rieser, 2018).",
"Ultimately, the goal of NLP is to process language at a human level.",
"However, NLP's current approachignoring social factorsprevents us from reaching human-level competence and performance because language is more than just information content.",
"Unless we start paying attention to the social factors of language, we are arti-ficially limiting NLP's potential as a field and the applications we can develop, including the performance of the applications that exist today.",
"We want to be clear that the idea of language as a social construct is itself nothing new: linguistics and philosophy have long modeled it this way (Wittgenstein, 2010; Eckert, 2012, inter alia).",
"However, as we are reaching a point where this idea can become implemented in systems, it is a message that bears repeating in the NLP community (see also Hovy (2018) and Flek (2020) for similar points, as well as Nguyen et al. (2016) for an overview of the closely related issue of computational sociolinguistics).",
"There have in-0 20 40 60 2 0 1 0 2 0 1 2 2 0 1 4 2 0 1 6 2 0 1 8 2 0 2 0 Social Media Sentiment Analysis Discourse and Pragmatics Sum Figure 1: Trend of interest in social factors in NLP papers, using ACL as an example deed been ongoing and emerging efforts to overcome these limitations.",
"Over the last ten years, research interest in social factors and social context has increased, as shown in Figure 1.",
"Here, we counted the number of accepted papers for the track of computational social science and social media, sentiment analysis, discourse and pragmatics, and their sum at the ACL conference per year, and visualized the overall trend 1 .",
"However, to further highlight and formalize these social factors in language and their use in NLP, we propose a set of seven social factors, explain why they are needed, and show encouraging evidence of approaches that have used them.",
"We hope that this work can inspire more research into the social factors of language in NLP, and push the boundary of what we can achieve as a research field.",
"Contributions We formalize the notion of social factors via two linguistic theories: systemic functional linguistics (Halliday and Matthiessen, 2013, SFL) and the Cooperative Principle (Grice, 1975).",
"We build on these frameworks to provide a taxonomy of seven increasingly complex social factors that help tease out the limitations of NLP models.",
"These seven factors are:",
"1) speaker and",
"2) receiver , 3) social relations , 4) context , 5) social norms , 6) culture and ideology , and",
"7) communicative goals .",
"For each factor, we explain why it presents an obstacle to current information-based approaches and show work that has started to address them.",
"Systemic functional linguistics (SFL) (Halliday and Matthiessen, 2013), studies precisely this re-1",
"lationship between language and its functions in social settings.",
"It gives us a sense of the different language areas that, instead of formal factors like syntax and semantics, rely on social factors for interpretation.",
"By detailing those factors, we can understand what is missing in current NLP approaches, and how to incorporate them into our systems to go beyond information content.",
"However, SFL alone can not explain why what is said is not what is meant.",
"For that, we bor-row from Grice (1975), who laid out four maxims that govern effective communication in social situations.",
"These four maxims are those of Quality (Make your contribution true, do not lie or make unsupported claims), Quantity (Make your contribution as informative as is required (but not more informative)), Relevance (Make your contribution relevant), and Manner (Be brief and orderly and avoid obscurity of expression and ambi-guity).",
"Together, these maxims are known as the Cooperative principle , and govern successful conversations, as long as all conversational partners adhere to them.",
"However, we can also deliberately break selected maxims, for example, for comical effect, sarcasm, politeness, when we playact, or outright lie (i.e., saying things that are not true, not relevant, or obtuse).",
"If this violation is apparent, the conversational partner can use the resulting inconsistency to construct an alternative meaning.",
"E.g., inferring that Take your time, I love waiting for you violates the maxim of quality and is probably not true lets us assume sarcasm.",
"Gricean maxims and their selective violations can explain why what is said is not what is meant.",
"This inference process is called conversational implicature , and can help explain why NLP applications struggle with tasks such as sarcasm detection or entailment.",
"Some previous works have consequently used them to evaluate the quality of NLP systems (Jwalapuram, 2017; Qwaider et al., 2017).",
"Building upon these two frameworks, we lay out a set of seven social factors that NLP systems need to be aware of to overcome current limitations (see Figure 2).",
"We cover SPEAKER characteristics (Section 2.1), RECEIVER characteristics (Section 2.2), SOCIAL RELATIONS (Section 2.3), CONTEXT (Section 2.4), SOCIAL NORMS (Section 2.5), CULTURE AND IDEOLOGY Figure 2: Taxonomy of social factors (Section 2.6), and COMMUNICATIVE GOALS (Section 2.7).",
"We first outline each factor and its relation to SFL and the cooperation principle and then discuss the associated limitations for current NLP systems, as well as existing approaches that address these factors.",
"Note that the seven social factors in this taxonomy are not mutually exclusive.",
"Most language use can be categorized according to multiple factors, such as the use of goal and norm.",
"An individual or agent uses language for different social goals, such as constructing their identity.",
"Characteristics of speakers include age, gender, ethnicity, social class, dialect, etc.",
"A speaker determines the speech act, text, tone, language style, and consciously encoded personal signatures of an utterance.",
"Certain speaker attributes are expected to be consistent or unchanged across different scenarios, such as basic demographics and personality traits.",
"Other can vary according to situation, such as tone and style.",
"In both cases, the speaker has a certain amount of agency over the expression of some of these attributes, but will be unaware of others.",
"In sociolinguistics, this hierarchy is called saliency , ranging from obvious to all speakers (e.g., \"howdy\" for Texans) to apparent only to speakers of the variety (e.g., when to unround a vowel or not), or only to researchers (e.g., syntactic inversion) (Silverstein, 2003).",
"Successful speaker models should thus use the cooperative principle as a set of constraints and know when to break them for effect.",
"the message of a 20-year-old German female read-ing like it was from a 75-year-old American male after translation (Hovy et al., 2020).",
"This effect is a big issue for any text generation, where the lack of speaker personality can create incongruous responses in conversational agents.",
"Despite conversational agents' recent successes (Rit-ter et al., 2011; Banchs and Li, 2012; Serban et al., 2016), their lack of a consistent personality is still one of the common issues in using data-driven approaches.",
"The main reason is that these models are often trained over conversations by different people, averaging and thereby virtually ignoring individual speakers' personalities (Li et al., 2016; Wei et al., 2017; Zhang et al., 2018; Wu et al., 2021).",
"There have not been many attempts to make NLP systems more robust to language variation across speakers (Yang and Eisenstein, 2017), though attempts at creating personalized language technologies exist in information retrieval (Shen et al., 2005), recommender systems (Basilico and Hofmann, 2004), machine translation (Mirkin and Meunier, 2015), and language modeling (Federico, 1996).",
"Meanwhile, various approaches have shown the positive impact of incorporating speaker characteristics into NLP applications, either as explicit features (Volkova et al., 2013), through conditional embeddings (Hovy, 2015; Lynn et al., 2017), or via neural models for multi-task learning (Ben-ton et al., 2017; Li et al., 2018).",
"By accounting for a speaker's specific demographic attributes, models achieve better performance in a variety of tasks, such as sentiment analysis, user attributes, part-of-speech tagging, and response generation (Wu et al., 2021).",
"Rashkin et al. (2016) showed the value of modelling speaker perspective to discover opinions or biases in the way things are expressed.",
"Hovy (2016) showed that demographically-conditioned generated text also is more convincing.",
"2.2 Receiver Audiences that receive text from a speaker are made up of receivers, depending on the situation and medium.",
"The number of receivers can vary substantially, ranging from zero (monologue) to one (dialogue), multiple (conversation), or massive (broadcast).",
"Receivers may be known or unknown.",
"For instance, in any given dialogue or conversation, the speaker knows the identity of the specific and fixed target or group to whom he/she is talking.",
"However, when it comes to broadcasting or highly public spaces, receivers are often imagined by the speaker (Litt, 2012) and are potentially numerous and invisible.",
"This imagined audience is a speaker's mental conceptualization of the people with whom he or she is communicating.",
"This conceptualization of receiver characteristics influences the conversation: a speaker who calls on Newton's Celestial Mechanics to respond to a child's question Where does the sun go at night? has grossly misconceptualized the receiver characteristics in the situation.",
"Successful receiver models should thus use the cooperative principle as a set of constraints on what to expect from a counterpart.",
"However, they should also assume that the receiver will perform conversational implicature when they notice a maxim violation.",
"Right now, conversational agents tend to take any input as adhering to all maxims, so they are bad at recognizing sarcasm, irony, or overly polite forms (all of which violate the maxim of quality by saying things that are not true: you really do want another piece of cake).",
"Applications Spellchecking and stylistic models currently fail to consider receiver characteristics.",
"For instance, when writing to the president of a company vs. messaging your best friend , the politeness levels and register differ substantially, but current large, pretrained models cannot deal with this difference effectively (for an exception, see Fu et al. (2020)).",
"What is more, they can generate messages that are actively hurtful to receivers (Nozza et al., 2021).",
"In other cases like hateful-content detection (Warner and Hirschberg, 2012), a message might be toxic to outsiders but perceived as appropriate among close friends (Sap et al., 2019a).",
"This self-reference or joking use of slurs by a group of intimates might introduce significant noise to the automatic recognition of hate speech, causing existing classifiers to fail in many instances.",
"Detecting such hateful or toxic speech online might require classifiers to take into account both content and receivers, as well as a broader context.",
"Receiver differences markedly add to the complexity and difficulty in machine translation from, say, English to Korean.",
"Korean speech has strict rules about politeness in language depending on who you are talking to; misusing these measures would be viewed as quite rude by native speakers of Korean (Kim and Lee, 2017).",
"The distance or relation between speaker and receiver matters.",
"Examples of social relations include family, friendship, rival, ally, competitor, professional hierarchies, seniority, follower, and followee.",
"One of the core communicative functions of language is to establish, modulate, and reproduce these social dynamics and social relations (Hymes, 1972).",
"The interplay between speakers, receivers, and their relations introduces variations and flexibility into the resulting text.",
"It also provides a shared background knowledge and context (this function of social relations has also in-fluenced work on meaning frames by Fillmore (1982)).",
"The incorporation of social relations is closely related to the consideration of speakers and receivers, but with different roles.",
"In various social relations, we can flaunt the maxim of manner by being obscure, since much of the missing information will be filled in by shared knowledge.",
"Applications We could improve the detection of self-referential or joking use of hateful content with close friends if we could understand such social relations in the first place, similar to the context of response generation for different audiences.",
"For the sentiment classification task, Yang and Eisenstein (2017) argue that models fail to leverage the tendency of socially proximate individuals (e.g., friends) to use language similarly.",
"Ignoring this phenomenon of linguistic homophily usually means they suffer from limited accuracy.",
"In practice, such social relations often can be reasonably inferred from text (Kr-ishnan and Eisenstein, 2015; Iyyer et al., 2016; Rashid and Blanco, 2017; Rashid et al., 2020).",
"They go a long way to explaining other socially motivated constructs, such as power imbalances or politeness, which in turn can also be inferred from dialogue (Prabhakaran et al., 2012; Danescu-Niculescu-Mizil et al., 2013a).",
"Radfar et al. (2020) showed that including friendship relations in their hate-speech detection improved performance by up to 5%.",
"Similarly, Del Tredici et al. (2019) showed that modeling the social graph of a user improves performance in sentiment analysis, as well as stance and hate speech detection.",
"Incorporating user networks into geolocation substantially improves performance (Rahimi et al., 2018; Fornaciari and Hovy, 2019) and Dinan et al. (2020) show that the different roles of speaking-as, speaking-to, and speaking-about affect gender bias in NLP models.",
"Certain word choices or pronunciations might signal social class, status, or membership in a dialect group.",
"Labov (1972) famously showed how realization of the /r/ sound in phrases like fourth floor was correlated with social hierarchy.",
"In sociolinguistics (Trudgill, 2000), these distinguishing terms are called shibboleths , based on a story from the Old Testament in which pronouncing the word shibboleth a certain way decided whether a person was allowed to pass a checkpoint or was killed.",
"Dialectal areas still play an important role, even in online communication (Hovy and Purschke, 2018), and identifying and integrating them can be vital for fairer NLP tools (Jrgensen et al., 2016; Blodgett et al., 2016; Dorn, 2019).",
"Language-based communication usually takes place in a limited number of social contexts.",
"These contexts reflect the detailed settings speakers and receivers are in, including (but not limited to) the language (e.g., English), domain (e.g., Twitter), occasion (e.g., presentation or discussion), and topic (e.g., work or life).",
"As the containers or holders of communication (Yang, 2019, p. 20), (interpersonal) contexts set the specific boundaries for exchanging language.",
"Prior research on dialogue (Schank and Abelson, 1975) accounted for (social) context as scripts, but framed it in terms of content rather than social factors.",
"Social context is related to the Gricean maxims of quantity and relevance, as it governs what is appropriate and required.",
"Randomly (i.e., without context) saying I have never smuggled live animals in my underwear would probably raise some justified suspicion.",
"In contrast, it is a perfectly acceptable response to the question, Did you hide that parrot in your underpants? (whether the question is appropriate is another matter).",
"Applications NLP models, by their nature, are usually unaware of the (extralinguistic) context.",
"For instance, text or response generation may need to adaptively adjust to the social context of communication, rather than relying on background conversations from different communicators in different contexts.",
"Models have mostly learned to relate words to other words.",
"For instance, current machine translation models are trained on huge corpora of text.",
"However, nuances in language often make it difficult to provide an accurate and direct translation from one social context to another.",
"Studies show that current popular industrial MT systems and recent state-of-the-art academic MT models are significantly prone to gender-biased translation errors for all tested target languages (Stanovsky et al., 2019; Vanmassenhove et al., 2019; Hovy et al., 2020).",
"There is hilarious content caused by translation fails (see #translation-fail on Twitter), especially when it comes to the social context or cultural-specific nuances of language.",
"Current text generation models also usually fail to account for social context, generating text that lacks nuance.",
"This factor is one of the most difficult ones to overcome, because",
"1) social context is almost always extralinguistic, and",
"2) the focus of NLP models has always been on learning applications based on text alone (amplified by the seeming ability of neural approaches to do so, see Collobert et al. (2011)).",
"Some recent papers have commented on the artificial limitation of relying solely on text (Bender and Koller, 2020; Bisk et al., 2020), demonstrating how even large pretrained language models are essentially just mimicking people's language use, instead of actual use.",
"Several works have shown, though, how incorporating non-textual information can improve performance, specifically in conjunction with images (Lazaridou et al., 2015; Caglayan et al., 2019).",
"These approaches help various tasks, from concept learning to machine translation, and improve inherently multimodal applications such as scene descriptions and image labeling.",
"However, even including more linguistic context (i.e., text beyond the current sentence) can drastically improve performance of text classification (Yang et al., 2016) and the detection of irony (Wallace et al., 2014) and sarcasm (Abercrombie and Hovy, 2016).",
"2 2.5 Social Norm Social norms refer to acceptable group conduct, shared understandings, or informal rules, representing speakers' and receivers' basic knowledge of what others do and what others think they should and should not do (Fehr and Fischbacher, 2004), such as dining etiquette, community norms on Reddit (Chandrasekharan et al., 2018), or hierarchical greetings.",
"Norms are therefore closely related to the factors of relation (Section 2.3) and context (Section 2.4).",
"For instance, greet-2 Note that the latter two show that human speakers depend on context as well, though.",
"ing messages are usually full of positive words and phrases and rarely contain expressions carrying strong negative connotations.",
"Product representatives are expected to communicate with customers in a professional manner rather than teasing or using slang and informal words.",
"The scope of norms also include social commonsense about what is expected and normal in a given situation (Sap et al., 2019b), similar to scripts in Schank and Abelson (1975).",
"Social norms are related to the Gricean maxims of manner and quality: in some situations, it is very much expected to say too much and make unsupported claims, for example, when giving a laudatory speech or a eulogy; Good evening. Martin didn't stand out while he was alive. Now he is dead. Thank you. is not much of a speech.",
"Applications Social norms are subtle constructs that are not easy to define, so we still do not have many computational techniques to reliably quantify them, let alone assessing whether certain model behaviors should be rewarded or sanctioned (Anastassacos et al., 2020).",
"Consequently, most NLP models still fail to recognize social norms (for an exception, see Forbes et al. (2020)).",
"Failing to measure social norms, and to detect the alignment between expected or unexpected behaviors and models' actual behaviors, can introduce severe damage and negatively impact society, especially as more conversational agents or chatbots have been developed and deployed for real-world applications, such as customer services, travel or flight reservation, or therapy.",
"In 2016, Microsoft released its now infamous chatbot on Twitter: Tay 3 .",
"Microsoft initially expected Tay's language patterns to resemble a 19-year old American girl, but the chatbot quickly transformed into a fountain of racist, sexist, and abusive slurs, by interacting with people espousing these views.",
"A similar issue played out recently with a Korean chatbot.",
"4 Sap et al. (2019a) showed that lack of awareness of social norms around taboo words led to annotation bias being integrated into the models.",
"However, norms are subject to change, as Danescu-Niculescu-Mizil et al. (2013b) have shown, and 3 https://www.theguardian.com/world/20 16/mar/29/microsoft-tay-tweets-antisemitic-racism 4 https://www.theguardian.com/world/20 21/jan/14/time-to-properly-socialise-hate-speech-ai-chatbot-pulled-from-facebook can affect standing and integration of members.",
"Language and culture are intertwined.",
"Language reflects the society, ideology, cultural identity, and customs of communicators, as well as their values.",
"It is therefore intertwined with social norms (Sec-tion 2.5).",
"For example, in Japanese (Gao, 2005), the expression of hierarchy necessitates more fine-grained politeness and formality levels than in Western cultures.",
"The terms of address also vary in terms of social and age differences, i.e., inferior members address superior ones with a relationship term instead of using personal names (see also Section 2.3).",
"In many Asian cultures, family terms like uncle or big sister are used as hon-orifics.",
"While it is common amongst native speakers of North American English to use please in requests even to close friends, such an act would be considered awkward, if not rude, in Arabic-speaking cultures (Kdr and Mills, 2011; Madaan et al., 2020).",
"Cultural norms can impose a hierarchy on Gricean maxims.",
"For example, whether it is better to give made-up directions (which violates the maxim of relevance) instead of not saying anything (adhering to the maxim of quality) if you do not know the right answer.",
"Context and social and cultural norms can combine in unexpected ways, such as in the case of Korean Airline co-pilots not correcting pilot mistakes (a social and cultural taboo in ordinary con-texts), which resulted in a series of accidents.",
"Differing perceptions of the context, respect for seniority and age, and a hierarchical communication style can lead to one-way communication, in these cases resulting in the deaths of hundreds.",
"5 The solution here was to change the context by making the working language English, which in turn removed associated social and cultural norms around hierarchical communication (Gladwell, 2008).",
"Applications Culture and ideology are probably the most complicated language constructs.",
"Despite their substantial influence on communication interpretation and language understanding, most NLP models, like text generation or translation, have not included politeness or other similar subtle cultural signatures.",
"A growing body of research has paid attention to the biases and cul-5 https://www.cnbc.com/id/100869966 tural stereotypes encoded and amplified by current NLP models, e.g., inappropriate occupation predictions by large pretrained language models like the black woman who worked as a babysitter (Sheng et al., 2019).",
"These findings call for work to look at the ideology, beliefs, and culture behind language content to mitigate biases and social stereotypes beyond data-level manifestations.",
"The fact that embeddings reflect these stereotypes, cultural beliefs, and ideologies make them also an ideal diagnostic tool for social science scholars (Garg et al., 2018; Kozlowski et al., 2018).",
"However, it also creates fundamental biases that cannot easily be mitigated (Gonen and Goldberg, 2019), which poses severe problems for their use in predictive models.",
"Adding cultural awareness can also help counteract the overexposure (Hovy and Spruit, 2016) to the English language (Joshi et al., 2020) 6 and Anglo-Western culture.",
"Finally, communicative goals cover what people want to achieve with their language use, e.g., information, decision making, social chitchat, negotiation, etc.",
"SFL represents this factor as multiple metafunctions of language.",
"Two metafunctions are of particular relevance here: the interpersonal metafunction, whereby language enables us to enact social relationships, to cooperate, form bonds, negotiate, ask for things, and instruct; and the ideational metafunction, whereby language enables us to talk about inner and outer experiences, people and things, or circumstances in which events occur.",
"Goals introduce an essential layer on top of content, and a good understanding of them can reveal the intent and implication behind the text structure.",
"All of the Gricean maxims are used (or deliberately flaunted) in the ser-vice of achieving these goals.",
"For example, when trying to convince someone to join us in a project, we might adhere to the maxims of relevance and concisely lay out the reasons we need them to join.",
"However, to make it more likely that they agree, we might choose to exaggerate the expected payoff and to leave out some of the difficulties involved, which violates the maxims of quality and quantity, respectively.",
"Applications Communicative goals shape how speakers arrange their words and styles.",
"For in-6 https://thegradient.pub/the-benderru le-on-naming-the-languages-we-study-and-why-it-matters/ stance, text that aims to convince others often uses various persuasion strategies (Yang et al., 2019a; Chen and Yang, 2021), argumentation techniques (Stab and Gurevych, 2014), rhetorical structures (Rapp, 2011), and the exchange of social support (Wang and Jurgens, 2018; Yang et al., 2019b).",
"Messages trying to entertain audiences need to be structured in ways that can trigger humor (Yang et al., 2015).",
"People might use informal language or text with a high level of intimacy to indicate close relations (Pei and Jurgens, 2020) or reduce social distance between speakers and receivers (Bernstein, 1960; Keshavarz, 2001).",
"Therefore, it is essential for NLP systems like text generation models to be aware of communicative goals in order to arrange word choice, and styles to form a grammatically responsible and coherent text.",
"Ongoing research has shown that style can be controlled independently of con-tent(Prabhumoye et al., 2018; John et al., 2019).",
"Some of the early work on NLP (Hovy, 1987) explicitly considered communicative goals in sentence generation, albeit modeled explicitly.",
"More recently, Sap et al. (2020) modeled speaker intent to resolve conversational implicature.",
"Social Factors in Different NLP Tasks When and how, though, should we consider these various social factors for an NLP application?",
"NLP practitioners should feel free to use our social factor taxonomy as a guide to examine what social factors should be used, and whether integrating each confers additional benefits (e.g., better design, performance, user experience, or cultural fit) for their use cases.",
"Different NLP tasks will likely benefit differently from our social factor taxonomy.",
"There is some evidence that the earlier factors (such as speaker and receiver characteristics) can be applied to most tasks, as they are fundamental aspects of language.",
"Social relations and context are likely to apply more to dialogue and text generation tasks than to, say, sentiment analysis.",
"Lastly, high-level factors such as social norms and culture and ideology likely require more research to inform individual applications, but are likely to shape our community approaches.",
"We would be well-advised to incorporate the findings of fields that have studied these issues for longer, such as philosophy, sociology, or sociolinguistics.",
"As NLP tasks and algorithms are being now applied to different aspects of everyday interaction and around the world, how we will equip NLP models with a grounding in social factors becomes extremely important, especially these two dimensions.",
"Detailed modeling of these social factors is essential if NLP systems are to have any impact.",
"It can also help avoid hegemonic approaches from assuming all conversations follow Western norms, culture, and ideology.",
"Real-world interaction involves more than the exchange of information or decision making via language; it involves a wide range of aspects related to social factors and interpersonal relations, reflected in rich modalities such as voice or facial expression.",
"Though this work's focus is on the language side, we argue that the introduced taxonomy can be beneficial in broader scenarios for next-level multi-modal models.",
"Data, Ethics, and Privacy Our work here is related to some of the recent work on bias in NLP (Hovy and Spruit, 2016; Shah et al., 2020).",
"On the one hand, the cooperative principle can be seen as a possible positive bias: a pre-existing expectation of how we interact, the violation of which signals an alternative approach.",
"So far, models do not integrate this positive bias.",
"On the other hand, work on speaker and receiver characteristics is affected by the models' predictive biases: exaggerating or overestimating one particular group's attributes can skew the results, for example, in the case of machine-translated texts sounding older and more male (Hovy et al., 2020).",
"Recently, Blodgett et al. (2020) have discussed the role of bias conceptions, which serves as a meta-discussion of the conceptualization of social norms.",
"Integrating social factors into NLP poses a double challenge: on the one hand, it requires additional data to model those social factors.",
"We need representative annotation samples for, e.g., the demographics and network information of speaker, receiver, and social relations, which requires us to collect and document our annotations (Bender and Friedman, 2018).",
"Social media already contains some information from personal or socially grounded conversations, but other domains might suffer from data sparsity for these factors, and require advances in unsupervised learning or few-shot learning techniques.",
"On the other hand, collecting all this information raises questions about privacy, data protec-tion, and ethics.",
"Some data we need to collect to work with social factors might be personal or protected data, which comes with risks for de-anonymization and privacy leaks.",
"Collecting sensitive data (i.e., membership in a protected category) requires the participants' approval and rigorous procedures to ensure that this information cannot be connected to them individually.",
"These considerations also pose a challenge to data sharing; even if properly anonymized, data can contain clues as to participants' identity (Eckert and Dewes, 2017).",
"We need to strengthen ethical considerations for this emerging direction to guide practice in the field and ensure our models are used in beneficial ways.",
"Evaluation and Metrics A central question in these efforts is How do we evaluate whether NLP models have learned the social factors of language, beyond performance improvements?",
"Current models optimize performance metrics, but these metrics might fail to capture the nuances of NLP systems' understanding when considering social content.",
"Thus, better metrics are needed to measure and visualize such additional benefits introduced by modeling language's social factors.",
"These metrics will become essential to diagnose failure.",
"Failed or improper incorporation of social factors could lead to awkward social consequences.",
"E.g., a system misjudging its social relation to the speaker and being a bit too chummy, or a conversational agent disre-specting social norms of turn taking and formality.",
"To some extent, such problems might be unavoidable: interacting through language is always a trial-and-error process, even for humans.",
"However, such errors become extremely important in high-stakes scenarios, such as inappropriate responses from conversational agents in mental health counseling applications.",
"We need metrics to capture this failure and mechanisms to explain the decision-making process behind socially aware NLP models.",
"Multi-modal Social Interaction Real-world interaction involves more than the exchange of information or decision making via language; it involves a wide range of aspects related to social factors and interpersonal relations, reflected in rich modalities (Simmons et al., 2011) such as images, voice or facial expression.",
"Though this work's focus is on the language side, we argue that the introduced taxonomy can be beneficial in broader scenarios for the next level multi-modal models.",
"In this work, we have argued that there are seven social factors of language that impact NLP applications: speaker, receiver characteristics, social relations, context, social norms, culture and ideology, and communicative goals.",
"At present, NLP models often ignore these factors.",
"We have shown that this ignorance limits the kinds of applications we can tackle.",
"It can also can introduce mistakes, ranging from the hilarious to the severe.",
"However, several extant approaches incorporate these social factors, all of them showing substantial improvements in a wide range of applications.",
"By systematically addressing the social aspects of language as a field, we will improve the performances of existing NLP systems, open up new applications, and increase fairness and usability for all users.",
"This project has partially received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (No. 949944, INTEGRATOR).",
"DY is supported in part by grants from Google and Salesforce.",
"DH is the scientific director of the Data and Marketing Insights Unit at the Bocconi Institute for Data Science and Analysis.",
"We would like to thank Maxwell Forbes, Christoph Purschke, and Maarten Sap for comments on the drafts, as well as the anonymous reviewers who suggested valuable additions."
] |
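To make the conditional-embedding idea referenced above (Hovy, 2015; Lynn et al., 2017) concrete, here is a minimal sketch of a text classifier that conditions on speaker attributes. It is not the cited authors' code: the attribute set (age bucket, gender), all dimension sizes, and the class name are illustrative assumptions; the point is simply that speaker-attribute embeddings are learned jointly and concatenated with the text representation before classification.

```python
# Hypothetical sketch (not from the cited papers): conditioning a text
# classifier on speaker attributes via learned embeddings.
import torch
import torch.nn as nn

class SpeakerConditionedClassifier(nn.Module):
    def __init__(self, vocab_size=30000, text_dim=128,
                 num_age_buckets=6, num_genders=3, demo_dim=16, num_labels=2):
        super().__init__()
        # Mean-pooled token embeddings stand in for any text encoder.
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)
        self.age_embed = nn.Embedding(num_age_buckets, demo_dim)
        self.gender_embed = nn.Embedding(num_genders, demo_dim)
        self.classifier = nn.Linear(text_dim + 2 * demo_dim, num_labels)

    def forward(self, token_ids, offsets, age, gender):
        text = self.text_encoder(token_ids, offsets)
        # The same words can be read differently depending on who wrote them:
        # speaker embeddings are concatenated with the text representation.
        features = torch.cat(
            [text, self.age_embed(age), self.gender_embed(gender)], dim=-1)
        return self.classifier(features)

# Toy usage: two documents, flattened for EmbeddingBag, with speaker metadata.
model = SpeakerConditionedClassifier()
token_ids = torch.tensor([1, 2, 3, 4, 5])  # doc 1 = tokens 0..2, doc 2 = 3..4
offsets = torch.tensor([0, 3])
age = torch.tensor([2, 5])      # hypothetical age-bucket ids, one per document
gender = torch.tensor([0, 1])   # hypothetical gender ids, one per document
logits = model(token_ids, offsets, age, gender)  # shape (2, num_labels)
```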
[
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other"
] |
[
"Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text.",
"The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions.",
"Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategiestop-k , nucleus sampling, and untruncated random samplingand show that improvements in decoding methods have primarily optimized for fooling humans.",
"This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems.",
"We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time.",
"Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems.",
"State-of-the-art generative language models are now capable of producing multi-paragraph excerpts that at a surface level are virtually indistinguishable from human-written content (Zellers et al., 2019; Radford et al., 2019; Adelani et al., 2020).",
"Often, only subtle logical fallacies or idiosyncrasies of language give away the text as machine-generated, errors that require a close reading and/or domain knowledge for humans to detect.",
"Deceptive text, whether humanor machine-generated, has entered the sphere of public concern (Cooke, 2018).",
"It propogates quickly (Vosoughi et al., 2018), sets political agendas Equal contribution, Google, University of Pennsylvania (Vargo et al., 2018), influences elections (Allcott and Gentzkow, 2017), and undermines user trust (Wang et al., 2012; Song et al., 2015).",
"Recently, Adelani et al. (2020) have shown that automatically generated reviews are perceived to be as fluent as human-written ones.",
"As generative technology matures, authors, well-meaning or otherwise, will increasingly employ it to augment and accelerate their own writing.",
"It is more imperative now than ever for both humans and automated systems to be able to detect and identify machine-generated texts in the wild.",
"However, there has thus been little inquiry into the textual properties that cause humans to give generated text high human-like ratings compared to those that cause automatic systems to rate it highly.",
"To speak of texts produced by language models, we must first consider how these texts are generated.",
"A neural language model encodes a probability distribution over the next word in a sequence given the previous words.",
"1 A decoding strategy is an algorithm that generates sequences from a language model by determining how words should get selected from this distribution.",
"The field has largely moved toward probabilistic decoding strategies that randomly sample from the output distribution token-by-token.",
"However, when many low-likelihood words cumulatively contain quite a bit of probability mass, choosing one of these words can lead to odd or contradictory phrases and semantic errors.",
"Humans are quick to notice these types of errors.",
"For this reason, it has become common to modify the language model's output probability distribution to increase the chance of sampling tokens with high likelihood according to the language model.",
"Topk random sampling, where low-likelihood words are restricted from being 1 Often these words are actually subword character sequences such as BPE tokens (Sennrich et al., 2016). generated, is one such method. A language model that is only permitted to produce high-likelihood words is less likely to make a poor choice and create the type of mistakes that are easy for humans to detect. Since humans are not proficient at identifying when a model subtly favors some utterances more often than a human author would, they don't notice the over-representation of high-likelihood words in the generated text. In contrast, automatic systems excel at identifying statistical anomalies and struggle to build deeper semantic understanding. Topk in particular creates text that is easy for machines to detect but very hard for humans. Thus, we observe the general trend: as the number of unlikely words available to be chosen is increased, humans get better at detecting fakes while automatic systems get worse . In this work, we study three popular random decoding strategiestopk , nucleus, and temperature samplingapplied to GPT-2 (Radford et al., 2019). We draw a large number of excerpts generated by each strategy and train a family of BERT-based (Devlin et al., 2019) binary classifiers to label text excerpts as human-written or machine-generated. We find large differences in human rater and classifier accuracy depending on the decoding strategy employed and length of the generated sequences. Regardless of strategy, we find human raters achieve significantly lower accuracy than the automatic discriminators. We also show that when a decoding strategy severely modifies the unigram token distribution, as topk does, humans have trouble detecting the resultant generated text, but automatic classifiers find it the easiest to discriminate. Worryingly, we further find that classifiers are brittle; they generalize poorly when trained to discriminate samples from one strategy and then evaluated on samples from another. In summary, our contributions are: A comprehensive study of generated text detection systems' sensitivity to model structure, decoding strategy, and excerpt length.",
"An analysis of human raters' ability to identify machine-generated content, and how human raters differ from automatic detectors.",
"Generative Language Models With a suffi-ciently large training set and number of trainable parameters, neural language models based on the",
"Transformer architecture (Vaswani et al., 2017) are capable of generating convincing, human-like excerpts up to several paragraphs in length.",
"GPT-2 (Radford et al., 2019), GROVER (Zellers et al., 2019), and Transformer-DMCA (Liu et al., 2018) are a few examples of large, publicly available models with this ability.",
"GROVER , in particular, has been shown to generate fake news that is more trustworthy than human-written fake news according to human raters.",
"Human Detection The task of trying to guess whether text is coming from a robot or a fellow human was made famous by the Turing Test (Tur-ing, 1950).",
"It continues to be used is chatbot evaluation (Lowe et al., 2017).",
"The related (but not identical) task of asking human raters to judge the quality of machine-generated excerpts remains the gold-standard for evaluating open-domain generation systems (van der Lee et al., 2019).",
"Kreps et al. (2020), Gehrmann et al. (2019), and others have stressed the importance of humans being able to identify fake content on the web.",
"Automatic Detection The rise of machine-generated content has led to the development of automated systems to identify it.",
"GROVER was designed to not only generate convincing news excerpts but to also identify them using a fine-tuned version of the generative model itself (Zellers et al., 2019).",
"GLTR, expecting attackers to use sampling methods that favor high-likelihood tokens, aims to make machine-generated text detectable by computing histograms over per-token log likelihoods (Gehrmann et al., 2019).",
"Bakhtin et al. (2019) frame human-text detection as a ranking task and evaluate their models' cross-domain and cross-model generalization, finding signifi-cant loss in quality when training on one domain and evaluating on another.",
"Schuster et al. (2019) argue that the language distributional features implicitly or explicitly employed by these detectors are insufficient; instead, one should look to explicit fact-verification models.",
"Finally, discriminators for whether text is machine-generated are a promising research direction in adversarial training (Lin et al., 2017; Li et al., 2017) and in automatic evaluation of generative model quality (Novikova et al., 2017; Kannan and Vinyals, 2017; Lowe et al., 2017).",
"Natural Language Understanding Automatic detection of machine-generated text benefits from a semantic understanding of the text.",
"Contradictions, falsehoods, and topic drift can all indicate that an excerpt was machine-generated.",
"Encoder-only Transformer models such as BERT (Devlin et al., 2019) have been shown to do very well at tasks requiring this understanding.",
"While we fine-tune BERT for the task of classifying whether text was machine-generated, others have used the contextual word embeddings from a pre-trained BERT model without fine-tuning to compute a quality score for generated text (Zhang et al., 2020).",
"It is worth noting that recent work has raised questions as to whether BERT truly builds a semantic understanding to make its predictions, or whether it merely takes advantage of spurious statistical differences between the text of different classes (Niven and Kao, 2019).",
"We frame the detection problem as a binary classification task: given an excerpt of text, label it as either human-written or machine-generated.",
"In particular, we are interested in how variables such as excerpt length and decoding strategy impact performance on this classification task.",
"We thus create several datasets.",
"Each is approximately balanced between positive examples of machine-generated text and negative examples of human-written text.",
"While they all share the same human-written examples, each dataset contains a different set of machine-generated examples sampled using one particular decoding strategy.",
"We also build additional datasets by truncating all of the examples to a particular sequence length, By training a separate classifier on each dataset, we are able to answer questions about which decoding strategy results in text that is the easiest to automatically disambiguate from human-written text.",
"We are also able to answer questions about how the length of the examples in the training set impacts our ability to automatically classify excerpts of that same length as either human-written or machine-generated.",
"All of our generated text samples are drawn from GPT-2, a state-of-the-art Transformer-based generative language model that was trained on text from popular web pages (Radford et al., 2019).",
"While we use the GPT-2 LARGE model with 774M parameters, we found that similar trends to those reported here hold in experiments with smaller language models.",
"Given an autoregressive language model that defines a probability distribution over the next token given the previous tokens in a sequence, a decoding strategy generates text by deciding how to output a token at each step based on the predicted distributions.",
"Perhaps the most straightforward decoding strategy is to randomly choose a token with probability proportional to its likelihood.",
"A challenge with the random sampling approach is that these probability distributions often contain a long tail of vocabulary items that are individually low-probability but cumulatively comprise a substantial amount of probability mass.",
"Holtzman et al. (2020) observe that choosing tokens from this tail often leads to incoherent generations.",
"Topk sampling, nucleus sampling, and (in the extreme) beam search have all been proposed to heuristically promote samples with higher per-token likelihoods.",
"Topk and nucleus sampling both do so by setting the likelihood of tokens in the tail of the distribution to zero.",
"Topk restricts the distribution to all but the k most likely tokens, where k is a constant (Fan et al., 2018).",
"Nucleus sampling, also called topp , truncates the distribution at each decoding step t to the k t -most-likely next tokens such that the cumulative likelihood of these tokens is no greater than a constant p (Holtz-man et al., 2020).",
"We thus consider three different decoding strategy settings: Sample from the untruncated distribution Topk , choosing k =40 (Radford et al., 2019).",
"Nucleus sampling (aka topp ), choosing p =0.96 (Zellers et al., 2019).",
"In addition, we form negative examples of human-written text by taking excerpts of web text that come from the same distribution as GPT-2's training data.",
"2 By picking text that resembles GPT-2's train set, we ensure that our classifiers can't simply take advantage of stylistic differences between the human-written text corpus and the kind of text GPT-2 was trained to generate.",
"For each decoding method, we construct a training dataset by pairing 250,000 generated samples with 250,000 excerpts of web text.",
"5,000 additional paired samples are kept aside for validation and test datasets.",
"Lastly, we filter out excerpts with fewer than 192 WordPiece tokens (Wu et al., 2 https://github.com/openai/ gpt-2-output-dataset 2016) (excerpts might be quite short if the model produces an end-of-text token early on).",
"See Appendix 1 for final dataset sizes.",
"A crucial question when generating text with a language model is whether or not to provide a priming sequence which the language model should continue.",
"Unconditioned samples, where no priming text is provided, in conjunction with topk sampling, lead to pathological behavior for discriminators as the first token of the generated text will always be one of k possible options.",
"On the other hand, if long sequences of human text are used as priming, the space of possible generated sequences is larger, but the detection problem shifts from one of how human-like is the generated text? to how well does the generated text follow the priming sequence?.",
"Since in this study we are interested in the former simpler question, we create two datasets, one with no priming, and one with the minimum amount of priming possible: a single token of web text.",
"This means that for every excerpt of web text in the training set, there is an excerpt of machine-generated text that starts with the same token.",
"We find that even with limited priming, the ability of automatic detectors can be strongly impacted.",
"To study the effect of excerpt length, we construct variations of the above datasets by truncating all excerpts to ten possible lengths ranging from 2 to 192 WordPiece tokens (Wu et al., 2016).",
"In total, we obtain sixty dataset variations: one per sampling method, truncation length, and choice of priming or no priming.",
"The primary discriminator we employ is a finetuned BERT classifier (Devlin et al., 2019).",
"We fine-tune one instance of BERT per dataset variation described above.",
"For the longest sequence length, n =192, we compare BERT's performance with several simple baselines that have been proposed in other work.",
"Fine-tuned BERT We fine-tune BERT-LARGE (cased) on the task of labeling a sentence as humanor machinegenerated.",
"The models are trained for 15 epochs, with checkpoints saved every 1000 steps, and a batch size of 256.",
"All results are reported on the test set using the checkpoint for which validation accuracy was highest.",
"Bag-of-Words For each sequence, we compute a bag-of-words embedding where each dimension corresponds to a token in GPT-2's 50,000 token BPE vocabulary (Sennrich et al., 2016), and we count how many times that token appears in the text sequence.",
"We then train a logistic regression binary classifier to predict humanor machine-written given this 50,000-dimensional embedding.",
"We experimented with truncating embedding size by removing entries for infrequent vocabulary words, but this did not improve performance.",
"Histogram-of-Likelihood Ranks Following GLTR (Gehrmann et al., 2019), we compute the probability distribution of the next word given the previous words in a text sequence according to a trained language model (in our case the same GPT-2 model that was used for generation).",
"At each sequence position, we rerank the vocabulary words by likelihood, and record the rank of the ground-truth next word within this list.",
"These ranks are then binned.",
"GLTR uses four bins, counting (1) the number of times the top 1 word is seen, (2) the number of times words ranked 2 through 5 are seen, (3) words ranked 6-100, and (4) words ranked > 100.",
"However, we observe higher accuracy when 50 bins are spread uniformly over the possible rankings.",
"This means that since there are 50,000 vocabulary words, the first bin counts the number of times the actual next word was within the 1,000 mostly likely next words, the second bin counts the 1,001-2,000th, and so on.",
"We then train logistic regression binary classifiers to predict humanor machine-written given either the 4-dimensional histograms or 50-dimensional histograms as input.",
"Total Probability Solaiman et al. (2019) propose a very simple baseline consisting of a threshold on the total probability of the text sequence.",
"An excerpt is predicted as machine-generated if its likelihood according to GPT-2 is closer to the mean likelihood over all machine-generated sequences than to the mean of human-written ones.",
"The human evaluation task is framed similarly to the automatic one.",
"We ask the raters to decide whether a passage of text was written by a human or by a computer algorithm.",
"(Full instructions are in the Appendix.)",
"Raters are allowed to choose between four options: definitely or possibly machine-generated and definitely or possibly human-written.",
"They are first shown an excerpt of length 16 WordPiece tokens.",
"After they make BERT BagOfWords HistGLTRBuckets Hist50Buckets TotalProb Human Method acc AUC acc AUC acc AUC acc AUC acc acc k40-1wordcond 0.88 0.99 0.79 0.87 0.52 0.52 0.69 0.76 0.61 0.64 p0.96-1wordcond 0.81 0.89 0.60 0.65 0.53 0.56 0.54 0.56 0.63 0.77 p1.0-1wordcond 0.79 0.92 0.59 0.62 0.53 0.55 0.54 0.55 0.65 0.71 Table 1: Performance (accuracy and AUC) of the fine-tuned BERT classifier and several simple baselines on detecting length-192 sequences generated with one word of priming (1worccond).",
"a guess, the length of the excerpt is doubled, and they are asked the same question again.",
"This continues until the entire passage of length 192 tokens is shown.",
"Passages are equally likely to be human-written or machine-generated, with the machine-generated excerpts being evenly split between the three sampling strategies considered in this paper.",
"Initially, Amazon Mechanical Turk (AMT) raters were employed for this task, but rater accuracy was poor with over 70% of the definitely votes cast for human despite the classes being balanced.",
"Accuracy, even for the longest sequences, hovered around 50%.",
"The same study was then performed with university students who were first walked through ten examples (see Appendix Table 4) as a group.",
"Afterward, they were asked to complete the same tasks that had been sent to the AMT workers.",
"No additional guidance or direction was given to them after the initial walk-through.",
"We will refer to this group as the expert raters.",
"Among them, 52.1% of def-initely votes were cast for human, and accuracy on the longest excerpt length was over 70%.",
"The human evaluation dataset consisted of 150 excerpts of web text and 50 excerpts each from the three decoding strategies.",
"Each question was shown to at most three raters, leading to 900 total annotations from the untrained workers and 475 from the expert raters.",
"A more detailed breakdown can be found in the Appendix.",
"Simple Baselines Table 1 shows the performance of the baseline discriminators on length-192 sequences, as compared with fine-tuned BERT.",
"Reassuringly, BERT far surpasses all simple baselines, indicating that it is not fully possible to solve the detection problem without complex sequence-based understanding.",
"The simplest baseline, TotalProb, which makes a decision based on the likelihood of the sequence, performs surprisingly well (over 60% accuracy for all sampling methods) relative to the methods which involve training logistic regression models.",
"Logistic regression on bag-of-words is the best of the baselines, beating out the histogram-based methods.",
"While Gehrmann et al. (2019) report an AUC of 0.87 on classifying text as real or generated using logistic regression on the four buckets of the GLTR system, we report AUC between 0.52 and 0.56 for this task.",
"The discrepancy is likely due to the fact that the human-written text in our discriminator training set comes from the same distribution as the text used to train the language model, while in GLTR the human text comes from children's books, scientific abstracts, and newspaper articles.",
"The selection of training data for learned detection systems is crucial.",
"In real-world applications, the choice ought to reflect the genres that builders of text-generation systems are trying to impersonate.",
"Fine-tuned BERT In Figure 1a, we begin by observing discriminator accuracy as a function of excerpt length and sampling method.",
"As can be intuitively expected, as sequence length increases, so too does accuracy.",
"For unconditioned text decoded with nucleus (p0.96) and untruncated (p1.0) random sampling, we find discriminator accuracy increases from 55%, near random, to about 81% for the longest sequences tested.",
"In contrast, discriminators trained and evaluated on topk achieve over 80% accuracy even on 16-token excerpts.",
"Why are topk 's samples so easy to detect?",
"In Figure 2b, we see the percentage of probability mass concentrated in the k most common token types for each sampling method.",
"While random sampling and nucleus sampling are very similar to human-written texts, we see top-k concentrating up to 80% of its mass in the first 500 most common tokens.",
"The other sampling methods as well as human-written texts require at least 1,100 token types for the same.",
"It is clear that topk 's distribu-50% 55% 60% 65% 70% 75% 80% 85% 90% 95% 100% 0 32 64 96 128 160 192 A cc u r a cy Sequence length in tokens Accuracy of BERT Fine-tuned Discriminator k40-1wordcond k40-nowordcond p0.96-1wordcond p0.96-nowordcond p1.0-1wordcond p1.0-nowordcond",
"tion over unigrams strongly diverges from human-written textsan easy feature for discriminators to exploit.",
"In fact, See et al. (2019) note that it takes setting k to 1000 to achieve about the same amount of rare word usage and fraction of non-stopword text as as human writing.",
"3 This makes it very easy for the model to pick out machine-generated text based on these distributional differences.",
"One way to help resolve this problem is to add priming text.",
"Doing so causes more rare words to be incorporated into the topk of the unigram distribution.",
"Adding even a single human word of priming significantly reduces the performance of detectors trained with topk random sampling.",
"Without priming, a discriminator trained on sequences of length 2 can classify with 90% accuracy the provenance of the text (Figure 1a).",
"By adding one priming token, accuracy drops to 65%.",
"Even on the longest 192-length sequences, topk discriminator accuracy is 6% lower on the primed dataset than the unprimed one.",
"When generating with nucleus or untruncated random sampling, adding a priming token is not as impactful, as these methods are already sampling from a large fraction (or all) of the probability distribution.",
"This is seen in Figure 2a where at the very first step of unprimed generation, nucleus sampling selects from 3075 possible vocabulary words, and at later positions selects from on 3 when decoding from the GPT-2 small model with 117M parameters.",
"Transferability In Table 2, we show how discriminators trained with samples from one decoding strategy can transfer at test time to detecting samples generated using a different decoding strategy.",
"Unsurprisingly a discriminator trained on topk generalizes poorly to other sampling methods: accuracy drops to as low as 42.5%, worse than chance .",
"Conversely, training the discriminator with sequences sampled from the untruncated distribution leads to little transferability to detecting topk samples.",
"Only the discriminator trained with nucleus sampling (a compromise between unmodified sampling and topk ) was able to detect sequences from the other sampling strategies without too much of a hit to accuracy.",
"As expected, a discriminator trained on an equal portion of data from each decoding method does reasonably at detecting all three.",
"Perhaps this lack of transferability is related to each discriminator's calibration.",
"Indeed, the degree to which a discriminator's average prediction deviates from 50% is a direct indicator of its accuracy.",
"In Table 3, we observe that of the three BERT discriminators, only that trained on topp samples predicts machine-generated' on approximately 50% of in-domain examples as expected.",
"This same discriminator's behavior holds on datasets generated by other sampling strategies 0 50 100 150 200 Position in sequence 500 1000 1500 2000 2500 3000 3500 4000 4500 k Mean k Chosen at each Position during Generation with Nucleus Sampling p0.96-nowordcondp0.96-1wordcond",
"as well.",
"In contrast, we observe that discriminators trained on top-k and untruncated random samples severely underestimate the percentage of machine-generated excerpts in out-of-domain datasets.",
"Even within domain (Figure 1b), we find both discriminators heavily favor a single class, in-Eval topk nucleus random T r a i n top-k 60.9 27.9 14.5 nucleus 49.2 51.7 48.9 random 7.3 22.6 38.3 Table 3: Average probability of machine-generated' according to each length-192 discriminator.",
"Human Evaluation Overall human performance across all sampling methods is shown in Figure 3b.",
"Even with the multi-paragraph 192-length excerpts, human performance is only at 71.4%, indicating that even trained humans struggle to correctly identify machine-generated text over a quar-Truth Raters p1.0 k40 p0.96 Truth Raters p1.0 k40 p0.96 H M H H M H H M M M",
"ter a time.",
"However, it is worth noting that our best raters achieved accuracy of 85% or higher, suggesting that it is possible for humans to do very well at this task.",
"Further investigation is needed into how educational background, comfort with English, participation in more extensive training, and other factors can impact rater performance.",
"To break up the accuracies by sampling method in a way that is comparable to the results shown for the automatic discriminators, we pair each machine-generated example with a randomly selected one of webtext to create a balanced dataset for each sampling strategy.",
"Performance is shown in Figure 3a.",
"Topk produces the text that is hardest for raters to correctly distinguish, but as shown in Section 7, it is the easiest for our automatic detection systems.",
"Samples from untruncated random sampling and nucleus sampling with p =0.96 are equivalently difficult for raters to classify as machine-generated.",
"Our human evaluation results suggest that much lower p -values than the 0.92 to 0.98 range proposed in Zellers et al. (2019) might be necessary in order to generate text that is considered significantly more human-like to human raters than the text produced by using the untruncated distribution.",
"Table 4 gives several examples where human raters and our BERT-based discriminators disagreed.",
"When raters incorrectly labeled human-written text as machine-generated, often the excerpts contained formatting failures introduced when the HTML was stripped out.",
"In the mid-dle two examples, topic drift and falsehoods such as Atlanta being the information hub of the na-tion's capital allowed humans to correctly detect the generated content.",
"However, in the bottom two examples, the high level of fluency left human raters fooled.",
"Overall we find that human raterseven expert trained oneshave consistently worse accuracy than automatic discriminators for all decoding methods and excerpt lengths.",
"In our experiments, randomly-selected pairs of raters agree with each other on a mere 59% of excerpts on average.",
"(In comparison, raters and discriminators agree on 61% to 70% of excerpts depending on the discriminator considered).",
"We surmise that the gap between human and machine performance will only grow as researchers inevitably train bigger, better detection models on larger amounts of training data.",
"While improved detection models are inevitible, it is unclear how to go about improving human performance.",
"GLTR proposes providing visual aids to humans to improve their performance at detecting generated-text, but it is unlikely that their histogram-based color-coding will continue to be effective as generative methods get better at producing high-quality text that lacks statistical anomalies.",
"In this work, we study the behavior of automated discriminators and their ability to identify machine-generated and human-written texts.",
"We train these discriminators on balanced binary classification datasets where all machine-generated excerpts are drawn from the same generative model but with different decoding strategies.",
"We find that, in general, discriminators transfer poorly between decoding strategies, but that training on a mix of data from methods can help.",
"We also show the rate at which discriminator accuracy increases as excerpts are lengthened.",
"We further study the ability of expert human raters to perform the same task.",
"We find that rater accuracy varies wildly, but has a median of 74%, which is less than the accuracy of our best-performing discriminator.",
"Most interestingly, we find that human raters and discriminators make decisions based on different qualities, with humans more easily noticing semantic errors and discriminators picking up on statistical artifacts.",
"In our experiments, these artifacts are most prominent with topk sampling.",
"However, any strategy that over-samples high-likelihood words is susceptible.",
"As the p in nucleus sampling is set increasingly lower to achieve more fluent text (some systems are already using p as low as 0.5 (Miculicich et al., 2019)), the distributional deviations that plague topk text will surface in nucleus sampling as well.",
"Holtzman et al. (2020) explain how a unique attribute of human language is that it dips in and out of low probability zones.",
"This variance in likelihood is what makes human-written text interesting and exciting to read.",
"Today's generation systems have not yet solved the problem of mimicking the human cadence without introducing poor word choices that are easy for humans to detect.",
"Generation systems often optimize for fooling humans without acknowledging the trade-off that exists between human perception of quality and ease of automatic detection.",
"We therefore suggest three prongs for future research:",
"1. Identifying ways to improve the language models and decoding strategies we use in order to generate text that is both exciting (ie. unlikely) and semantically plausible.",
"2. Building better world understanding into automatic discriminators so that they are more capable of detecting the types of errors that humans notice.",
"3. Developing tools and educational materials to improve humans' ability to detect machine-generated text.",
"These may include automatic detectors with components that explain their predictions.",
"Finally, we would like to note that all of our experiments were performed with English language models, and it remains an open question how the trade-off between ease of human detection and ease of automatic detection might differ for languages that are very different from English.",
"This research is based upon work supported in part by U.S. DARPA KAIROS Program No.",
"FA8750-19-2-1004.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.",
"We also thank Noah Fiedel, Peter Liu, Sharan Narang, Joao Sedoc, Yun William Yu, and Hugh Zhang for their valuable feedback."
] |
[
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other"
] |
[
"Zero pronoun recovery and resolution aim at recovering the dropped pronoun and pointing out its anaphoric mentions, respectively.",
"We propose to better explore their interaction by solving both tasks together, while the previous work treats them separately.",
"For zero pronoun resolution, we study this task in a more realistic setting, where no parsing trees or only automatic trees are available, while most previous work assumes gold trees.",
"Experiments on two benchmarks show that joint modeling significantly outperforms our baseline that already beats the previous state of the arts.",
"Our code is available at https://github.com/ freesunshine0316/lab-zp-joint .",
"Zero pronoun (ZP) is a linguistic phenomenon where a pronoun is dropped for simplicity.",
"Figure 1 shows an example, where two pronouns at positions \u0000 1 and \u0000 2 are omitted.",
"They both refer to f (The police) in the sentence beginning and their original form is (they).",
"The situation of dropping pronouns happens in most languages.",
"While this phenomenon is not frequent in non-pro-drop languages, such as English, it is extremely severe for pro-drop languages, such as Chinese.",
"In addition, dropped pronouns happens more frequently in conversations than in news.",
"Our preliminary statistics of Chinese shows that 59.2% pronouns are dropped in a corpus of casual dialogues domain, while the number is just 41.6% in another data of broadcast news.",
"In NLP, dropped pronouns can cause loss of important information, such as the subject or object of the central predicate in a sentence, introducing ambiguity to applications such as machine translation (Nakaiwa and Shirai, 1996; Wang et al., 2016; Takeno et al., 2016), question answering (Choi et al., 2018; Reddy et al., 2019; Sun et al., 2019; [ f ] \u0000 / \u0000 w H \u0000 \u0000 1 \u0000 \u0000 2 H [ The police ] suspected that this is a criminal case about illegal guns , \u0000 1 brought the guns and bags to the city \u0000 2 to deal with the case . Figure 1: An zero pronoun example and its English translation, where \u0000 1 and \u0000 2 are zero pronouns pointing to the span in square brackets. Chen and Choi, 2016) and dialogue understanding (Chen et al., 2017; Rolih, 2018).",
"As a result, zero pronouns have recently received much research attention (Liu et al., 2017; Yin et al., 2018a,b).",
"We study Chinese zero pronoun in dialogue settings.",
"There are two long-existing tasks namely zero pronoun recovery , which aims at recovering the original pronoun (such as (he) and y (she)), and zero pronoun resolution , where the goal is to pinpoint the mention that each dropped pronoun refers to.",
"Intuitively, the results of the two tasks highly interact with each other.",
"Taking Figure 1 as an example, it will be much easier to resolute \u0000 1 to f (The police) rather than H (crim-inal case about illegal guns) if we know \u0000 1 corresponds to (they).",
"Similarly, it would be more likely to recover \u0000 1 as (they) than other candidate pronouns, if we know \u0000 1 points to f (The police).",
"Despite their high correlation, previous work considers them as irrelevant tasks, solving them separately by different models.",
"This can waste training resources, as each task has a limited number of labeled instances, and thus data sparsity can limit model performance.",
"Besides, we believe that it is unnecessary to keep a specific model for each task, as they can be close enough to be solved together.",
"In addition, most zero pronoun resolution research (Chen and Ng, 2013, 2016; Kong and Zhou, 2010; Iida and Poesio, 2011; Sasano et al., 2008; Yin et al., 2018b; Yang et al., 2019) assumes gold trees being available with the positions of zero pronouns, which is unrealistic in practical applications.",
"During decoding, a zero pronoun resolution model has to rely on automatic trees and zero pronoun detection, thus suffering from error propagation.",
"In this paper, we propose to jointly solve both tasks under a heterogeneous multi-task learning framework, where each data point only has the annotation of one task, to benefit from the supervised data of both tasks.",
"As the result, we enjoy the benefit of more supervised training data.",
"To improve the robustness of heterogeneous training and introduce more supervision, we introduce zero pronoun detection , a common sub-task for both ZP resolution and recovery.",
"Zero pronoun detection is a binary-classification task aiming to detect whether a word space has a dropped pronoun.",
"We consider ZP recovery as a sequence labeling task, regarding whether each word space has a dropped pronoun and what type the pronoun is.",
"ZP resolution is solved as extractive reading comprehension (Rajpurkar et al., 2016), where each word space is taken as a query and its anaphoric mentions are treated as the answers.",
"For non-ZP spaces where there is no corresponding anaphoric mentions, we assign the sentence beginning (span [0,0]) as the answer.",
"Experiments on two benchmarks, OntoNotes 5.0 1 (ZP resolution) and BaiduZhdiao (Zhang et al., 2016) (ZP recovery), show that joint modeling gives us 1.5+ absolute F1-score gains for both tasks over our very strong baselines using BERT (Devlin et al., 2019).",
"Our overall system gives an dramatic improvement of 3.5 F1 points over previous state-of-the-art results on both tasks.",
"Previous work considers zero pronoun resolution and recovery separately.",
"For zero pronoun recovery, existing methods can be classified according to the types of annotations they use.",
"One line of work (Yang et al., 2015, 2019) simply relies on the human annotations, solving the task as sequence labeling.",
"The other line of work (Chung and Gildea, 2010; Xiang et al., 2013; Wang et al., 2016) mines weak supervision signals from a large bilingual parallel corpus, where the other language is non-pro-drop with fewer pronoun drops.",
"The latter requires massive training data, and the MT performance is 1 https://catalog.ldc.upenn.edu/LDC2013T19 the primary goal, thus we follow the first line of research using human-annotated data.",
"Rao et al. (2015) studied zero pronoun resolution in multi-turn dialogues, claiming that their model does not rely on parsing trees to extract ZP positions and noun phrase as resolution candidates.",
"However, they only consider the dropped pronouns that correspond to one of the dialogue participant.",
"As a result, they only explore a small subset of the entire ZP resolution problem, and their task is closer to zero pronoun recovery.",
"Most similar to our work, Liu et al. (2017) converted zero pronoun resolution as a machine reading comprehension task (Rajpurkar et al., 2016) in order to automatically construct a large-scale pseudo dataset for model pretraining.",
"However, their model finetuning and evaluation with benchmark data still rely on human-annotated trees and gold zero pronoun positions.",
"As a result, it is still uncertain what performance a model can achieve without such gold inputs.",
"We address both issues in the joint task.",
"Our work is inspired by the recent advances of heterogeneous multi-task learning using BERT (De-vlin et al., 2019), which combines the supervised data of several related tasks to achieve further improvements.",
"In particular, Liu et al. (2019) utilize this framework to jointly solve GLUE tasks (Wang et al., 2019).",
"But their experiments show that multitask learning does not help across all tasks.",
"Our work takes a similar spirit, and our contribution is mainly on the zero pronoun tasks.",
"In addition, we find that it helps the robustness of multi-task learning to add a common sub-task (e.g. zero pronoun detection in our case) for additional supervision and alleviating annotation variances, if such a subtask is available.",
"As shown in Figure 2, we model ZP recovery ( f rec ), ZP resolution ( f res ), and the auxiliary ZP detection ( f det ) task with multi-task learning, where BERT (Devlin et al., 2019) is used to represent each input sentence s 1 . . . s N of N words to provide shared features.",
"ZP recovery is to restore any dropped pronouns for an input text.",
"Since pronouns are enumerable (e.g. there are 10 types for Chinese), we cast this task into a classification problem for each word space.",
"Taking some shared input representations BERT ... ... ... ...",
"h 0 , h 1 , . . . , h N , the probability for recovering pronoun p i at the space between s i \u0000 1 and s i is: p ( p i | X, i ) = softmax( W r h i + b r ) (1) where W r and b r are model parameters.",
"Our zero pronoun resolution task is to predict the span that each dropped pronoun points to, while the gold ZP positions are not available.",
"One potential solution is executing zero pronoun recovery first and utilize that information, while this introduces error propagation.",
"Conversely, we manually assign span (0,0) for non-ZP positions.",
"This will not introduce conflicts, as position 0 corresponds to the special token [CLS] for BERT encoding and thus no real spans can be (0,0).",
"We cast the resolution task for each word space (such as between s i \u0000 1 and s i ) as machine reading comprehension (MRC) (Rajpurkar et al., 2016), where a resolution span corresponds to a MRC target answer.",
"Following previous work on MRC, we separately model the start ( r sti ) and end ( r edi ) positions for each span with self-attention: p ( r sti | X, i ) = SelfAttn st ( H , h i ) p ( r edi | X, i ) = SelfAttn ed ( H , h i ) (2) where H = [ h 0 , . . . , h N ] is the concatenation of all word states, and SelfAttn st () and SelfAttn ed () are the self-attention modules for predicting the start and end positions of each ZP resolution span.",
"The probability for the whole span r i is: p ( r i | X, i ) = p ( r sti | X, i ) p ( r edi | X, i ) (3) 3.3 Auxiliary task: zero pronoun detection We also introduce pronoun detection as an auxiliary task to enhance multi-task training.",
"This task is to determine whether each word space has a dropped pronoun.",
"Similar with zero pronoun recovery, we formulate it as binary classification: p ( d i | X, i ) = softmax( W d h i + b d ) (4) where d i is the binary detection result.",
"Given an input sentence s 1 , . . . , s N , we use BERT to encode them into a sequence of input features shared across all our tasks.",
"We append the [CLS] token to inputs, before sending them to BERT.",
"Our task features are represented as h 0 , h 1 , . . . , h N , where h 0 corresponds to token [CLS] .",
"We train our model on the combined and shuffled data of both tasks to leverage more supervision signals.",
"Each data instance only contains the annotation of either ZP recovery or resolution, thus the loss for one example is defined as: loss = \u0000 X i 2 1",
"..N log p (p i | X, i ) \u0000 \u0000 log p (r i | X, i ) \u0000 \u0000 log p (d i | X, i ) (5) where , \u0000 and \u0000 are the coefficients for the tasks.",
"For and \u0000 , the value of is 1 if the corresponding supervision exists, otherwise it is 0.",
"We empirically set the value of \u0000 to 0.1, as the supervision of ZP detection exists for all instances, and we do not want this auxiliary loss signal to be too strong.",
"We study the effectiveness of jointly modeling ZP resolution, recovery and detection.",
"We take two benchmark datasets: BaiduZhidao (Zhang et al., 2016), a benchmark for ZP recovery, and OntoNotes 5.0, a benchmark for ZP resolution.",
"For BaiduZhidao, we use the version cleaned by Yang et al. (2019), containing 5504, 1175 and 1178 instances for training, development and testing, respectively.",
"OntoNotes 5.0 has 36487 training and 6083 testing instances, and we separate 20% training instances for development.",
"Method Auto Tree + Auto ZP P R F Our model 30.96 22.51 26.07 w/ auto tree cons.",
"36.13 32.32 34.12 Table 3: Resolution using automatic trees as constraint.",
"We choose the official pretrained Chinese BERT-base model 2 .",
"Models are trained with Adam (Kingma and Ba, 2014) with a learning rate of 10 \u0000 5 and a warm-up proportion of 10%.",
"To avoid overfitting, we apply l 2 norm for BERT parameters with a coefficient of 0.01.",
"Models are selected by early stopping with development results.",
"Table 1 shows the results for both resolution and recovery tasks, where ZPMN and NDPR-W show the state-of-the-art performances without relying on any gold syntactic information.",
"ZPMN treats zero pronoun resolution as a classification task over noun phrase candidates, and the final result is selected using an attention mechanism.",
"NDPR-W studies zero pronoun recovery in dialogues by modeling all dialogue history.",
"For our models, BERT represents finetuning BERT only on one task, BERT-MTL means jointly finetuning BERT on both tasks with multi-task learning (as shown in Figure 2), and BERT-MTL w/ detection is our model with auxiliary detection loss.",
"Using BERT already gives us much better performances than the previous state-of-the-art results.",
"Initial usage of heterogeneous multi-task learning helps ZP resolution, while hurting ZP recovery, 2 https://github.com/google-research/bert and one potential reason is that the ZP resolution dataset (OntoNotes 5.0) has much more instances than the ZP recovery dataset (BaiduZhidao).",
"This problem is alleviated by introducing the auxiliary ZP detection task due to the following possible reasons.",
"Most importantly, ZP detection is very close to ZP recovery (binary vs multi-class), thus this extra supervision helps to alleviate the data magnitude imbalance problem.",
"Besides, ZP detection introduces more useful training signals to the overall training process.",
"We also evaluate on other previously studied settings, where gold trees or even gold ZP positions are given.",
"As ZPMN also reported strong performances cross these settings, we take this model as a baseline for comparison.",
"Using gold trees and ZP positions Since most previous work on ZP resolution uses gold syntactic trees and/or ZP positions, we also investigate our performance under these settings.",
"In particular, we take the noun phrases and/or ZP positions from gold trees to serve as constraints.",
"Besides, our model is only trained on the ZP positions when they are given.",
"Table 2 shows the results, AttentionZP gives the previous state-of-the-art performance under the Gold Tree + Gold ZP setting.",
"Our model outperforms AttentionZP by a significant margin.",
"Beside, we also report the best performance, which significantly outperforms the previous best system ( ZPMN ) under the Gold Tree + Auto ZP setting, where only gold trees are available.",
"Effectiveness of automatic trees Currently, our model considers all free spans when making a resolution decision.",
"Using automatic tree can greatly limit the search space, while that could introduce errors.",
"We conduct a preliminary comparison as shown in Table 3, where such a constraint dramatically helps the performance.",
"But, this is based on the assumption that the target-domain syntactic parsing is very accurate, as our ZP resolution data (OntoNotes 5.0) is mostly collected from broadcast news.",
"The F1 score using automatic trees (34.12) is close to the score using gold trees (36.55 in Table 2), which also indicates the conjecture above.",
"As a result, we may expect a performance drop for web and biomedical domains, where the parsing accuracies are much lower.",
"We studied the effectiveness of jointly modeling ZP recovery and resolution using the recently introduced multi-task learning + BERT framework.",
"To alleviate the data magnitude imbalance problem, we introduce ZP detection as a common auxiliary sub-task for extra supervision.",
"Experiments on two benchmarks show that our model is consistently better than previous results under various settings, and that the auxiliary ZP detection sub-task can make the training process more robust."
] |
[
"abstain",
"objective",
"method",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"method",
"abstain",
"result"
] |
[
"Knowledge graph embedding methods often suffer from a limitation of memorizing valid triples to predict new ones for triple classification and search personalization problems.",
"To this end, we introduce a novel embedding model, named R-MeN, that explores a relational memory network to encode potential dependencies in relationship triples.",
"R-MeN considers each triple as a sequence of 3 input vectors that recurrently interact with a memory using a transformer self-attention mechanism.",
"Thus R-MeN encodes new information from interactions between the memory and each input vector to return a corresponding vector.",
"Consequently, R-MeN feeds these 3 returned vectors to a convolutional neural network-based decoder to produce a scalar score for the triple.",
"Experimental results show that our proposed R-MeN obtains state-of-the-art results on SEARCH17 for the search personalization task, and on WN11 and FB13 for the triple classification task.",
"Knowledge graphs (KGs) representing the genuine relationships among entities in the form of triples (subject, relation, object) denoted as (s, r, o) are often insufficient for knowledge presentation due to the lack of many valid triples (West et al., 2014).",
"Therefore, research work has been focusing on inferring whether a new triple missed in KGs is likely valid or not (Bordes et al., 2011, 2013; Socher et al., 2013).",
"As summarized in (Nickel et al., 2016; Nguyen, 2017), KG embedding models aim to compute a score for each triple, such that valid triples have higher scores than invalid ones.",
"Early embedding models such as TransE (Bordes et al., 2013), TransH (Wang et al., 2014), TransR (Lin et al., 2015), TransD (Ji et al., 2015), DIST-MULT (Yang et al., 2015) and ComplEx (Trouil-lon et al., 2016) often employ simple linear operators such as addition, subtraction and multiplication.",
"Recent embedding models such as ConvE (Dettmers et al., 2018) and CapsE (Nguyen et al., 2019b) successfully apply deep neural networks to score the triples.",
"Existing embedding models are showing promising performances mainly for knowledge graph completion, where the goal is to infer a missing entity given a relation and another entity.",
"But in real applications, less mentioned, such as triple classification (Socher et al., 2013) that aims to predict whether a given triple is valid, and search personalization (Vu et al., 2017) that aims to re-rank the relevant documents returned by a user-oriented search system given a query, these models do not effectively capture potential dependencies among entities and relations from existing triples to predict new triples.",
"To this end, we leverage the relational memory network (Santoro et al., 2018) to propose R-MeN to infer a valid fact of new triples.",
"In particular, R-MeN transforms each triple along with adding positional embeddings into a sequence of 3 input vectors.",
"R-MeN then uses a transformer self-attention mechanism (Vaswani et al., 2017) to guide the memory to interact with each input vector to produce an encoded vector.",
"As a result, R-MeN feeds these 3 encoded vectors to a convolutional neural network (CNN)-based decoder to return a score for the triple.",
"In summary, our main contributions are as follows: We present R-MeN a novel KG embedding model to memorize and encode the potential dependencies among relations and entities for two real applications of triple classification and search personalization.",
"Experimental results show that R-MeN obtains better performance than up-to-date embedding models, in which R-MeN produces new state-of-the-art results on SEARCH17 for the search personalization task, and a new highest accuracy on WN11 and the second-highest accuracy on FB13 for the triple classification task.",
"2 The proposed R-MeN Embedding Positional Encoding s r o CNN score MMLP g MMLP g MMLP g + + + Figure 1: Processes in our proposed R-MeN for an illustration purpose.",
"Let G be a KG database of valid triples in the form of (subject, relation, object) denoted as (s, r, o) .",
"KG embedding models aim to compute a score for each triple, such that valid triples obtain higher scores than invalid triples.",
"We denote v s , v r and v o R d as the embeddings of s , r and o , respectively.",
"Besides, we hypothesize that relative positions among s , r and o are useful to reason instinct relationships; hence we add to each position a positional embedding.",
"Given a triple (s, r, o) , we obtain a sequence of 3 vectors { x 1 , x 2 , x 3 } as: x 1 = W ( v s + p 1 ) + b x 2 = W ( v r + p 2 ) + b x 3 = W ( v o + p 3 ) + b where W R k d is a weight matrix, and p 1 , p 2 and p 3 R d are positional embeddings, and k is the memory size.",
"We assume we have a memory M consisting of N rows wherein each row is a memory slot.",
"We use M ( t ) to denote the memory at timestep t , and M ( t ) i, : R k to denote the i -th memory slot at timestep t .",
"We follow Santoro et al. (2018) to take x t to update M ( t ) i, : using the multi-head self-attention mechanism (Vaswani et al., 2017) as: M ( t +1) i, : = [ M ( t +1) , 1 i, : M ( t +1) , 2 i, : ... M ( t +1) ,H i, : ] with M ( t +1) ,h i, : = i,N +1 ,h (cid:16) W h,V x t (cid:17) + N (cid:88) j =1 i,j,h (cid:16) W h,V M ( t ) j, : (cid:17) where H is the number of attention heads, and denotes a vector concatenation operation.",
"Regarding the h -th head, W h,V R n k is a value-projection matrix, in which n is the head size and k = nH .",
"Note that { i,j,h } Nj =1 and i,N +1 ,h are attention weights, which are computed using the softmax function over scaled dot products as: i,j,h = exp ( i,j,h ) (cid:80) N +1 m =1 exp ( i,m,h ) i,N +1 ,h = exp ( i,N +1 ,h ) (cid:80) N +1 m =1 exp ( i,m,h ) with i,j,h = (cid:16) W h,Q M ( t ) i, : (cid:17) T (cid:16) W h,K M ( t ) j, : (cid:17) n i,N +1 ,h = (cid:16) W h,Q M ( t ) i, : (cid:17) T (cid:0) W h,K x t (cid:1) n where W h,Q R n k and W h,K R n k are query-projection and key-projection matrices, respectively.",
"As following Santoro et al. (2018), we feed a residual connection between x t and M ( t +1) i, : to a multi-layer perceptron followed by a memory gating to produce an encoded vector y t R k for timestep t and the next memory slot M ( t +1) i, : for timestep ( t + 1) .",
"As a result, we obtain a sequence of 3 encoded vectors { y 1 , y 2 , y 3 } for the triple ( s, r, o ) .",
"We then use a CNN-based decoder to compute a score for the triple as: f ( s, r, o ) = max ( ReLU ([ y 1 , y 2 , y 3 ] )) T w where we view [ y 1 , y 2 , y 3 ] as a matrix in R k 3 ; denotes a set of filters in R m 3 , in which m is the window size of filters; w R | | is a weight vector; denotes a convolution operator; and max denotes a max-pooling operator.",
"Note that we use the max-pooling operator instead of the vector concatenation of all feature maps used in ConvKB (Nguyen et al., 2018) to capture the most important feature from each feature map, and to reduce the number of weight parameters.",
"We illustrate our proposed R-MeN as shown in Figure 1.",
"In addition, we employ the Adam optimizer (Kingma and Ba, 2014) to train R-MeN by minimizing the following loss function (Trouillon et al., 2016; Nguyen et al., 2018): L = (cid:88) ( s,r,o ) {GG (cid:48) } log (cid:0) 1 + exp (cid:0) t ( s,r,o ) f ( s, r, o ) (cid:1)(cid:1) in which, t ( s,r,o ) = (cid:26) 1 for ( s, r, o ) G 1 for ( s, r, o ) G (cid:48) where G and G (cid:48) are collections of valid and invalid triples, respectively.",
"G (cid:48) is generated by corrupting valid triples in G .",
"The triple classification task is to predict whether a given triple ( s, r, o ) is valid or not (Socher et al., 2013).",
"Following Socher et al. (2013), we use two benchmark datasets WN11 and FB13, in which each validation or test set consists of the same number of valid and invalid triples.",
"It is to note in the test set that Socher et al. (2013) did not include triples that either or both of their subject and object entities also appear in a different relation type or order in the training set, to avoid reversible relation problems.",
"Table 1 gives statistics of the experimental datasets.",
"Each relation r has a threshold r computed by maximizing the micro-averaged classification accuracy on the validation set.",
"If the score of a given triple ( s, r, o ) is above r , then this triple is classified as a valid triple, otherwise, it is classified as an invalid one.",
"In search personalization, given a submitted query for a user , we aim to re-rank the documents returned by a search system, so that the more the",
"returned documents are relevant for that query, the higher their ranks are.",
"We follow (Vu et al., 2017; Nguyen et al., 2019a,b) to view a relationship of the submitted query, the user and the returned document as a (s, r, o) -like triple (query, user, document) .",
"Therefore, we can adapt our R-MeN for the search personalization task.",
"We evaluate our R-MeN on the benchmark dataset SEARCH17 (Vu et al., 2017) as follows:",
"(i) We train our model and use the trained model to compute a score for each (query, user, document) triple.",
"(ii) We sort the scores in the descending order to obtain a new ranked list.",
"(iii) We employ two standard evaluation metrics: mean reciprocal rank (MRR) and Hits@1.",
"For each metric, the higher value indicates better ranking performance.",
"We use the common Bernoulli strategy (Wang et al., 2014; Lin et al., 2015) when sampling invalid triples.",
"For WN11, we follow Guu et al. (2015) to initialize entity and relation embeddings in our R-MeN by averaging word vectors in the relations and entities, i.e., v american arborvitae = 12 ( v american + v arborvitae ) , in which these word vectors are taken from the Glove 50-dimensional pre-trained embeddings (Pennington et al., 2014) (i.e., d = 50).",
"For FB13, we use entity and relation embeddings produced by TransE to initialize entity and relation embeddings in our R-MeN, for which we obtain the best result for TransE on the FB13 validation set when using l 2 -norm, learning rate at 0.01, margin = 2 and d = 50.",
"Furthermore, on WN11, we provide our new fine-tuned result for TransE using our experimental setting, wherein we use the same initialization taken from the Glove 50-dimensional pre-trained embeddings to initialize entity and relation embeddings in TransE.",
"We get the best score for TransE on the WN11 validation set when using l 1 -norm, learning rate at 0.01, margin = 6 and d = 50.",
"In preliminary experiments, we see the highest accuracies on the validation sets for both datasets when using a single memory slot (i.e., N = 1 ); and this is consistent with utilizing the single memory slot in language modeling (Santoro et al., 2018).",
"Therefore, we set N = 1 to use the single memory slot for the triple classification task.",
"Also from preliminary experiments, we select the batch size bs = 16 for WN11 and bs = 256 for FB13, and set the window size m of filters to 1 (i.e., m = 1 ).",
"Regarding other hyper-parameters, we vary the number of attention heads H in { 1, 2, 3 } , the head size n in { 128, 256, 512, 1024 } , the number of MLP layers l in { 2, 3, 4 } , and the number of filters F = | | in { 128, 256, 512, 1024 } .",
"The memory size k is set to be nH = k .",
"To learn our model parameters, we train our model using the Adam initial learning rate lr in { 1 e 6 , 5 e 6 , 1 e 5 , 5 e 5 , 1 e 4 , 5 e 4 } .",
"We run up to 30 epochs and use a grid search to select the optimal hyper-parameters.",
"We monitor the accuracy after each training epoch to compute the relation-specific threshold r to get the optimal hyper-parameters (w.r.t the highest accuracy) on the validation set, and to report the final accuracy on the test set.",
"We use the same initialization of user profile, query and document embeddings used by Nguyen et al. (2019b) on SEARCH17 to initialize the corresponding embeddings in our R-MeN respectively.",
"From the preliminary experiments, we set N = 1 , bs = 16 and m = 1 .",
"Other hyper-parameters are varied as same as used in the triple classification task.",
"We monitor the MRR score after each training epoch to obtain the highest MRR score on the validation set to report the final scores on the test set.",
"Table 2 reports the accuracy results of our R-MeN model and previously published results on WN11 and FB13.",
"R-MeN sets a new state-of-the-art accuracy of 90.5% that significantly outperforms other models on WN11.",
"R-MeN also achieves a second highest accuracy of 88.9% on FB13.",
"Overall, R-MeN yields the best performance averaged over these two datasets.",
"Regarding TransE, we obtain the second-best accuracy of 89.2% on WN11 and a competitive accuracy of 88.1% on FB13.",
"Figure 2 shows the accuracy results for TransE and our R-MeN w.r.t each relation.",
"In particular, on WN11, the accuracy for the one-to-one relation similar to significantly increases from 50.0% for TransE to 78.6% for R-MeN.",
"On FB13, R-MeN improves the accuracies over TransE for the many-to-many relations insti-tution and profession.",
"Table 3 presents the experimental results on SEARCH17, where R-MeN outperforms up-to-date embedding models and obtains the new highest performances for both MRR and Hits@1 metrics.",
"We restate the prospective strategy proposed by Vu et al. (2017) in utilizing the KG embedding methods to improve the ranking quality of the personalized search systems.",
"Next, we present in Figure 3 the effects of hyper-parameters consisting of the head size n , and the number H of attention heads.",
"Using large head sizes (e.g., n = 1024 ) can produce better performances on all 3 datasets.",
"Additionally, using multiple heads gives better results on WN11 and FB13, while using a single head (i.e., H = 1 ) works best on SEARCH17 because each query usually has a single intention.",
"For the last experiment, we compute and report our ablation results over 2 factors in Table 4.",
"In particular, the scores degrade on FB13 and SEARCH17 when not using the positional embeddings.",
"More importantly, the results degrade on Model WN11 FB13 SEARCH17 Our R-MeN 91.3 88.8 0.792",
"all 3 datasets without using the relational memory network.",
"These show that using the positional embeddings can explore the relative positions among s , r and o ; besides, using the relational memory network helps to memorize and encode the potential dependencies among relations and entities.",
"We propose a new KG embedding model, named R-MeN, where we integrate transformer self-attention mechanism-based memory interactions with a CNN decoder to capture the potential dependencies in the KG triples effectively.",
"Experimental results show that our proposed R-MeN obtains the new state-of-the-art performances for both the triple classification and search personalization tasks.",
"In future work, we plan to extend R-MeN for multihop knowledge graph reasoning.",
"Our code is available at: https://github.com/daiquocnguyen/ R-MeN.",
"This research was partially supported by the ARC Discovery Projects DP150100031 and DP160103934."
] |
[
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"method",
"method",
"method",
"abstain",
"other",
"method",
"result",
"method",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"other",
"other"
] |
[
"Implicit Event Argument Extraction seeks to identify arguments that play direct or implicit roles in a given event.",
"However, most prior works focus on capturing direct relations between arguments and the event trigger.",
"The lack of reasoning ability brings many challenges to the extraction of implicit arguments.",
"In this work, we present a Frame-aware Event Argument Extraction (FEAE) learning framework to tackle this issue through reasoning in event frame-level scope.",
"The proposed method leverages related arguments of the expected one as clues to guide the reasoning process.",
"To bridge the gap between oracle knowledge used in the training phase and the imperfect related arguments in the test stage, we further introduce a curriculum knowledge distillation strategy to drive a final model that could operate without extra inputs through mimicking the behavior of a well-informed teacher model.",
"Experimental results demonstrate FEAE obtains new state-of-the-art performance on the RAMS dataset.",
"In this work, we investigate the problem of Implicit Event Argument Extraction (IEAE) (Ebner et al., 2020), which seeks to identify arguments that play specific roles respect to a given trigger (Chen et al., 2020).",
"Unlike previous event argument extraction task that only processes a single sentence, arguments in IEAE could span multiple sentences.",
"As shown in Figure 1, given a conflict/attack/firearmattack event triggered by the word shooting , an IEAE system is required to extract four corresponding arguments with their roles in brackets: mass murder ( target ), firearms ( instrument ), Andrey Shpagonov ( attacker ), and Tatarstan ( place ).",
"Mainstream methods to extract event arguments focus on learning pair-wise information between arguments and the given trigger.",
"Chen et al. (2015a); Nguyen et al. (2016a); Liu et al. (2018); Sha et al. (2018) cast argument extraction as a relation classification problem to extract pairs of trigger and candidate arguments.",
"Ebner et al. (2020); Zhang et al. (2020b) utilize event trigger as the predicate and leverage semantic role labeling model (Surdeanu et al., 2008; Hajic et al., 2009) to identify arguments.",
"Former state-of-the-art approaches (Du and Cardie, 2020; Li et al., 2020; Zhang et al., 2020a) formulate event argument extraction as a Machine Reading Comprehension (MRC) problem through asking trigger and role-specific questions.",
"Despite the success of these works in single sentence event argument extraction, current methods struggle in IEAE due to the following critical issues: 1.Long-range Dependency: Since arguments could span multiple sentences, there exist long-range and cross-sentence dependencies between arguments and the given trigger, which is hard to be captured through existing methods.",
"2.Implicit Arguments: Extracting implicit event arguments requires the ability to reason over event roles, and it is difficult for prior methods to learn these indirect relations.",
"We attribute these limitations to that current works are mainly designed to capture direct relations between arguments and the given event trigger.",
"This pair-wise learning paradigm lacks the ability of effective reasoning.",
"Instead of only using trigger information, we observe that in MRC-based event argument extraction methods, the related arguments, which refer to arguments (also their roles) in the same event except for the required one, could provide information to perform reasoning.",
"For example, as shown in Figure 1, if we have already known Andrey Shpagonov plays the attacker role of a firearmattack event, intuitively, firearms could be the instrument of attacker .",
"Implicit relations may lie between the two arguments, helping identifying firearms .",
"In this manner, arguments corresponding to roles defined in the event frame-level scope could act as clues to perform reasoning and be utilized as relay nodes to capture long-range dependencies.",
"Nevertheless, the importance of related arguments is under-exploited.",
"Liu et al. (2017) model event arguments as supervising attention information to promote trigger extraction.",
"Chen et al. (2020) propose to learn the association of arguments, but their method works on golden-standard candidate spans, which is unavailable in real-world applications.",
"Existing methods could also be extended to incorporate related arguments and their roles by taking such information as inputs.",
"However, since the model is trained with golden-standard arguments, predicted imperfect arguments might introduce noise and affect the performance in the test stage.",
"In this work, we introduce a Frame-aware Event Argument Extraction (FEAE) learning framework for IEAE.",
"We extend the MRC-based method to allow reasoning in event frame-level scope by exploiting related arguments and their roles as clues to capture the argument-argument dependencies.",
"This method could learn to extract implicit arguments of an event trigger and handle the long-range dependency problem.",
"To bridge the gap between the unavailable oracle knowledge (Fang et al., 2021) and the imperfect test inputs, we introduce a teacher-student framework that drives a final model that could operate without extra inputs through mimicking the behavior of well-informed teachers.",
"Inspired by the curriculum theory (Ben-gio et al., 2009), we further introduce a curriculum distillation strategy that gradually increases the learning complexity of the student model to make it more compatible with the real situation, thus driving a better model.",
"In summary, our contributions in this work are as follows: 1) We introduce a Frame-aware Event Argument Extraction framework to train models for implicit event argument extraction.",
"Event frame-level knowledge is incorporated to reason and capture long-range dependencies among triggers and arguments.",
"2) The proposed model learns to incorporate frame-level knowledge implicitly.",
"Knowledge distillation and curriculum learning are utilized to drive a model that does not require extra tools to produce reasoning clues, and could incorporate frame-level knowledge implicitly.",
"3) Our approach outperforms existing methods significantly.",
"We achieve new state-of-the-art performance on the RAMS dataset.",
"Event Argument Extraction (EAE) seeks to extract entities with specific roles in an event.",
"Methods that learn direct relation between arguments and triggers have achieved significant progress in this field (Chen et al., 2015b; Nguyen et al., 2016b; Zhang et al., 2019; Liu et al., 2018).",
"Recently, there is a trend to formulate EAE as a Question Answering (QA) problem, and several MRC models report performing well (Zhang et al., 2020a; Du and Cardie, 2020; Liu et al., 2020).",
"These methods leverage role-specific questions to extract boundaries of the expected arguments.",
"Implicit Event Argument Extraction (IEAE) is a less studied problem where arguments could span multiple sentences and appear in an implicit way.",
"There have been only a few works for IEAE.",
"Ebner et al. (2020); Zhang et al. (2020b) formulate IEAE as a semantic role labeling task and extract arguments by classifying phrase pairs.",
"These methods only explicitly consider direct relations between triggers and arguments.",
"Chen et al. (2020) also consider the relation among arguments, however, their method could only deal with argument linking task that identifies the role of a given argument span, which is not available in a realistic situation.",
"Knowledge Distillation is proposed to guide a student model to imitate a well-trained teacher model.",
"It is first proposed by Hinton et al. (2015) and has been widely used in the natural language processing (NLP) field (Ruder and Plank, 2018; Gong et al., 2018; Lee et al., 2018; Jiao et al., 2020).",
"In this work, we employ the knowledge distillation training strategy to handle the train-test disparity caused by unavailable oracle knowledge in the test stage through driving a student model to learn the behavior of a well-informed teacher.",
"Curriculum Learning is a learning strategy firstly proposed by Bengio et al. (2009) that trains a neural network better through increasing data complexity of training data.",
"It is broadly adopted in many NLP domains (Platanios et al., 2019; Huang and Du, 2019; Xu et al., 2020).",
"In this work, since data with rich related arguments is easier to be learned than those without extra inputs, we promote the training of our student model by gradually increasing the learning complexity of the distillation process by decreasing the proportion of given arguments.",
"Our FEAE framework consists of two training steps to drive a model that could utilize frame-level knowledge for IEAE, and details are shown in Figure 2.",
"For single teacher situations, firstly we train an MRC-based teacher model MT with oracle knowledge composing of golden-standard relevant arguments to exploit frame-aware information and obtain the capacity to reason.",
"Then a student model MS that does not have access to this oracle information is driven with the guidance of MT to be used in practice.",
"Our framework can also be extended to multi-teacher circumstances.",
"In the following sub-sections, we will give the formulation of our task and our MRC-based model.",
"After that, we will illustrate the curriculum knowledge distillation strategy to bridge the gap between the training and inference stage.",
"We formulate IEAE as a QA problem and leverage the MRC-based model to extract answer spans.",
"For each argument type, the provided information consists of a tuple < q, c > , where q and c refer to the question and context, respectively.",
"In practice, the question q should contain information about a trigger, the event type, and the role of the expected argument.",
"We aim to extract a span s in the context that contains the answer to the question.",
"Formally, given the context C = { w i } ni =1 consisting of n words and a known event trigger with the corresponding event type, we seek to identify a set of argument tuples (cid:8)(cid:0) Y s j , Y e j , Role j (cid:1)(cid:9) m j =1 , where Y s j and Y e j are the start and end index of the j -th argument, respectively; Role j is the role of this argument.",
"The key of MRC-based QA is to generate questions that contain information about text spans to be extracted.",
"We leverage a template-based question generation strategy to acquire meaningful descriptions about the desired event argument in this work.",
"The question template we used to extract arguments with the role of Arg T ype is as follows: [ Event T ype ] [ Arg T ype ] with [ arg 1 ] as [ role 1 ] and [ arg 2 ] as [ role 2 ] . . . and [ arg n ] as [ role n ] in [ T rigger ] .",
"where [ T rigger ] and [ Event T ype ] should be filled in with event trigger and the corresponding event type, respectively; [ Arg T ype ] denotes the role of the expected argument; [ arg ] and [ role ] are related arguments and their role types in the same event.",
"Elements in underlines contain oracle knowledge and are excluded during the test stage.",
"The MRC-based model could be explicitly aware of the frame-level information by filling in this template, thus making better predictions.",
"We employ the pre-trained language model BERT (Devlin et al., 2019) as the backbone of our MRC-based argument extraction model.",
"The text input is formulated as: [ CLS ] question [ SEP ] context [ SEP ] where [ CLS ] and [ SEP ] are special tokens defined in BERT; question refers to the query generated with our template, and context denotes the context words where arguments are extracted.",
"This input sequence is then converted into an embedding matrix E and used as inputs of the MRC model.",
"We leverage BERT to build semantic representation for each word in the context.",
"After the encoding stage, we utilize hidden states from the last BERT layer to represent each token: H = BERT ( E ) (1) This encoding stage makes a deep fusion between the question and the context by interactions",
"between multi-head and multi-layer attention.",
"In order to explicitly inform the model with the location of trigger word, we further introduce positional embedding to reflect the relevant distances between words and the specific trigger.",
"The concatenations of positional embedding and hidden states are then utilized to produce two probability vectors of the start and end positions: p start = softmax ( W s ( H E p ) / ) p end = softmax ( W e ( H E p ) / ) (2) where E p is the positional embedding matrix; is the operator of concatenation and is the parameter of softmax temperature.",
"We use cross-entropy between the prediction and golden labels as our training criterion to optimize our model.",
"The following two losses are used for training start and end index predictions: L start = CE ( p start , Y start ) L end = CE ( p end , Y end ) (3) where Y start and Y end are ground-truth labels for the index of desired span, respectively.",
"For the situation where no answer exists in the context (missing role of the event), we point these two heads to the [ CLS ] token.",
"The overall loss of the basic MRC model is formulated as: LCE = L start + L end (4) 3.4 Teacher-student Framework Although oracle knowledge about related arguments in the same event could provide clues to assist reasoning in the training stage, this golden-standard information is not available for the test stage in practice.",
"This train-test disparity may lead to a performance drop when noisy, or even unrelated arguments are used in the test stage.",
"To bridge this gap, we adopt the teacher-student framework to drive a model that is capable of reasoning without the requirement of extra clues.",
"Specifically, as shown in Figure 2",
"(a), we first input frame-aware question Q full that contains all categories of oracle knowledge to obtain a well-trained teacher model MT .",
"Then MT is utilized to generate hidden states HT and the span distributions p Tstart and p Tend .",
"Likewise, a student model MS , which does not utilize oracle information, produces hidden states HS and index distributions p Sstart and p Send .",
"The MS distills knowledge from MT through learning to have similar behavior in both hidden vectors and prediction distributions: LKL = ( KL ( p Tstart , p Sstart )+ KL ( p Tend , p Send )) / 2 LMSE = MSE ( HT , HS ) (5) where KL and MSE are short for KL-divergence loss and mean squared error loss, respectively.",
"Both the teacher MT and the student MS share the same architecture but with diverse parameters.",
"The weights of MT are fixed and we only optimize the parameters of the student model during the knowledge distillation stage.",
"The overall loss of MS under our teacher-student framework is formulated as: L T,S = LCE + LKL + LMSE (6) where and are two weight coefficients.",
"Note that oracle knowledge in the question template, marked with underlines, is not available in a realistic test situation.",
"In this work, we only utilize them to guide our teacher model to capture frame-aware information in the training stage.",
"As illustrated in Figure",
"2(b), for the test stage of our student model MS , we discard these extra inputs and fill in slots with event-aware context, which only consists of the event trigger, event type, and the expected argument type.",
"Besides, as oracle knowledge is included in the input of the teacher model, during the distillation process we mask out the question part of the text input in both teacher and student models, and only distill the knowledge of context part.",
"This teacher-student framework could be further extended to a multi-teacher manner which enables a student model to capture knowledge from multiple perspectives.",
"A teacher model could learn to focus on several patterns to apply reasoning by providing different combinations of related arguments.",
"We drive four teachers trained with diverse templates to capture different categories of oracle knowledge among roles, which are represented with ALL , ALL 1 , ALL 2 , and NONE , respectively.",
"These templates utilize arguments of different proportions.",
"Take the example of the knowledge distillation training stage in Figure 2",
"(a), there is one expected argument to be extracted and three related arguments.",
"ALL indicates we fill in the input template with all related arguments.",
"ALL 1 denotes that we randomly enumerate the possibilities of two out of the three other arguments and leave one slot unfilled.",
"Questions for ALL 2 and NONE are generated in the same method where two or all slots remain unfilled.",
"For the multi-teacher situation, we distill knowledge into the student model from the four teachers mentioned above simultaneously.",
"The overall multi-teacher distillation loss is formulated as: L = (cid:88) k k LT k ,S (7) where k and LT k ,S are the weighting factor and the loss function calculated with the k -th teacher model using equation 6, respectively.",
"In this subsection, we view the disparity between the training and test stage from the perspective of learning complexity and introduce our curriculum",
"end",
"distillation strategy.",
"Clues in the form of related arguments and their roles are explicitly given for the teacher model to promote reasoning.",
"While for the student model (the inference stage), there are no golden-standard clues, making it challenging for the model to extract the expected argument by relying on associated ones.",
"Intuitively, the training process of the student model is harder than that of the teacher.",
"Inspired by the curriculum theory that a machine learning model could be trained better by feeding data following the easier to harder order, we introduce a curriculum distillation strategy to promote the learning of student model.",
"We utilize the proportion of given arguments to measure the complexity of the learning task and data points in IEAE task.",
"As in Figure 2",
"(c), at the beginning of the distillation stage, we utilize questions containing oracle knowledge with all related arguments to train the student as a warm-up procedure.",
"Then we gradually reduce the proportion of given arguments and finally transit to using no extra arguments as in a realistic situation.",
"Note that all teacher models utilize oracle knowledge as they are trained throughout the whole process.",
"Details of the curriculum distillation strategy are shown in Algorithm 1.",
"IALL and I are two sets of training instances with all golden-standard arguments and no extra knowledge are used to build questions, respectively.",
"{ MT k } 4 k =1 are four well-informed teacher models trained with diverse templates that capture different categories of oracle knowledge.",
"MS is the student model.",
"For each training step, firstly, we sample a batch of instances following Bernoulli distribution and the probability of selecting an example from the IALL is a%.",
"Secondly, we cache the hidden state, start and end distribution of the four teachers with I All as input.",
"Finally, we utilize all cached status from teacher models to simultaneously distill knowledge to student network.",
"As the training stage progresses, the value of a gradually decreases from 100 to 0, leading to the learning difficulty of batches of data from easier to harder.",
"Note that we evaluate the performance of MS using data without extra arguments in questions.",
"We apply the early stop strategy to avoid over-fitting when the obtained F1 score on the development set no longer improves after several iterations.",
"Dataset.",
"We conduct experiments on the RAMS 1 dataset, which is annotated with 139 event types and 65 corresponding argument roles.",
"Each instance consists of a 5-sentences context around the typed event trigger, and there are several typed arguments to be extracted.",
"RAMS dataset consists of 7329, 924, and 871 instances in the training, development, and test set, respectively.",
"Evaluation and Hyperparameters.",
"An argument is considered correctly identified when the predicted offset fits the golden-standard span.",
"If both the span and the role of an extracted argument are matched with golden-standard one, then this argument is correctly classified.",
"Precision (P), Recall (R), and F measure (F1) are adopted as valuation metrics.",
"Besides, gold event type information is used in the type constrained decoding (TCD) setting.",
"1 https://nlp.jhu.edu/rams/ In experiments, we adopt BERT-base, which has 12 layers, 768 hidden units, and 12 attention heads in every layer, as our MRC model.",
"The batch size is set to 4 and the max sequence length is 512.",
"We set the dimension of the trigger position embedding to 76 and the epoch is set to 7.",
"We train the models with an Adam weight decay optimizer with an initial learning rate of 3e-5.",
"The warming up portion for learning rate is 10%.",
"Temperature is set to 1.",
"And we set as 0.5, as 2e-3 to bal-ance cross-entropy, KL-divergence, and MSE loss.",
"The proportionality factor a in every epoch is set to 100, 70, 40, 30, 20, 10, 0.",
"And the weighting factors { k } 4 k =1 from ALL , ALL 1 , ALL 2 , and NONE are configured as 0.35, 0.25, 0.25, 0.15, respectively.",
"Baselines.",
"(1) Ebner's (Ebner et al., 2020) is a semantic role labeling-based method with greedy decoding.",
"(2) Zhang's (Zhang et al., 2020b) is a two-step head-based model that first predicts headwords of an argument and then expands to the full span.",
"Since IEAE is a newly proposed task, there are only a few existing works.",
"To demonstrate the effectiveness of our method, we also adopt several strong methods from the EAE task and report performances of these baselines and their variants.",
"(3) Student is our base model that extracts arguments with MRC framework based on Du and Cardie (2020).",
"(4) Student-SUP is the variant where argument information is explicitly modeled with supervising attention mechanism based on Liu et al. (2017).",
"(5) Student-GCN is the variant where graph nodes are built by named entities ex-F i F c Teacher 53.03 49.88 FEAE multi cl kd 49.03 43.06 FEAE multi cl 50.35 44.75 FEAE multi 52.03 46.25 FEAE cl 51.26 45.82 FEAE 53.49 47.40 Table 2: Ablation study on the test set of FEAE.",
"tracted from Stanford corenlp toolkit 2 , and adopts multi-hop graph convolutional network for reasoning based on Liu et al. (2018).",
"(6) Student-MKD is a multi-teacher knowledge distillation framework where four student models trained with various random seeds are used as teachers, and then distill to another student model.",
"(7) Student-DA is the variant that utilizes questions with different proportions of oracle knowledge as the data augmentation strategy.",
"(8) Student-BAG is the variant that ensembles 5 well-trained student models through a bagging paradigm.",
"(9) Teacher is the variant with the same architecture as the student, and it is trained and tested with oracle knowledge.",
"(10)",
"Teacher-R has the same setting as the Teacher but tested with raw text.",
"(11)",
"Teacher-MT is the variant where answering histories from previous turns are fused to the current question in a multi-turn manner.",
"From experimental results shown in Table 1, we can conclude that: (1) MRC-based methods exceed those directly learn pair-wise relations among event targets and candidate arguments, leading to strong baselines for IEAE.",
"We attribute these improvements to that MRC models could capture relations among arguments implicitly during the encoding stage through the QA framework.",
"These methods also benefit from the prior knowledge contained in task descriptions.",
"(2) With the same architecture, Student-SUP, Student-GCN, Student-DA, and FEAE surpass the Student, and the Teacher that utilizes oracle knowledge in both the training and test stage performs best.",
"These results indicate the effectiveness of related arguments and verify our intuition that reasoning in the event frame-level scope contributes to IEAE.",
"(3) The result gaps among Teacher, Teacher-R, and Teacher-MT clearly show that the train-test disparity could affect the inference procedure.",
"Compared with Teacher-MT, our FEAE obtains a gain of 6.80 points in F1, indicating the effectiveness of our teach-student learning 2 http://stanfordnlp.github.io/CoreNLP/ NONE ALL-2 ALL-1 ALL FEAE F1 c 45.11 45.23 45.98 46.25 47.40 Table 3: Argument classification study with different proportions of arguments.",
"strategy.",
"An explanation is that in Teacher-MT, incorrect answers in the previous turn may bring noise and seriously affect the results of subsequent answers.",
"However, FEAE is trained with golden-standard related arguments, thus could alleviate such error accumulation problem.",
"(4) Student-SUP that does not require extra NLP tools to build an explicit graph outperforms Student-GCN.",
"Our method further obtains an improvement of 2.07 absolute points in the argument classification task.",
"These results demonstrate that implicit reasoning is a powerful way to capture the interrelation between arguments.",
"Another reason is that building explicit reasoning graphs could not avoid introducing noises.",
"(5) The improvements of Student-MKD, Student-DA, and Student-BAG are marginal, illustrating that the improvement in our method is mainly from the architecture of knowledge distillation rather than introducing additional factors.",
"(6) The proposed FEAE outperforms strong baselines and achieves new state-of-the-art results for both argument identification and argument classification.",
"Without using extra inputs, our approach achieves results similar to the one with oracle knowledge.",
"The performance gain clearly indicates that our FEAE could capture frame-aware information effectively.",
"Ablation Study.",
"To investigate the effect of each component, we conduct an ablation study by removing multi-teacher ( -multi ), curriculum learning ( -cl ), and knowledge distillation framework ( -kd ).",
"We train the model with oracle knowledge containing all related arguments when eliminating multi-teacher( -multi ), results are shown in Table 2.",
"We can observe that: (1) Knowledge distillation brings as large as 1.69 absolute points in F1 for argument classification.",
"By mimicking the behavior of a well-informed teacher, our method could effectively ob-d = -2 d = -1 d = 0 d = 1 d = 2 F1 i F1 c F1 i F1 c F1 i F1 c F1 i F1 c F1 i F1 c Zhang's -14.0 -14.0 -41.2 -15.7 -4.2 Teacher 27.59 27.59 23.95 22.49 56.20 52.38 30.07 27.62 9.88 9.88 Student 3.77 3.77 14.49 13.77 51.75 44.00 20.48 17.78 5.79 2.89 FEAE 25.96 23.72 23.61 19.33 55.65 49.20 26.10 25.00 7.65 5.35 Table 5: Performance breakdown by argument-trigger distance d on RAMS development set.",
"tain the ability of reasoning in event frame-level scope, thus achieving better performances.",
"(2) The curriculum strategy could promote the training process of our student model by gradually filling in the gap between train and test inputs.",
"(3) Introducing multiple teachers could provide more accurate guidance from different views and enhance the knowledge distillation framework.",
"Impact of Frame-aware Knowledge.",
"To get a better understanding of the impact of frame-aware knowledge, we show results with different teacher settings in Table 3, where we adopt a single-teacher curriculum knowledge distillation strategy in experiment.",
"The main difference between these variants is the percentage of oracle knowledge utilized to train teachers, as shown in section 3.4.",
"We find that with the increase of the percentage of ground-truth related argument (the completeness in event frame-level scope), the student could achieve better performance, verifying our assumption that frame-aware knowledge could provide essential information for IEAE.",
"FEAE achieves the best results and shows the importance of capturing multi-view guidances.",
"Performance on Argument Linking.",
"We present the performances of FEAE and baselines on the argument linking task in Table 4, where ground-truth argument spans are provided and these models are required to identify the role of each span.",
"For our MRC variants, we add the expected argument into the question and apply binary classification on the vector of [ CLS ] token to decide whether the argument plays the given role in the event.",
"We find that FEAE has an 8.3 points improvement in F1 score compared to Ebner's -TCD, and our FEAE also surpasses baselines.",
"Results of this study indicate that frame-aware knowledge also contributes to improving the performance of argument linking.",
"Performance breakdown by distance.",
"To test our method's ability to capture long-range dependencies, we list the performance breakdown on different sentence distances between arguments and the given trigger in Table 5.",
"Similar to Zhang et al. (2020b), we observe that all models have a performance drop for the non-local arguments (where d = 2 or d = 1 ).",
"Compared with Student, FEAE achieves a gain of more than 4 times by summing the results in the condition of d = 2 , and the F1 score even increases by 6 times when d = 2 .",
"To explore the reasons, we sort all argument roles in the d = 2 cases by the number of occurrences and find the top five categories are place , recipient , instrument , participant , and attacker , which covers more than 56% of the total number.",
"Intuitively, there are strong semantic associations between the aforementioned roles and other roles defined in the frame scope.",
"Since our FEAE enables the model to reason with frame-level knowledge, it is natural that our method could mitigate the performance degradation in long-range dependency situations.",
"To have a better understanding of how FEAE improves the MRC model, we conduct an experiment",
"to illustrate the reasoning process with attention weights of the BERT backbone.",
"Following Clark et al. (2019), we extract the top 10 most significant attention heads from all the 144 BERT-base heads pointing from expected argument to related argument.",
"We enumerate and average those top 10 attention heads from 314 all possible argument role pairs on RAMS test set and find that Teacher and FEAE have larger averaged values than Student with 295 and 269 argument pairs, respectively.",
"The result indicates that our approach is able to well guide the BERT model to learn oracle information by modifying the corresponding attention weights and guide expected argument to focus more on the clues brought by related argument.",
"In addition, we list the 5 most notable samples where the values are normalized by student averaged values in Table 7.",
"It should be noted that the averaged attention weights among different role-pairs are numerically incomparable.",
"But in a particular pair, FEAE tends to have a larger value than that of the student model, indicating that FEAE learns to reason by paying more attention to the relevant arguments.",
"For example, in the first instance, intuitively, when looking for place , arguments with the role of damager destroyer could provide clues.",
"In this section, we further illustrate how FEAE could alleviate long-range dependencies and implicit argument problems.",
"As shown in Table 6, we give representative examples where student model misses the correct answers, while FEAE is able to correctly find them.",
"For the scenario of long-range dependencies in E1, it is difficult to identify the argument of role victim because there are too many words between the argument Armenian and the trigger Genocide .",
"However, there is a strong implicit semantic relationship between killer and victim .",
"FEAE could better capture such oracle knowledge than student model, thus FEAE successfully find and classify Armenian as victim .",
"For the implicit argument situations in E2, since there is no direct association between argument Russian farms and trigger word immigrating , student model falls to identify Russian farms .",
"But frame-aware knowledge provides the priory that there is an implicit connection between argument role transporter and passenger .",
"Consequently, FEAE successfully recalls argument Russian farms .",
"In this paper, we exploit frame-aware knowledge for extracting implicit event arguments.",
"Specifi-cally, we introduce a curriculum knowledge distillation strategy, FEAE, to train an MRC model that could focus on frame-aware information to identify implicit arguments.",
"The proposed method leverages a teacher-student framework to avoid the requirement of extra clues and could perform reasoning with the guidance in event frame-level scope.",
"Experiments show that our method surpasses strong state-of-the-art baselines in RAMS, and could sci-entifically alleviate long-range dependency and implicit argument problems."
] |
[
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result"
] |
[
"Translation quality can be improved by global information from the required target sentence because the decoder can understand both past and future information.",
"However, the model needs additional cost to produce and consider such global information.",
"In this work, to inject global information but also save cost, we present an efficient method to sample and consider a semantic draft as global information from semantic space for decoding with almost free of cost.",
"Unlike other successful adaptations, we do not have to perform an EM-like process that repeatedly samples a possible semantic from the semantic space.",
"Empirical experiments show that the presented method can achieve competitive performance in common language pairs with a clear advantage in inference efficiency.",
"We will open all our source code on GitHub.",
"Successful NMT (Neural Machine Translation) (Vaswani et al., 2017; Bahdanau et al., 2015; Johnson et al., 2017; Ng et al., 2019) can translate sentences through left to right or through right to left.",
"However, there is one critical limitation in this diagram.",
"That is, the decoder can only have access to directional information (left-to-right or right-to-left) when processing auto-regressive (Graves, 2013).",
"To alleviate this pain, there have been three successful lines.",
"1) Generative NMT : (Zheng et al., 2020; Shah and Barber, 2018; Su et al., 2018; Zhang et al., 2016; Eikema and Aziz, 2019) adapt VAE (variational auto-encoder) (Altieri and Duve-naud, 2015; Kingma and Ba, 2015; Bowman et al., 2016) for NMT that is trained in generative model settings, modeling the semantics of the source and target sentences in latent space.",
"2) Deliberation : since the problem is caused by the one-pass process of decoding in the auto-regression process, (Xia et al., 2017) present a framework to predict a guess target sentence in the first-pass and jointly considers the encoding and the guess target sentence in the second-pass.",
"3) Soft-prototype : (Wang et al., 2019) present a framework to generate a prototype on the encoder side and then the decoder can jointly use the encoding and the prototype.",
"Although empirical results show the previous methods can successfully inject global information into the decoder, these methods either introduce computational complexity to the encoder-decoder architecture or employ an EM-like process in inferring, thus requiring even more than 100% additional time to produce and consider global information in inferring.",
"In this work, we present an efficient method to sample and consider a semantic draft as global information for decoding with almost free of cost, following the line of generative NMT.",
"Concretely, we sample the semantic draft from semantic space that is a Gaussian inference model with learnable parameters.",
"In the classic utilization of the semantic space, e.g., generative NMT, inferring needs to work with the EM-like process that could degrade the inference efficiency significantly.",
"To mitigate the degradation but still use the semantic space, we train the encoder of NMT in multilingual settings and simultaneously train a cross-lingual generator to obtain an approximation of the target-sentence semantic, hence modeling the required semantic space from the approximation and the source-sentence semantic.",
"In inferring, based on the source-sentence semantic and an approximation made by the cross-lingual generator, the semantic draft can be sampled from the semantic space in a one-shot style.",
"Once the semantic draft has been sampled, we aggregate the semantic draft and the encoding so that the variational decoder can simply decompose the aggregation.",
"We train the model in generative settings with additional loss of KL-divergence that is used to optimize the semantic space, similar to generative NMT training (Zheng et al., 2020; Shah and Barber, 2018; Su et al., 2018; Zhang et al., 2016; Eikema and Aziz, 2019) and VAE training (Altieri and Du-venaud, 2015; Kingma and Ba, 2015; Bowman et al., 2016).",
"Our work can build upon Transformer (Vaswani et al., 2017), LSTM/GRU (Hochreiter and Schmidhuber, 1997; Chung et al., 2014) and Convolutional sequence (Gehring et al., 2017).",
"In this work, we use Transformer as an example to present our idea, evaluating our method on common translation tasks and 5 more comprehensive experiments.",
"Our empirical study shows that, compared to previously successful methods, our method can achieve competitive performance and has a clear advantage in inference efficiency.",
"Since we do not change the architecture of the NMT model, our model is compatible with common technics in NMT.",
"Notation x and y denotes word embeddings in the source language L 1 and the target language L 2 , respectively.",
"X = ( x 0 , x 2 , ..., x n ) RN d and Y = ( y 0 , y 2 , ..., y m ) RM d are the sentences sampled from corpora in L 1 and L 2 respectively, where N and M are the sequence length and d is the word embedding dimension.",
"X and Y are parallel sentences that are used in our supervised training.",
"The translation task X Y is denoted as Y = Dec ( Enc ( X )) , where Dec and Enc jointly construct an encoder-decoder model.",
"s and t represent the source-sentence semantic for X and the target-sentence semantic for Y in translation, respectively.",
"z is a latent variable to represent a semantic draft, sampled from the semantic space.",
"NMT (Vaswani et al., 2017; Bahdanau et al., 2015; Johnson et al., 2017; Ng et al., 2019) utilizes seq 2 seq learning (Sutskever et al., 2014) and autoregressive (Graves, 2013) to facilitate training and inferring.",
"Concretely, the current translation y j at time-step j is conditional on Enc ( X ) and y <j , where y <j is the previous translation before j .",
"The intrinsic problem is caused because the translation y j can only consider y < j without considering y > j .",
"Intuitively, a semantic draft or global information including y < j and y > j can benefit the translation y j because the translation can be consistent with neighboring information.",
"Some impressive methods have been proposed to produce and consider a draft providing global information for better translation quality.",
"1) Generative NMT (including variational NMT) (Shah and Barber, 2018; Zheng et al., 2020; Su et al., 2018; Zhang et al., 2016; Eikema and Aziz, 2019) study latent and continuous space of semantic (Bowman et al., 2016) for NMT, which can sample z .",
"These methods inject z into NMT to provide global information for better translation.",
"Meanwhile, the encoder is encouraged to consider z .",
"In this manner, generative NMT models the joint probability P nmt ( X, Y, z ) = p ( z ) p ( X | z ) p nmt ( Y | X, z ) in training.",
"For inferring, the model utilizes the EM-like process to maximize a lower bound on log ( p ( X, Y )) by repeatedly guessing or predicting possible Y and resampling z .",
"However, compared to NMT without z , generative NMT costs over 100% additional time in inferring typically.",
"2) Sharing the same idea of the reconsideration of the current translation, Deliberation (Xia et al., 2017) is proposed to deliberate the complete output of the first-pass decoding as the attention context of the second-pass decoding.",
"With the Deliberation , the final translation is based on the understanding of a possible translation in the target language.",
"Although Deliberation is employed without the EM-like process, which is more efficient than generative NMT in inferring, the doubled pass increases the time of auto-regressive in decoding that costs 80% additional time in inferring.",
"3) (Wang et al., 2019) further consider the inference efficiency and the storage cost, proposing Soft-prototype framework to use a prototype.",
"The prototype is an approximation of the target sentence Y (cid:48) = ( y (cid:48) 0 , ..., y (cid:48) i ) , produced by a probability generator R that accepts any x to generate a probability p ( y (cid:48) ) over the target vocabulary to search y (cid:48) .",
"These successful methods, although using different settings and frameworks, share the same idea to inject a draft of the required target sentence and introduce global information to the decoder.",
"Therefore, the decoder can understand the target globally.",
"Concretely, such an idea can be formulated into a framework as: Y = Dec ( Enc ( X ) , draft ) (1) However, these successful methods either introduce computational complexity to NMT (Wang et al., 2019; Xia et al., 2017) or employ the EM-like process, showing significant degradation in inference efficiency, e.g., GNMT(Shah and Barber, 2018) needs 110% additional inferring time.",
"Intuitively, a high-quality draft should include two main aspects: 1) a good draft should include a global semantic for the target sentence; 2) a draft should not degrade inference efficiency significantly.",
"In this section, we present our framework and method.",
"We then discuss how to train the model in generative settings and how to tackle optimization challenges in practice.",
"Inspired from previously successful models, we employ the general framework Y = Dec ( Enc ( X ) , draft ) for our model, presenting the high-level architecture in Figure",
"1. Concretely, draft is instantiated to z that the general framework is modified to Y = Dec ( Enc ( X ) , z ) .",
"Since z is sampled from the semantic space, our decoder is a variational decoder (Altieri and Duvenaud, 2015; Kingma and Ba, 2015; Bowman et al., 2016).",
"To obtain z , we leverage a similar generative process of GNMT (Shah and Barber, 2018), sampling z from the semantic space that is a Gaussian inference model trained by s and t or approximations of s and t at the very least.",
"Typically, s and t are obtained by modeling the semantics of X and Y with the same parameters.",
"Semantic for Source Sentence s R d is computed by averaging a set of vector representation.",
"Specifically, we first process X to the NMT encoder before averaging, obtaining Enc ( X ) .",
"Then, we compute s = 1 N (cid:80) nk =0 Enc ( X ) k .",
"Semantic for Target Sentence We encourage the model to learn an approximation of t instead of the \"ground-truth target semantic\".",
"We assume G ( s ) t , where G is a two-layer cross-lingual generator.",
"In other words, we compute a dummy target-sentence semantic G ( s ) based on s .",
"We will discuss this assumption in 4 Multilingual Encoder and Cross-lingual Generator and how to train the cross-lingual generator G in 3.2 Encoder and Generator Tweaking .",
"Semantic Space Typically, a Gaussian inference model is used for the semantic space, representing a variational distribution q z ( z | s, t ) for sampling (Shah and Barber, 2018; Zhang et al., 2016; Zheng et al., 2020).",
"It serves as an approximate posterior.",
"Instead of q z ( z | s, t ) , in our model, we use q z ( z | s, G ( s )) for our required semantic space because G ( s ) is encouraged to learn an approximation of t .",
"Specifically, we concatenate s and G ( s ) to compute the mean and variance of the diagonal Gaussian as: S = [ s, G ( s )] q z ( z | s, G ( s )) = N ( W S, diag ( exp ( W S ))) (2) 3.1.2 Decoding with Draft As aforementioned, z is sampled from the semantic space q z ( z | s, G ( s )) .",
"We then aggregate z and Enc ( X ) , processing the aggregation to the decoder for decoding.",
"In other words, we add generative context to the encoding for the encoder-decoder attention in the decoder.",
"Therefore, the decoder is a variational decoder that is conditional on z and X .",
"NMT Training To train the parameters of both NMT and the semantic space in generative settings, we follow the successful training strategy in previous works (Bowman et al., 2016; Zhang et al., 2016), using SGVB (stochastic gradient variational Bayes) (Kingma and Welling, 2014; Rezende et al., 2014) to perform approximate maximum likelihood estimation:",
"Encoder and Generator Tweaking Intuitively, the semantic space should consider the shared semantics between s and t .",
"Ideally, s and t should be obtained from a shared model by processing X and Y , which is discussed in generative NMT (Shah and Barber, 2018; Zheng et al., 2020; Eikema and Aziz, 2019).",
"In spired by this idea, we use the same NMT encoder to compute Enc ( Y ) , obtaining the \"ground-truth target semantic\" t = 1 M (cid:80) mk =0 Enc ( Y ) k R d .",
"As aforementioned, we do not directly use t for our semantic space, which is different from generative NMT.",
"Instead, we only use t to enforce and regularize G ( s ) in training.",
"Concretely, we train the cross-lingual generator G to restore t from s so that G ( s ) t .",
"Costly Draft In traditionally generative NMT, based on a random target sentence, the inference mode or the process of translation generating makes an initial guess z init from the semantic space or the variational distribution q z ( z | s, t random ) , where s is computed by X and t random is obtained from a random Y random .",
"Then, it can generate a possible translation Y (cid:48) and its semantic t (cid:48) .",
"To obtain a good translation, based on the last translation, the inference mode can re-sample a better semantic from the semantic space and regenerate a new translation to maximize a lower bound on log ( p ( X, Y )) in the EM-like process.",
"Readers can also refer to Algorithm 1 in GNMT (Shah and Barber, 2018) for more details.",
"Almost Free Draft Unlike traditionally generative NMT, we do not need to make an initial guess and also do not employ the EM-like process to sample z for inferring, which improves the inference efficiency.",
"In our model, G ( s ) , which is the dummy target semantic, plays a prominent role that aims to approximate t instead of making an initial guess.",
"Therefore, we do not have to make an initial guess, and we can also eliminate the whole EM-like process because z is not randomly sampled, which results in a one-shot sampling.",
"Since G is a simple generator, sampling z from q z ( z | s, G ( s )) does not hurt the inference efficiency significantly and is almost free of cost.",
"and the cross-lingual generator G to make G ( s ) and t as similar as possible.",
"Since we input parallel sentences to the encoder, the encoder is encouraged to search multilingual properties.",
"Specifically, we notice that s t potentially 1 , which is studied and reported in previous works of multilingual BERT empirically (Devlin et al., 2019; Karthikeyan et al., 2020; Wu and Dredze, 2019).",
"Meanwhile, Soft-prototype (Wang et al., 2019) and multilingual NMT (Wu et al., 2016; Johnson et al., 2017) also explore this aspect in NMT scenario.",
"We further introduce the cross-lingual generator G to tweak/finetune the property, observing the significant benefits of regularizing.",
"Most importantly, with the cross-lingual generator G , the model can greedily gain a dummy t by G ( s ) so that the semantic draft can be sampled in a one-shot generative style without the EM-like process.",
"Potential of s and G ( s ) Besides, we are aware that only injecting s or G ( s ) without processing to the semantic space may also provide global information or the shared semantic for decoding because s t and G ( s ) t potentially.",
"We will present an ablation study in one of our comprehensive experiments 6.5 Necessity of Semantic Space and Multilingual Encoder to show the significance of G , the semantic space and their combination.",
"Semantic in Encoder and Decoder On the other hand, compared to generative NMT, which employs an auxiliary network to help the semantic space by feeding parallel sentences, our method simply processes the parallel sentences to the NMT 1 There is a difference between s or t and the output of multilingual BERT.",
"Specifically, s and t are sentence representations, whereas multilingual BERT outputs a sequence of the word representation.",
"encoder that is equivalent to the auxiliary network in generative NMT.",
"In this way, there is no need to pass z to the encoder to model a joint probability P nmt ( X, Y, z ) = p ( z ) p ( X | z ) p nmt ( Y | X, z ) .",
"Specifically, as discussed in VAE (Altieri and Duve-naud, 2015; Kingma and Ba, 2015; Bowman et al., 2016; Zhang et al., 2016), if z involves in the process of encoding, z can guide and regularize the encoder to consider the shared semantic.",
"Therefore, generative NMT models the joint probability in training, encouraged to consider z in both the encoder and the decoder.",
"However, in our model, we let the multilingual encoder consider the implicitly shared semantic itself, and we inject z into the decoder that is encouraged to consider the shared semantic.",
"In Figure 2, we compare our framework with previous successful models: GNMT (Shah and Barber, 2018), Deliberation (Xia et al., 2017) and Soft-prototype (Wang et al., 2019).",
"We observe some significant differences from the perspective of our design: vs GNMT 1) The semantic space is built upon the multilingual encoder and the cross-lingual generator in our model; 2) the semantic/global information is only used in the decoder.",
"vs Deliberation The global information comes from semantic space instead of the first-pass decoding.",
"vs Soft-prototype The global information is sampled from the semantic space instead of target prototypes.",
"Additionally, we notice an optimization solution for the EM-like process.",
"(Eikema and Aziz, 2019) study an approximating method to maximize the lower bound on log ( p ( X, Y )) by employing an auxiliary distribution with only using source s , which boosts the inference efficiency with a single call (without the EM-like process) to an argmax solver.",
"Compared to their work, our model has three major differences: 1) our model depends on both s and G ( s ) ; 2) an auxiliary distribution is not necessary in our model; 3) we focus on the process of draft generating.",
"Collapse of DKL (Bowman et al., 2016) report the collapse of DKL term in the objective function Eq.3.",
"Following the instructions of (Bowman et al., 2016; Shah and Barber, 2018), we apply two common strategies: 1) linearly increases from 0 to 1 over the initial 50k steps during training; 2) we randomly drop a constant of 30% words when encoding X .",
"Warm-up of Generator Training is somewhat tricky when using the cross-lingual generator G .",
"We apply a weight [0 , 1] for G ( s ) and a weight 1 for t , as presented in Figure",
"1. linearly increases from 0 to 1 over 50k steps after = 1 .",
"By this strategy, the semantic space is encoruaged to rely on t in warm-up.",
"Significantly, it avoids that cos ( G ( s ) , t ) is close to 0 at the beginning of training.",
"After warm-up, i.e., G ( s ) t , we use G ( s ) for the rest of training.",
"To be comparable, we train our model on language pairs { F rench, German } English and a relative low-resource language pair Romanian English which are commonly used in previous work (Shah and Barber, 2018; Vaswani et al., 2017; Bahdanau et al., 2015; Zheng et al., 2020).",
"Concretely, we download parallel corpora { F rench, German, } English from WMT 2014 2 (Bojar et al., 2014).",
"For Romanian 2 http://www.statmt.org/wmt14/translation-task.html English , we retrieve parallel corpora from WMT 2016 3 (Bojar et al., 2016).",
"The preprocess is simple in our case that we only remove sentences with over 50-word length in our training datasets.",
"Following standard evaluation, the model is evaluated on newstest2014 for { F rench, German } English and newstest2016 for Romanian English .",
"Case-sensitive BLEU score is computed by multi-BLEU.perl 4 to report the performance.",
"We also employ beam search with beam size 4 and length penalty 0.6.",
"We implement presented model on Tensorflow 2.0 (Abadi et al., 2016).",
"To be comparable with other models and baselines, the NMT settings are identical to big-Transformer (Vaswani et al., 2017).",
"Specifically, we set model dimension, word embedding, head, encoder layer, decoder layer and FFN filter to 1024, 1024, 16, 6, 6 and 4096.",
"Adam optimizer (Kingma and Ba, 2015) is employed with parameters 1 = 0 .",
"9 , 2 = 0 .",
"98 and (cid:15) = 10 9 .",
"We use a dynamic learning rate over the course of NMT training (Smith, 2017; Vaswani et al., 2017) 5 .",
"The dropout rate is set to rate = 0 .",
"1 , and label smoothing is used with gamma = 0 .",
"1 (Mezzini, 2018).",
"Parallel corpora for one translation task (e.g., Romanian English ) are concatenated to train BPE (Sennrich et al., 2016b) with a balance strategy (Lample and Conneau, 2019) that forms a shared vocabulary with 40 , 000 sub-tokens.",
"For data feeding efficiency, each mini-batch of similar-length sentences are padded to the same length and may have a different number of elements in each mini-batch, such that batch _ size padded _ length < = 3000 .",
"To be fair, we reimplement some models on our machine with the same mini-batch size.",
"We compare the reimplemented results to the reported results on the same test set to ensure the difference less than 5% (or 1.5) in BLEU.",
"Then, we can confirm the reimplementation and reconfiguration.",
"3 http://www.statmt.org/wmt16/translation-task.html 4 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multiBLEU.perl 5 lr = peak _ lr min (1 , step/warm _ up ) ( max ( step, warm _ up )) 0 .",
"6.1 Translation Task We study the methods of how to produce and consider global information for NMT.",
"Since we have discussed three successful directions, we compare our method with the baselines of Transformer (Vaswani et al., 2017), generative NMT including GNMT (Shah and Barber, 2018) and Mirror-GNMT (Zheng et al., 2020), Deliberation (Xia et al., 2017) and Soft-prototype (Wang et al., 2019).",
"Meanwhile, we have introduced some additional parameters to the model, which is the same as the comparable models.",
"Therefore, we evaluate not only the performance but also the inference efficiency.",
"The comparison of the inference efficiency is based on the inference speed of the vanilla big-Transformer.",
"Besides, we reconfigure Mirror-GNMT and GNMT to big-Transformer settings, and we additionally reimplement Soft-prototype on English Romanian .",
"Table 1 presents the performance of our model and the baselines on the training dataset.",
"We summarize the results that: Competitive Translation Quality Our method outperforms the baselines of big-Transformer and GNMT on all the language pairs.",
"Compared to state-of-the-art models, our model gains competitive performance on all the language pairs.",
"Clear Advantage in Inference Efficiency Besides competitive performance on all the language pairs, our model has a clear advantage in the comparison of inference efficiency.",
"Specifically, GNMT, Mirror-GNMT and Deliberation introduce computational complexity to the decoder that needs more than 1 iteration 6 to consider a translation (+ 80% additional time at least), and Soft-prototype increases the computational complexity on the encoder side (+ 34% additional time).",
"However, our method only introduces a generator to the model so that the computational complexity in the encoder and the decoder is the same as in vanilla big-Transformer, which results in an efficient inferring and an almost free draft (only + 5% additional time).",
"port a result obtained by employing the EM-like process for our model in the last row.",
"Although there is noticeable room for improvement, it degrades the inference efficiency significantly so that we do not suggest such a combination.",
"We will discuss this result and integration in 6.2 Drafting with EM-like process .",
"In the most discussion of this work, we sample z from q z ( z | s, G ( s )) in a one-shot generative style for the sake of inference efficiency.",
"The previous evaluation shows that such an idea is feasible.",
"Meanwhile, our model shares some properties with generative NMT, which makes us interested in the integration with the EM-like process for the sake of the best translation quality only.",
"1. We sample a semantic draft z from q z ( z | s, G ( s )) and gain a possible translation Y (cid:48)",
".",
"2. We then sample a new semantic draft z (cid:48) from q z ( z | s, t (cid:48) ) to predict a possible and new translation Y (cid:48)(cid:48) , where t (cid:48) = 1 M (cid:48) (cid:80) M (cid:48) k =0 Enc ( Y (cid:48) ) k and M (cid:48) is the length of Y (cid:48) .",
"The second step can be repeated to maximize a lower bound on log ( p nmt ( Y | X )) .",
"We observe some improvements from employing the EM-like process, reporting the result in the last row of Table 1 that we achieve the best performance on all the language pairs.",
"However, most significantly, the translation converges at 2 3 iterations that in-crease the inference time by 137% (from 1 . 05 to 2 . 42 ).",
"Concretely, the model needs to re-encode the last translation to obtain a new draft and re-decode the new draft to generate a new translation, e.g., re-encode Y (cid:48) to obtain Enc ( Y (cid:48) ) and its t (cid:48) , resample the draft z (cid:48) from q z ( z | s, t (cid:48) ) and re-decode the aggregation of Enc ( X ) and z (cid:48) .",
"Thus, we suggest the one-shot generative style in practice.",
"Additionally, we realize that in this case the improvement may come from not only the re-sampled draft but also the adaptation of two ideas: 1) \"dou-ble encoding\" in Soft-prototype (Wang et al., 2019) because we encode the previously complete trans-lation/prototype for the next translation; 2) \"double decoding\" in Deliberation (Xia et al., 2017) because we make more than one complete translation.",
"We will justify the significance of the draft in 6.3 Test for Draft and 6.4 Draft Reliance Test .",
"We are interested in whether the draft does indeed provide useful semantics/global information.",
"In the last section, the improvement from the EM-like process can intuitively show the effect of the draft because a better-quality draft re-sampled from the last translation continuously improves the performance, but the improvement may only come from \"double encoding\" and \"double decoding\".",
"Therefore, we conduct a test to demonstrate that the generative draft learns the desired semantics.",
"In this test, we share the same missing word translation task with GNMT (Shah and Barber, 2018).",
"Concretely, the model is forced to give a translation based on the draft heavily.",
"We share the same settings that each word has a 30% chance of being missing independently.",
"Note that we do not conduct this experiment for Deliberation (Xia et al., 2017) and Soft-prototype (Wang et al., 2019) because such discriminative models do not sample semantics from the semantic space.",
"Table 2 shows the test result on training dataset German English and test dataset newstest2014 .",
"We observe that our model outperforms GNMT and achieves competitive performance to Mirror-GNMT (Zheng et al., 2020).",
"Specifically, compared to GNMT, our method trains a multilingual encoder and a cross-lingual generator to encourage shared semantics for the semantic space.",
"GNMT Mirror-GNMT our method + EM-like process DKL 5.73 6.92 6.65 7.03 Table 3: Draft Reliance Test.",
"Compared to Mirror-GNMT, which gains the improvement from the simultaneously used LM (lan-guage model) and back-translation technic (Sen-nrich et al., 2016a), our model is not integrated with LM to counter noisy input so that Mirror-GNMT gains slightly better performance.",
"We leave the integration with denoising language modeling (Vincent, 2010) for future experiments.",
"We have demonstrated that the semantic draft is useful for the translation task.",
"We further indicate how much the model relies on the semantic draft.",
"Since the objective function Eq.3 is the same as in GNMT (Shah and Barber, 2018) and Mirror-GNMT (Zheng et al., 2020), we report a comparison on the term of DKL = DKL q z ( z | X, Y ) || p ( z ) , presenting the result in Table",
"3. The test is conducted on training dataset German English and test dataset newstest2014 by averaging the value of DKL = DKL q z ( z | X, Y ) || p ( z ) .",
"Our method relies on the semantic draft (or the latent variable from the semantic space) heavier than GNMT does.",
"With the EM-like process, the reliance is higher than Mirror-GNMT.",
"Although the semantic draft does indeed provide useful global information in 6.3 Test for Draft and 6.4 Draft Reliance Test , we still question the necessity of the semantic space because G ( s ) t and s t .",
"In other words, we can simply process G ( s ) or s to the decoder, which can provide global information for decoding potentailly.",
"To justify, we train the model on training dataset German English and test dataset newstest2014 with 4 different types of draft based on the framework Dec ( Enc ( X ) , draft ) : We use our full-packaged model draft = z , where z comes from q z ( z | s, G ( s )) .",
"draft = G ( s ) is set for translation to test the significance of the semantic space.",
"To test the significance of G , we set draft = z (cid:48) , where z (cid:48) comes from q z (cid:48) ( z (cid:48) | s ) .",
"We test both the significance of G and the semantic space by setting draft = s for translation.",
"Besides the difference of draft , all the other con-figurations are the same for this test.",
"We report the result in Table 4, and our observations are that: According to \"row 2 vs row 4\", we can see the significance of the cross-lingual generator G .",
"\"row 3 vs row 4\" indicates the significance of the semantic space.",
"When focusing on \"row 2 vs row 3\", G improves general translation performance (col-umn 2&4), and the semantic space improves noisy translation (column 3&5) We intuitively conclude that the semantic space and the cross-lingual generator G can further smooth and regularize the semantic for decoding, similar to that is found in GNMT (Shah and Barber, 2018) and (Bowman et al., 2016).",
"Moreover, the cross-lingual generator G can only restore a coarse semantic so that the model cannot only rely on G ( s ) to maintain translation quality when testing in the missing word translation task generally.",
"We have mentioned the multilingual property of the encoder in our design, using the NMT encoder to process X and Y .",
"As reported in multilingual BERT (Devlin et al., 2019; Karthikeyan et al., 2020; newstest2014 draft type De En noisy De En En De noisy En De q z ( z | s,G ( s )) 33.03 23.93 29.20 20.35 G ( s ) 32.85 21.82 29.03 17.97 q z (cid:48) ( z (cid:48) | s ) 32.74 22.34 28.91 19.14 s 32.15 20.92 28.49 17.11 Table 4: Performance with/without semantic space or/and generator.",
"Wu and Dredze, 2019), sharing encoder for nonparallel sentences in different languages can still build shared semantic space implicitly.",
"This leads us to experiment with that we can jointly train the encoder with the objective of multilingual BERT.",
"We then train on a relative low-resource language pair Romanian English , and we use additional monolingual data News Crawl articles 2015 from WMT 2016 to jointly train the multilingual encoder with the objective of multilingual BERT.",
"In Table 5, we report competitive results, and the performance is significantly improved by simultaneously using non-parallel data.",
"Note that, when training on non-parallel data, we can pre-train the multilingual encoder with the BERT objective instead of joint training.",
"We leave this idea for further experiments.",
"Translation quality can be further improved by global information from the target sentence.",
"Although there have been three feasible solutions, successful methods do not consider inference efficiency carefully, which leads to high cost in inferring.",
"In this work, we present a method/framework to improve the performance of NMT.",
"We sample a semantic draft from semantic space that the decoder can consider the semantic draft to obtain the required global information with high efficiency in inferring.",
"Our empirical study shows that, compared to previously successful methods, our method can achieve competitive performance and has a clear advantage in inference efficiency.",
"Since we do not change the architecture of the NMT model, our model can be further improved by employing pretraining (Lample and Conneau, 2019; Devlin et al., 2019; Radford et al., 2018), back-translation (Sen-nrich et al., 2016a) and other finetuning methods with non-parallel data.",
"And, our model can also be used in unsupervised NMT (Artetxe et al., 2018; Lample et al., 2018).",
"We leave all these experiments for future work."
] |
[
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"method",
"method",
"method",
"result",
"abstain",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"result",
"abstain",
"method",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"method",
"abstain"
] |
[
"We introduce a theoretical analysis of crosslingual transfer in probabilistic topic models.",
"By formulating posterior inference through Gibbs sampling as a process of language transfer, we propose a new measure that quantifies the loss of knowledge across languages during this process.",
"This measure enables us to derive a PAC-Bayesian bound that elucidates the factors affecting model quality, both during training and in downstream applications.",
"We provide experimental validation of the analysis on a diverse set of five languages, and discuss best practices for data collection and model design based on our analysis.",
"Crosslingual learning is an important area of natural language processing that has driven applications including text mining in multiple languages ( Ni et al., 2009; Smet and Moens, 2009), cultural difference detection (Guti errez et al., 2016), and various linguistic studies (Shutova et al. , 2017; Barrett et al., 2016).",
"Crosslingual learning methods generally extend monolingual algorithms by using various multilingual resources.",
"In contrast to traditional high-dimensional vector space models, modern crosslingual models tend to rely on learning low-dimensional word representations that are more efficient and generalizable.",
"A popular approach to representation learning comes from the word embedding community, in which words are represented as vectors in an embedding space shared by multiple languages (Ruder et al., 2018; Faruqui and Dyer, 2014; Klementiev et al., 2012).",
"Another direction is from the topic modeling community, where words are projected into a probabilistic topic space (Ma and Nasukawa, 2017; Jagarlamudi and III, 2010).",
"While formulated differently, both types of models apply the same principles low-dimensional vectors exist in a shared crosslingual space, wherein vector representations of similar concepts across languages ( e.g., dog and hund) should be nearby in the shared space.",
"To enable crosslingual representation learning, knowledge is transferred from a source language to a target language, so that representations have similar values across languages.",
"In this study, we will focus on probabilistic topic models, and knowledge refers to a word's probability distribution over topics.",
"Little is known about the characteristics of crosslingual knowledge transfer in topic models, and thus this paper provides an analysis, both theoretical and empirical, of crosslingual transfer in multilingual topic models.",
"Multilingual Topic Models Given a multilingual corpus D (1 ;:::;L ) in languages = 1 ; : : : ; L as inputs, a multilingual topic model learns K topics.",
"Each multilingual topic k (1 ;:::;L ) ( k = 1 ; : : : ; K ), is defined as an L -dimensional tuple ( (1) k ; : : : ; ( L ) k ) , where ( ) k is a multinomial distribution over the vocabulary V ( ) in language .",
"From a human's perspective, a multilingual topic k (1 ;:::;L ) can be interpreted by looking at the word types that have C highest probabilities in ( ) k for each language .",
"C here is called cardinality of the topic.",
"Thus, a multilingual topic can loosely be thought of as a group of word lists where each language has its own version of the topic.",
"Multilingual topic models are generally extended from Latent Dirichlet Allocation (Blei et al., 2003, LDA ).",
"Though many variations have been proposed, the underlying structures of multilingual topic models are similar.",
"These models require either a parallel/comparable corpus in multiple languages, or word translations from a dictionary.",
"One of the most popular models is the polylingual topic model (Mimno et al., 2009, PLTM ), where comparable document pairs share distributions over topics (cid:18) , while each language has its own distributions f ( ) k g Kk =1 over the vocabulary V ( ) .",
"By re-marginalizing the estimations f b ( ) k g Kk =1 , we obtain word representations b ( w ) 2 RK for each word w , where b ( w ) k = Pr( z w = k j w ) , i.e., the probability of topic k given a word type w .",
"Crosslingual Transfer Knowledge transfer through crosslingual representations has been studied in prior work.",
"Smet and Moens (2009) and Heyman et al. (2016) show empirically how document classification using topic models implements the ideas of crosslingual transfer, but to date there has been no theoretical framework to analyze this transfer process in detail.",
"In this paper, we describe two types of transferon-site and off-sitebased on the na-ture of where and how the transfer takes place.",
"We refer to transfer that happens while training topic models ( i.e., during representation learning) as on-site .",
"Once we obtain the low-dimensional representations, they can be used for downstream tasks.",
"We refer to transfer in this phase as off-site , since the crosslingual tasks are usually detached from the process of representation learning.",
"Contributions Our study provides a theoretical analysis of crosslingual transfer learning in topic models.",
"Specifically, we first formulate on-site transfer as circular validation, and derive an upper bound based on PAC-Bayesian theories (Sec-tion 2).",
"The upper bound explicitly shows the factors that can affect knowledge transfer.",
"We then move on to off-site transfer, and focus on crosslingual document classification as a downstream task (Section 3).",
"Finally, we show experimentally that the on-site transfer error can have impact on the performance of downstream tasks (Section 4).",
"On-site transfer refers to the training procedure of multilingual topic models, which usually involves Bayesian inference techniques such as variational inference and Gibbs sampling.",
"Our work focuses on the analysis of collapsed Gibbs sampling (Grif-fiths and Steyvers, 2004), showing how knowledge is transferred across languages and how a topic space is formed through the sampling process.",
"To this end, we first describe a specific formulation of knowledge transfer in multilingual topic models as a starting point of our analysis (Sec-tion 2.1).",
"We then formulate Gibbs sampling as circular validation and quantify a loss during this phase (Section 2.2).",
"This formulation leads us to a PAC-Bayesian bound that explicitly shows the factors that affect the crosslingual training (Sec-tion 2.3).",
"Lastly, we look further into different transfer mechanisms in more depth (Section 2.4).",
"Priors are an important component in Bayesian models like PLTM .",
"In the original generative process of PLTM , each comparable document pair ( d S ; d T ) in the source and target languages ( S; T ) is generated by the same multinomial (cid:18) (cid:24) Dir( (cid:11) ) .",
"Hao and Paul (2018) showed that knowledge transfer across languages happens through priors.",
"Specifically, assume the source document is generated from (cid:18) ( d S ) (cid:24) Dir( (cid:11) ) , and has a sufficient statistics n d S 2 NK where each cell n k j d S is the count of topic k in document d S .",
"When generating the corresponding comparable document d T , the Dirichlet prior of the distribution over topics (cid:18) ( d T ) , instead of a symmetric (cid:11) , is parameterized by (cid:11) + n d S .",
"This formulation yields the same posterior estimation as the original joint model and is the foundation of our analysis in this section.",
"To see this transfer process more clearly, we look closer to the conditional distributions during sampling, and take PLTM as an example.",
"When sampling a token in target language x T , the Gibbs sampler calculates a conditional distribution P x T over K topics, where a topic k is randomly drawn and assigned to x T (denoted as z x T ).",
"Assume the token x T is in document d T whose comparable document in the source language is d S .",
"The conditional distribution for x T is P x;k = Pr( z x = k ; w (cid:0) ; z (cid:0) ) (1) / ( n k j d T + n k j d S + (cid:11) ) (cid:1) n w T j k + (cid:12) n (cid:1)j k + V ( T ) (cid:12) ; where the quantity n k j d S is added and thus transferred from the source document.",
"Thus, the calculation of P x incorporates the knowledge transferred from the other language.",
"Now that we have identified the transfer process, we provide an alternative view of Gibbs sampling, i.e., circular validation, in the next section.",
"Circular validation (or reverse validation) was proposed by Zhong et al. (2010) and Bruzzone and Marconcini (2010) in transfer learning.",
"Briefly, a learning algorithm A is trained on both source and target datasets ( DS and DT ), where the source is labeled and target is unlabeled.",
"After predicting the labels for the target dataset using A (predic-tions denoted as A ( DT ) ), circular validation trains another algorithm A in the reverse direction, i.e., uses A ( DT ) and DT as the labeled dataset and DS as the unlabeled dataset.",
"The error is then evaluated on A ( DS ) .",
"This train-predict-reverse-repeat cycle has a similar flavor to the iterative manner of Gibbs sampling, which inspires us to look at the sampling process as circular validation.",
"Figure 1 illustrates this process.",
"Suppose the Gibbs sampler is currently sampling x T of word type w T in target language T .",
"As discussed for Equation (1), the calculation of the conditional distribution P x T incorporates the knowledge transferred from the source language.",
"We then treat the process of drawing a topic from P x T as a classification of the token x T .",
"Let P x T be a distribution over K unary classifiers , f h k g Kk =1 , and the k -th classifier labels the token as topic k with a probability of one: h k (cid:24) P x T ; and Pr ( z x T = k ; h k ) = 1 : (2) This process is repeated between the two languages until the Markov chain converges.",
"The training of topic models is unsupervised, i.e., there is no ground truth for labeling a topic, which makes it difficult to analyze the effect of transfer learning.",
"Thus, after calculating P x T , we take an additional step called reverse validation , where we design and calculate a measure circular validation lossto quantify the transfer.",
"Definition 1 (Circular validation loss, CVL ) .",
"Let S w be the set containing all the tokens of type w throughout the whole training corpus, and call it the sample of w .",
"Given a bilingual word pair ( w T ; w S ) where w T is in target language T while w S in source S , let S w T and S w S be the samples for the two types respectively, and n w T and n w S the sizes of them.",
"The empirical circular validation score ( d CVL ) is defined as d CVL ( w T ; w S ) = 1 2 E x S ;x T [ b L ( x T ; w S ) + b L ( x S ; w T ) ] ; b L ( x T ; w S ) = 1 n w S x S 2S wS E h (cid:24)P xT [ 1 f h ( x S ) = z x S g ] = 1 n w S x S 2S wS ( 1 (cid:0) P x T ;z xS ) ; where P x T ;k is the conditional probability of token x T assigned with topic k .",
"When sampling a token x T , we still follow the two-step process as in Equation (2), but instead of labeling x T itself, we use its conditional P x T to label the entire sample of a word type w S in the source language.",
"Since all the topic labels for the source language are fixed, we take them as the assumed correct labelings, and compare x S 's labels and the predictions from P x T .",
"This is the intuition behind CVL .",
"Note that the choice of word types w T and w S to calculate d CVL is arbitrary.",
"However, d CVL is only meaningful when the two word types are semantically related, such as word translations, because those word pairs are where the knowledge transfer takes place.",
"On the other hand, the Gibbs sampler does not calculate this d CVL explicitly, and thus adding reverse validation step does not affect the training of the model.",
"It does, however, help us to expose and analyze the knowledge transfer mechanism.",
"In fact, as we show in the next theorem, sampling is also a procedure of optimizing d CVL .",
"Theorem 1.",
"Let d CVL ( t ) ( w T ; w S ) be the empirical circular validation loss of any bilingual word pair at iteration t of Gibbs sampling.",
"Then d CVL ( t ) ( w T ; w S ) converges as t !",
"1 .",
"A question following the formulation of d CVL is, what factors could lead to better transfer during this process, particularly for semantically related words?",
"To answer this, we turn to theory that bounds the performance of classifiers and apply this theory to this formulation of topic sampling as classification.",
"The PAC-Bayes theorem was introduced by McAllester (1999) to bound the performance of Bayes classifiers.",
"Given a hypothesis set H , the majority vote classifier (or Bayes classifier) uses every hypothesis h 2 H to perform binary classification on an example x , and uses the majority output as the final prediction.",
"Since minimizing the error by Bayes classifier is NP-hard, an alternative way is to use a Gibbs classifier as approximation.",
"The Gibbs classifier first draws a hypothesis h 2 H according to a posterior distribution over H , and then uses this hypothesis to predict the label of an example x ( Germain et al., 2012).",
"The generalization loss of this Gibbs classifier can be bounded as follows.",
"Theorem 2 (PAC-Bayes theorem, McAllester (1999)) .",
"Let P be a posterior distribution over all classifiers h 2 H , and Q a prior distribution.",
"With a probability at least 1 (cid:0) (cid:14) , we have L (cid:20) b L + 1 2 n ( KL ( Pjj Q ) + ln 2 p n (cid:14) ) ; where L and b L are the general loss and the empirical loss on a sample of size n .",
"In our framework, a token x T provides a posterior P x T over K classifiers.",
"The loss b L ( x T ; w S ) is then calculated on a sample of S w S in language S .",
"The following theorem shows that for a bilingual word pair ( w T ; w S ) , the general CVL can be bounded with several quantities.",
"For brevity we use KL w to denote KL( P x jj Q x ) , where P x is the conditional distribution from Gibbs sampling of token x with word type w that gives highest loss b L ( x; w ) , and Q x a prior.",
"Proof.",
"See Appendix.",
"Recall that knowledge transfer happens through priors in topic models (Section 2.1).",
"Because the KL-divergence terms in Theorem 3 include this prior Q , we can use this theorem to analyze the transfer mechanisms more concretely.",
"The conditional distribution for sampling a topic z x for a token x during sampling can be factorized into document-topic and topic-word levels: P x;k = Pr ( z x = k j w x = w; w (cid:0) ; z (cid:0) ) = Pr ( z x = k j z (cid:0) ) (cid:1) Pr ( w x = w j z x = k; w (cid:0) ; z (cid:0) ) / Pr ( z x = k j z (cid:0) ) | {z } documentlevel (cid:1) Pr ( z x = k j w x = w; w (cid:0) ) | {z } wordlevel = P (cid:18);x;k (cid:1) P ;x;k ; P x = P (cid:18);x (cid:10) P ;x ; where (cid:10) is element-wise multiplication.",
"Thus, we have the following inequality: KL ( P x jj Q x ) = KL ( P (cid:18);x (cid:10) P ;x jj Q (cid:18);x (cid:10) Q ;x ) (cid:20) KL ( P (cid:18);x jj Q (cid:18);x ) + KL ( P ;x jj Q ;x ) ; and the KL-divergence term in Theorem 3 is simply the sum of the KL-divergences between the conditional and prior distributions on all levels.",
"Recall that PLTM transfers knowledge at the document level, through Q (cid:18);x , by linking document translations together (Equation (1)).",
"Assume the current token x is from a target document linked to a document d S in the source language.",
"Then the prior for P (cid:18);x is b (cid:18) ( d S ) , i.e., the normalized empirical distribution over topics of d S .",
"Since the words are generated within each language under PLTM , i.e., ( S ) k is irrelevant to ( T ) k , no transfer happens at the word level.",
"In this case, Q ;x , the prior for P ;x , is simply a K dimensional uniform distribution U .",
"Then: KL w (cid:20) KL ( P (cid:18);x jj b (cid:18) ( d S ) ) + KL ( P ;x jjU ) = KL ( P (cid:18);x jj b (cid:18) ( d S ) ) | {z } crosslingual entropy + log K (cid:0) H ( P ;x ) | {z } monolingual entropy : Thus, at levels where transfer happens (document-or word-level), a low crosslingual entropy is preferred, to offset the impact of monolingual entropy where no transfer happens.",
"Most multilingual topic models are generative admixture models in which the conditional probabilities can be factorized into different levels, thus KL-divergence term in Theorem 3 can be decomposed and analyzed in the same way as in this section for models that have transfer at other levels, such as Hao and Paul (2018), Heyman et al. (2016), and Hu et al. (2014).",
"For example, if a model has word-level transfer, i.e., the model assumes that word translations share the same distributions, we have a KL-divergence term as, KL w (cid:20) KL ( P ;x jj b ( w S ) ) + KL( P (cid:18);x jjU ) = KL ( P ;x jj b ( w S ) ) + log K (cid:0) H ( P (cid:18);x ) ; where w S is the word translation to word w .",
"Off-site transfer refers to language transfer that happens while applying trained topic models to downstream crosslingual tasks such as document classification.",
"Because transfer happens using the trained representations, the performance of off-site transfer heavily depends on that of on-site transfer.",
"To analyze this problem, we focus on the task of crosslingual document classification.",
"In crosslingual document classification, a document classifier , h , is trained on documents from one language, and h is then applied to documents from another language.",
"Specifically, after training bilingual topic models, we have K bilingual word distributions f b ( S ) k g Kk =1 and f b ( T ) k g Kk =1 .",
"These two distributions are used to infer document-topic distributions b (cid:18) on unseen documents in the test corpus, and each document is represented by the inferred distributions.",
"A document classifier is then trained on the b (cid:18) vectors as features in source language S and tested on the target T .",
"We aim to show how the generalization risk on target languages T , denoted as RT ( h ) , is related to the training risk on source languages S , c RS ( h ) .",
"To differentiate the loss and classifiers in this section from those in Section 2, we use the term risk here, and h refers to the document classifiers, not the topic labeling process by the sampler.",
"Classic learning theory requires training and test sets to come from the same distribution D , i.e., ( (cid:18); y ) (cid:24) D , where (cid:18) is the document representation (features) and y the document label (Valiant,",
"1984).",
"In practice, however, corpora in S and T may be sampled from different distributions, i.e., D ( S ) = f ( b (cid:18) ( d S ) ; y ) g (cid:24) b D ( S ) and D ( T ) = f ( b (cid:18) ( d T ) ; y ) g (cid:24) b D ( T ) .",
"We refer to these distributions as document spaces .",
"To relate RT ( h ) and c RS ( h ) , therefore, we have to take their distribution bias into consideration.",
"This is often formulated as a problem of domain adaptation, and here we can formulate this such that each language is treated as a domain.",
"We follow the seminal work by Ben-David et al. (2006), and define H -distance as follows.",
"Definition 2 ( H -distance, Ben-David et al. (2006)) .",
"Let H be a symmetric hypothesis space, i.e., for every hypothesis h 2 H there exists its counterpart 1 (cid:0) h 2 H .",
"We let m = (cid:12)(cid:12) D ( S ) (cid:12)(cid:12) + (cid:12)(cid:12) D ( T ) (cid:12)(cid:12) , the total size of test corpus.",
"The H distance between b D ( S ) and b D ( T ) is defined as 1 2 b d H ( b D ( S ) ; b D ( T ) ) = max h 2H 1 m 2f S;T g x d : h ( x d )= 1 { x d 2 D ( ) } ; where x d is the representation for document d , and h ( x d ) outputs the language of this document.",
"This distance measures how identifiable the languages are based on their representations.",
"If source and target languages are from entirely different distributions, a classifier can easily identify language-specific features, which could affect performance of the document classifiers.",
"With H -distances, we have a measure of the distance between the two distributions b D ( S ) and b D ( T ) .",
"We state the following theorem from domain adaptation theory.",
"Theorem 4 (Ben-David et al. (2006)) .",
"Let m be the corpus size of the source language, i.e., m = (cid:12)(cid:12) D ( S ) (cid:12)(cid:12) , c the VC-dimension of document classifiers h 2 H , and b d H ( b D ( S ) ; b D ( T ) ) the H -distance between two languages in the document space.",
"With probability at least 1 (cid:0) (cid:14) , we have the following bound, RT ( h ) (cid:20) b RS ( h ) + b d H ( b D ( S ) ; b D ( T ) ) + b (cid:21) + 4 m ( c log 2 em c + log 4 (cid:14) ) ; (4) b (cid:21) = min h 2H b RS ( h ) + b RT ( h ) : (5) The term b (cid:21) in Theorem 4 defines a joint risk , i.e., the training error on both source and target documents.",
"This term usually cannot be estimated in practice since the labels for target documents are unavailable.",
"However, we can still calculate this term for the purpose of analysis.",
"The theorem shows that the crosslingual classification risk is bounded by two critical components: the H -distance, and the joint risk b (cid:21) .",
"Interestingly, these two quantities are based on the same set of features with different labeling rules: for H -distance, the label for each instance is its language, while b (cid:21) uses the actual document label.",
"Therefore, a better bound requires the consistency of features across languages, both in language and document labelings.",
"Since consistency of features depends on the document representations b (cid:18) , we need to trace back to the upstream training of topic models and show how the errors propagate to the formation of document representations.",
"Thus, we first show the relations between d CVL and word representations b in the following lemma.",
"Lemma 1.",
"Given any bilingual word pair ( w T ; w S ) , let b ( w ) denote the distribution over topics of word type w .",
"Then we have, 1 (cid:0) b ( w T ) (cid:1) b ( w S ) (cid:20) d CVL ( w T ; w S ) : Proof.",
"We need to connect the word representations b , which are central to on-site transfer, to the document representations b (cid:18) , which are central to off-site transfer.",
"To do this, we make an assumption that the inferred distribution over topics b (cid:18) ( d ) for each test document d is a weighted average over all word vectors, i.e., b (cid:18) ( d ) / w f dw (cid:1) b ( w ) , where f dw is the normalized frequency of word w in document d (Arora et al., 2013).",
"When this assumption holds, we can bound the similarity of document representations b (cid:18) ( d S ) and b (cid:18) ( d T ) in terms of word representations and hence their d CVL .",
"Theorem",
"5. Let b (cid:18) ( d S ) be the distribution over topics for document d S (similarly for d T ), F ( d S ; d T ) = ( w S f d S w S 2 (cid:1) w T f d T w T 2 ) 12 where f dw is the normalized frequency of word w in document d , and K the number of topics.",
"Then b (cid:18) ( d S ) (cid:1) b (cid:18) ( d T ) (cid:20) F ( d S ; d T ) (cid:1) K (cid:1) w S ;w T ( d CVL ( w T ; w S ) (cid:0) 1 ) 2 : Proof.",
"This provides a spatial connection between document pairs and word pairs they have.",
"Many ker-nalized classifiers such as support vector machines ( SVM ) explicitly use this inner product in the dual optimization objective (Platt, 1998).",
"Since the inner product is directly related to the cosine similarity, Theorem 5 indicates that if two documents are spatially close, their inner product should be large, and thus the d CVL of all word pairs they share should be small.",
"In an extreme case, if d CVL ( w T ; w S ) = 1 for all the bilingual word pairs appearing in document pair ( d S ; d T ) , then b (cid:18) ( d S ) (cid:1) b (cid:18) ( d T ) = 0 , meaning the two documents are orthogonal and tend to be irrelevant topically.",
"With upstream training discussed in Section 2, we see that d CVL has an impact on the consistency of features across languages.",
"A low d CVL indicates that the transfer from source to target is sufficient in two ways.",
"First, languages share similar distributions, and therefore, it is harder to distinguish languages based on their distributions.",
"Second, if there exists a latent mapping from a distribution to a label, it should produce similar labeling on both source and target data since they are similar.",
"These two aspects correspond to the language H distance and joint risk b (cid:21) in Theorem",
"4. 4 Experiments We experiment with five languages: Arabic ( AR , Semitic), German ( DE , Germanic), Spanish ( ES , Romance), Russian ( RU , Slavic), and Chinese ( ZH , Sinitic).",
"In the first two experiments, we pair each with English ( EN , Germanic) and train PLTM on each language pair individually.",
"Training Data For each language pair, we use a subsample of 3 ; 000 Wikipedia comparable documents, i.e., 6 ; 000 documents in total.",
"We set K = 50 , and train PLTM with default hyperparameters (McCallum, 2002).",
"We run each experiment five times and average the results.",
"Test Data For experiments with document classification, we use Global Voices ( GV ) in all five language pairs as test sets.",
"Each document in this dataset has a categories attribute that can be used as the document label.",
"In our classification experiments, we use culture , technology , and education as the labels to perform multiclass classification.",
"Evaluation To evaluate topic qualities, we use Crosslingual Normalized Pointwise Mutual Information (Hao et al., 2018, CNPMI ), an intrinsic metric of crosslingual topic coherence.",
"For any bilingual word pair ( w T ; w S ) , CNPMI ( w T ; w S ) = (cid:0) log Pr( w T ;w S ) Pr( w T )Pr( w S ) log Pr ( w T ; w S ) ; (6) where Pr ( w T ; w S ) is the occurrence of w T and w S appearing in the same pair of comparable documents.",
"We use 10 ; 000 Wikipedia comparable document pairs outside PLTM training data for each language pair to calculate CNPMI scores.",
"All datasets are publicly available at http:// opus.nlpl.eu/ (Tiedemann, 2012).",
"Additional details of our datasets and experiment setup can be found in the appendix.",
"Our first experiment shows how d CVL changes over time during Gibbs sampling.",
"According to the definition, the arguments of d CVL can include any bilingual word pairs; however, we suggest that it should be calculated specifically among word pairs that are expected to be related (and thus enable transfer).",
"In our experiments, we select word pairs in the following way.",
"Recall that the output of a bilingual topic model is K topics, where each language has its own distribution.",
"For each topic k , we can calculate d CVL ( w S ; w T ) such that w S and w T belong to the same topic ( i.e., are in the top C most probable words in that topic), from the two languages, respectively.",
"Using a cardinality C for each of the K topics, we have in total C 2 (cid:2) K bilingual word pairs in the calculation of d CVL .",
"At certain iterations, we collect the topic words as described above with cardinality C = 5 , and calculate d CVL ( w T ; w S ) , CNPMI ( w T ; w S ) , and the error term (the 12 p(cid:1) (cid:1) (cid:1) term in Theorem 3) of all the bilingual word pairs.",
"In the middle panel of Figure 2, d CVL over all word pairs from topic words is decreasing as sampling proceeds and becomes stable by the end of sampling.",
"On the other hand, the correlations between CNPMI and d CVL are constantly decreasing.",
"The negative correlations between d CVL and CNPMI implies that lower d CVL is associated with higher topic quality, since higher-quality topic has higher CNPMI but lower d CVL .",
"Theorem 3 provides insights into how knowledge is transferred during sampling and the factors that could affect this process.",
"We analyze this bound from two aspects, the size of the training data (cor-responding to ln n n term) and model assumptions (as in the crosslingual entropy terms).",
"One factor that could affect d CVL , according to Theorem 3, is the balance of tokens of a word pair.",
"In an extreme case, if a word type w S has only one token, while another word type w T has a large number of tokens, the transfer from w S to w T is negligible.",
"In this experiment, we will test if increasing the ratio term ln n n in the corpus lowers the performance of crosslingual transfer learning.",
"To this end, we specify a sample rate (cid:26) = 0 : 2 ; 0 : 4 ; 0 : 6 ; 0 : 8 ; and 1 : 0 .",
"For each word pair ( w T ; w S ) , we calculate n as in the ratio term ln n n , and remove (1 (cid:0) (cid:26) ) (cid:1) n tokens from the corpus (rounded to the nearest integer).",
"Smaller (cid:26) removes more tokens from the corpus and thus yields a larger ratio term on average.",
"We use a dictionary from Wiktionary to collect word pairs, where each word pair ( w S ; w T ) is a translation pair.",
"Figure 3 shows the results of downsampling using these two methods.",
"Decreasing the sample rate (cid:26) lowers the topic qualities.",
"This implies that although PLTM can process comparable corpora, which need not be exact translations, one still needs to be careful about the token balance between linked document pairs.",
"For many low-resource languages, the target language corpus is much smaller than the source corpus, so the effect of this imbalance is important to be aware of.",
"This is an important issue when choosing comparable documents, and Wikipedia is an illustrative example.",
"Although one can collect comparable documents via Wikipedia's interlanguage links, articles under the same title but in different languages can have very large variations on document length, causing the imbalance of samples ln n n , and thus potentially suboptimal performance of crosslingual training.",
"Recall that the crosslingual entropy term can be decomposed into different levels, e.g., document",
"level and word level, and we prefer a model with low crosslingual entropy but high monolingual entropy.",
"In this experiment, we show how these two quantities affect the topic qualities, using English-German ( EN-DE ) documents as an example.",
"Given PLTM output in ( EN , DE ) and a cardinality C = 5 , we collect C 2 (cid:2) K bilingual word pairs as described in Section 4.1.",
"For each word pair, we calculate three quantities: d CVL , CNPMI , and the inner product of the word representations.",
"In Figure 4, each dot is a word pair ( w S ; w T ) colored by the values of these quantities.",
"The word pair dots are positioned by their crosslingual and monolingual entropies.",
"We observe that d CVL decreases with crosslingual entropy on document level.",
"The larger the crosslingual entropy, the harder it is to get a low d CVL because it needs larger monolingual entropy to decrease the bound, as shown in Section 2.4.",
"On the other hand, the inner product of word pairs shows an opposite pattern of d CVL , indicating a negative correlation (Lemma 1).",
"see the correlation between CNPMI and d CVL is around (cid:0) 0 : 4 at the end of sampling, so there are fewer clear patterns for CNPMI in Figure",
"4. However, we also notice that the word pairs with higher CNPMI scores often appear at the bottom where crosslingual entropy is low while the monolingual entropy is high.",
"We move on to crosslingual document classification as a downstream task.",
"At various iterations of Gibbs sampling, we infer topics on the test sets for another 500 iterations and calculate the quantities shown in the Figure 5 (averaged over all lan-guages), including the H -distances for both training and test sets, and the joint risk b (cid:21) .",
"We treat English as the source language and train support vector machines to obtain the best classifier h that fits the English documents.",
"This classifier is then used to calculate the source and target risks b RS ( h ) and b RT ( h ) .",
"We also include 12 b d H ( S; T ) , the H -distance based on word rep-1 10 20 40 60 80 100 500 1000 Iterations 0 .",
"resentations b .",
"As mentioned in Section 3.1, we train support vector machines to use languages as labels, and the accuracy score as the H -distance.",
"The classification risks, such as b RS ( h ) , b RT ( h ) , and b (cid:21) , are decreasing as expected (upper row in Figure 5), which shows very similar trends as d CVL in Figure 2.",
"On the other hand, we notice that the H -distances of training documents and vocabularies, 12 b d H ( b D ( S ) ; b D ( T ) ) and 12 b d H ( S; T ) , stabilize around 0 : 5 to 0 : 6 , meaning it is difficult to differentiate the languages based on their representations.",
"Interestingly, the H -distances of test documents are at a less ideal value, although they are slightly decreasing in most of the languages except AR .",
"However, recall that the target risk also depends on other factors than H -distance (Theo-rem 4), and we use Figure 6 to illustrate this point.",
"We further explore the relationship between the predictability of languages vs document classes in Figure 6.",
"We collect documents correctly classi-fied for both document class and language labels, from which we randomly choose 200 documents for each language, and use b (cid:18) to plot t-SNE scatterplots.",
"Note that the two plots are from the same set of documents, and so the spatial relations between any two points are fixed, but we color them with different labelings.",
"Although the classifier can identify the languages (right panel), the features are still consistent, because on the left panel, the decision boundary changes its direction and also successfully classifies the documents based on actual label class.",
"This illustrates why a single H -distance does not necessarily mean inconsistent features across languages and high target risks.",
"This study gives new insights into crosslingual transfer learning in multilingual topic models.",
"By formulating the inference process as a circular validation, we derive a PAC-Bayesian theorem to show the factors that affect the success of crosslingual learning.",
"We also connect topic model learning with downstream crosslingual tasks to show how errors propagate.",
"As the first step toward more theoretically justi-fied crosslingual transfer learning, our study suggests considerations for constructing crosslingual transfer models in general.",
"For example, an effective model should strengthen crosslingual transfer while minimizing non-transferred components, use a balanced dataset or specific optimization algorithms for low-resource languages, and support evaluation metrics that relate to CVL ."
] |
[
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"result",
"other",
"other",
"method",
"method",
"result",
"abstain",
"objective",
"objective",
"other",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"result",
"objective",
"abstain"
] |
[
"We propose variable-in-situ logico-semantic graphs to bridge the gap between semantic graph and logical form parsing.",
"The new type of graph-based meaning representation allows us to include analysis for scope-related phenomena, such as quantification, negation and modality, in a way that is consistent with the state-of-the-art underspecification approach.",
"Moreover, the well-formedness of such a graph is clear, since model-theoretic interpretation is available.",
"We demonstrate the effectiveness of this new perspective by developing a new state-of-the-art semantic parser for Minimal Recursion Semantics.",
"At the core of this parser is a novel neural graph rewriting system which combines the strengths of Hyperedge Replacement Grammar, a knowledge-intensive model, and Graph Neural Networks, a data-intensive model.",
"Our parser achieves an accuracy of 92.39% in terms of ELEMENTARY DEPENDENCY MATCH , which is a 2.88 point improvement over the best data-driven model in the literature.",
"The output of our parser is highly coherent: at least 91% graphs are valid, in that they allow at least one sound scope-resolved logical form.",
"Graphs have recently become popular as a strategy for encoding sentence-level semantics, and related data-driven parsing techniques have been making rapid progress.",
"The primary component of popular semantic graphs, e.g. Elementary Dependency Structure (EDS; Oepen and Lnning, 2006) and Abstract Meaning Representation (AMR; Ba-narescu et al., 2013), is the predicateargument structure, with the predicate being a concept that takes a number of arguments.",
"Though expressive for many applications, this predicative core does not fully match the need for logical forms that used to stand in the central area of semantic parsing.",
"Partly due to the lack of model-theoretic semantics, it is rather difficult to add scope information related to quantification, negation and modality to a graph.",
"Partly due to the lack of logical deduction engines, it is rather difficult to directly perform automated reasoning over graphs.",
"This paper proposes to express logical forms with variable-in-situ graphs for the ongoing advances in graph-centric formalisms, algorithms and neural architectures.",
"This leads us to a novel neural graph rewriting system that combines the strengths of Hyperedge Replacement Grammar ( HRG ; Drewes et al., 1997) and Graph Neural Networks (Song et al., 2018a).",
"On the one hand, it can be viewed as an improved graph embedding model that explicitly explores recursive structures that are defined by an HRG .",
"On the other hand, it can be viewed as an enhanced graph grammar with which all nodes involved in derivations of graphs are assigned vector-based distributed encodings.",
"Based on our neural graph rewriting system, we develop a new parser for Minimal Recursion Semantics ( MRS ; Copestake et al., 2005).",
"By means of the DeepBank (Flickinger et al., 2012) data, our parser achieves an accuracy of 92.39% in terms of ELEMENTARY DEPENDENCY MATCH , which is a 2.88 point improvement over the best data-driven model in the literature.",
"We also consider the structural validity of logico-semantic graphs following the original design of MRS .",
"The output of our parser is highly coherent: at least 91% graphs are coherent, in that they allow at least one sound scope-resolved logical form.",
"Source code: https://github.com/draplater/var-parser/",
"sentence is captured as its truth conditions.",
"Under this assumption, using expressions of some logical languages to encode truth conditions is the de facto approach in formal semantics.",
"Classic logic, e.g. first-order predicate logic, supports precise, consistent and controlled meaning representation via truth-conditional interpretation.",
"A logical form can be visualized as a pseudo tree, as suggested by Copestake et al. (2005).",
"For example, the formula in Fig. 1a can be encoded as the tree in Fig. 1b.",
"However, the leaves of such a tree are not independent of each other.",
"For instance, dog ( x ) and chase ( e 1 , x , y ) share the same variable x .",
"Transforming logical forms into trees may enlarge the distance between closely-related nodes and make it difficult for a statistical or neural model to explicitly capture such dependencies.",
"In addition, considering syntactico-semantic similarity, this tree-structured logical form is essentially different from the corresponding syntactic tree, as shown in Fig. 2.",
"Such a tree representation brings difficulties to develop a systematic syntax-semantics interface.",
"Previous study (Oepen and Lnning, 2006; Copestake, 2009) shows that there are some good engineering reasons for producing a dependency style representation (see Fig. 1c) with links between predicates: It improves readability for consumers of the representation and eases integration with distributional semantics.",
"Exploiting this direction further, we augment such a semantic dependency graph with variables (see Fig. 1d).",
"In fact, it is a more straightforward way to encode logical forms using graphs.",
"Comparing the two types of graphs, we can see that the variable-in-situ representation fully specifies what there is in a logical form, while a variable-free graph may lose some information.",
"Take Fig. 1c for example.",
"The following logical form is also compatible with the graph, which is unfortunately a bad reading, since happy , according to its conceptual meaning, is not a scopal predicate.",
"Natural language utterances are often ambiguous, i.e., they have more than one reading.",
"Take scope ambiguity, an important type of ambiguity that has been receiving heightened attention by semanticists, for example.",
"Considering the following sentence: (2)",
"a. Every dog happily chases some cat.",
"b. some ( y , cat ( y ) , every ( x , dog ( x ) , happy ( e 2 , e 1 ) chase ( e 1 , x , y )))",
"c. every ( x , dog ( x ) , some ( y , cat ( y ) , happy ( e 2 , e 1 ) chase ( e 1 , x , y ))) The sentence is ambiguous: it can either mean that for every dog it is the case that it chases some potentially different cats; or else it can mean that there is a particular group of cats which are chased by every dog.",
"The two readings are all made up of the same set of predicates and operators, but differ in the relative scopes of certain 1 This formula is comparable to the following first-order formula: x ( dog ( x ) y ( cat ( y ) ( chase ( e 1 , x , y ) happy ( e 2 , e 1 )))) 6774 scope bearing elements.",
"There are some other natural language constructions that also involve scope ambiguity, e.g. negation and modality.",
"Underspecification is by now the standard technique to deal with semantic ambiguities in many modern semantic theories, e.g. Underspecified Discourse Representation Theory (Kamp et al., 2011) and Hole Semantics (Bos, 1996).",
"The basic idea behind it is to derive a single compact representation that describes the set of readings for a sentence that exhibits a scope ambiguity.",
"The individual readings can be enumerated from such an underspecified description if it is required (Koller and Thater, 2005), but it is also possible to process underspecified representations directly without enumerating the readings (Koller and Thater, 2010).",
"In this paper, we make our logico-semantic graph representations expressive to exhibit the complexities of human language semantics to some extent, by adopting a specific formalism for underspecification, i.e. Minimal Recursion Semantics ( MRS ; Copestake et al., 2005), a widely-used computational semantic framework in NLP.",
"In addition to variables to represent individuals or events, an MRS structure use another kind of element, called handle , to represent out-of-scope relationships between predicates.",
"Each node is assigned with a label handle , and some arguments of a concept are specified as hole handles .",
"Note that a hole argument is different from an event-variable argument, as illustrated by Ex.",
"(1).",
"Handles can be added to current variable-in-situ graph as a new type of node.",
"See Fig. 3 for an example.",
"h 1 , h 2 , h 3 , h 4 and h 5 are labels, h 2 , h 5 , h 7 , h 9 are hole handles.",
"The out-of-scope relationships in logical forms are converted into a set of constraints between holes and labels .",
"For example, if we let h 7 = h 4 and h 9 = h 3 , the MRS will be resolved into reading Ex.",
"(2c); similarly, h 4 = h 1 and h 7 = h 3 for reading Ex.",
"(2b).",
"To be more precise, a variable-in-situ logico-semantic graph is a graph such that, every node must be a predicate, handle or variable; every edge must be (1) between a predicate and a variable, encoding predicateargument relation, (2) between a predicate and a handle, encoding scopal argument or (3) between a predicate and a label, encoding naming convention and tagged by L. chase x y some h 5 cat h 4 every dog h 2 h 1 h 7 h 9 e 1 happy h 3 e 2 L L ARG 1 ARG 2 ARG0 RSTR ARG 0 L L ARG 0 L ARG0 RSTR BODY BODY ARG0 ARG1 ARG0 L Figure 3: Underspecified logico-semantic graph.",
"MRS provides a principled way to enumerate readings from an underspecified logical form (Niehren and Thater, 2003), showing us a way to validate the output logic structure.",
"We thus define a valid semantic structure as an MRS in which a scope-resolved logical form is allowable.",
"To be more precise, a variable-in-situ logico-semantic graph is valid if and only if there exists at least one fully specified logical form that satisfies all the constraints encoded by the graph.",
"Automatically constructing a semantic representation can be achieved by exploring the compo-sitionality principle: The meaning of a complex expression is a function of the meanings of its parts and of the syntactic rules by which they are combined.",
"In this perspective, both meanings of its parts and the function of syntactic rules can be precisely defined by graph fragments.",
"In this paper, we investigate how to manipulate semantic graph fragments with HRG , a context-free rewriting system for generating graphs.",
"We give a formal description of HRG , and then show how to model syntactico-semantic composition through graph rewriting.",
"Recursive neural networks are also important for handling linguistic data.",
"In this section, we will further augment an HRG with a hypergraph-state LSTM.",
"of nodes, and E V + is a finite set of hyper-edges.",
"A hyperedge is an extension of a normal edge which can connect to more than two nodes or only one node.",
"l : E L assigns a label from a finite set L to each hyperedge.",
"Since nodes receive no informative labels, we use single-node edges with terminal labels to represent predicates.",
"This strategy is widely used by HRG -based NLP sys-tems, including Chiang et al. (2013), Peng et al. (2015) and Chen et al. (2018).",
"X V defines an ordered list of nodes called external nodes , which specify the docking points during graph rewriting.",
"t : V T assigns a type from a finite set T to each node.",
"Different from the hypergraphs used by Chiang et al. (2013) and Chen et al. (2018), we highlight the usage of node types which has a significant impact on making parsing results logically coherent.",
"Three node types are utilized: h , x and c , which indicate handle, variable and predicate respectively.",
"During node gluing, we must make sure that the types of nodes are identical.",
"If the type of any node is still unspecified, the type of the other node will be selected.",
"For convenience, we define the type of a non-terminal hyperedge as the tuple of types of all nodes it connects to; we define the type of a graph fragment as the tuple of types of all external nodes in order.",
"For example, the graph fragment of some in Fig. 4 is typed as ( h , x ), which will be denoted as hx for short.",
"P is a finite set of production rules of the form A R , where the left hand side (LHS) A N , and the right hand side (RHS) R is a hypergraph with edge labels over N T .",
"The rewriting process replaces a non-terminal hyperedge with the graph fragment specified by a rule's RHS, attaching each external node to the matched node of the corresponding LHS.",
"In the meantime, the co-related nodes in LHS and RHS must be of the same types.",
"Tab.",
"1 presents four example rules.",
"Rule (cid:174) consists of three nodes and two hyperedges.",
"All three nodes are of type x , indicating that they are variables.",
"One hyperedge has a label NP and connects to one internal node; the other is labelled as VP and connects to one internal node and two external nodes.",
"Fig. 4 presents the composition process for chase some cat , in which Rule (cid:173) and (cid:174) are recursively called for semantic construction.",
"The types of a HRG rule put additional constraints to the combination of subgraphs and in this way the output graph is regularized to some extent.",
"A failed combination is illustrated in Fig.",
"5. 6776 c some 1 x 0 h h h RSTR ARG0 L BODY + c cat 0 x 1 h L ARG0 c 0 ?",
"Since we explicitly describe a recursive process, we are able to define a new graph embedding methodencoding graphs along with such a recursive structure.",
"Our strategy is to assign vectors to nodes involved in the composition process in a bottom-up way.",
"Before the application of an HRG rule A R ( R = (cid:104) V, E, l, t, X (cid:105) , V = { n 1 , n 2 , ... } , X = { e 1 , e 2 , ... } ), the external nodes of all non-terminal edges in R have been assigned vectors based on preceding composition while other newly introduced nodes are zero-initialized.",
"The vectors assigned to all nodes in R will be updated according to a Graph Neural Network (GNN), which works by exploiting locality encoded by R .",
"In this paper, we propose a hypergraph-state LSTM structure to do so.",
"In what follows, we will first introduce our GNN model and then use it to equip an HRG , resulting in a recursive hypergraph-state LSTM model.",
"Each node n j V has a node property vector x n j to represent its own information, such as the type and the corresponding label of a concept node, and the index of an external node.",
"And another hidden state vector h j is employed to hopefully encode the information of its surroundings.",
"The surrounding information of n j is collected by multi-step information exchange between n j and its neighbouring nodes, denoted as ( n j ) .",
"Two nodes n j and n k are viewed as neighbours if there is at least one hyperedge that connects them.",
"To keep its own information, we assume that each node has a self loop, i.e. n j ( n j ) .",
"Thus the neighbouring relation is symmetric.",
"An optional label l ( n j , n k ) can be attached to each neighboring relation.",
"Each node has an initial state h 0 j , representing the state when information has not been updated yet.",
"In each step of information exchange, according to x j and its previous hidden state h t 1 j , the new hidden state h tj is calculated from the representation of itself, its neighbours ( n j ) , and the label of each relation, in a way as generally defined as follows: h tj = f ( { x k | k ( n j ) } , { h t 1 k | k ( n j ) } , { l ( n j , n k ) | n k ( n j ) } ) Assume that L is a randomly initialized matrix for encoding neighbouring labels.",
"Summation is utilized to collect information from neighbouring nodes: x,j = (cid:88) k ( n j ) ( x k L [ l ( n j , n k )]) t 1 h,j = (cid:88) k ( n j ) h t 1 k Introducing the LSTM gate mechanism, the state transition can be written as: i tj = ( W i x,j + U i t 1 h,j + b i ) o tj = ( W o x,j + U o t 1 h,j + b o ) f tj = ( W f x,j + U f t 1 h,j + b f ) u tj = ( W u x,j + U u t 1 h,j + b u ) c tj = f tj c t 1 j + i tj u tj h tj = o tj tanh( c tj ) where i , o , f are the input, output and forget gates of LSTM.",
"W and U are the model parameters.",
"Similar to the tree LSTM (Tai et al., 2015), our recursive hyperedge-state LSTM model composes the states of a graph fragment from input vectors and the representations of its subgraphs.",
"The model alternates between two kinds of steps: (1) graph fragment encoding and (2) state propagation.",
"The process for encoding a non-leaf graph fragment is visualized in Fig.",
"6. The most important feature of our graph encoding method is that the process is step-wise, making it possible to perform semantic disambiguation and graph encoding iteratively.",
"In a graph fragment encoding step for R , we want to get some vectors representing a specific graph fragment for further combination.",
"This can be done by running multilayer hypergraph-state LSTM (denoted as HGS ) on R : [ h Tn 1 ; h Tn 2 ; ... ] = HGST ([ h 0 n 1 ; h 0 n 2 ; ... ] , [ x n 1 ; x n 2 ; ... ] , R ) 6777 some 0 1 LBL BODY ARG0 RSTR h T D , 1 h T D , 0 (cid:126)(cid:119) HGS D 1 0 2 L ARG0 ARG1ARG2 chase h T V , 1 h T V , 0 h T V , 2 (cid:126)(cid:119) HGS V 0 h T NP , 0 h 0 NP , 0 h 0 NP ,i D N (cid:126)(cid:119) HGS NP cat 1 0 h T N , 1 h T N , 0 LBL ARG0 (cid:126)(cid:119) HGS N x V 0 x 1 x 0 1 2 NP h 0 VP ,i h 0 VP , 0 h 0 VP , 1 h T VP , 1 h T VP , 0 (cid:126)(cid:119) HGS VP Figure 6: A graphical illustration of our recursive hypergraph-state LSTM model.",
"T represents the number of layers in the hypergraph-state LSTM.",
"For a node n j in a lexical graph fragment, we use a zero vector as h 0 n j .",
"For the non-leaf case, x and h 0 is acquired from preceding state propagation.",
"Not all final states h Tn 1 , h Tn 2 . . . should be kept for further composition.",
"Considering the role played by external nodes in graph gluing, we use the final states of external nodes h Te 1 , h Te 2 . . . to pass information and call them interface vectors .",
"State propagation is the preparatory stage of non-leaf graph fragment encoding, in which the interface vectors of its subgraph fragments are combined to calculate x and h 0 for the next step.",
"Without the loss of generality, we only discuss the case for binary rules in which R consists of two non-terminal hyperedges.",
"It is worth noting that in non-leaf graph fragment encoding, the hypergraph-state LSTM is operated on a rule rather than the entire graph fragment.",
"The process of encoding a non-leaf graph fragment can be seen as encoding an RHS R with special initial states originated from interface vectors.",
"The nodes in R are of three types: unified nodes, passover nodes and newly created nodes.",
"Newly created nodes bring new information to the combined graph fragment while the other two kinds of nodes are only used for structural connection.",
"For a newly created node, the node property vector x is calculated from its own information, and the initial state is a zero vector.",
"A unified node is connected by both non-terminal hyperedges, and therefore receive information from both sides.",
"The initial state h 0 of a unified node is the sum of the two corresponding interface vectors.",
"The property vector x is redefined as the sum of the two related property vectors.",
"A passover node is a node connected to only one non-terminal hyperedge.",
"And its property vector and initial state are simply copied from the unique corresponding node.",
"For example, the rule VP in Tab.",
"1 contains one unified node and two passover nodes.",
"Denote the set of corresponding nodes of n j as cor ( n j ) .",
"| cor ( n j ) | is 0, 1 or 2 for newly created nodes, passover nodes and unified nodes respectively.",
"x n j and h 0 n j for non-leaf graph fragment encoding can be calculated as: h 0 n j = (cid:88) n i cor ( n j ) h T n i x n j = (cid:88) n i cor ( n j ) x n i if | cor ( n j ) | (cid:54) = 0 6778 4 Parsing to Variable-in-situ Graphs Following our previous work (Chen et al., 2018), we continue to employ a synchronous grammar to build a practical parser.",
"We integrate a CFG that expresses syntactic composition with an HRG that expresses semantic composition.",
"Semantic construction is divided into two subtasks: syntactic parsing and semantic interpretation.",
"When a phrase structure tree T is available, a semantic interpreter translates T to the derivation of graph construction by assigning corresponding HRG rules to the syntactic counterparts.",
"At a single derivation step, there may be more than one HRG rule applicable.",
"In this case, we need a disambiguation model to select a good one.",
"The simplest disambiguation model is a count-based model : Given a coherent derivation tree, together with corresponding rule types, it simply selects the most frequent rule in the training data.",
"This model provides baseline performance for reference.",
"Chen et al. (2018) showed that disambiguation can be significantly improved when a classifier is introduced.",
"In particular, they proposed a feature engineering-based classifier , in which manually defined sparse vectors are utilized.",
"This is not suitable for our purpose because a variable-in-situ graph is much more complex in that much more external nodes are involved.",
"With the neural graph rewriting system introduced in 3.2, we propose a subgraph-based model which can handle the above problem by automatically learning vector representations for graphs.",
"More concretely, assume that we have built the left and right subgraphs, denoted by H l and H r , for further composition.",
"Usually, multiple rules, viz. r 1 , r 2 , ..., r M , are applicable to combine H l and H r .",
"Let the possible merged graphs be denoted by H = { H 1 , H 2 , . . . , HM } .",
"To build a high-quality graph, we need to rank H 1 , H 2 , ... according to some score functions that reflect their goodness .",
"Formally, we have an optimization problem: H = arg max H m H SCORE ( H m ) To calculate the score for H m , we consider both syntactic and semantic contexts.",
"To reflect the syntactic information, we use a vector-based encoding, denoted by s i,j , of the corresponding phrase/span ( i, j ) that can be calculated by a sequence-based model, such as LSTM or Transformer.",
"Graph fragment H m with n external nodes can be encoded by the neural graph rewriting system: running a recursive hypergraph-state LSTM on the RHS R m of an HRG rule where the interface vectors of H l and H r are consumed as initial states.",
"After that we get n new interface vectors related to H m (denoted as u m,k , 0 k < n ).",
"Taking advantage of the recursive structure, the common parts H l and H r of graph fragments H 1 , H 2 , ... are encoded only once, avoiding redundant computation.",
"We use an attention mechanism to get a single vector representation t m for the graph fragment H m : w m,k = ( u m,k ) (cid:62) W s i,j t m = (cid:88) 0 k<n ( u m,k w m,k ) We use the similarity between t m and s i,j as the score of this graph fragment.",
"For training, we use the cross-entropy function as loss.",
"SCORE ( H m , i, j ) = ( t m ) (cid:62) W 2 s i,j 5 Experiments 5.1 Data Setup DeepBank (Flickinger et al., 2012) is a deep linguistic resource that covers the Wall Street Journal section of Penn TreeBank (PTB; Marcus et al., 1993).",
"All annotations are governed by English Resource Grammar (ERG; Flickinger, 2000).",
"We use the DeepBank v1.1 data, and split it into training, development and test sets along with previous work (Oepen et al., 2014, 2015; Buys and Blunsom, 2017; Chen et al., 2018) to make sure that the numeric performance can be directly compared to the results in the literature.",
"Token-wise Evaluation for Accuracy The semantic annotations in DeepBank are presented as variable-in-situ MRS style originally.",
"It is a non-trivial problem to measure the similarity between different logical forms accordingly.",
"Copestake (2009) provides a method to reversibly translate them into variable-reduced semantic graphs, namely dubbed Dependency MRS ( DMRS ), in an information-equivalent fashion, which is widely used by previous studies.",
"We convert our outputs to DMRS , and re-use the evaluation metrics for variable-reduced graph representations, including Elementary Dependency Match ( EDM ; 6779 Dridan and Oepen, 2011) and SMATCH (Cai and Knight, 2013) to perform evaluation.",
"Search-Based Evaluation for Coherence Another dimension for parser evaluationthe coherence of the output structuresis as essential as accuracy, since we also emphasize on the logical nature.",
"Under the framework of underspecification, the coherence of a semantic structure entails that there must be at least one fully specified, i.e. scope-resolved logical form, which satisfies all the constraints encoded by that structure.",
"The following shows a by-design incoherent semantic graph: every dog ?",
"every has two scopal arguments, corresponding to the restriction and body domains respectively, but there is not enough predicates to fill in them.",
"Niehren and Thater (2003) proved that figuring out whether an MRS structure is coherent is NP-hard.",
"Accordingly, we use exhaustive search to find the first scope-resolved logical form if there is any.",
"Practically, our implementation is efficient enough to cover all graphs produced by our parser.",
"We conduct automatic grammar induction following our previous method (Chen et al., 2018).",
"Tab.",
"1 shows some rule examples, while Tab.",
"2 presents some statistics of the related grammars.",
"There is a big difference between the rule distributions of the grammars for variable-reduced and variable-in-situ semantic graphs.",
"For comparison, we report results on Elementary Dependency Structure ( EDS ; Oepen and Lnning, 2006).",
"Rules for the latter one have more external nodes on average.",
"More external nodes bring in a new problem for grammar induction determining the order of external nodes.",
"Consider the rule related to chase in Fig.",
"4. chase has three external nodes: the endpoints of ARG0 , ARG1 and ARG2 .",
"A grammar TH AO Span EDMPEDMAEDMSMATCH Count-Based N Y 91.98 94.41 65.68 80.52 80.79 Y N 91.80 94.41 75.35 84.91 85.42 Y Y 91.76 94.57 87.28 90.91 91.52 Subgraph-Based N Y 91.98 94.86 83.59 89.22 89.72 Y N 91.80 94.77 89.50 92.11 92.72 Y Y 91.76 94.85 90.27 92.54 93.39 Table 3: Accuracies on the development data.",
"induction algorithm needs to decide which one is taken as the first external node and which one the second, etc.",
"We find that a good order is important to the performance of a parser.",
"In our experiments, we use the syntactic attachment order to decide the order of an external node.",
"The attachment order reflects when a node is being glued to another graph fragment.",
"For example, the ARG2 of chase connects to the graph fragment of cat firstly, since cat is the syntactic object; secondly, the ARG0 connects to the graph fragment of happy , because happily as a adjunct stands in between object and subject.",
"As a result, we take the ARG2 and ARG0 endpoints as the first and second external nodes.",
"This method not only makes the grammar more regular, but also endows the order of external nodes with semantic meaning.",
"We implement a syntactic parser according to Kitaev and Klein (2018), which contains an 8-layer transformer to extract dense vector representations for candidate phrases.",
"ELMo (Peters et al., 2018) is used as pretrained contextualized word embed-dings.",
"In addition to the CFG rules, our syntactic parser also predicts the types of synchronous rules.",
"If a phrase NP has a semantic part of type x , it is labeled as NP#x .",
"A CKY decoder is employed to make sure that the output of the syntactic parser is coherent for semantic interpretation.",
"Tab.",
"3 presents the accuracy of syntactic parsing.",
"When syntactic trees are ready, the semantic interpreter selects an HRG rule for each tree node.",
"We apply greedy search to complete this translating process.",
"In subgraph-based model, the span features s i,j obtained by the syntactic parser are also used to perform disambiguation.",
"The word embedding and transformer are fixed in this step.",
"Tab.",
"3 summarizes the parsing results with different set-ups.",
"There is a significant gap between the typed and untyped HRG with respect to EDM scores.",
"Note that the performance of syntactic parsing is comparable.",
"This demonstrates the necessity to explicitly control the structural coherence of the semantic outputs.",
"An interesting observation is that the performance also drops significantly without a proper order of external nodes in the count-based model.",
"But the gap narrows after introducing the neural model.",
"It reveals that using the syntactic attachment order makes the grammar more regular, giving it more ability of semantic disambiguation.",
"The recursive hypergraph-state LSTM model is robust.",
"Its strong disambiguation ability can make up for the weakness of the grammar.",
"Tab.",
"4 shows the results on test set.",
"Our parser achieves an accuracy of 92.39% in terms of EDM , which is a 2.88 point improvement over the best data-driven model in the literature.",
"For fair competition, we remove the ELMo to match the experiment set-up of previous models.",
"The result shows that we still outperform the previous best model by 1.05 points.",
"We test the well-formedness of the output MRS and present the result in Tab.",
"5. With type restrictions, the output of our parser is highly coherent: at least 91% MRS allow at least one sound scope-resolved logic form.",
"It has been a long time since researchers manipulated semantic construction following the princi-ple of compositionality.",
"Different formalisms have been developed to express the syntactic-semantic interface in natural language utterances.",
"To manipulate compositional construction, HRG is a popular framework to define a graph-structured syntax-semantics interface (Peng et al., 2015; Chen et al., 2018).",
"AM algebra (Koller, 2015; Groschwitz et al., 2017) is another formalism to handle graph construction which has been successfully explored to build semantic parsers (Groschwitz et al., 2018; Lindemann et al., 2019).",
"Compositional vector representation is also widely studied in recent years.",
"Kiperwasser and Goldberg (2016) encodes syntactic dependency trees with a recursive recurrent neural network, which acts as the core of a bottom-up dependency parser.",
"Dyer et al. (2016) introduced Recurrent Neural Network Grammar, a probabilistic model of sentences with explicit phrase structure.",
"A recursive syntactic composition function is used to compute an embedding of a completed phrase-structure subtree.",
"Modeling discrete structures with principled neural networks has received an increasing interest.",
"Kipf and Welling (2017) proposed Graph Convolution Network to classify nodes in graphs.",
"DAG-structured LSTM is a natural extension to tree LSTM which treats nodes as basic states (Zhu et al., 2016).",
"Graph-state LSTM can be used in both generation task (Song et al., 2018a) and relation extraction (Song et al., 2018b).",
"Graph-structured meaning representations provide an effective way to encode rich semantic information of natural language sentences and have been extensively studied recently.",
"We enriched the discussion by studying an alternative graph-based representation for underspecified logical forms.",
"In particular, we introduced a novel neural graph rewriting system and developed a new state-of-the-art semantic parser for variable-in-situ graphs.",
"This work is supported in part by the National Hi-Tech R&D Program of China (No. 2018YFC0831900).",
"Weiwei Sun is the corresponding author."
] |
[
"objective",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"objective",
"other",
"other"
] |
[
"Recent advances in open-domain QA have led to strong models based on dense retrieval, but only focused on retrieving textual passages.",
"In this work, we tackle open-domain QA over tables for the first time, and show that retrieval can be improved by a retriever designed to handle tabular context.",
"We present an effective pre-training procedure for our retriever and improve retrieval quality with mined hard negatives.",
"As relevant datasets are missing, we extract a subset of NATURALQUESTIONS (Kwiatkowski et al., 2019) into a Table QA dataset.",
"We find that our retriever improves retrieval results from 72 .",
"0 to 81 .",
"1 recall@10 and end-to-end QA results from 33 .",
"8 to 37 .",
"7 exact match, over a BERT based retriever.",
"Models for question answering (QA) over tables usually assume that the relevant table is given during test time.",
"This applies for semantic parsing (e.g., for models trained on SPIDER (Yu et al., 2018)) and for end-to-end QA (Neelakantan et al., 2016; Herzig et al., 2020).",
"While this assumption simplifies the QA model, it is not realistic for many use-cases where the question is asked through some open-domain natural language interface, such as web search or a virtual assistant.",
"In these open-domain settings, the user has some information need, and the corresponding answer resides in some table in a large corpus of tables.",
"The QA model then needs to utilize the corpus as an information source, efficiently search for the relevant table within, parse it, and extract the answer.",
"Recently, much work has explored open-domain QA over a corpus of textual passages (Chen et al., 2017; Sun et al., 2018; Yang et al., 2019; Lee et al., 2019, inter alia ).",
"These approaches usually follow a two-stage framework: (1) a retriever first selects a small subset of candidate passages relevant to the Work completed while interning at Google.",
"question, and then (2) a machine reader examines the retrieved passages and selects the correct answer.",
"While these approaches work well on free text, it is not clear whether they can be directly applied to tables, as tables are semi-structured, and thus different than free text.",
"In this paper we describe the first study to tackle open-domain QA over tables, and focus on modifying the retriever.",
"We follow the two-step approach of a retriever model that retrieves a small set of candidate tables from a corpus, followed by a QA model (Figure 1).",
"Specifically, we utilize dense retrieval approaches targeted for retrieving passages (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020), and modify the retriever to better handle tabular contexts.",
"We present a simple and effective pre-training procedure for our retriever, and further improve its performance by mining hard negatives using the retriever model.",
"Finally, as relevant open domain datasets are missing, we process NATURALQUESTIONS (Kwiatkowski et al., 2019) and extract 11K examples where the answer resides in some table.",
"Our model and data generation code as well as the pre-trained model are publicly available at https://github.com/google-research/ tapas .",
"We formally define open domain extractive QA over tables as follows.",
"We are given a training set of N examples D train = { ( q i , T i , a i ) } Ni =1 , where q i is a question, T i is a table where the answer a i resides, and a corpus of M tables C = { T i } Mi =1 .",
"The answer a i is comprised of one or more spans of tokens in T i .",
"Our goal is to learn a model that given a new question q and the corpus C returns the correct answer a .",
"Our task shares similarities with open domain QA over documents (Chen et al., 2017; Yang et al., 2019; Lee et al., 2019), where the corpus C consists of textual passages extracted from documents instead of tables, and the answer is a span that appears in some passage in the corpus.",
"As in these works, dealing with a large corpus (of tables in our setting), requires relevant context retrieval.",
"Naively applying a QA model, for example TAPAS (Herzig et al., 2020), over each table in the large corpus is not practical because inference is too expensive.",
"To this end we break our system into two in-dependent steps.",
"First, an efficient table retriever component selects a small set of candidate tables CR from a large corpus of tables C .",
"Second, we apply a QA model to extract the answer a given the question q and the candidate tables CR .",
"In this section we describe our dense table retriever (DTR), which retrieves a small set of K candidate tables CR given a question q and a corpus C .",
"In this work we set K = 10 and take C to be the set of all tables in the dataset we experiment with (see 6).",
"As in recent work for open domain QA on passages (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020; Chen et al., 2021; Oguz et al., 2020), we also follow a dense retrieval architecture.",
"As tables that contain the answer to q do not necessarily include tokens from q , a dense encoding can better capture similarities between table contents and a question.",
"For training DTR, we leverage both in-domain training data D train , and automatically constructed pre-training data D pt of text-table pairs (see below).",
"meaningful way, by capturing their specific structure.",
"Traditional information retrieval methods such as BM25 are targeted to capture token overlaps between a query and a textual document, and other dense encoders are pre-trained language models (such as BERT) targeted for text representations.",
"Recently, Herzig et al. (2020) proposed TAPAS, an encoder based on BERT, designed to contextually represent text and a table jointly.",
"TAPAS includes table specific embeddings that capture its structure, such as row and column ids.",
"In DTR, we use TAPAS to represent both the query q and the table T .",
"For efficient retrieval during inference we use two different TAPAS instances (for q and for T ), and learn a similarity metric between them as Lee et al. (2019); Karpukhin et al. (2020).",
"More concretely, the TAPAS encoder TAPAS ( x 1 , [ x 2 ]) takes one or two inputs as arguments, where x 1 is a string and x 2 is a flattened table.",
"We then define the retrieval score as the inner product of dense vector representations of the question q and the table T : h q = W q TAPAS q ( q ) [CLS] h T = WTTAPAST ( title ( T ) , T ) [CLS] S ret ( q, T ) = h Tq h T , where TAPAS ( ) [CLS] returns the hidden state for the CLS token, W q and WT are matrices that project the TAPAS output into d = 256 dimensional vectors, and title ( T ) is the page title for table T .",
"We found the table's page title to assist in retrieving relevant tables, which is also useful for Wikipedia passage retrieval (Lee et al., 2019).",
"Training The goal of the retriever is to create a vector space such that relevant pairs of questions and tables will have smaller distance (which results in a large dot product) than the irrelevant pairs, by learning an embedding.",
"To increase the likelihood of gold ( q, T ) pairs, we train the retriever with in-batch negatives (Gillick et al., 2019; Henderson et al., 2017; Karpukhin et al., 2020).",
"Let { ( q i , T i ) } Bi =1 be a batch of B examples from D train , where for each q i , T i is the gold table to retrieve, and for each j (cid:54) = i we treat T j as a negative.",
"We now define the likelihood of the gold table T i as: p ( T i | q i ) = exp[ S ret ( q i , T i )] (cid:80) Bj =1 exp[ S ret ( q i , T j )] .",
"questions and tables respectively.",
"Then, S = QTT gives an B B matrix where the logits from the gold table are on the diagonal.",
"We then train using a row-wise cross entropy loss where the labels are a B B identity matrix.",
"Pre-training One could train our retriever from scratch, solely relying on a sufficiently large in-domain training dataset D train .",
"However, we find performance to improve after using a simple pretraining method for our retriever.",
"Lee et al. (2019) suggest to pre-train a textual dense retriever using an Inverse Cloze Task (ICT).",
"In ICT, the goal is to predict a context given a sentence s .",
"The context is a passage that originally contains s , but with s masked.",
"The motivation is that the relevant context should be semantically similar to s , and should contain information missing from s .",
"Similarly, we posit that a table T that appears in close proximity to some text span s is more relevant to s than a random table.",
"To construct a set D pt = { ( s i , T i ) } Mi =1 that consists of M pretraining pairs ( s, T ) , we use the pre-training data from Herzig et al. (2020).",
"They extracted text-table pairs from 6.2M Wikipedia tables, where text spans were sampled from the table caption, page title, page description, segment title and text of the segment the table occurs in.",
"This resulted in a total of 21.3M text-table ( s, T ) pairs.",
"While Herzig et al. (2020) uses extracted ( s, T ) pairs for pretraining TAPAS with a masked language modeling objective, we pre-train DTR from these pairs, with the same objective used for in-domain data.",
"Hard Negatives Following similar work (Gillick et al., 2019; Karpukhin et al., 2020; Xiong et al., 2021), we use an initial retrieval model to extract the most similar tables from C for each question in the training set.",
"From this list we discard each table that does contain the reference answer to remove false negatives.",
"We use the highest scoring remaining table as a particular hard negative.",
"Given the new triplets of question, reference table and mined negative table, we train a new model using a modified version of the in-batch negative training discussed above.",
"Given Q and S as defined above and a new matrix N ( B d ) that holds the representations of the negative tables, S (cid:48) = QNT gives another B B matrix that we want to be small in value (possibly negative).",
"If we concatenate S and S (cid:48) row-wise we get a new matrix for which we can perform the same cross entropy training as before.",
"The label matrix is now obtained by concatenating an identity matrix row-wise with a zero matrix.",
"Inference During inference time, we apply the table encoder TAPAST to all the tables T C offline.",
"Given a test question q , we derive its representation h q and retrieve the top K tables with representations closest to h q .",
"In our experiments, we use exhaustive search to find the top K tables, but to scale to large corpora, fast maximum inner product search using existing tools such as FAISS (Johnson et al., 2019) and SCANN (Guo et al., 2020) could be used, instead.",
"A reader model is used to extract the answer a given the question q and K candidate tables.",
"The model scores each candidate and at the same time extracts a suitable answer span from the table.",
"Each table and question are jointly encoded using a TAPAS model.",
"The score is a simple logistic loss based on the CLS token, as in Eisenschlos et al. (2020).",
"The answer span extraction is modeled as a soft-max over all possible spans up to a certain length.",
"Spans that are located outside of a table cell or that cross a cell are masked.",
"Following Lee et al. (2017, 2019), the span representation is the concatenation of the contextual representation of the first and last token in the span s : h start = TAPAS r ( q, title ( T ) , T )[ START ( s )] h end = TAPAS r ( q, title ( T ) , T )[ END ( s )] S read ( q, T ) = MLP ([ h start , h end ]) .",
"The training and test data are created by running a retrieval model.",
"We extract the K = 10 highest scoring candidate tables for each question.",
"At training time we add the reference table if it is missing from the candidates.",
"At inference time all table candidates are processed and the answer of the candidate with the highest score is returned as the predicted answer.",
"We create a new English dataset called NQ-TABLES from NATURALQUESTIONS (Kwiatkowski et al., 2019) (NQ).",
"Concurrently with this work, Zayats et al. (2021) study a similar subset of NQ but without the retrieval aspect.",
"from real Google search queries and the answers are spans in Wikipedia articles identified by annotators.",
"Although the answers for most questions appear in textual passages, we identified 12K examples where the answer resides in a table, and can be used as a QA over tables example.",
"To this end, we form NQ-TABLES that consists of ( q, T, a ) triplets from these examples.",
"Tables are extracted from the article's HTML, and are normalized by transposing infobox tables.",
"We randomly split the original NQ train set into train and dev (based on a hash of the page title) and use all questions from the original NQ dev set as our test set.",
"To construct the corpus C , we extract all tables that appear in articles in all NQ sets.",
"NQ can contain the same Wikipedia page in different versions which leads to many almost identical tables.",
"We merge close duplicates using the following procedure.",
"For all tables that occur on the same Wikipedia page we flatten the entire table content, tokenize it and compute l 2 normalized uni-gram vectors of the token counts of each table.",
"We then compute the pair-wise cosine similarity of all tables.",
"We iterate over the table pairs in decreasing order of similarity and attempt to merge them into clusters.",
"This is essentially a version of single link clustering.",
"In particular, we will merge two tables if the similarity is > 0 .",
"91 , they do not occur on the same version of the page, their difference is rows is at most 2 and they have the same number of columns.",
"Dataset sizes are given in the following table: train dev test corpus C 9,594 1,068 966 169,898 Retriever Reader EM F1 Oracle EM Oracle F1 BM25 TAPAS 21.46 28.24 29.51 40.79 DTR-Text BERT 29.58 37.38 39.39 51.48 DTR-Text TAPAS 33.78 43.49 42.83 56.46 DTR-Schema TAPAS 32.75 42.19 42.63 55.05 DTR TAPAS 35.50 45.44 46.09 59.01 DTR +hnbm25 TAPAS 36.61 46.74 47.46 60.72 DTR +hn TAPAS 37.69 47.70 48.20 61.50 Table 2: QA results on NQ-TABLES test set.",
"Details about the experimental setup are given Appendix A.",
"Retrieval Baselines We consider the following baselines as alternatives to DTR.",
"We use the BM25 (Robertson and Zaragoza, 2009) implementation of Gensim ( Rehurek and Sojka, 2010) 1 .",
"To measure if a table-specific encoder is necessary, we implement DTR-TEXT , where the retriever is initialized from BERT (Devlin et al., 2019) instead of TAPAS.",
"To test whether the content of the table is relevant, we experiment with DTR-SCHEMA , where only the headers and title are used to represent tables.",
"Retrieval Results Table 1 shows the test results for table retrieval (dev results are in Appendix B).",
"We report recall at K (R@K) metrics as the fraction of questions for which the highest scoring K tables contain the reference table.",
"We find that all dense models that have been pre-trained out-peform the BM25 baseline by a large margin.",
"The model that uses the TAPAS table embeddings (DTR) out-performs the dense baselines by more than 1 point in R@10.",
"The addition of mined negatives (DTR +hn) yields an additional improvement of more than 5 points.",
"Mining negatives from DTR works better than mining negatives from BM25 (DTR +hnbm25, +0.6 R@10).",
"End-to-End QA Results for end-to-end QA experiments are shown in Table 2 (dev results are in Appendix B).",
"We use the exact match (EM) and token F1 metrics as implemented in SQUAD (Ra-jpurkar et al., 2016).",
"2 We additionally report oracle 1 We find that recall improves if the document title and table header tokens are counted multiple times.",
"In all experiments we use a count of 15.",
"2 https://worksheets.",
"We again find that all dense models out-perform the BM25 baseline.",
"A TAPAS-based reader outperforms a BERT reader by more than 3 points in EM.",
"The simple DTR model out-performs the baselines by more than 1 point in EM.",
"Hard negatives from BM25 (+hnbm25) improve DTR's performance by 1 point, while hard negatives from DTR (+hn) improve performance by 2 points.",
"We additionally perform a McNemar's significance test for our proposed model, DTR+hn, and find that it performs significantly better (p<0.05) than all baselines.",
"Analysis Analyzing the best model in Table 2 (DTR +hn) on the dev set, we find that 29% of the questions are answered correctly, 14% require a list answer (which is out of scope for this paper), 12% do not have any table candidate that contains the answer, for 11% the model does not select a table that contains the answer, and for 34% the reader fails to extract the correct span.",
"We further analyzed the last category by manually annotating 100 random examples.",
"We find that for 23 examples the answer is partially correct (usually caused by inconsistent span annotations in NQ).",
"For 11 examples the answer is ambiguous (e.g., the release date of a movie released in different regions).",
"For 22 examples the table is missing context or does only contain the answer accidentally.",
"Finally, 44 examples are wrong, usually because they require some kind of table reasoning, like computing the maximum over a column, or using common sense knowledge.",
"In this paper we demonstrated that a retriever designed to handle tabular context can outperform other textual retrievers for open-domain QA on tables.",
"We additionally showed that our retriever can be effectively pre-trained and improved by hard negatives.",
"In future work we aim to tackle multimodal open-domain QA, combining passages and tables as context.",
"We would like to thank William Cohen, Sewon Min, Yasemin Altun and the anonymous reviewers for their constructive feedback, useful comments and suggestions.",
"This work was completed in partial fulfillment for the PhD degree of the first author, which was also supported by a Google PhD fellowship."
] |
[
"abstain",
"objective",
"result",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"method",
"other",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"other",
"other"
] |
[
"While online conversations can cover a vast amount of information in many different formats, abstractive text summarization has primarily focused on modeling solely news articles.",
"This research gap is due, in part, to the lack of standardized datasets for summarizing online discussions.",
"To address this gap, we design annotation protocols motivated by an issuesviewpointsassertions framework to crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads.",
"We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.",
"To create a comprehensive benchmark, we also evaluate these models on widely-used conversation summarization datasets to establish strong baselines in this domain.",
"Furthermore, we incorporate argument mining through graph construction to directly model the issues, viewpoints, and assertions present in a conversation and filter noisy input, showing comparable or improved results according to automatic and human evaluations.",
"Automatic text summarization is the process of outputting the most salient parts of an input in a concise and readable form.",
"Recent work in summarization has made significant progress due to introducing large-scale datasets such as the CNN-DailyMail dataset (Nallapati et al., 2016) and the New York Times dataset (Sandhaus, 2008).",
"Furthermore, the use of large self-supervised pretrained models such as BART (Lewis et al., 2020) and Pegasus (Zhang et al., 2019) has achieved state-of-the-art performance across summarization tasks and strong performance in zero and few-shot settings (Fabbri et al., 2020a).",
"However, less work has focused on summarizing online conversations.",
"Headline: SuperBowl Snippet: Whether you're a football fan or not, what do you like about Super Bowl Sunday?",
"Comment: ...",
"In my opinion I think the Falcons will stomp the patriots.",
"I think Tom Brady will choke the Super Bowl.",
"...",
"Comment: I am big Arizona Cardinals fan so when they didn't even make the playoffs i was upset.",
"...",
"Comment: I'm not a very big football fan at all.",
"So when it comes to Superbowl Sunday, I'm in it for the commercials and the half time show.",
"...",
"Comment: I am not exactly a football fan, but I enjoy watching the Super Bowl....",
"...Summary: Several commenters list their favorite things about the Super Bowl, including half-time shows, the funny commercials, the Puppy Bowl, eating food, and spending time with family.",
"A couple of commenters admit to not being football fans but still enjoying the Super Bowl.",
"Some commenters discuss whether they thought the Falcons or the Patriots were going to win, while others list teams they wish were in the game.",
"Unlike documents, articles, and scientific papers, which contain specific linguistic structures and conventions such as topic sentences and abstracts, conversational text scatters main points across multiple utterances and between numerous writers.",
"As a result, the text summarization task in the conversational data domain offers a challenging research field to test newly-developed models (Chen and Yang, 2020).",
"Recently, Gliwa et al. (2019a) introduced a dataset for chat-dialogue conversation summarization consisting of 16k examples, the first large-scale dataset of its kind.",
"Previous work in conversation summarization was limited by the data available and focused primarily on meeting summarization, such as the AMI (Kraaij et al., 2005) and ICSI (Janin et al., 2003) datasets.",
"The datasets used in recent conversation papers are often not uniform, ranging from visual dialogue data (Goo and Chen, 2018a) to customer-service dialogues (Yuan and Yu, 2019), not initially intended for summarization.",
"The availability of benchmark datasets for comparing methods has limited work in other conversation summarization domains and thus likely inhibited progress (Kryscinski et al., 2019; Fabbri et al., 2020b).",
"We aim to address this research gap by crowdsourcing a suite of four datasets, which we call ConvoSumm , that can evaluate a model's performance on a broad spectrum of conversation data.",
"In determining the domains of data to collect, we use the general definition of conversation as any discourse produced by more than one person (Ford, 1991).",
"We identify several key categories of data for which standard human-created development and testing datasets do not exist, namely (1) news article comments, (2) discussion forums and debate, (3) community question answering, and (4) email threads.",
"We design annotation protocols motivated by work in quantifying viewpoints present in news comment data (Barker and Gaizauskas, 2016a) to crowdsource 250 development and 250 test examples for each of the above domains.",
"We provide an example of comments to a New York Times news article, and our crowdsourced summary in Table 1. In addition to introducing manually-curated datasets for conversation summarization, we also aim to unify previous work in conversation summarization.",
"Namely, we benchmark a state-of-the-art abstractive model on several conversation datasets: dialogue summarization from SAMSum (Gliwa et al., 2019b), heuristic-generated community question answering from CQASumm (Chowdhury and Chakraborty, 2018), meeting summarization data from AMI and ICSI, and smaller test sets in the news comments, discussion forum, and email domains.",
"We believe that such benchmarking will facilitate a more straightforward comparison of conversation summarization models across domains.",
"To unify modeling across these conversational domains, we propose to use recent work in end-to-end argument mining (Lenz et al., 2020; Stab and Gurevych, 2014; Chakrabarty et al., 2019) to instantiate the theoretical graph framework which motivated our annotation protocol, proposed by Barker and Gaizauskas (2016a) for conversation summarization.",
"This protocol is employed to both identify and use the issuesviewpointsassertions argument structure (discussed in Related Work) for summarizing news comments.",
"We construct this argument graph using entailment relations, linearize the graph, train a graph-to-text model (Ribeiro et al., 2020), and experiment with argument mining as a way to reduce noise in long-text input.",
"Our contributions are the following: (1) we crowdsource datasets for four domains of conversational data and analyze the characteristics of our proposed datasets; (2) we benchmark state-of-the-art models on these datasets as well as previous widely-used conversation summarization datasets to provide a clear baseline for future work; and (3) we apply argument mining to model the structure of our conversational data better as well as reduce noise in long-text input, showing comparable or improved results in both automatic and human evaluations.",
"1 2 Related Work Modeling Conversation Summarization Early approaches to conversation summarization consisted of feature engineering (Shasha Xie et al., 2008), template selection methods (Oya et al., 2014), and statistical machine learning approaches (Galley, 2006; Wang and Cardie, 2013).",
"More recent modeling approaches for dialogue summarization have attempted to take advantage of conversation structures found within the data through dialogue act classification (Goo and Chen, 2018b), discourse labeling (Ganesh and Dingliwal, 2019), topic segmentation (Liu et al., 2019c), and key-point analysis (Liu et al., 2019a).",
"Chen and Yang (2020) utilize multiple conversational structures from different perspectives in its sequence-to-sequence model.",
"However, such approaches focus exclusively on dialogue summarization, and it is not trivial to extend such methods to longer conversations with many more participants.",
"We thus introduce a method to model the structure of the discourse over the many-party conversation.",
"Several existing works have focused on conceptualizing conversation structure for summarization and how to present this structure to end-users.",
"Barker et al. (2016a) propose a conversation overview summary that aims to capture the key argumentative content of a reader comment conversation.",
"Misra et al. (2017) use summarization 1 For reproducibility of our findings, we will make our data and code publicly available at https://github.com/ Yale-LILY/ConvoSumm .",
"as a means of probing online debates to discover central propositions, which they cluster to identify argument facets.",
"Barker and Gaizauskas (2016b) identify three key components of conversational dialogue: issues (that individuals discuss), viewpoints (that they hold about these issues), and assertions (that they make to support their viewpoints).",
"We build on this framework and advances in argument mining for end-to-end training for summarization.",
"Argument Mining Work in argument mining (Stab and Gurevych, 2014) has aimed to identify these argumentative units and classify them into claims, premises, and major claims, or claims describing the key concept in a text.",
"More recently, Chakrabarty et al. (2019) propose to fine-tune BERT (Devlin et al., 2019) for identifying argumentative units and relationships between them within a text and across texts.",
"Lenz et al. (2020) are the first to propose an end-to-end approach for constructing an argument graph (Stede et al., 2016), a structured representation of claims and premises in an argumentative text; the graph is built by connecting claim and premise argumentative discourse units.",
"We build on this framework for modeling discourse in conversational data.",
"Few-Shot Summarization As the datasets we introduce are not on a scale with larger datasets, we focus on few-shot and domain transfer summarization techniques.",
"Wang et al. (2019) examine domain adaptation in extractive summarization, while Hua and Wang (2017) examine domain adaptation between opinion and news summarization.",
"Within unsupervised abstractive summarization, several approaches have made use of variational autoen-coders (Baziotis et al., 2019; Chu and Liu, 2019; Brazinskas et al., 2020) and pretrained language models (Zhou and Rush, 2019; Laban et al., 2020).",
"Recent work in abstractive (Zhang et al., 2019; Fabbri et al., 2020a) and extractive-compressive summarization (Desai et al., 2020) has shown the power of pretrained models for a few-shot transfer.",
"The quality of models trained on several hundred examples in these papers is comparable to that of models trained on the equivalent full datasets.",
"Thus, we believe that introducing curated validation and testing datasets consisting of a few hundred examples is a valuable contribution within the current paradigm, which was confirmed by the poor performance of models transferred from other domains compared to that trained on this validation data.",
"In this section, we introduce our dataset selection, our annotation protocol, and the characteristics of our crowdsourced dataset.",
"Data Selection For the news comments subdomain, we use the NYT Comments dataset, which consists of 2 million comments made on 9,000 New York Times articles published between 2017 and 2018.",
"It is publicly available and has been used in work for news-comment relevance modeling (Kolhatkar and Taboada, 2017); it also contains metadata that may be of use in summarization modeling.",
"For the discussion forums and debate subdomain, we select Reddit data from CoarseDis-course (Zhang et al., 2017), which contains annotations about the discourse structure of the threads.",
"For the community question answering subdomain, we use StackExchange (Stack), which provides access to all forums and has been used in modeling for answer relevance and question deduplication (Hoogeveen et al., 2015).",
"We chose StackExchange over the commonly-used Yahoo! Answers data due to licensing reasons.",
"For the email threads subdomain, we use the publicly-available W3C corpus (Craswell et al., 2005).",
"Previous work also made use of this dataset for email summarization (Ulrich et al., 2008) but provided only a small sample of 40 email threads, for which we provide transfer testing results.",
"We generally follow the guidance of Tomasoni and Huang (2010), from summarizing community question answering forums, for determining which subsets of data to select from the above datasets.",
"We remove an example if (1) there were less than five posts (four in the case of email threads; post refers to any answer, comment, or email); (2) the longest post was over 400 words; (3) the sum of all post lengths was outside of [100 , 1400] words (although we extended this maximum length for NYT comments); or (4) the average length of the posts was outside of the [50 , 300] words interval.",
"For Stack data, we first filtered answers which received a negative community rating, as defined by the number of user upvotes minus the number of user downvotes.",
"While real-world settings may contain much longer threads, we later show that this setting is already challenging.",
"Annotation Protocol We designed annotation instructions for crowdsourced workers to write abstractive summaries for each of the four Dataset % novel n-grams Extractive Oracle Summary Length Input Length # Docs/Example NYT 36.11/79.72/94.52 36.26/10.21/31.23 79 1624 16.95 Reddit 43.84/84.98/95.65 35.74/10.45/30.74 65 641 7.88 Stack 35.12/77.91/93.56 37.30/10.70/31.93 73 1207 9.72 Email 42.09/83.27/93.98 40.98/15.50/35.22 74 917 4.95 Table 2: Statistics across dataset sources in ConvoSumm, showing novel uni/bi/tri-grams, ROUGE-1/2/L extractive oracle scores, the average input and summary lengths (number of tokens), as well as the number of documents per example, where each comment/post/answer/email is considered a document.",
"Dataset/Method Inter-document Similarity Redundancy Layout Bias NYT -11.71 -0.23 0.2/0.5/0.3 Reddit -7.56 -0.49 0.2/0.5/0.2 Stack -9.59 -0.27 0.2/0.3/0.4 Email -1.76 -0.18 0.3/0.4/0.3 Table 3: Multi-document summarization-specific dataset analysis on our proposed datasets with metrics introduced in Dey et al. (2020a): inter-document similarity (father from zero is less similarity), redundancy (father from zero is less overall redundancy of semantic units), and start/middle/end layout bias.",
"datasets, motivated by work in summarizing viewpoints present in online conversation (Barker and Gaizauskas, 2016a).",
"We present the crowdsource workers with the data threads, along with any available metadata.",
"For NYT, we presented the workers with the article headline, keywords, and, rather than providing the entire article as context, an extractive BERT-based summary (Miller, 2019) of the article.",
"We use a BERT summary to give the annotators an idea of the topic of the article.",
"We avoided having annotators read the entire article since the focus of their summaries was solely the content of the comments as per the annotation protocols, and reading the entire article could end up introducing information in the summaries that was not necessarily representative of the comments' main points.",
"We found that these summaries were useful in initial in-house annotations, and allowed us to better understand the context of the comments being summarized.",
"For Reddit and Stack, question tags and information about the subforum were provided; the Stack data includes both answers and answer comments.",
"Reddit data was filtered simply on word limits due to the unavailability of up/down votes from the Coarse Discourse data.",
"Stack data includes the prompt/title as well.",
"Whenever possible, we included username information and the scores of all comments, posts, and answers.",
"Although the instructions differed slightly with the specific nuances of each dataset, they had standard overall rules: (1) summaries should be an analysis of the given input rather than another response or utterance; (2) summaries should be abstractive,",
"i.e., annotators were required to paraphrase and could not repeat more than five words in a row from the source; and (3) summary lengths should contain [40 , 90] tokens.",
"Following the issuesviewpoints assertions framework presented in Barker and Gaizauskas (2016b), we also instructed annotators that summaries should summarize all viewpoints in the input and should try to include specific details from assertions and anecdotes (unless this made the summary too lengthy).",
"Summarizing based on similar viewpoints is analogous to clustering then summarizing, similar to the comment label grouping procedure before summarization in Barker et al. (2016b).",
"To help with this, we recommended wording such as Most commenters suggest that... and Some commenters think that... to group responses with similar viewpoints.",
"However, the email dataset was unique among the selected datasets given that it contained more back-and-forth dialogue than clusters of viewpoints, and thus identifying the speakers was essential to creating summaries that still retained meaning from the original email dialogue.",
"Since the email threads contained fewer individual speakers than the other datasets, this sort of summarization remained feasible.",
"Thus, for this dataset, annotators were instructed to specify the speakers when summarizing the conversation.",
"Quality-Controlled Crowdsourcing We crowdsourced our data using Amazon Mechanical Turk.",
"We required that our workers be native English speakers and pass a qualifying exam for each domain to be summarized.",
"We worked with a select group of about 15 workers who formed a community of high-quality annotators.",
"Example summaries were provided to the workers.",
"The workers submitted the qualifying exam, and then one of the authors of this paper provided feedback.",
"If the worker was not sure of the quality of the summaries written, at any point, they could enlist the input of one of the authors.",
"Additionally, after the workers wrote all summaries, we manually reviewed every summary and made corrections to grammar, wording, and overall structure.",
"Summaries we could not fix ourselves, either because they were poorly written or did not follow the annotation protocols, were flagged to be re-written.",
"They were then sent to our approved group of workers to be re-written, excluding any workers who had written a flagged summary.",
"While data crowdsourced from non-experts may contain noise (Gillick and Liu, 2010), we believe that our setup of working closely with a small group of workers, providing feedback to individual workers, and manually reviewing all final summaries mitigates these issues.",
"Dataset Statistics We provide statistics in Table 2. The percentage of novel n-grams in our summaries is higher than that of the very abstractive XSum dataset (Narayan et al., 2018) (35.76/83.45/95.50 -% novel uni/bi/tri-grams).",
"This level of abstraction is likely due to the instructions to perform abstractive summarization and the summaries being an analysis of the input, which results in the insertion of new words (e.g. commenters likely isn't seen in the input).",
"The in-fluence of this abstraction is further seen by an analysis of the Extractive Oracle, for which we show ROUGE-1/2/L (Lin, 2004).",
"We see that the performance of an extractive model is above the Extractive Oracle on the very abstractive XSum (Narayan et al., 2018) (29.79 ROUGE-1), but much lower than the Extractive Oracle on the CNN-DailyMail (CNNDM) dataset (Nallapati et al., 2016) ( > 50 ROUGE-1).",
"The summary lengths are fairly consistent, while the input lengths are the longest for NYT and Stack data.",
"We include the title and additional meta-data such as the headline and snippet in NYT data in input length calculations.",
"We analyze multi-document summarization specific characteristics of our datasets, as proposed by Dey et al. (2020a).",
"In particular, inter-document similarity measures the degree of overlap of semantic units in the candidate documents, with scores further from zero signifying less overlap.",
"The notion introduced for redundancy measures the overall distribution of semantic units; the farther the score is from zero, the more uniform semantic units are across the entire input, with the maximum when each unit is present only once.",
"Layout bias measures the similarity of multi-sentential documents with the reference.",
"For more precise definitions, we refer the reader to Dey et al. (2020a).",
"We provide results for our data in Table 3. Email data exhibits the most inter-document similarity, which follows the intuition that an email thread consists of a focused discussion typically on a single topic.",
"For redundancy, we see Reddit shows the most uniform distribution of semantic units, perhaps due to Reddit threads' less focused nature compared to the remaining datasets.",
"We do not see a particularly strong layout bias across any parts of the input documents.",
"Our datasets exhibit greater or comparable levels of novel-ngrams compared to multi-document summarization datasets such as MultiNews (Fabbri et al., 2019) and CQASUMM (Chowdhury and Chakraborty, 2018).",
"Our Stack subset has lower inter-document similarity, which presents challenges for models which rely strictly on redundancy in the input, and our datasets generally exhibit less layout bias, when compared to the analysis done in Dey et al. (2020b).",
"Comparison to Existing Datasets Although previous work on conversation summarization, before the introduction of SAMSum (Gliwa et al., 2019b), has largely featured unsupervised or few-shot methods, there exist several datasets with reference summaries.",
"These include SENSEI (Barker et al., 2016b) for news comments, the Argumentative Dialogue Summary Corpus (ADS) (Misra et al., 2015) for discussion forums, and the BC3 (Ulrich et al., 2009) dataset for email data.",
"However, much of the existing datasets are not wide in scope.",
"For example, SENSEI only covers six topics and the ADS Corpus covers one topic and only has 45 dialogues.",
"Furthermore, they each pertain to one subdomain of conversation.",
"Our dataset avoids these issues by covering four diverse subdomains of conversation and having approximately 500 annotated summaries for each subdomain.",
"Additionally, since neural abstractive summarization baselines do not exist for these datasets, we benchmark our models on these datasets to further their use as test sets.",
"We similarly include the AMI and ICSI meeting datasets within our benchmark.",
"Within community question answering, the Wik-iHowQA dataset (Deng et al., 2020) consists of user response threads to non-factoid questions starting with how to, including labels for the answer selection task and reference summaries.",
"The CQASUMM dataset (Chowdhury and Chakraborty, Figure 1: Sample argument subgraph construct from NYT news comments illustrating varying viewpoints. Claims I honestly... and but I dont.. are entailed by premises, connected through Default Inference nodes, and opposing claims are connected through Issue nodes. 2018) sampled threads from Yahoo! Answers in which the best answer could be used as a reference summary.",
"However, this heuristic is not guaranteed to cover all the user answers' perspectives, so we believe our dataset is a more principled benchmark for community question answering.",
"It is also noted that several large-scale MDS datasets have been introduced in the news domain (Fabbri et al., 2019; Gu et al., 2020; Gholipour Gha-landari et al., 2020), for creating Wikipedia lead-paragraphs (Liu et al., 2018), and for long-form question answering (Fan et al., 2019).",
"However, these do not focus on the conversational domain.",
"As our annotation protocol is motivated by the issues-viewpoints-assertions framework proposed in Barker and Gaizauskas (2016a), we propose to instantiate a modified version of that work's theoretical, proposed graph model.",
"Argument Graph Construction We build on the argument graph formulation of Lenz et al. (2020), a variant of Argument Interchange Format (Chesnevar et al., 2006).",
"Claims and premises are represented as information nodes ( I -nodes), with the relations between them represented as scheme nodes ( S -nodes).",
"Let V = I S be the set of nodes, and E V V the set of edges describing support relationships among the nodes.",
"We then define the argument graph G = ( V, E ) .",
"Lenz et al. (2020) breaks the construction of the argument graph down into four steps: (1) argument extraction , or the identification of argumentative discourse units; (2) relationship type classification , or the classification of edges between nodes; (3) major claim detection ; and (4) graph construction , or the construction of the final graph based on the identified nodes and edges.",
"To adapt this formulation to our multi-document setting, we first perform argument extraction and relationship type classification for each individual input document and finally graph construction to determine relationships among claims from all documents.",
"Argument Extraction For extracting arguments from a single document, we build on work in argument mining with pretrained models (Chakrabarty et al., 2019).",
"As in Lenz et al. (2020), our argumentative units are sentences, from which we identify claims , which are assertions that something is true, and premises , which are propositions from which a conclusion is drawn.",
"Additionally, we identify and remove non-argumentative units.",
"We train a three-way classifier for the task of argument extraction, following Chakrabarty et al. (2019) and making use of data for argument mining from that paper and from Stab and Gurevych (2014).",
"The output of this step can also simply be used without further graph construction as a less noisy version of the input, which we call -arg-filtered .",
"Relationship Type Classification We follow the procedure in Lenz et al. (2020) and use entailment to determine the relationship between argumentative units within a document.",
"However, rather than using the classifier provided, we make use of RoBERTa (Liu et al., 2019b) fine-tuned on the MNLI entailment dataset (Williams et al., 2018).",
"Rather than using both support and contradiction edges between claims and premises, we make the simplification that all relationships can be captured with support edges, as we are dealing with a single document in this step.",
"Within a single text, the Dataset/Method Lexrank Textrank BERT-ext NYT 22.30/3.87/19.14 25.11/3.75/20.61 25.88/3.81/22.00 Reddit 22.71/4.52/19.38 24.38/4.54/19.84 24.51/4.18/20.95 Stack 26.30/5.62/22.27 25.43/4.40/20.58 26.84/4.63/22.85 Email 16.04/3.68/13.38 19.50/3.90/16.18 25.46/6.17/21.73 Table 4: ROUGE-1/2/L results for extractive LexRank (Erkan and Radev, 2004), TextRank (Mihalcea and Tarau, 2004), and BERT-based (Miller, 2019) models.",
"premise can be tied as following from one of the claims.",
"We create an edge between any premise and the claim it most entails if the entailment score from RoBERTa is greater than 0.33, based on manual analysis of the scores.",
"If a premise is not labeled as supporting a claim, then we heuristically create an edge between that premise and the closest claim preceding it in the text.",
"Since not all texts in the benchmark datasets may be argumentative or may be too short to contain major claims, we use some heuristics in our graph creation.",
"If none of the argumentative sentences are labeled as claims (i.e., all are labeled as premises) in argument extraction, the text's first sentence is labeled as the claim.",
"Furthermore, we do not identify a single claim as the major claim since there may be multiple major points of discussion.",
"Graph Construction For the final graph, for each of the documents in an example, we run the above procedure and obtain a set of claims and associated premises.",
"We then identify support edges between claims, which may be across documents.",
"One claim may make a larger assertion, which is supported by other claims.",
"We run our entailment model over all potential edges (in both directions) among claims in the document and greedily add edges according to the entailment support score while no cycles are made.",
"After this step, we are left with a set of claims which do not entail any other nodes or, stated otherwise, do not have parent nodes.",
"Following the terminology of Barker and Gaizauskas (2016b), these nodes can be considered viewpoints.",
"We then identify issues or topics on which the viewpoints differ.",
"We run our entailment model for all parent claim nodes again in both directions over these claims and identify nodes that contradict each other with probability over 0.33, based on manual analysis of the resulting graphs.",
"We greedily add edges to maintain a tree structure, joining these nodes to a special node, which we call the Issue node.",
"All Issue nodes, as well as claims which are not connected to any Issue node, are connected to Data/Method BART BART-arg NYT 35.91/9.22/31.28 36.60 / 9.83 / 32.61 Reddit 35.50/10.64/32.57 36.39 / 11.38 / 33.57 Stack 39.61/10.98/35.35 39.73 / 11.17 / 35.52 Email 41.46 / 13.76 / 37.70 40.32/12.97/36.90 Table 5: ROUGE-1/2/L results for vanilla BART as well as one trained on argument-mining input.",
"a dummy Conversation Node' which serves as the root of the argument graph.",
"We show an example Issue subgraph for NYT data in Figure 1. Argument Graphs to Summaries Recent work has shown the strength of text-based pretrained models on graph-to-text problems (Ribeiro et al., 2020).",
"Following that work, we linearize the graph by following a depth-first approach starting from the Conversation Node.",
"We found that inserting special tokens to signify edge types did not improve performance, likely due to the size of our data, and simply make use of an arrow to signify the relationship between sentences.",
"We train a sequence-to-sequence model on our linearized graph input, which we call -arg-graph .",
"We use the fairseq codebase (Ott et al., 2019) for our experiments.",
"Our base abstractive text summarization model is BART-large (Lewis et al., 2020), a pretrained denoising autoencoder with 336M parameters that builds on the sequence-to-sequence transformer of Vaswani et al. (2017).",
"We fine-tune BART using a polynomial decay learning rate scheduler with Adam optimizer (Kingma and Ba, 2015).",
"We used a learning rate of 3e-5 and warmup and total updates of 20 and 200, following previous few-shot transfer work (Fabbri et al., 2020a).",
"We could have equally fine-tuned other pretrained models such as Pegasus (Zhang et al., 2019) or T5 (Raffel et al., 2019), but Fabbri et al. (2020a) find that BART largely performs equally well in few-shot settings when compared to Pegasus.",
"For the NYT and Stack datasets, which contain sequences over the typical 1024 max encoder length with which BART is trained, we copied the encoder positional embeddings to allow sequences up to length 2048.",
"To address the input-length of meeting summaries, which range from 6k to 12k tokens, we use the Longformer (Beltagy et al., 2020), which allows for sequences up to length 16k to-Method/Dataset AMI ICSI HMNet 53.02/18.57/-46.28 /10.60/-DDA-GCN 53.15/ 22.32 /-Longformer-BART 54.20/20.72/51.36 43.03/ 12.14 /40.26 Longformer-BART-arg 54.47 /20.83/ 51.74 44.17/11.69/ 41.33 Table 6: ROUGE-1/2/L results for DDA-GCN (Feng et al., 2020) and HMNet (Zhu et al., 2020) on the AMI and ICSI meeting summarization dataset along with our Longformer and Longformer-arg models.",
"kens.",
"We initialize the Longformer model with BART parameters trained on the CNN-DailyMail dataset, as the meeting summarization datasets contain fewer than 100 data points.",
"We otherwise fine-tune models from vanilla BART, following intuition in few-shot summarization (Fabbri et al., 2020a) and based on initial experiments.",
"In the tables which follow, -arg refers to any model trained with argument-mining-based input, and we specify which -arg-graph or -arg-filtered settings were used for each dataset below.",
"We provide results for baseline, unsupervised extractive models in Table 4. Lexrank (Erkan and Radev, 2004) and Textrank (Mihalcea and Tarau, 2004), and BERT-ext (Miller, 2019), which makes use of BERT (Devlin et al., 2019).",
"The unsupervised extractive models perform well below the extractive oracle performance, suggesting the difficulty of content selection in this setting.",
"We train BART on 200 examples from our validation set for abstractive models, using the remaining 50 as validation and test on the final test set of 250 examples.",
"We tested zero-shot transfer from CNNDM and SAMSum in zero-shot settings, although these resulted in a much lower performance of about 28 ROUGE-1.",
"Few-shot model performance is shown in Table 5. The abstractive model performs at or above the Extractive Oracle, suggesting the need for better abstractive models.",
"We also train on our argument mining-based approaches and show results in Table 5. We see ROUGE improvements when applying BART-arg-graph for Reddit, and Stack data.",
"The -arg-filtered variation (which, as defined in Section 4, is the less noisy version of the input produced by the argument extraction step) outperformed the -arg-graph variation on both email and NYT data.",
"For email data, however, this did not improve upon the BART baseline, likely due to the dataset's characteristics; email data is shorter and more linear, not benefiting Dataset/Method Our results Previous SOTA SAMSum 52.27/27.82/47.92 49.30/25.60/47.70 CQASUMM 32.79/6.68/28.83 31.00/5.00/15.20 BC3 39.59/13.98/21.20 ADS 37.18/11.42/21.27 SENSEI 34.57/7.08/16.80 Table 7: Benchmarking results on conversational datasets such as SAMSum (Gliwa et al., 2019b) and CQASUMM (Chowdhury and Chakraborty, 2018) and initial neural abstractive summarization results for email (BC3) (Ulrich et al., 2008), debate discussion forums (ADS) (Misra et al., 2015), and news comments (SENSEI) (Barker et al., 2016b).",
"Benchmarking Other Conversation Summarization Datasets We benchmark our models on widely used meeting summarization datasets.",
"Due to the input's linear nature and the size of the meeting transcripts, we found improved results using -arg-filtered to filter non-argumentative units rather than incorporating the graph structure.",
"Results are shown in Table 6. The Longformer model performs as well or better than previous state-of-the-art results on these datasets, despite not making use of more complex modeling structures, and we generally see improvement with argument-mining.",
"As noted above, there exist prior datasets for dialogue, community question answering, email, forum, and news comments summarization.",
"We benchmark results on these datasets in Table 7. We outperform prior work on SAMSum (Gliwa et al., 2019b), and CQASUMM (Chowdhury and Chakraborty, 2018) with our BART and BART-arg-graph models, respectively.",
"We did not find improvement on SAMSum with the BART-arg model due to the extremely short and focused nature of the dialogues, analogous to email data performance.",
"We also provide transfer results of BART and BART-arg-graph models from our email and news-comment data to BC3 (Ulrich et al., 2009), ADS (Misra et al., 2015), and SENSEI data (Barker et al., 2016b), for which no prior neural abstractive summarization results existed.",
"Human Evaluations We collect human judg-ment annotations for two of the four quality dimensions studied in Kryscinski et al. (2019) and Fabbri et al. (2020b), namely consistency and relevance.",
"Consistency is defined as the factual alignment be-Target Dataset BART BART-arg Relevance Consistency Relevance Consistency Reddit 3.39 (0.13) 3.40 (0.12) 3.47 (0.12) 3.41 (0.10) AMI 4.07 (0.16) 3.67 (0.16) 4.13 (0.17) 3.70 (0.17) Table 8: Mean relevance and factual consistency annotations for BART and BART-arg outputs on Reddit and AMI.",
"tween the summary and the summarized source text, while relevance is defined as the summary's ability to select important content; only relevant information and viewpoints should be included.",
"We did not include fluency as an initial inspection of the data found fluency to be of very high quality, as has shown to be the case for pretrained models in news summarization (Fabbri et al., 2020b).",
"We did not include coherence as this was generally not an issue of concern in the initial analysis.",
"We randomly select 25 random examples from the Reddit corpus and ten examples from the AMI corpus, and output from the BART and BART-arg-graph models.",
"These data points were chosen to demonstrate what characteristics are realized in differences across ROUGE for argument-graph and argument-noise-reduction approaches.",
"Ten examples were chosen from AMI due to the size of the input and annotation constraints.",
"The annotator sees the source article and randomly-ordered output from the model and then rates the summaries for relevance and consistency on a Likert from 1 to 5, with 5 being the best score.",
"We averaged the score of three native English-speaking annotators on each example and then across examples.",
"Results are shown in Table 8. We find that the annotators prefer our argument mining-based approaches in both dimensions.",
"However, the results are close.",
"Furthermore, the scores for relevance and consistency are rather low, especially on the Reddit dataset and when compared to results on the CNN-DailyMail Dataset from Fabbri et al. (2020b).",
"These results demonstrate the difficulty of modeling such conversational data.",
"Examples are included in the appendix.",
"We propose ConvoSumm, a benchmark of four new, crowdsourced conversation datasets and state-of-the-art baselines on widely-used datasets that promote more unified progress in summarization beyond the news domain.",
"Our benchmark consists of high-quality, human-written summaries that call for abstractive summaries and a deeper understanding of the input texts' structure.",
"We provide results for baseline models and propose to model the text's argument structure, showing that such structure helps better quantify viewpoints in non-linear input in both automatic and human evaluations.",
"Our analysis notes challenges in modeling relevance and consistency in abstractive conversation summarization when compared to news summarization.",
"As we propose novel conversation summarization datasets and modeling components, this section is divided into the following two parts.",
"Intellectual Properties and Privacy Rights All data for our newly-introduced datasets are available online; please see the following for New York Times comment data 2 , StackExchange data 3 , and W3C email data 4 .",
"Reddit data is available via the Google BigQuery tool 5 .",
"Compensation for Annotators We compensated the Turkers approximately $12$15 per hour.",
"We first annotated examples in-house to determine the required annotation speed.",
"Typically, the summarization task took around 10 minutes, and we compensated the workers from $2.25 to $3.00 per task, depending on the domain and deadline requirements.",
"Steps Taken to Avoid Potential Problems We interacted closely with the Turkers to ensure that compensation was fair and that the instructions were clear.",
"To maintain the quality of the dataset, we manually reviewed the crowdsourced summaries for language use.",
"Initial investigation into Reddit data showed certain inappropriate language usage, so we filtered these examples automatically.",
"Bias Biases may exist in the datasets, such as political bias in the news datasets and gender bias in potentially all of the datasets.",
"Thus, models trained on these datasets may propagate these biases.",
"We 2 https://www.kaggle.com/aashita/ nyt-comments 3 https://archive.org/download/ stackexchange 4 https://tides.umiacs.umd.edu/webtrec/ trecent/parsed_w3c_corpus.html 5 https://console.cloud.google.com/ bigquery removed data with offensive language when possible.",
"Misuse Potential and Failure Mode When used as intended, applying the summarization models described in this paper can save people much time.",
"However, the current models are still prone to producing hallucinated summaries, and in such a case, they may contribute to misinformation on the internet.",
"Further research is needed to ensure the faithfulness of abstractive summaries to address this issue, as this issue is present among all current abstractive summarization models.",
"Environmental Cost The experiments described in the paper make use of V100 GPUs.",
"We used up to 8 GPUs per experiment (depending on the experiment; sometimes, a single GPU was used to run the maximum number of experiments in paral-lel).",
"The experiments may take up to a couple of hours for the larger datasets.",
"Several dozen experiments were run due to parameter search, and future work should experiment with distilled models for more light-weight training.",
"We note that while our work required extensive experiments to draw sound conclusions, future work will be able to draw on these insights and need not run as many large-scale comparisons.",
"Models in production may be trained once for use using the most promising settings."
] |
[
"abstain",
"abstain",
"objective",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"objective",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"To extract the relationship between two entities in a sentence, two common approaches are (1) using their shortest dependency path (SDP) and (2) using an attention model to capture a context-based representation of the sentence.",
"Each approach suffers from its own disadvantage of either missing or redundant information.",
"In this work, we propose a novel model that combines the advantages of these two approaches.",
"This is based on the basic information in the SDP enhanced with information selected by several attention mechanisms with kernel filters, namely RbSP (Richer-but-Smarter SDP).",
"To exploit the representation behind the RbSP structure effectively, we develop a combined deep neural model with a LSTM network on word sequences and a CNN on RbSP.",
"Experimental results on the SemEval-2010 dataset demonstrate improved performance over competitive baselines.",
"The data and source code are available at https: //github.com/catcd/RbSP .",
"One of the most fundamental tasks in natural language processing, as well as in information extraction, is Relation Extraction (RE), i.e., determining the semantic relation between pairs of named entities or nominals in a sentence or a paragraph.",
"Take the following sentences from the SemEval-2010 task 8 dataset (Hendrickx et al., 2009) as examples:",
"[ churn ] e 2 and started stirring it.",
"(ii) The agitating [ students ] e 1 also put up a [ barricade ] e 2 on the Dhaka-Mymensingh highway.",
"Here the nominals cream' and churn' in sentence",
"(i) are of relation Corresponding author Entity-Destination(e1,e2) while nominals students' and barricade' in sentence",
"(ii) are of relations Product-Producer(e2,e1) .",
"The research history of RE has witnessed the development as well as the competition of a variety of RE methodologies.",
"All of them are proven to be effective and have different strengths by leveraging different types of linguistic knowledge, however, also suffer from their own limitations.",
"Some early studies stated that the shortest dependency path (SDP) in dependency tree is usually concise and contains essential information for RE (Bunescu and Mooney, 2005; Fundel et al., 2006).",
"By 2016, this approach became dominant with many studies demonstrating that using SDP brings better experimental results than previous approaches that used the whole sentence (Xu et al., 2015a,b; Mehryary et al., 2016; Cai et al., 2016; Le et al., 2018).",
"However, using the SDP may lead to the omission of useful information (i.e., negation, adverbs, prepositions, etc.).",
"Recognizing this disadvantage, some studies have sought to improve SDP approaches, such as adding the information from the sub-tree attached to each node in the SDP (Liu et al., 2015) or applying a graph convolution over pruned dependency trees (Zhang et al., 2018b).",
"Another approach to extract the relation between two entities is using whole sentence in which both are mentioned.",
"This approach seems to be slightly weaker than using the SDP since not all words in a sentence contribute equally to classify relations and this leads to unexpected noises (Nguyen and Grishman, 2015).",
"However, the emergence and development of attention mechanism (Bahdanau et al., 2015) has re-vitalized this approach.",
"For RE, the attention mechanism is capable of picking out the relevant words concerning target entities/relations, and then we can find critical words which determine primary useful semantic information (Zhou et al., 2016; Verga et al., 2018).",
"We therefore need to determine the object of attention, i.e., nominals themselves, their entity types or relation label.",
"However, conventional attention mechanism on sequence of words cannot make use of structural information on dependency tree.",
"Moreover, it is hard for machines to learn the attention weights from a long sequence of input text.",
"In this work we propose an enhanced representation for relations that combines the advantages of the above approaches.",
"Basically, we focus on condensed semantic and syntactic information on the SDP.",
"Compensating for the limitations of the SDP may still lead to missing information so we enhance this with syntactic information from the full dependency parse tree.",
"Our idea is based on fundamental notion that the syntactic structure of a sentence consists of binary asymmetrical relations between words ( Nivre, 2005).",
"Since these dependency relations hold between a head word (parent, predicate) and a dependent word (children, argu-ment), we try to use all child nodes of a word in the dependency tree to augment its information.",
"Depending on a specific set of relations, it will turn out that not all children are useful to enhance the parent node; we select relevant children by applying several attention mechanisms with kernel filters.",
"This new representation of relation is named Richer-but-Smarter SDP (RbSP).",
"Recently, deep neural networks (DNNs) have been effectively used to learn robust syntactic and semantic representations behind complex structures.",
"Thus, we propose a novel DNN framework which combines Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) and Convolutional Neural Networks (CNN) (LeCun et al., 1989) with a multi-attention layer.",
"We proposed a novel representation of relation based on attentive augmented SDP that overcomes the disadvantages of traditional SDP.",
"We improved the attention mechanism with kernel filters to capture the features from context vectors.",
"We proposed an advanced DNN architecture that utilizes the proposed Richer-but-Smarter Shortest Dependency Path (RbSP) and other types of linguistic and architectural features.",
"RE has been widely studied in NLP community for many years.",
"Unsupervised (Hasegawa et al., 2004; Yan et al., 2009; Quan et al., 2014), semi-supervised (Chen et al., 2006; Carlson et al., 2010; Ammar et al., 2017) and distant supervision (Verga et al., 2018; Ji et al., 2017) methods have been proven effective for the task of detecting relations from unstructured text.",
"However, in this paper, we mainly focus on supervised approaches, which usually have higher accuracy.",
"In earlier RE studies, researchers focused on extracting various kinds of linguistic features, including both syntactic features and semantic cues (Chan and Roth, 2010; Nguyen and Grishman, 2014).",
"However, all the feature-based methods depend strongly on the quality of designed features from an explicit linguistic pre-processing step.",
"Based on the idea that SDPs contain the essential information for RE, many studies exploit it with several refinements.",
"Typical refinements include negative sampling (Xu et al., 2015a) and BRCNN (Cai et al., 2016) which model the directed shortest path.",
"Liu et al. (2015) suggested incorporating additional network architectures to further improve the performance of SDP-based methods, which uses a recursive neural network to model the sub-tree.",
"Some works utilized information over the whole dependency tree, such as Li et al. (2017) used dynamic extended tree conditioned LSTM for RE and Panyam et al. (2018) exploited whole dependency graph for relation extraction in biomedical text.",
"Recently, with the introduction and development of attention mechanism, many works tend to use whole sentence or paragraph and focus on the most relevant information using attention technique.",
"Some studies apply a single attention layer, that focus on the word itself (Shen and Huang, 2016; Zhang et al., 2018a); word position (Zhang et al., 2017) and global relation embedding (Su et al., 2018).",
"Other works apply several attention layers, such as word, relation and pooling attention (Wang et al., 2016), multi-head attention (Verga et al., 2018) and wordand entity-based attention (Jat et al., 2017).",
"Luo et al. (2018) used a bidirectional Long Short-Term Memory architecture with an attention layer and a tensor layer for organizing the context information and detecting the connections between two nominals.",
"As previously mentioned, we utilize the condensed information in the SDP to learn the relation between two nominals.",
"The simple structure of the SDP is one of its weaknesses since there exists some useful information in dependency tree that does not appear in the SDP.",
"This information can be leveraged to represent the relation more precisely.",
"Two examples in Figure 1 belong to different relation types, but the paths between two nominals in these examples contain only one token ( put ).",
"However, the meaning of token put in two SDPs are completely different.",
"In this situation, it is difficult for the machine to distinguish the two shortest dependency paths from these instances.",
"We notice that the child nodes attached to the shortest dependency paths and their dependency relation from their parent can provide supplemental information for relation classification.",
"In the previous examples, the sub-structure prt up provides semantic information about token put in the specific sentence to make it discriminated from the stand-alone one. Based on similar observations, we propose the idea of combining subtree information with original SDP to form a more precise structure for classifying relations. In this RbSP structure each token t is represented by itself and its attached children on the dependency tree. 4 Proposed Model The overall architecture of our proposed model is shown in Figure 2. Given a sentence and its dependency tree, we build our model on the SDP between two nominals and its directed children on the tree. Here, we mainly focus on the SDP representation, which is composed of decream put churn dobj prep:in the soured We det amod W c , b c W i nd o w p r o c e ss i n g max-pooling W f , b f SDPR e p r e s e n t a t i o n W o r d e m b e dd i n g D e p e nd e n c y e m b e dd i n g A u g m e n t e d i n f o r m a t i o n Softmax Convolved features nsubj the butter det comp A tt e n t i v e a u g m e n t a t i o n A tt e n t i v e a u g m e n t a t i o n A tt e n t i v e a u g m e n t a t i o n Output Input = SDP + its nodes' children N e u r a l N e t w o r k C o n v o l u t i o n a l Figure 2: The architecture of RbSP model for relation classification.",
"pendency embeddings, token embeddings, and token's augmented information.",
"After SDP representation phase, each token and dependency relation is transformed into a vector.",
"This sequence of vectors is then fed to a convolutional neural network to capture the convolved features that can be used to determine which relation two nominals are of.",
"The goal of this phase is to represent each component on the shortest path (dependency relation and token) by a corresponding vector.",
"We concatenate the dependency type and dependency direction to form the embedding for a dependency relation, a non-linear transformer is followed to produce the final D -dimensional representation d i RD of i-th dependency relation as follow: d i = tanh (cid:16)h d typi d diri i W d + b d (cid:17) (1) where d typ R dim typ and d dir R dim dir are dependency type and direction respectively; W d R ( dim typ + dim dir ) D and b d RD are trainable parameters of the network.",
"For token representation, as mentioned above, we assume that each token should be interpreted by itself and its children.",
"Then, the word information t i of each token on the SDP is concatenated with its attentive augmented information a i based on the attached children (which is calculated by Multi-layer attention with Kernel filters, see Section 4.2).",
"In this work, we utilize four types of embeddings to represent the word information of each token, including: Pre-trained fastText embeddings (Bo-janowski et al., 2017): which learned the word representation based on its external context.",
"Character-based embeddings : we use an internal LSTM to learn the information about word morphology (like the prefix or suffix).",
"POS tag embeddings : we embed the token's grammatical tag using a randomly initialized look-up table and update this parameter on model learning phase.",
"WordNet embeddings : which is in form of a sparse vector that figure out which basic WordNet synsets the token belongs to.",
"To take advantage of the original sentence sequence information, we use a recurrent neural network with LSTM units to pick up the information along the sentence S = { t i } ni =1 as follow: H = biLSTM( S ) = { h i } ni =1 (2) Each token t i is then augmented by the corresponding hidden state h i from H .",
"Finally, this concatenation is transformed into an X dimensional vector to form the representation x i RX of the token.",
"I.e., x i = tanh ([ t i a i h i ] W x + b x ) (3) where W x and b x are trainable parameters of the network.",
"To capture the appropriate augmented information from the child nodes of each token, we propose a novel multi-layer attention with kernel filters architecture.",
"As illustrated in Figure 3, we employ two sequential attention layers on the children of Token embedding Dependency embedding d 1 d 2 h 2 s 2 h 1 s 1 Kernel filters M u l t i l a y e r a tt e n t i o n F e a t u r e s e l e c t i o n I npu t Token on SDP Distance from child Childnode m a x p oo li n g S e l f a tt e n t i o n a tt e n t i o n H e u r i s t i c Token's augmented information to its father token Figure 3: The multi-layer attention architecture to extract the augmented information from the children of a token on SDP.",
"a token to produce children context vectors.",
"Afterward, to utilize all informative child nodes and preserve the integrity of the word information, we capture the token's augmented information using kernel filters instead of using the average of context vectors weighted by multi-layer attention.",
"Given a token t and their child nodes, we first represent every token by a real-valued vector to provide lexical semantic features.",
"Token t is transformed into a token embedding vector t R dim which is the concatenation of its word embedding and part-of-speech (POS) tag embedding.",
"To utilize all the information in the sub-structure of token's children, we form a child node not only by its token embedding as in parent node but also by the dependency relation from its direct ancestor on the sentence's parse tree.",
"Suppose t has a set C of M children, i.e., C = { c 1 , c 2 , ..., c M } .",
"Our model represents each child in C with a real-valued vector c i R dim + dim dep .",
"To additionally capture information about the child node to the target token, we incorporate the position embeddings d i to re-flect the relative distances between the i-th child's token to the target token on the original sentence.",
"We then apply a simple self-attentive network to child nodes { c i } Mi =1 where the attention weights are calculated based on the concatenation of themselves with parent information and distance from parent, as follow: C = (cid:8) c i t d i w d (cid:9) M i =1 = (cid:8) c i (cid:9) M i =1 e = (cid:8) c i W e + b e (cid:9) M i =1 = (cid:8) e i (cid:9) M i =1 si = exp( e i ) P Mk =1 exp( e k ) (4) where denotes the concatenation operation; w d R dim d is the base distance embedding; W e R (2 dim + dim dep + dim d ) 1 and b e R are weight and bias term.",
"The self-attentive context vector a s of the target token is the weighted sum of the self-attentive children context vectors based on the weights as follows: c si = si c i a s = X i c s i (5) We observe that the importance of a child node to the parent node depends on the distance between them on the original sentence.",
"Therefore, we apply a heuristic attentive layer on the self-attentive children context vectors based on the distances d 1 , d 2 , ..., d M to keep track of how close each child is to the target token.",
"We heuristically choose the activation function for the distances d 1 , d 2 , ..., d M as f ( d ) = d 2 with = 0 .",
"03 , and a softmax layer is followed to calculate the heuristic attention weight.",
"I.e., hi = exp( d 2 i ) P Nk =1 exp( d 2 k ) c hi = hi c i a h = X i c hi (6) The multi-attentive context vector a h is a synthetic representation of all child nodes with the target token node taken into account.",
"Since the child nodes are usually distinct from each other, an average vector is not suitable to represent the children information.",
"We propose to use the kernel filters to capture the relevant and important information from the output of the multi-attention layer.",
"K kernel filters are applied to each child's attentive vector to produce K features from each child.",
"I.e., F = n ReLU (cid:16) c hi W f + b f (cid:17)o M i =1 (7) where W f R (2 dim + dim dep + dim d ) K is the weight of K kernel filters; and b f RK is bias term.",
"Finally, to produce the final augmented information a , we apply a max-pooling (Boureau et al., 2010) layer to the feature matrix F and select the most important features as follow: a = (cid:8) max (cid:0) F k (cid:1)(cid:9) K k =1 (8) 4.3 CNN on RbSP After SDP representation layer, the input SDP is transformed into: SDP = h x 1 , d 1 , x 2 , ..., x N 1 , d N 1 , x N i (9) where the over arrow on d i denotes the direction of the dependency relation.",
"We build the CNN model on this SDP ; our model is similar to the model of Xu et al. (2015a).",
"In general, let us define the vector x i : i + j as the concatenation of j tokens and j 1 dependency relation between them.",
"I.e., x i : i + j = x i d i x i +1 ... d i + j 2 x i + j 1 (10) The convolution operation with region size r applies k filters to all possible window of r successive tokens to produce convolved feature map.",
"We then gather the most important features by applying a max pooling (Boureau et al., 2010) layer over the entire feature map.",
"I.e., the convolutional layer computes the i-th element of the convolved feature vector f as follows: f i = max 0 j N r +1 [ x j : j + r W c + b c ] i (11) where W c R ( rX +( r 1) D ) k and b c R k are the weight matrix and bias vector of the convolutional layer.",
"The output f of the convolutional layer is then fed to a softmax classifier to predict a ( K + 1) -class distribution over labels y : y = softmax ( fW y + b y ) (12) where W y and b y are parameter of the network to be learned.",
"The proposed model can be stated as a parameter tuple = ( W , b ) .",
"To compute the model parameters , we define the training objective for a data sample as: L ( ) = KX i =0 y i log y i + k k 2 (13) where y { 0 , 1 } ( K +1) indicating the one-hot vector represented ground truth; and is a regularization coefficient.",
"By minimizing L ( ) using mini-batch gradient descent (GD) with Adam optimizer (Kingma and Ba, 2014), is updated through neural network structures.",
"For this paper, we directly utilize the pre-trained fastText word embeddings model (Bojanowski et al., 2017) which is trained on Wikipedia data.",
"The look-up tables for dependency embeddings, word characters, POS tags are randomly constructed using the Glorot initializer (Glorot and Bengio, 2010) and are treated as the parameters to be learned during the training phase.",
"Since the CNN model takes the fixed size matrix as input, we pad the inputs in each batch of data dynamically to the longest input length of the batch.",
"We further use the batch normalization (Ioffe and Szegedy, 2015) which is able to enable higher learning rates and reduces over-fitting.",
"During the training phase, we make use of several techniques, including: clipping the gradients if their norm exceeds a given threshold (Goldberg, 2017); applying dropout (Srivastava et al., 2014) with the probability of 0.5 on embeddings layer, CNN hidden states, and penultimate layer; and using early stopping (Caruana et al., 2001) by validation loss.",
"Further, to reduce the impact of random effects on our model, we employ the ensemble mechanism (Krogh and Sollich, 1997).",
"For this study, we run the model for 20 times and uses the strict majority vote to obtain the final results.",
"Our model was evaluated on SemEval-2010 Task 8 dataset ( Hendrickx et al., 2009), which contains 10 , 717 annotated relation classification examples and is separated into two subsets: 8 , 000 instances",
"for training and 2 , 717 for testing.",
"We randomly split 10 percents of the training data for validation.",
"There are 9 directed relations and one undirected Other class.",
"We conduct the training-testing process 20 times and calculate the averaged results.",
"For evaluation, the predicted labels were compared to the golden annotated data using standard precision (P), recall (R), and F1 score metrics.",
"Table 1 summarizes the performance of our model and comparative models.",
"For a fair comparison with other researches, we implemented a baseline model, in which we remove all the proposed augmented information (multi-layer attention with kernel filters and LSTM on original sentence).",
"This baseline model is similar to the model of Xu et al. (2015a) with some technical improvements and additional information sources.",
"It yields higher F1 than competitors which are based on SDP without any data augmentation methods.",
"This result is also comparative when is placed next to the result of basic Attention-CNN model.",
"The results also demonstrate the effectiveness of our proposed methods that brings an improvement of 1 .",
"5% in F1, compared to the baseline result.",
"Our RbSP model yields an F1-score of 86 .",
"3% , outperforms other comparative models, except Multi-Att-CNN model of Wang et al. (2016) with multi-level attention CNN.",
"However, we have tried to re-implement the Multi-Att-CNN, but we failed to reproduce the positive result in the original paper.",
"The performance of our re-implementation is about 84 .",
"9% of F1.",
"This result has a high consensus with Luo et al. (2018) since they also tried to re-build this model, and their reimplemented result is not much different from us, as 85 .",
"5% .",
"It is worth to note that when comparing with another augmented method of Liu et al. (2015), our multi-layer attention with kernel filters architecture brings more significant improvement.",
"Relatively, in comparison of efficiency of augmented methods on the baseline model, the full-tree augmentation only brings 1% improvement of F1 while our attentive augmentation boosts up to 1 .",
"5% .",
"Unlike the method of using the whole subtree to supplement information for the target node, our method only uses the most relevant nodes that are direct children to represent augmented infor-Model Source of information F1 depLCNN (Xu et al., 2015a) Word embeddings, SDP, CNN 81.9 + WordNet, word around nominals 83.7 + Negative sampling 85.6 BRCNN (Cai et al., 2016) Word embeddings, SDP, LSTM, CNN 85.4 + POS, NER, WordNet embeddings, inverse SDP 86.3 DepNN (Liu et al., 2015) 200-d Gigaword embeddings, SDP, CNN 81.8 + Augmented sub-tree, Recursive Neural Network 82.8 + NER 83.6 Attention-CNN (Shen and Huang, 2016) Sentence convolution, Attention-based context 84.3 + WordNet, Words around nominals 85.9 AT-BLSTM (Luo et al., 2018) Word embeddings, Sentence attention features, Tensor feature 86.3 Multi-Att-CNN (Wang et al., 2016) Multi-Level Attention CNNs, Attention pooling 88.0 85.5 Baseline Word embeddings, POS tag, WordNet 84.8 RbSP (our model) Baseline + Augmented Information 86.3 + ensemble 86.7 Table 1: The comparison of our RbSP model with other comparative models on SemEval-2010 task 8 dataset.",
"mation.",
"In addition, our method further focuses on the most important children through two attention layers.",
"We also observe that during many training-testing processes, the results may vary.",
"The standard deviation of 20 runs is about 0 .",
"27 .",
"We perform the ensemble strategy by majority voting on the results of 20 runs, and it drives our model to achieve a better result of 86 .",
"7% .",
"This result is outperformed other comparative models.",
"Figure 4 shows the changes in F1 when removing each proposed component from the RbSP model.",
"The F1 reductions illustrate the contributions of all proposals to the final result.",
"However, the impact levels vary with different components.",
"Between two proposed component, the multi-layer attention with kernel filters (augmented information) plays a vital role when contributing 1 .",
"22% to the final performance while the contribution of the LSTM on the original sentence is 0 .",
"33% .",
"An interesting observation comes from the inte-rior of the multi-layer attention with kernel filters.",
"The impact of removing the whole augmented information is much higher than the total impact of removing multi-layer attention or kernel filters ( 1 . 22 vs. 0 . 42+0 . 18 = 0 . 6 ).",
"These results demonstrate that the combination of constituent parts is thoroughly utilized by our sequential augmented architecture.",
"Another experiment is on investigating the meaning of each attention component.",
"The result lightly reduces when we remove the self-attention or heuristic attention component.",
"The results also prove that our proposed heuristic attention method is simple but effective.",
"Its improvement is equivalent to the self-attention which is a complex attention mechanism.",
"Among the input of multi-layer attention, the word embedding has a great influ-ence on the model performance.",
"However, children POS tag and relation to parent are also essential components to have the good results.",
"We studied model outputs to analyze system errors in the cases of using the baseline model and using the proposed model with RbSP representation.",
"In Figure 5, we considered four types of errors: If the model makes a wrong decision and labels an Other relation (negative) as an actual relation (positive), it indicates 1 FP (False Positive) error.",
"Vice versa, if it labels an actual relation as Other , it brings 1 FN (False Negative).",
"In the case that model confused between two types of relations, the model will be penalized twice, with 1 FP and 1 FN .",
"Direction error, i.e., the model predicts the relation correctly but its direction wrongly, also brings 1 FP and 1 FN .",
"The proportions of the left and the right of Figure 5 are quite consistent.",
"In which, RbSP seems to have the most impact on determining whether an instance is positive or negative.",
"RbSP also changes the decision of the relation type in quite many cases.",
"It also influences the decision-making about relation's directionality, but not much.",
"Totally, the use of RbSP helps to correct more than 150 errors of the baseline model.",
"However, it also yields some new errors (about 70 errors).",
"Therefore, the difference of F 1 between the baseline model and our RbSP model is only 1 .",
"5% , as stated in table 1.",
"Table 2 gives some realistic examples of different results when using the RbSP and not.",
"We observed that the baseline model seems to be stuck in over-fitting problem, for examples, it classified all SDP with prep:with as Instrument-Agency and all SDP with prep:in as Member-Collection (exam-44% 19% 31% 6% RbSP Improvements Removing wrong relations Finding new relations Fixing relation type Fixing relation direction 39% 18% 40% 3% RbSP Breakdowns New wrong relations Missing relations Wrong relation type Wrong relation direction Figure 5: Comparing the effects of using RbSP in two aspects,",
"ples 1 2 ).",
"RbSP is really useful for solving these cases partly since it uses attentive augmentation information to distinguish the same SDP or the same preposition with different meanings.",
"RbSP is also proven to be stronger in examples 3 4 to find new results and examples 5 7 to fix wrong results.",
"In our statistic, the use of RbSP bring the big advantage for the relations Component-Whole , Message-Topic , Entity-Destination , Product-Producer and Instrument-Agency .",
"The results are almost constant for Member-Collection relations.",
"Vice versa, we regret to state that using RbSb brings some worse results (examples 8 11 ), especially for Cause-Effect and Content-Container relations.",
"Many errors seem attributable to the parser or our model's limitations that still cannot be overcome by using the RbSP (Examples 12 13 ).",
"We listed here some highlight problems to prioritize future researches",
"(a) information on the SDP and its child nodes is still insufficient or redundant to make the correct prediction,",
"(b) the direction of relations is still challenging since some errors appeared because we predict the relation correctly but its direction wrongly",
"(c) the over-fitting problem (leading to wrong prediction FP ) and",
"(d) lacking in generality (cannot predict new relation FN ).",
"In this paper, we have presented RbSP, a novel representation of relation between two nominals in a sentence that overcomes the disadvantages of traditional SDP.",
"Our RbSP is created by using multilayer attention to choose relevant information to augment a token in SDP from its child nodes.",
"We also improved the attention mechanisms with kernel filters to capture the features on the context vector.",
"We evaluated our model on SemEval-2010 task 8 dataset, then compared the results with very recent state-of-the-art models.",
"Experiments were also constructed to verify the rationality and effectiveness of each of the model's components and information sources.",
"The results demonstrated the advantage and robustness of our model, includes the LSTM on the original sentence, combination of self-attention and heuristic mechanisms and several augmentation inputs as well.",
"The analysis of the results still points our some weaknesses of the model.",
"We aim to address them and further extensions of our model in future works.",
"We released our source code and data on the public repository to support the re-producibility of our work and facilitate other related studies.",
"This research was supported by an EPSRC Experienced Researcher Fellowship (N. Collier: EP/M005089/1).",
"We also thank the anonymous reviewers for their comments and suggestions."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"other"
] |
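The RbSP row above describes augmenting each token on the shortest dependency path (SDP) with attention over its dependency child nodes. The snippet below is a minimal, single-layer numpy sketch of that idea, assuming a bilinear attention scorer and concatenation as the augmentation step; the function names, shapes, and single-layer simplification are illustrative assumptions, not the paper's exact multilayer formulation (which further adds kernel filters on the context vector).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def augment_sdp_token(token_vec, child_vecs, W):
    """Augment one SDP token (d,) with an attention-weighted summary of its
    dependency children (k x d); W (d x d) is an assumed bilinear scorer."""
    scores = child_vecs @ W @ token_vec          # one relevance score per child
    alpha = softmax(scores)                      # attention over child nodes
    context = alpha @ child_vecs                 # weighted sum of child vectors
    return np.concatenate([token_vec, context])  # augmented token representation

rng = np.random.default_rng(0)
d = 8
token = rng.normal(size=d)                       # a token on the SDP
children = rng.normal(size=(3, d))               # e.g. three child nodes
W = rng.normal(size=(d, d))
print(augment_sdp_token(token, children, W).shape)  # -> (16,)
```

Attending over children rather than using the bare SDP token is what lets the same SDP (e.g. the same preposition) receive different representations in different contexts, which is the behavior credited above for partly fixing the over-fitted prep:with and prep:in cases.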
[
"Sequence-to-sequence models have delivered impressive results in word formation tasks such as morphological inflection, often learning to model subtle morphophonological details with limited training data.",
"Despite the performance, the opacity of neural models makes it difficult to determine whether complex generalizations are learned, or whether a kind of separate rote memorization of each morphophonological process takes place.",
"To investigate whether complex alternations are simply memorized or whether there is some level of generalization across related sound changes in a sequence-to-sequence model, we perform several experiments on Finnish consonant gradationa complex set of sound changes triggered in some words by certain suffixes.",
"We find that our models often though not alwaysencode 17 different consonant gradation processes in a handful of dimensions in the RNN.",
"We also show that by scaling the activations in these dimensions we can control whether consonant gradation occurs and the direction of the gradation.",
"Recent work on computational morphology demonstrates that neural networks can very effectively learn to inflect words, given adequate amounts of training data (Cotterell et al., 2016, 2017).",
"However, in computational morphology and in NLP at large, the interpretability of neural models remains a serious concern (Doshi-Velez and Kim, 2017)it is unclear how networks trained to inflect words actually accomplish their task.",
"It is also unclear to which extent networks are able to learn linguistic generalizations from their input data instead of simply memorizing training examples and exhibiting a kind of nearest-neighbor behavior.",
"In this paper, we shed light on what kind of linguistic generalizations neural networks are capable of learning from data.",
"We report on an in-1.00 0.75 0.50 0.25 0.00 0.25 0.50 0.75 1.00 0.6 0.4 0.2 0.0 0.2 0.4 0.6 0.8 Figure 1: Scatter plot of the activation for two encoder hidden state dimensions which activate strongly during gradation.",
"vestigation into how consonant gradation , a particular morphophonological alternation which is common in Finnish and other Uralic languages, is encoded in the hidden states of an LSTM encoder-decoder model trained to perform word inflection.",
"Specifically, we train character-based sequence-to-sequence models for inflection of Finnish nouns into the genitive case, an inflection type which commonly triggers consonant gradation.",
"Consonant gradation is a morphophonological alternation where voiceless stops p , t and k are lenited in certain positions (see Section 3 for further details).",
"We first demonstrate that inflection networks tend to learn an abstract representation for consonant gradation, where the alternation is triggered by the same dimensions in encoder hidden states regardless of which stop p , t or k undergoes gradation.",
"This echoes the treatment of gradation in linguistic literature (Hakulinen et al., 2004, 41) Nevertheless, we also find evidence that this behavior is not universal and that networks can sometimes fail to generalize gradation and instead learn to represent gradation using distinct dimensions for each stop p , t and k .",
"Our second contribution is to show that networks can learn a general representation encompassing both so-called quantitative gradation and qualitative gradation (these are further described in Section 3).",
"This presents further evidence that the phonological representations learned by encoder-decoder models can learn to group linguistic generalizations that target different sounds.",
"As our third contribution, we show evidence of a remarkable property whereby directionality of gradation is encoded as positive or negative hidden state activations: Consonant gradation is called direct when the base form of a noun displays the strong grade (such as kk ) and the genitive form displays the weak grade of a stop (such as k ).",
"In inverse (or strengthening') gradation, the opposite alternation occurs.",
"We find hidden state dimensions which encode for the direction of gradation by a positive or negative activation.",
"This behavior is demonstrated in Figure 1 where a negative activation of dimension 487 in the encoder hidden state marks inverse gradation of a stop, and positive activation instead marks direct gradation (see Section 6 for further discussion of this phenomenon).",
"Interpretation of neural representations in recurrent neural models has been an active area of research over a long period of time starting with Elman (1990).",
"However, representations in models of phonology have received less attention than many other subfields of NLP.",
"Rodd (1997) investigates learning of Turkish vowel harmony by a character-based RNN language model trained on word forms.",
"The paper investigates hidden state activations of RNN models while varying the hidden state dimensionality between 1 and 4.",
"It presents evidence that RNN hidden states can capture Turkish vowel harmony patterns when a sufficient number of hidden dimensions are available.",
"In a similar vein, Silfverberg et al. (2018) investigate phoneme representations for Finnish, Spanish and Turkish finding correlations between embedding representations and phonological distinctive features.",
"Kolachina and Magyar (2019) present an investigation of phone embeddings learned using word2vec (Mikolov et al., 2013) for simulated data showing that phone embeddings capture phonemic and allophonic relationships.",
"They also show that phone embeddings capture co-occurrence restrictions for vowels well, while largely failing to do this for consonants.",
"Our encoder representations, in contrast, are able to capture these co-occurrence restrictions.",
"Begus (2020b) investigates representations learned by a generative adversarial network or GAN (Goodfellow et al., 2014) trained on audio recordings of speech, showing that some of the latent variables of the GAN correspond to phonological features of the speech signal: specifically the presence or absence of the fricative [s] in the output of the network and the amplitude of frication.",
"They show that manipulation of the variables changes these features in a predictable manner.",
"Similarly to our work, Begus (2020b) also scales state activations and observes the effect on the output of the network.",
"In a related investigation of reduplication, Begus (2020a) train GAN models on speech and identify variables which trigger reduplication in the speech signal.",
"Extensive work exists on linguistic probing experiments for neural representations (Conneau et al., 2018a,b; Clark et al., 2019).",
"A recent probing paper by Torroba Hennigen et al. (2020) is more directly related to our work.",
"They present a decomposable probe for finding small sets of hidden states which encode for linguistically relevant information, particularly morphosyntactic information.",
"Our work shares the aim of not only identifying if information is present in a neural system, but also examining how it is represented.",
"However, we additionally perform experiments on manipulating network activations and examine how such manipulations influence the outputs of the network.",
"Our approach was inspired by the now-classic paper on visualization and interpretation of recurrent networks by Karpathy et al. (2015) in that we also seek individual interpretable dimensions.",
"The work by Dalvi et al. (2019) on analyzing individual neurons in networks trained for linguistic tasks (POS tagging as well as semantic and morphological tagging) is more closely related to the present work.",
"They present a general methodology for uncovering neurons which encode linguistic information by training a classifier to predict linguistic features of the input based on the representations generated by the network.",
"They also show that it is possible to manipulate specific neurons to force the Nominative Genitive Gloss pappi papin priest' kentta kentan field' Quantitative kiukku kiukun anger' ripe rippeen remain(s)' laite laitteen device' liike liikkeen motion' sopu sovun agreement' johto johdon lead' aika ajan time' kyky kyvyn skill' olento olennon creature' Qualitative kenka kengan shoe' silta sillan bridge' rumpu rummun drum' ranne ranteen wrist' salko salon pole' Table 1: Examples of various kinds of consonant gradation in Finnish.",
"network to generate a particular linguistic feature.",
"Meyes et al. (2019) investigate the effect of scaling network activations, something they call ablation studies.",
"They train classifiers and look at how clas-sification performance varies when zeroing out the activations for particular states or groups of states.",
"Consonant Gradation (CG), common in many Uralic languages, is a set of assimilation and lenition processes, usually targeting the final syllable in a word stem.",
"Historically the trigger for the alternation has been purely phonological, but in Finnish, the alternation is no longer entirely predictable from the phonological structure (Karlsson, 2017).",
"1 The trigger for gradation is usually an affix that closes the final syllable, such as the genitive -n , e.g. katto katon (roof' sg. nom. sg. 2 gen.).",
"The overall process is divided into quantitative gradation where, for example, geminate pp , tt , kk alternate with their non-geminate counterparts, p , t , k , and qualitative gradation where a large variety of lenition and assimilation processes are found.",
"For example, strong grade k can alternate with the weakened j , v , g , etc.",
"See Table 1 for a summary of these types of gradation processes found in our data set.",
"The lenited or elided forms are commonly called the weak grade (e.g. katon ) and the alter-1 Nevertheless, the alternation is fully determined by the phonological context in the nominative and genitive cases which are the focus of this paper.",
"nant the strong grade (e.g. katto ).",
"Sometimes the weak and strong grades appear in the inverse position, i.e. the weak grade appears with open syllables as in rike rikkeen (offense' sg. nom. sg. gen.).",
"While quantitative gradation remains productive in the language, many stems from more recent loanwords in particular, do not tend to alternate qualitatively; for example auto auton , audon (car' sg. nom. sg. gen.).",
"Speakers must therefore know the lexical status of each stem to inflect it correctly.",
"Our data set includes both gradating and non-gradating lexemes.",
"The advantages of studying Finnish consonant gradation in this context is that the set of sound changes is very diverse, but that the trigger for all of them is the same.",
"Also, the Finnish writing system is very phonemic and surface-oriented and therefore no conversion to an IPA representation is necessary to reveal the sound changes that occur as a result of gradation.",
"Of particular interest to us is that there are many similar-looking alternations in Finnish that are not a result of consonant gradation, but paradigmatic variation.",
"For example, varis (crow' sg. nom.) is inflected variksen in the sg.",
"gen. form.",
"Note the similarity of this alternation to the actual CG case of liike (motion' sg. nom.) liikkeen (sg. gen.) which also involves a k alternation.",
"It is therefore of some interest to observe whether neural inflection models encode the two cases differently in some respect.",
"In total we count 17 different types of lenition or fortition falling under the rubric of consonant gradation in our data set; an example of each type is shown in Table 1.",
"This section presents our nominative genitive inflection models and our approach to finding encoder hidden state dimensions which are associated with consonant gradation.",
"As our inflection model, we use the well-known attentional BiLSTM encoder-decoder model which was presented by Bahdanau et al. (2014) and first applied to inflection by Kann and Schutze (2016).",
"This neural model transduces a nominative input form which is represented as a sequence of characters x [1: T ] of length T into a genitive output form Quantitative gradation Qualitative gradation No gradation Figure 2: Activation of encoder hidden state dimension 487 in model 3 when (1) quantitative gradation, (2) qualitative gradation and (3) no gradation occurs in the input form.",
"The encoder network in an attentional model generates one hidden state vector h t = f t b t R 2 n for every position in the input sequence.",
"Due to the bidirectionality of the encoder, the hidden state vector is a concatenation of a forward state f t R n and a backward state b t R n .",
"We refer to the vectors f t as hidden states and the elements in the vectors f t [ d ] as activations .",
"Here d { 1 , 2 , 3 , ..., 2 n } is called a dimension .",
"Our aim is to investigate encoder hidden state dimensions d which are associated with gradation.",
"To this end, we extract the encoder hidden state activations h 1 [ d ] , ..., h T [ d ] for each example ( x [1: T ] , y [1: S ] ) in our development set and dimension d { 1 , 2 , 3 , ..., 2 n } .",
"In order to find dimensions which activate strongly at positions where gradation occurs, we compare the mean activation of each dimension for forms which undergo gradation and forms which do not.",
"Let a X ( d ) , as defined by Equation (1) below, be the mean activation for dimension d in a set of encoder hidden states X .",
"For each dimension d , we extract the mean activation a G ( d ) , where G is the set of encoder hidden states at positions where gradation occurs.",
"As explained in Section 3, gradation applies to the final stop in word forms which undergo gradation.",
"Usually, this would refer to position T 1 in a string of length T as in tupa cottage sg. nom.', where p undergoes gradation, but can also happen at position T 2 as in the form ratas wheel sg. nom.', where t undergoes gradation.",
"The mean activation a G ( d ) is compared to the activation a N ( d ) of dimension d at the penultimate position T 1 in base forms of length T which do not undergo gradation.",
"In order to specifically capture dimensions which encode for gradation as opposed to simply encoding for consonants, we limit this examination to base forms like kana chicken sg. nom.' and auto car sg. nom.', where the penultimate character is a consonant.",
"We retrieve the topN dimensions d where the difference in mean activation | a N ( d ) a G ( d ) | is maximized and consider these candidate dimensions for gradation.",
"Our dataset was produced by taking the most frequent 5,000 lexemes tagged as singular nominative nouns from the Turku Dependency Treebank (Haverinen et al., 2014) and generating the singular genitive forms using the OmorFi finite-state morphological transducer (Pirinen, 2015).",
"We excluded compound nouns (e.g. ammat-tikorkeakoulututkinnoista from the professional high-school examinations') and words marked as nouns which contained punctuation or numerals (e.g. G8-neuvottelut G8 negotiations', 2000-luvulla in the 2000s', C:ssa in C' etc.).",
"Loan words were included, both unadapted such as workshop and bungalow and partially or fully adapted such as brosyyri brochure' and samp-panja champagne'.",
"This gave a total of 4,797 nominativegenitive pairs.",
"We randomly ordered them and then split these into disjoint sets: 90% for Ungrad.",
"training (4,317 pairs) and 10% for validation.",
"We then took the validation set (479 pairs) and annotated them for: gradation (yes, no), type of gradation (qualitative, quantitative), consonant ( p , t , k ) and direction (direct, inverse).",
"This gave a total of 84 examples of nouns exhibiting consonant gradation.",
"This set was heavily skewed towards t gradation (54 out of 84 examples).",
"3 So we randomly sampled another 84 words from the frequency list, which were not found in the training data or in the existing validation set and which contained p and k , and annotated them and added them to the validation set.",
"Statistics on the composition of the hand-annotated dataset can be found in Table 2 and the full data is freely available on GitHub.",
"4 6 Experiments and Results We investigate representation of consonant gradation in encoder hidden states in the following way: As explained in Section 4.2, we identify individual dimensions in encoder hidden states which activate strongly during gradation regardless of the identity of the consonant undergoing gradation.",
"We then investigate the association of these states using two experiments: we (1) perform significance tests on a held-out dataset to determine if the states activate significantly more strongly when gradation occurs, and (2) scale the state activations and observe the effect on the output of the network.",
"We train ten encoder-decoder models with different random initializations for inflection using the OpenNMT toolkit (Klein et al., 2018).",
"We use a 2-layer BiLSTM encoder with hidden dimension 3 This follows character-level frequency patterns in Finnish, e.g. in the treebank t appears 122,821 times, k appears 64,513 times and p appears 23,130 times.",
"250.",
"Due to the bidirectionality of the encoder this results in 500-dimensional hidden states (consist-ing of a forward and backward hidden state).",
"Our model uses 500-dimensional character embeddings both in the encoder and decoder and we use an attentional decoder with 250-dimensional hidden states.",
"The model is trained for a total of 3,000 steps using stochastic gradient descent and a batch size of 64.",
"See Figure 3 for a plot of the development accuracy during the training process.",
"As can be seen, changes in development accuracy are modes after training step 2,000.",
"We report inflection accuracy for our ten inflection models measured on held-out data in Table 3.",
"The accuracy is reported separately for forms undergoing gradation and forms not undergoing gradation.",
"In addition, we report an overall accuracy for all forms.",
"We can see that the mean performance is close 95% for all forms and performance tends to be higher on forms undergoing gradation than other forms.",
"We randomly split our development set into two disjoint parts of equal size.",
"The first part of the development set we use to discover the top-5 encoder hidden state dimensions which are strongly associated with gradation (as described in Section 4.2).",
"The rest of the development set is used for significance testing.",
"We perform a two-sided t-test to check if the mean activations of our top-5 dimensions differ significantly (at the 99.5% significance level) between positions which undergo gradation Model # States +Grad.",
"and positions which do not undergo gradation.",
"As explained in Section 4.2, we limit this examination to nominative forms where the penultimate character is a consonant to better zone in on gradation.",
"Table 4 shows the results separately for p , t and k gradation.",
"The table also shows results for qualitative and quantitative gradation.",
"We can see that eight of the ten models contain at least one dimension where activation is significantly stronger for all stops p , t and k undergoing gradation than other stem-final consonants indicating that these states are associated with gradation in general rather than gradation of one of the individual consonants p , t , or k .",
"We note that these dimensions also typically activate both for qualitative and quantitative gradation indicating that the network has learned an abstraction for both types of gradation.",
"As a direct test of the effect of hidden state dimensions on gradation, we scale the activations of dimensions which are strongly associated with gradation.",
"Our hypothesis is that negatively scaling these dimensions will prevent forms from undergoing gradation.",
"We experiment on a dataset consisting of all development examples which undergo gradation.",
"For each nominative input form such as luukku , we identify the correct gold standard genitive form luukun (where kk k alternation has applied) and an alternate output form *luukkun which is correct apart from the fact that the form has not undergone gradation.",
"We then compute (1) the number of gold standard forms, (2) the number of alternate forms, and (3) the number of nonce forms generated by our models.",
"Nonce forms here refer to erroneous outputs like *luukuukuukkun which do not belong in category (2).",
"We scale the hidden state activations at positions where gradation occurs, that is at the final stop in the nominative form, before feeding the encoder hidden states into the decoder.",
"For each input form, we scale the topN encoder hidden states which are associated with gradation according to the mapping a (cid:55) x a where x varies between 1 and -25.",
"The number of states which are scaled (that is N ) is tuned for maximal effect on the number of alternate forms which are generated.",
"Figure 4 shows the results for the scaling experiment when tuning N .",
"5 The first graph shows that for most models the number of alternate forms first increases when the scaling factor x approaches 25 , and then gradually decreases.",
"As the number of alternate forms increases, the number of gold standard forms undergoing gradation naturally decreases as demonstrated by the second graph.",
"We also see an increase in the number of nonce forms which do not belong to either category.",
"This is to be expected as scaling represents a deviation from learned model weights which disturbs the network.",
"The effect of scaling varies between models: When scaling activations for Model 9, over half of the output forms do not undergo gradation.",
"In contrast, for Model 7, the best scaling factor only produces around 7% of non-gradating output forms.",
"Crucially, however, we do see an effect for nearly all models (apart from model 8).",
"Contrast this with Figure 5 which shows results when scaling a set of five random states instead of states which are associated with gradation, showing that scaling of randomly sampled states has very small if any effect on the number of alternate forms produced by the models.",
"Based on the graphs in Figure 4, scaling has very limited effect on Model 8.",
"Even when scaling by a = 25 , there is only a small decrease in the number of gold standard forms and a corresponding small increase in nonce forms.",
"This might be evidence of a more redundant representation of information in Model 8, whereby scaling a few states will not strongly perturb the network.",
"Figure 2 shows the activation for a hidden state dimension which is strongly associated with grada-5",
"tion: dimension 487 in model 3.",
"This dimension displays positive activation for consonants undergoing direct gradation as in laukku bag sg. nom.' laukun bag sg. gen.'.",
"Remarkably, the state displays negative activation for consonants undergoing inverse gradation as in the example lauseke phrase' where k is strengthened into a geminate kk resulting in the genitive form lausekkeen phrase-GEN '.",
"This effect can be seen both in forms where quantitative and qualitative gradation occurs.",
"However, as the example basilika basil' in the third heat map demonstrates, dimension 487 can also activate strongly when no gradation occurs.",
"6 This 6 The form basilika is a loan word and would probably undergo gradation if it were a native Finnish word.",
"prompted us to investigate hidden state activations more directly using the scaling experiments described in Section 6.3.",
"Figure 1 shows a scatter plot of two encoder hidden state dimensions (487 and 484 in model",
"3) which activate strongly during gradation.",
"Each point in the plot corresponds to one example in our development dataset.",
"Clearly, examples which do not undergo gradation cluster around (0 , 0) .",
"7 In contrast, gradation for k and p lead to a positive activation for state 484, whereas t -gradation gives a negative activation.",
"Moreover, direct gradation results in a positive activation for state 487 and inverse gradation gives a negative activation.",
"Examples which do not undergo gradation can also have high values for 484 ( > 0 . 4 ).",
"Many of these examples end in -jV , -vV or -mV which could actually be examples where inverse gradation occurs but it happens not to be the case for these particular ones.",
"Examples where the activation for 484 is low ( < 0 . 5 ) span a small number of forms ending -tV , -bV , and -gV .",
"There is also a substantial number of non-gradating forms where the activation for 484 is > 0 .",
"5 .",
"Most of these fall into the lin-noitus fortress' / linnoituksen fortress sg. gen.' patterns where a k is inserted in the penultimate syllable.",
"This alternation bears great resemblance to gradation as mentioned in Section 3.",
"There are also a few examples of the type tase balance sheet' / taseen balance sheet sg. gen.' where the stem-final vowel is doubled displaying large activation for 484.",
"This is perhaps somewhat harder to explain.",
"However, note that this vowel doubling fre-state 487, our model still correctly inflects basilika into basilikan instead of applying gradation, which would give a form like *basilijan or *basilian .",
"ac-cessory', tarvikkeen accessory sg. gen.'.",
"In our experiments we found that the system would sometimes output a gradated form even when the exact type of gradation was not present in the training data, for example bambu bammun (bamboo' sg. nom. sg. gen.).",
"Since Finnish natively lacks b and g , examples of gradation with these consonants are rare.",
"However, it is indeed the case that loanwords that include such voiced stops do undergo gradation, e.g. dubata dubbaan (to dub' inf. 1p sg. pres. sg.) (Voutilainen, 2008).",
"Since native Finnish speakers seem to extend gradation from voiceless stops to their voiced counterparts in loanwords, the question whether neural models can exhibit such generalizing behavior as well is an interesting one.",
"Our initial investigations into whether the similarity of the learned embeddings for p and b could trigger such generalizations across similar sounds failed to identify a clear reason for the behavior, and we leave a detailed study of this to future work.",
"We have presented an investigation of encoder representations of phonological alternations, specifically consonant gradation in Finnish.",
"We found evidence of a generalized representation of gradation covering all stops which undergo gradation and different types of gradation.",
"We also found that scaling hidden states can switch off gradation, prompting the model to generate alternate forms which do not display gradation.",
"Moreover, the direction of gradation can be encoded as positive vs. negative hidden dimension activation."
] |
[
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"result",
"objective",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"other",
"objective",
"method",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"result",
"abstain"
] |
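The consonant-gradation row above selects the top-N encoder dimensions by the gap in mean activation |a_N(d) − a_G(d)| and then intervenes with the mapping a ↦ x·a. Below is a minimal numpy sketch of both steps under stated assumptions: the function names and the synthetic stand-in activations are illustrative inventions, not the authors' code.

```python
import numpy as np

def top_gradation_dims(grad_acts, nongrad_acts, n=5):
    """Pick the n dimensions with the largest gap |a_N(d) - a_G(d)| between
    mean activations at gradation positions (grad_acts: m x D) and at
    penultimate consonants of non-gradating forms (nongrad_acts: k x D)."""
    gap = np.abs(nongrad_acts.mean(axis=0) - grad_acts.mean(axis=0))
    return np.argsort(gap)[-n:][::-1]            # indices, largest gap first

def scale_dims(hidden, dims, x):
    """Apply the intervention a -> x * a to the selected dimensions of one
    encoder hidden state before it is passed on to the decoder."""
    out = hidden.copy()
    out[dims] *= x
    return out

rng = np.random.default_rng(0)
D = 500                                          # 2 x 250 BiLSTM hidden size
grad_acts = rng.normal(loc=0.5, size=(84, D))    # toy stand-in activations
nongrad_acts = rng.normal(loc=0.0, size=(200, D))
dims = top_gradation_dims(grad_acts, nongrad_acts)
print(dims, scale_dims(rng.normal(size=D), dims, -25.0)[dims])
```

With a strongly negative x, the reported effect is that most models stop applying gradation and emit the alternate, non-gradated output forms instead.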
[
"Aspect term extraction aims to extract aspect terms from review texts as opinion targets for sentiment analysis.",
"One of the big challenges with this task is the lack of sufficient annotated data.",
"While data augmentation is potentially an effective technique to address the above issue, it is uncontrollable as it may change aspect words and aspect labels unexpectedly.",
"In this paper, we formulate the data augmentation as a conditional generation task: generating a new sentence while preserving the original opinion targets and labels.",
"We propose a masked sequence-to-sequence method for conditional augmentation of aspect term extraction.",
"Unlike existing augmentation approaches, ours is controllable and allows us to generate more diversified sentences.",
"Experimental results confirm that our method alleviates the data scarcity problem significantly.",
"It also effectively boosts the performances of several current models for aspect term extraction.",
"Aspect term extraction (ATE), which aims to identify and extract the aspects on which users express their sentiments (Hu and Liu, 2004; Liu, 2012), is a fundamental task in aspect-level sentiment analysis.",
"For example, in the sentence of The screen is very large and crystal clear with amazing colors and resolution , screen , colors and resolution are the aspect terms to extract in this task.",
"ATE is typically formulated as a sequence labeling problem (Xu et al., 2018, 2019; Li et al., 2018), where each word is appended with a label indicating if it identifies an aspect.",
"Sentence and label sequence are both used to train a ATE model.",
"One of the remaining challenges with this task is the shortage of annotated data.",
"While data augmentation appears to be a solution to this problem, it Corresponding author.",
"faces two main obstacles here.",
"First, the new sentences must adhere to their original label sequences strictly.",
"As shown in Figure 1, the generation A is an effective augmentation as the original label sequence is preserved, whereas B is not even though it can be a valid review.",
"Second, a noun phrase is regarded as aspect term only if it is an opinion target.",
"In the generation D of Figure 1, although the term screen remains where it is in the original sentence, the new context makes it just an ordinary mention rather than an opinion target.",
"To sum up, the real difficulty of data augmentation in ATE is generating a new sentence while aligning with the original label sequence and making the original aspect term remain an opinion target.",
"Existing augmentation models such as GAN (Goodfellow et al., 2014) and VAE (Kingma and Welling, 2013) tend to change the opinion target unpredictably and thus are not applicable for this task.",
"Another genre of augmentation strategy is based on word replacement.",
"It generates a new sentence by replacing one or multiple words with their synonyms (Zhang et al., 2015) or with words predicted by a language model (Kobayashi, 2018).",
"This approach seems to be able to address the above issue in ATE augmentation, yet it only brings very lim-7057 ited changes to the original sentences and cannot produce diversified sentences.",
"Intuitively, augmentation strategies are effective when they increase the diversity of training data seen by a model.",
"We argue in this paper that the augmentation for aspect term extraction calls for a conditional approach, which is to be formulated as a masked sequence-to-sequence generation task.",
"Specifically, we first mask several consecutive tokens for an input sentence.",
"Then, our encoder takes the partially masked sentence and its label sequence as input, and our decoder tries to reconstruct the masked fragment based on the encoded context and label information.",
"The process of reconstruction keeps the opinion target unchanged and is therefore controllable.",
"Moreover, compared with replacement-based approaches (Zhang et al., 2015; Kobayashi, 2018) which replace words separately, ours replaces a segment each time and has the potential to generate more diversified new sentences in content.",
"To implement the above conditional augmentation strategy, we adopt Transformer (Vaswani et al., 2017) as our basic architecture and train it like MASS (Song et al., 2019), a pre-trained model for masked sequence-to-sequence generation.",
"To our knowledge, this work is the first effort towards data augmentation of aspect term extraction through conditional text generation.",
"We propose a controllable data augmentation method by masked sequence-to-sequence generation, which is able to generate more diversified sentences than previous approaches.",
"We provide qualitative analysis and discussions as to why our augmentation method works, and test its implementation with other language models to illustrate why this masked sequence-to-sequence framework is favored.",
"Aspect term extraction (ATE) and sentiment classification are two fundamental subtasks of aspect-based sentiment analysis.",
"While the former aims to extract aspect terms in review sentences, the latter tries to determine their sentiment polarities.",
"To deal with ATE, many traditional techniques like syntactic rules (Qiu et al., 2011), hidden Markov models (Jin et al., 2009), and conditional random fields (Li et al., 2010; Toh and Su, 2016) have been explored.",
"Recently, neural network techniques such as LSTM (Liu et al., 2015), CNN (Xu et al., 2018), and attention (Li et al., 2018; Devlin et al., 2019) have been applied for ATE.",
"Luo et al. (2019) and He et al. (2019) further proposed to predict aspect term and polarity jointly in a multi-task learning approach so as to take advantage of their relatedness.",
"Generally, the above approaches treat ATE as a sequence labeling problem.",
"In their pioneering work, Ma et al. (2019) formulated ATE as a sequence-to-sequence task.",
"So far, one of the remaining challenges for ATE lies in the lack of annotated data, especially when today's neural models are becoming increasingly large and complex.",
"Generative adversarial network (GAN) (Goodfel-low et al., 2014) and variational autoencoder (VAE) (Kingma and Welling, 2013) are two neural network based generative models that are capable of generating text conditioned on input text and can be applied for data augmentation of sentence-level sentiment analysis (Gupta, 2019; Hu et al., 2017).",
"These methods encode an input text into latent variables and generate new texts by decoding the latent variables in continuous space.",
"However, they can hardly ensure high-quality sentences in terms of readability and label compatibility.",
"Back translation (Edunov et al., 2018; Sennrich et al., 2016) is another augmentation approach for text data, but is less controllable, although it is good at maintaining the global semantics of an original sentence.",
"As a class of replacement approach, Zhang et al. (2015) and Wang and Yang (2015) proposed to substitute all replaceable words with corresponding synonyms from WordNet (Miller, 1995).",
"Differently, Kobayashi (2018) and Wu et al. (2019) proposed to randomly replace words with those predicted by a pre-trained language model.",
"Nevertheless, none of the above augmentation approaches is applicable for aspect term extraction task, as they are all targeted at sentence-level classification and may change opinion targets and aspect labels unexpectedly during augmentation.",
"Pre-training a large language model and fine-tuning it on downstream tasks has become a new paradigm.",
"MASS (Song et al., 2019) is such a model for language generation.",
"Unlike GPT (Radford et al., Training Set MaskFrag (cid:2180) : The screen is bright and the mouse is nice .",
"2016, 2019) and BERT (Devlin et al., 2019) which only have either an encoder or a decoder, MASS includes both of them and trains them jointly: the encoder takes as input a sentence with a fragment masked and outputs a set of hidden states; the decoder estimates the probability of a token in the masked fragment conditioned on its preceding tokens and the hidden states from the encoder.",
"This pre-training approach enables MASS to perform representation learning and language generation simultaneously.",
"MASS has achieved significant improvements in several sequence-to-sequence tasks, such as neural machine translation and text summarization (Song et al., 2019).",
"Our augmentation method has a similar training objective as MASS, and includes a label-aware module to constrain the generation process.",
"As mentioned before, we formulate the data augmentation of aspect term extraction (ATE) as a conditional generation task.",
"In this section, we first introduce the problem formulation, and then describe our augmentation method in detail.",
"Given a training set D of review texts, in which each sample includes a sequence of n words X = [ x 1 , x 2 , ..., x n ] and a label sequence L = [ l 1 , l 2 , ..., l n ] , where l i { B, I, O } .",
"Here, B , I and O denote if a word is at the beginning , inside or outside of an aspect term, respectively.",
"The objective of our augmentation task is to generate a new sentence consistent with L and the aspect term.",
"The above augmentation is modeled as a fine-grained conditional language generation task implemented by a masked sequence-to-sequence generation model.",
"As depicted in Figure 2, the model adopts Transformer (Vaswani et al., 2017) as its basic architecture, and consists of a 6-layer encoder and a 6-layer decoder with 12 attention heads in each layer.",
"The embedding size and hidden size are both 768, and the feed-forward filter size is 3072.",
"The generation model is initialized with the pre-trained weights of MASS.",
"To further incorporate the domain knowledge, we perform in-domain pre-training as in (Howard and Ruder, 2018).",
"1 3.2.1 Training The training process is illustrated in Algorithm 1.",
"For each batch, we first sample a few examples from the training set with replacement (Line 4) according to a probability p specified in Equation (1).",
"The chosen examples are then masked using the Fragment Masking Strategy function (Line 6) to generate training examples for our model.",
"We elaborate on Algorithm 1 in the following paragraphs.",
"The function MaskFrag (Line 6) is performed on the chosen examples to mask positions from u to v = u + r length ( X ) , where length ( X ) is the length of sentence X .",
"Each masking position is replaced by [ M ] only if its label is O .",
"As a result, we obtain a partially masked sentence X and a fragment Y = [ y 1 , y 2 , ..., y m ] = [ x u , x u +1 , ..., x v ] , 1 The Amazon and Yelp review datasets are used as the Laptop and Restaurant domain corpora, respectively.",
"where m = v u + 1 is the length of the fragment.",
"Line 5 of Algorithm 1 shows that during the training process each sentence is masked every time it is sampled.",
"Since long sentences have more different segments to mask than short ones, they should be sampled more frequently.",
"We define the sampling probability p i of each example i as follows: p i (cid:2) d i , d i > 5 , 0 , otherwise (1) where d i denotes the sequence length of example i .",
"The training objective (Line 9) takes the masked sentence X and label sequence L as input, and reconstructs the masked fragment Y .",
"The inputs of the encoder are obtained by summing up the embeddings of a token x , its aspect label l , and position q .",
"The output is the hidden state H = [ h 1 , h 2 , ..., h n ] : H = Enc ( X, L ) , (2) where Enc represents the encoder, and h t R s h denotes the hidden state of size s h for word x t .",
"X , label sequence L and position Q .",
"The objective of the decoder is to generate a sequence Y based on X and L .",
"In particular, it predicts next token y t based on context representations H , current aspect label l t and previous tokens [ y 1 , ..., y t 1 ] .",
"where the conditional probability of token y t is defined by:",
"(4) Here, W R | V | s h , | V | is the vocabulary size, and s t is the hidden state of the decoder at time step t , being calculated as: s t = z t + Emb l ( l t ) , (5) z t = Dec ( x t 1 , l t 1 ) , (6) where Emb l is the label embedding function and Dec is the decoder.",
"In Equation (5), each decoding step is conditioned on the context information and the whole label sequence, making the generation controllable.",
"The encoder and the decoder are jointly trained by maximizing the log-likelihood loss: J = m (cid:4) t =1 log ( P ( y t | y 1: t 1 , l t , H )) , (7) where includes the trainable parameters.",
"After training for a few epochs, our model is used to predict the words in a masked fragment.",
"Specifically, given an example ( X, L ) from the training set D , we choose a start position u and apply MaskFrag ( X, L, u, r ) to obtain X .",
"To avoid that same positions are chosen repeatedly, we manually choose the start position u for the augmentation.",
"At generation time, we use beam search with a size of 5 for the auto-regressive decoding.",
"After the decoder produces all the tokens compatible with the original label sequence and aspect terms, we obtain a new example ( X, L ) .",
"Empirically, we find the model tends not to generate a same segment as the old one when the masked segment is longer than",
"4. The above process can be run multiple times with different start positions, and generates multiple new examples from a source example.",
"In this approach, each source example is augmented in turn.",
"In this section, we first introduce the experimental datasets and several popular ATE models.",
"Then, we report the experimental results, which are obtained by averaging five runs with different initializations.",
"Two widely-used datasets, the Laptop from SemEval 2014 Task 4 (Pontiki et al., 2014) and the Restaurants from SemEval 2016 Task 5 (Pontiki et al., 2016), are used for our evaluations.",
"The statistics of the two datasets are shown in Table 1, which tells clearly that there are only a limited number of samples in both datasets.",
"For each of the two datasets, we hold out 150 examples from the original training set for validation.",
"For each remaining training example, we generate four augmented sentences according to Section 3.2.2 with the proportion r set to 0.5.",
"The four new sentences are allocated to four different sets.",
"This leads to four generated datasets.",
"To examine our data augmentation method, we use the original training sets and the augmented training sets to train several ATE models.",
"The details of these models are as follows.",
"BiLSTM-CRF is a popular model for sequence labeling tasks.",
"Its structure includes a BiLSTM followed by a CRF layer (Lafferty et al., 2001).",
"The word embeddings for this model are initialized by GloVe-840B-300d (Pennington et al., 2014) and fixed during training.",
"The hidden size is set to 300, and we use Adam (Kingma and Ba, 2014) with a learning rate of 1e-4 and L2 weight decay of 1e-5 to optimize this model.",
"Seq2Seq for ATE (Ma et al., 2019) is the first effort to apply a sequence-to-sequence model for aspect term extraction.",
"It adopts GRU (Cho et al., 2014) for both the encoder and the decoder.",
"The encoder takes a source sentence as input, and the decoder generates a label sequence as the result.",
"This approach is also equipped with a gated unit network and a position-aware attention network.",
"BERT for token classification (Devlin et al., 2019) uses pre-trained BERT with a linear layer.",
"We implement this model using open source 2 and initialize its parameters with the pre-trained BERT-BASE-UNCASED model.",
"We refer to this model as BERT-FTC in the following paragraphs.",
"DE-CNN (Xu et al., 2018) uses two types of word embeddings: general-purpose and domain-specific embeddings.",
"3 While the former adopt GloVe-840B-300d, the latter are trained on a review corpus.",
"They are concatenated and fed to a CNN model of 4 stacked convolutional layers.",
"BERT-PT (Xu et al., 2019) 4 utilizes the weights of pre-trained BERT for initialization.",
"To adapt to both domain knowledge and task-specific knowledge, it is then post-trained on a large-scale unsupervised domain dataset and a machine reading comprehension dataset (Rajpurkar et al., 2016, 2018).",
"So far, it is the state of the art for ATE.",
"The above models are all open-sourced and their default settings are employed in our experiments.",
"We combine the original training set with each of the four generated datasets (refer to 4.2.1) and obtain four augmented training sets, each doubling the original training set in size.",
"For each model, we train it on the four augmented training sets, respectively, and take their average F1-scores on the test set.",
"By comparing this score with the model trained on the original training set, we can examine if the augmented datasets improve the model.",
"5 As shown in Table 2, all the models are improved more or less based on the augmented datasets.",
"Even for the sate-of-the-art DE-CNN and BERT-PT models, our augmentation also brings considerable improvements, which confirms that our augmentation approach can generate useful sentences for training a more powerful model for aspect term extraction.",
"The above results show the effect of double augmentation.",
"In this subsection, we further combine any two of the four generated datasets with the original training set to form triple-augmented datasets, leading to six new datasets.",
"In a similar approach, we can create quadruple-augmented and quintuple-augmented datasets.",
"Then, we train the DE-CNN and BERT-FTC models on the new datasets and take the average F1-score for each model as before.",
"The results are shown in Figure",
"3. Figure 3: Performances of DE-CNN and BERT-FTC on different-sized augmentation datasets, where 1 means the original datasets without augmentation.",
"We can observe from the figure that both models are generally improved as the size of augmentation increases on the Restaurant dataset.",
"There is even a 1.8 boost for DE-CNN.",
"On the Laptop dataset, however, the highest scores are seen at double-augmentation for both models.",
"One of the reasons could be the relatively large volume of the original dataset.",
"Another possible reason is that the aspect terms in this dataset are often regular nouns such as screen and keyboard , which can be successfully extracted just based on their own meanings.",
"Differently, aspect terms in the Restaurant dataset are more arbitrary and diverse such as Cafe Spice and Red Eye Grill , the names of dish or restaurant.",
"This requires a model to pay more attention to the contexts while determining whether the candidate is an aspect terms.",
"As our augmentation approach can generate different contexts for an aspect term, it works better on the Restaurant dataset.",
"In the augmentation stage, the masked proportion r is a hyperparameter and set to the half of the length of a sentence in the above experiments.",
"In this subsection, we explore its influence by changing it from 30% to 70% of sentence length stepped by 10%.",
"We use DE-CNN model for this evaluation on the double-augmented datasets.",
"As shown in Figure 4, the overall trend for F1-scores is moving up as r increases.",
"The reason is that sentences with short masked fragments are likely to be restored to their original forms by our generation model.",
"As the proportion r increases, the content of a sentence has increasingly more chances to be changed significantly, resulting in diversified new sentences.",
"This can be confirmed by the declining BLEU scores in Figure",
"4. Figure 4: Performance of DE-CNN with different masked proportion r for augmentation.",
"Our augmentation model introduces label embeddings into Transformer to force the new sentences to be task-competent.",
"We conduct an ablation study to verify the effectiveness by removing these embeddings during augmentation.",
"The DE-CNN model is used again for this study.",
"As shown in Table 3, the removal of label embeddings causes considerable performance drops, and the results are even worse than that on the original dataset.",
"This is probably due to the poor Recall performance that can be explained as follows.",
"When label sequence information is not present, the augmentation is prone to produce decayed examples in which some new aspect terms are generated in the positions of label O , or verse vice.",
"The model trained with such decayed examples is misled not to extract these aspect terms in the test stage.",
"As a result, the model makes many false-negative errors, leading to poor Recall scores.",
"This indicates that label embeddings are helpful for generating qualified sentences for aspect term extraction.",
"As mentioned before, we formulate the data augmentation for aspect term extraction as a conditional generation problem that is solved by masked sequence-to-sequence learning.",
"One may argue that other pre-trained language models like BERT and GPT-2 are also competent for this task as in (Wu et al., 2019; Sudhakar et al., 2019; Keskar et al., 2019).",
"Here we compare them and demonstrate the superiority of our approach in this task.",
"Following some previous work (Wu et al., 2019; Sudhakar et al., 2019; Keskar et al., 2019), we modify the settings of BERT and GPT-2 to make them fit this task.",
"Readers are recommended to refer to Appendix for more details.",
"Moreover, a widely-used replacement-based method is implemented for comparison, in which half of the tokens are randomly replaced by their synonyms from WordNet (Miller, 1995).",
"We use fluency 6 and BLEU 7 to evaluate the generated sentences.",
"Note that these datasets do not contain the original training examples because we want to focus more on the generated ones.",
"We employ BERT-FTC as the implementation model and train it on these datasets.",
"The results on the test sets are presented in Table",
"4. From the table, we note that the F1 scores of GPT-2 are the worst because of its low recall scores.",
"This conforms with the architecture and the language modeling objective of GPT-2, which does not have an encoder to encode the label information.",
"In this case, the decoding step is uncontrollable and cannot generate a sentence fitting the label sequence.",
"In contrast, our framework contains an encoder to encode a sentence and the label sequence simultaneously, and a decoder to generate sentences conditional on the encoder output.",
"That is, our decoder takes advantage of both context information and aspect label information, making the augmentation conditional and controllable.",
"BERT performs the worst in this task in fluency.",
"This can be attributed to its independence assump-tion in the process of generation, which means that all masked tokens are independently reconstructed, likely leading to in-coherent word sequences.",
"In contrast, our approach generates the sequence in an auto-regressive way, with each decoding step based on the result of its previous step, ensuring fluent new sentences.",
"The replacement-based method does not take into account the sentence context and leads to poor fluency scores.",
"Also, there are limited words to choose for synonyms in such lexical databases as WordNet.",
"Thus, such replacement-based methods can only produce sentences of limited diversity, which is confirmed by the BLEU scores.",
"6 Fluency is measured by the perplexity of sentence, and is calculated by OpenAI GPT.",
"In this metric, sentences with lower perplexity scores are more fluent.",
"Note that the GPT here is different from GPT-2 that we use to generate text data.",
"7 The original sentences are taken as reference.",
"To sum up, our data augmentation model benefits considerably from its encoder-decoder architecture and the masked sequence-to-sequence generation mechanism, which is controllable to ensure qualified data augmentation for aspect term extraction.",
"The results show that this sequence-to-sequence generation framework is non-replaceable by other language models such as BERT and GPT-2.",
"We finally present several augmented examples in Table 5 to illustrate the effect of our augmentation method more intuitively.",
"We observe that the contents of the masked fragments can be dramatically changed from their original forms after augmentation.",
"In some cases, the sentiment polarities are even reversed.",
"Nevertheless, the new contexts are still appropriate for the aspect terms, making them qualified and also diversified new training examples for aspect term extraction.",
"In this paper, we have presented a conditional data augmentation approach for aspect term extraction.",
"We formulated it as a conditional generation problem and proposed a masked sequence-to-sequence generation model to implement it.",
"Unlike existing augmentation approaches, ours is controllable to generate qualified sentences, and allows more diversified new sentences.",
"Experimental results on two review datasets confirm its effectiveness in this conditional augmentation scenario.",
"We also conducted qualitative studies to analyze how this augmentation approach works, and tested other language models to explain why our masked sequence-to-sequence generation framework is favored.",
"Moreover, the proposed augmentation method tends not to be unique to the current task and could be applied to other low-resource sequence labeling tasks such as chunking and named entity recognition.",
"The work was partially supported by the Fundamental Research Funds for the Central Universities (No.19lgpy220) and the Program for Guangdong Introducing Innovative and Entrepreneurial Teams (No.2017ZT07X355)."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"other"
] |
[
"Probes are models devised to investigate the encoding of knowledgee.g. syntactic structurein contextual representations.",
"Probes are often designed for simplicity, which has led to restrictions on probe design that may not allow for the full exploitation of the structure of encoded information; one such restriction is linearity.",
"We examine the case of a structural probe (Hewitt and Manning, 2019), which aims to investigate the encoding of syntactic structure in contextual representations through learning only linear transformations.",
"By observing that the structural probe learns a metric, we are able to kernel-ize it and develop a novel non-linear variant with an identical number of parameters.",
"We test on 6 languages and find that the radial-basis function (RBF) kernel, in conjunction with regularization, achieves a statistically significant improvement over the baseline in all languagesimplying that at least part of the syntactic knowledge is encoded non-linearly.",
"We conclude by discussing how the RBF kernel resembles BERT 's self-attention layers and speculate that this resemblance leads to the RBF-based probe's stronger performance.",
"Probing has been widely used in an effort to better understand what linguistic knowledge may be encoded in contextual word representations such as BERT (Devlin et al., 2019) and ELMo (Peters et al., 2018).",
"These probes tend to be designed with simplicity in mind and with the intent of revealing what linguistic structure is encoded in an embedding, rather than simply learning to perform an NLP task (Hewitt and Liang, 2019; Zhang and Bowman, 2018; Voita and Titov, 2020) This preference for simplicity has often led researchers to place restrictions on probe designs that may not allow them to fully exploit the structure in which information is encoded (Saphra and Lopez, 2019; Pimentel et al., 2020b,a).",
"This preference has led many researchers to advocate the use of linear probes over non-linear ones (Alain and Bengio, 2017).",
"This paper treats and expands upon the structural probe of Hewitt and Manning (2019), who crafted a custom probe with the aim of investigating the encoding of sentence syntax in contextual representations.",
"They treat probing for syntax as a distance learning problem: they learn a linear transformation that warps the space such that two words that are syntactically close to one another (in terms of distance in a dependency tree) should have contextual representations whose Euclidean distance is small.",
"This linear approach performs well, but the restriction to learning only linear transformations seems arbitrary.",
"Why should it be the case that this information would be encoded linearly within the representations?",
"In this paper, we recast Hewitt and Manning (2019)'s structural probing framework as a general metric learning problem.",
"This reduction allows us to take advantage of a wide variety of non-linear extensionsbased on kernelizationproposed in the metric learning literature (Kulis, 2013).",
"These extensions lead to probes with the same number of parameters, but with an increased expressivity.",
"By exploiting a kernelized extension, we are able to directly test whether a structural probe that is capable of learning non-linear transformations improves performance.",
"Empirically, we do find that non-linearity helpsa structural probe based on a radial-basis function (RBF) kernel improves performance significantly in all 6 languages tested over a linear structural probe.",
"We then perform an analysis of BERT 's attention, asserting it is a rough approximation to an RBF kernel.",
"As such, it is not surprising that the syntactic information in BERT representations is more accessible with this specific non-linear transformation.",
"We conclude that kernelization is a useful tool for analyzing contextual representationsenabling us to run controlled experiments and investigate the structure in which information is encoded.",
"Hewitt and Manning (2019) introduce the structural probe , a novel model designed to probe for syntax in contextual word representations.",
"We review their formulation here and build upon it in 4.",
"A sentence w lives in a space V , defined here as the Kleene closure of a (potentially open) vocabulary V .",
"The syntactic distance ij between any two words in a sentence w is the number of steps needed to go from one word to the other while walking in the sentence's syntactic tree.",
"More formally, if we have a dependency tree t (a tree on n +1 nodes) of a sentence w of length n , we define ij as the length of the shortest path in t between w i and w j ; this may be computed, for example, by FloydWarshall.",
"Contextual representations of a sentence w are a sequence of vectors h i R d 1 that encode some linguistic knowledge about a sequence.",
"In the case of BERT , we have h i = BERT( w ) i R d 1 (1) Here, the goal of probing is to evaluate whether the contextual representations capture the syntax in a sentence.",
"In the case of the structural probe, the goal is to see whether the syntactic distance between any two words can be approximated by a learned, linear distance function: d B ( h i , h j ) = || Bh i Bh j || 2 (2) where B R d 2 d 1 is a linear projection matrix.",
"That is to say, they seek a linear transformation such that the transformed contextual representations relate to one another roughly as their corresponding words do in the dependency tree.",
"To learn this probe, Hewitt and Manning minimize the following per-sentence objective with respect to B through stochastic gradient descent 1 | w | 2 | w | (cid:88) i =1 | w | (cid:88) j = i +1 | ij d B ( h i , h j ) | (3) This is simply minimizing the difference between the syntactic distances obtained from the dependency tree and the distance between the two vectors under our learned transformation.",
"From the pairwise distances predicted by the probe, Prim's (1957) algorithm can be used to recover the one-best undirected dependency tree.",
"The restriction to a linear transformation may hinder us from uncovering some of the syntactic structure encoded in the contextual representations.",
"Indeed, there is no reason a-priori to expect that BERT encodes its knowledge in a fashion that is specifically accessible to a linear model.",
"However, if we were to introduce non-linearity by using a neural probe, for example, we would have to pit a model with very few parameters (the linear model) against one with very many (the neural network); this comparison is not fair and also goes against the spirit of designing simple probes.",
"To preclude the need for a neural probe, we instead turn to a kernelized probe.",
"The key insight is that the structural probe reduces the problem of probing for linguistic structure to that of metric learning (Kulis, 2013).",
"This can be clearly seen in eq.",
"(3), where the probe learns a distance metric between two representations in such a way that it matches the syntactic one.",
"Recognizing this relationship allows us to take advantage of established techniques from the metric learning literature to improve the performance of the probe without increasing its complexity, e.g. through kernelization.",
"Many algorithms in machine learning, e.g. support vector machines and k -means, can be kernelized (Schlkopf and Smola, 2002), thus allowing for linear models to be adapted into non-linear ones.",
"Expanding on a classic result (Schoenberg, 1938), Schlkopf (2001) show that any positive semi-definite (PSD) kernel can be used to construct a distance in a Hilbert space H .",
"Formally, their result states that for any PSD kernel : X X R 0 , there exists a feature map : X H such that || ( x ) ( y ) || 2 = (4) (cid:112) ( x , x ) 2 ( x , y ) + ( y , y ) This generalizes eq.",
"(2) to yield a new, non-linear distance metric.",
"This means that we can achieve the effects of using some non-linear feature mapping without having to specify it: we need only specify a kernel function and perform calculations using this kernelized distance metric.",
"Importantly, as opposed to deep neural probes, this learnable metric has an identical number of parameters to the original.",
"1 1 We note that we do not use selectivity (Hewitt and Liang, 2019) to control for probe complexity since it does not apply to 3.2 Common Kernels In this section we introduce the kernels to be used.",
"These kernels were chosen as they represent a comprehensive selection of commonly-used kernels in the metric learning literature (Kulis, 2013).",
"The original work of Hewitt and Manning (2019) makes use of the linear kernel : linear ( h i , h j ) = ( Bh i ) (cid:62) ( Bh j ) (5) The first non-linear kernel we consider is the polynomial kernel , defined as poly ( h i , h j ) = (cid:16) ( Bh i ) (cid:62) ( Bh j ) + c (cid:17) d (6) where d Z + and c R 0 .",
"A polynomial kernel of degree d allows for d -order interactions between the terms.",
"When working with BERT , this means that we may construct d -order conjunctions of the dimensions of the contextual representations input into the probe.",
"Next, we consider the radial-basis function kernel (RBF).",
"This kernel is also called the Gaussian kernel and is defined as rbf ( h i , h j ) = exp (cid:18) || Bh i Bh j || 2 2 2 (cid:19) (7) This kernel has an alternative interpretation as a similarity measure between both vectors, being at its maximum value of 1 when h i = h j .",
"In contrast to the polynomial kernel, the Gaussian kernel implies a feature map in an infinite dimensional Hilbert space.",
"When the RBF kernel is used in our probe, we may rewrite eq.",
"(2) as follows: d rbf ( h i , h j ) 2 (8) = rbf ( h i , h i ) 2 rbf ( h i , h j ) + rbf ( h j , h j ) = 2 2 rbf ( h i , h j ) = 2 2 exp (cid:18) || Bh i Bh j || 2 2 2 (cid:19) Which is similar to the original linear case in eq.",
"(2), but with a scaling term 12 2 and a non-linearity exp( ) .",
"Finally, we consider, the sigmoid kernel , which is defined as 2 sig ( h i , h j ) = tanh ( a ( Bh i ) (cid:62) ( Bh j ) + b ) (9) this syntax tree reconstruction taskselectivity control tasks work at the word type level, as opposed to the sentence one.",
"2 Lin and Lin (2003) observe that it is difficult to effectively tune a and b in the sigmoid kernel.",
"They also note that although this kernel is not in fact PSD, it is PSD when a and b are both positive, which we enforce in this work.",
"We also take advantage of two common regularization techniques employed in the metric learning literature to further improve the transformations learned; both act on the matrix A = B (cid:62) B and are added to the objective specified in eq.",
"(3).",
"The Frobenius norm regularizer takes the form r ( A ) = || A || 2F = tr (cid:16) A (cid:62) A (cid:17) (10) This is the matrix analogue of the L 2 squared regularizer.",
"Minimizing the Frobenius norm of the learned matrix has the effect of keeping the values in the matrix small.",
"It has been a popular choice for regularization in metric learning with adaptations to a variety of problems (Schultz and Joachims, 2004; Kwok and Tsang, 2003).",
"We also consider the trace norm regularizer , which is of the form r ( A ) = tr( A ) (11) The trace norm regularizer is the matrix analogue of the L 1 regularizer and it encourages the matrix A to be low rank.",
"As Jain et al. (2010) point out, using a low-rank transformation in conjunction with a kernel corresponds to a supervised kernel dimensionality reduction method.",
"We experiment with Hewitt and Manning's (2019) probe on 6 typologically diverse languages, following the experimental design of Hall Maudslay et al. (2020).",
"Our data comes from the Universal Dependency 2.4 Treebank (Nivre et al., 2019), providing sentences and their dependency trees, annotated using the Universal Dependencies annotation scheme.",
"3 For each sentence we calculate contextual representations using multilingual BERT .",
"For all languages, we took the first 12,000 sentences (or the maximum number thereof) in the train portion of the treebank and created new 801010 traintestdev splits.",
"4 3 It was recently demonstrated by Kuznetsov and Gurevych (2020) that choice of linguistic formalism may have an impact on probing results.",
"In this work, we investigate using only one formalism, so we cannot be sure that our results would not differ if an alternative formalism were used.",
"Nonetheless, we believe that the results that we find most interesting, which are discussed in 6, should be robust to a change in formalism, since their explanation lies in the way attention is calculated in the transformer architecture.",
"4 We cap the maximum number of sentences analyzed as a nave control for our multilingual analysis.",
"We present the results from our comparison of a re-implementation of Hewitt and Manning's (2019) linear structural probe and the non-linear kernelized probes in Table 1.",
"The two evaluation metrics shown are unlabeled undirected attachment score (UUAS) and the Spearman rank-order correlation (DSpr) between predicted distances and gold standard pairwise distances.",
"UUAS is a standard parsing metric expressing the percentage of correct attachments in the dependency tree, while DSpr is a measure of how accurately the probe predicts the overall ordering of distances between words.",
"We can see that the use of an RBF kernel results in a statistically significant improvement in performance, as measured by UUAS, in all 6 of the languages tested.",
"5 For some languages this improvement is quite substantial, with Tamil seeing an improvement of 8.44 UUAS from the baseline probe to the RBF kernel probe.",
"The RBF kernel produces improvements across all analyzed languages.",
"This suggests that it is indeed the case that syntactic structure is encoded nonlinearly in BERT .",
"As such, analyzing this specific kernel may yield insights into what this structure is.",
"Indeed, none of the other kernels systematically improve over the linear baseline, implying this is not just an effect of the non-linearity introduced through use of a kernelthe specific structure of the RBF kernel must be responsible.",
"In this section, we argue that the reason that the RBF kernel serves as such a boon to probing is that it resembles BERT 's attention mechanism; recall that BERT 's attention mechanism is defined as att( h i , h j ) exp (cid:18) ( Kh i ) (cid:62) ( Qh j ) d 2 (cid:19) (12) where K and Q are linear transformations and d 2 is the dimension vectors are projected into.",
"K projects vector h i into a key vector, while Q projects h j into a query one.",
"When the key and query vectors are similar (i.e. have a high dot product), the value of this equation is large and word j attends to word i .",
"where we take 2 = d 2 .",
"The similarity between eqs.",
"(12) and (14) suggests the attention mechanism in BERT is, up to a multiplicative factor, roughly equivalent to an RBF kernelas such, it is not surprising that the RBF kernel produces the strongest results.",
"The resemblance between these equations, taken together with the significant improvements in capturing syntactic distance, suggest that this encoded information indeed lives in an RBF-like space in BERT .",
"Such information can then be used in its self-attention mechanism; allowing BERT to pay attention to syntactically close words when solving the cloze language modeling task.",
"Being attentive to syntactically close words would also be supported by recent linguistic research, since words sharing syntactic dependencies have higher mutual information on average (Futrell et al., 2019).",
"The representations we analyze, though, are taken from BERT 's final layer; as such, they are not trained to be used in any self-attention layer so why should such a resemblance be relevant?",
"BERT 's architecture is based on the Transformer (Vaswani et al., 2017), and uses skip connections between each self-attention layer.",
"Such skip connections create an incentive for residual learning, i.e. only learning residual differences in each layer, while propagating the bulk of the information (He et al., 2016).",
"As such, BERT 's final hidden representations should roughly live in the same manifold as its internal ones.",
"It is interesting to note that the RBF kernel achieves the best performance in terms of UUAS in all languages, but it only twice achieves the best performance in terms of DSpr.",
"This may be due to the fact that, as we can see by examination of eq.",
"(8), the distance returned by the RBF kernel will not exceed 2, whereas syntactic distances in the tree will.",
"Further, the gradient of the RBF kernel contains an exponential term which will cause it to go to zero as distance increases (while an examination of the unkernelized loss function reveals the opposite behavior).",
"This means that it will be less sensitive to the distances between syntactically distant words and focus more on words with small distances.",
"This may partially explain its better performance on UUAS, and comparably worse performance as measured by correlation (which counts pairwise differences between all words, not just those which are directly attached in the tree).",
"Furthermore, our probe's focus on nearby words resembles the general attentional bias towards syntactically close words (Voita et al., 2019).",
"The direct resemblance between self-attention mechanisms and our proposed probe metric poses a new way of understanding results from more complex probes.",
"While Reif et al. (2019) understood the Euclidean-squared distance of Hewitt and Manning as an isometric tree embedding, their geometric interpretation did not factor in the rest of BERT's architecture.",
"Such simplified context-less probes cannot tell us how linguistic properties are processed by a sequence of learned modules (Saphra and Lopez, 2019).",
"However, we consider representations in the context of the model which is expected to employ them.",
"From this perspective, simpler metrics may be rough approximations to our RBF kernel space, which is actually capable of measuring linguistic properties that can be easily extracted by an attention-based architecture.",
"We find that the linear structural probe (Hewitt and Manning, 2019) used to investigate the encoding",
"of syntactic structure in contextual representations can be improved through kernelization, yielding a non-linear model.",
"This kernelization does not introduce additional parameters and thus does not in-crease the complexity of the probeat least if one treats the number of parameters as a good proxy for model complexity.",
"At the same time, the RBF kernel improves probe performance in all languages under consideration.",
"This suggests that syntactic information may be encoded non-linearly in the representations produced by BERT .",
"We hypothesize that this is true due to the similarity of the RBF kernel and BERT 's self-attention layers.",
"The authors foresee no ethical concerns with the research presented in this paper."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"method",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method"
] |
[
"Automatic sentence summarization produces a shorter version of a sentence, while preserving its most important information.",
"A good summary is characterized by language fluency and high information overlap with the source sentence.",
"We model these two aspects in an unsupervised objective function, consisting of language modeling and semantic similarity metrics.",
"We search for a high-scoring summary by discrete optimization.",
"Our proposed method achieves a new state-of-the art for unsupervised sentence summarization according to ROUGE scores.",
"Additionally, we demonstrate that the commonly reported ROUGE F1 metric is sensitive to summary length.",
"Since this is unwillingly exploited in recent work, we emphasize that future evaluation should explicitly group summarization systems by output length brackets.",
"1 1 Introduction Sentence summarization transforms a long source sentence into a short summary, while preserving key information (Rush et al., 2015).",
"Sentence summarization has wide applications, for example, news headline generation and text simplification.",
"State-of-the-art sentence summarization systems are based on sequence-to-sequence neural networks (Rush et al., 2015; Nallapati et al., 2016; Wang et al., 2019), which require massive parallel data for training.",
"Therefore, unsupervised sentence summarization has recently attracted increasing interest.",
"Cycle-consistency approaches treat the summary as a discrete latent variable and use it to reconstruct the source sentence (Wang and Lee, 2018; Baziotis et al., 2019).",
"Such latent-space generation fails to explicitly model the resemblance between the source sentence and the target summary.",
"Zhou and Rush (2019) propose a left-to-right beam search approach based on a heuristically defined scoring function.",
"However, beam search is biased towards the first few words of the source.",
"In this paper, we propose a hill-climbing approach to unsupervised sentence summarization, directly extracting words from the source sentence.",
"This is motivated by the observation that human-written reference summaries exhibit high word overlap with the source sentence, even preserving word order to a large extent.",
"To perform word extraction for summarization, we define a scoring function similar to Miao et al. (2019) and Zhou and Rush (2019) that evaluates the quality of a candidate summary by language fluency, semantic similarity to the source, and a hard constraint on output length.",
"We search towards our scoring function by first choice hill-climbing (FCHC), shown in Figure 1. We start from a random subset of words of the required output length.",
"For each search step, a new candidate is sampled by randomly swapping a selected word and a non-selected word.",
"We accept the new candidate if its score is higher than the current one.",
"In contrast to beam search (Zhou and Rush, 2019), our summary is not generated sequentially from the beginning of a sentence, and therefore not biased towards the first few words.",
"Due to the nature of the search action, our approach is able to explicitly control the length of a summary as a hard constraint.",
"In all previous work, the summary length is weakly controlled by length embeddings or a soft length penalty (Zhou and Rush, 2019; Wang and Lee, 2018; Fevry and Phang, 2018; Baziotis et al., 2019).",
"Thus, the generated summaries by different systems vary considerably in average length, for example, ranging from 9 to 15 on a headline corpus (Section 4.1).",
"Previous work uses ROUGE F1 to compare summaries that might differ in length.",
"We show that ROUGE F1 is unfortunately sensitive to summary output length, in general favoring models that produce longer summaries.",
"Therefore, we argue that controlling the output length should be an integral part of the summarization task and that a fair system comparison can only be conducted between summaries in the same length bracket.",
"Our model establishes a new state-of-the-art for unsupervised sentence summarization across all commonly-used length brackets and different ROUGE metrics on the Gigaword dataset for headline generation (Rush et al., 2015) and on DUC2004 (Over and Yen, 2004).",
"The main contributions of this paper are: We propose a novel method for unsupervised sentence summarization by hill climbing with word-level extraction.",
"We outperform current unsupervised sentence summarization systems, including more complex sentence reconstruction models.",
"We show that ROUGE F1 is sensitive to summary length and thus emphasize the importance of explicitly controlling summary length for a fair comparison among different summarization systems.",
"Text Summarization.",
"The task can be categorized by source text types, such as multi-document summarization (Erkan and Radev, 2004; Radev et al., 2000; Haghighi and Vanderwende, 2009) and single-document summarization (Mihalcea and Tarau, 2004; Zhou and Hovy, 2004; Zheng and Lapata, 2019).",
"Traditional approaches are mostly extractive , i.e., they extract entire sentences from a document.",
"Recently, sequence-to-sequence (Seq2Seq) models have been used for abstractive summaries, where the system is able to synthesize new sentences (Nallapati et al., 2016, 2017; Gehrmann et al., 2018; Lewis et al., 2019; Fabbri et al., 2019).",
"The copy mechanism (Gu et al., 2016) in a Seq2Seq model can be viewed as word-level extraction in abstractive summarization (See et al., 2017; Paulus et al., 2018).",
"Both state-of-the-art extractive and abstractive approaches are usually supervised.",
"Sentence summarization yields a short summary for a long sentence.",
"Hori and Furui (2004) and Clarke and Lapata (2006) extract single words from the source sentence based on language model fluency and linguistic constraints.",
"They search via dynamic programming with a trigram language model, which restricts the model capacity.",
"The Hedge Trimmer method (Dorr et al., 2003) also uses hand-crafted linguistic rules to remove constituents from a parse tree until a certain length is reached.",
"Rush et al. (2015) propose a supervised abstractive sentence summarization system with an attention mechanism (Bahdanau et al., 2015), and they also introduce a dataset for headline generation derived from Gigaword.",
"2 Subsequent models for this dataset were also supervised and mostly based on Seq2seq architectures (Nallapati et al., 2016; Chopra et al., 2016; Wang et al., 2019).",
"Recently, unsupervised approaches for sentence summarization have attracted increasing attention.",
"Fevry and Phang (2018) learn a denoising autoen-coder and control the summary length by a length embedding.",
"Wang and Lee (2018) and Baziotis et al. (2019) use cycle-consistency (He et al., 2016) to learn the reconstruction of the source sentence and return the intermediate discrete representation as a summary.",
"Zhou and Rush (2019) use beam search to optimize a scoring function, which considers language fluency and contextual matching.",
"Our work can be categorized under unsupervised sentence summarization.",
"We accomplish this by word-level extraction from the source sentence.",
"ral networks generating words left-to-right.",
"This is often enhanced by beam search (Sutskever et al., 2014), which keeps a beam of candidates in a partially greedy fashion.",
"A few studies allow hard constraints on this decoding procedure.",
"Hokamp and Liu (2017) use grid-beam search to impose lexical constraints during decoding.",
"Anderson et al. (2017) propose constrained beam search to predict fixed image tags in an image transcription task.",
"Miao et al. (2019) propose a MetropolisHastings sampler for sentence generation, where hard constraints can be incorporated into the target distribution.",
"This is further extended to simulated annealing (Liu et al., 2020), or applied to the text simplification task (Kumar et al., 2020).",
"Different from the above concurrent work, this paper applies the stochastic search framework to text summarization, and design our specific search space and search actions for word extraction.",
"In previous work on text summarization, length embeddings (Kikuchi et al., 2016; Fan et al., 2018) have been used to indicate the desired summary length.",
"However, these are not hard constraints, because the model may learn to ignore such information.",
"Given a source sentence x = ( x 1 , x 2 , . . . , x n ) as input, our goal is to generate a shorter sentence y = ( y 1 , y 2 , . . . , y m ) as a summary of x .",
"We perform word-level extraction, in addition keeping the original word order intact.",
"Thus, y is a subsequence of x .",
"Our word-level extraction optimizes a manually defined objective function f ( y ; x , s ) , where the summary length s is predefined ( s < n ) and not subject to optimization.",
"In the remainder of this section, we will describe the objective function, search space, and the search algorithm in detail.",
"We define an objective function f ( y ; x , s ) , which our algorithm maximizes.",
"It evaluates the fitness of a candidate sentence y as the summary of an input x , involving three aspects, namely, language fluency f LM ( y ) , semantic similarity f SIM ( y ; x ) , and a length constraint f LEN ( y , s ) .",
"This is given by f ( y ; x , s ) = f LM ( y ) f SIM ( y ; x ) f LEN ( y ; s ) , (1) where the relative weight balances f LM ( y ) and f SIM ( y ; x ) .",
"We treat the summary length as a hard constraint, and therefore we do not need a weighting hyperparameter for f LEN .",
"Language Fluency.",
"The language fluency scorer quantifies how grammatical and idiomatic a candidate summary y is.",
"Our model generates a candidate summary in a non-autoregressive fashion, in contrast to the beam search in Zhou and Rush (2019).",
"Thus, we are able to simultaneously consider forward and backward language models, using the geometric average of their perplexities.",
"Using both forward and backward language models is less biased towards sentence beginnings or endings.",
"Our fluency scorer is the inverse perplexity.",
"Depending on applications, the language models could be pretrained on a target corpus.",
"3 In this case, the fluency scorer also measures whether the summary style is consistent with the target language.",
"This could be important in certain applications, e.g., headline generation, where the summary language differs from the input in style.",
"Semantic Similarity.",
"A semantic similarity scorer ensures that the summary keeps the key information of the input sentence.",
"We adopt the co-sine similarity between sentence embeddings as f SIM ( y ; x ) = cos( e ( x ) , e ( y )) , (3) where e is a sentence embedding method.",
"In our work, we use unigram word embeddings learned by the sent2vec model (Pagliardini et al., 2018).",
"Then, e ( x ) is computed as the average of these unigram embeddings, weighted by the inverse-document frequency ( idf ) of the words.",
"We use sent2vec because it is trained in an unsupervised way on individual sentences.",
"By contrast, other unsupervised methods like SiameseCBOW (Kenter et al., 2016) or BERT (Devlin et al., 2019) use adjacent sentences as part of the training signal.",
"Length Constraint.",
"Our discrete searching approach is able to impose the output length as a hard constraint, allowing the model to generate summaries of any given length.",
"Suppose the desired output length is s , then our length scorer is 3 We use the terminology unsupervised summarization , following Zhou and Rush (2019).",
"While we train the language models on the desired target language, we do not need parallel source-target pairs, i.e., sentences together with their groundtruth summaries.",
"In other words, a candidate summary y is infeasible if it does not satisfy the length constraint.",
"In practice, we implement this hard constraint by searching among feasible solutions only.",
"Most sentence generation models choose a word from the vocabulary at each time step, such as autoregressive generation that predicts the next word (Sutskever et al., 2014; Rush et al., 2015), and edit-based generation with deletion or insertion operations (Miao et al., 2019; Dong et al., 2019).",
"In these cases, the search space is |V| s , given a vocabulary V and a summary length s .",
"However, reference summaries are highly extractive.",
"In the headline generation dataset (Rush et al., 2015), for example, 45% of the words in the reference summary also appear in the source sentence.",
"This yields a ceiling of 45 ROUGE -1 F1 points 4 for a purely extractive method, which is higher than the current state-of-the-art supervised abstractive result of 39 points (Wang et al., 2019).",
"We are thus motivated to propose our word-extraction approach that extracts a subsequence of the input as the summary.",
"Additionally, we arrange the words in the same order as the input, motivated by the monotonicity assumption in summarization (Yu et al., 2016; Raffel et al., 2017).",
"Formally, we define the search space as a = ( a 1 , . . . , a n ) { 0 , 1 } n , where n is the length of the input sentence x .",
"The vector a is a Boolean filter over the source words x .",
"The summary sequence can then be represented by y = x a , i.e., we sequentially extract words from the source sequence x by the Boolean vector a .",
"If a i = 1 , then x i is extracted for the summary, and vice versa.",
"Further, we only consider the search space of all feasible solutions { a : f ( x a ; x , s ) > } .",
"That is to say, the candidate summary has to satisfy the length constraint in Section 3.1.",
"Equivalently, the output length can be expressed by a constraint on the search space such that (cid:80) i a i = s .",
"The above restrictions reduce the search space to (cid:0) ns (cid:1) solutions.",
"In a realistic setting, our search 4 We assume an extracted summary has the same length as the reference, and 45% words of the reference are in the original sentence.",
"This gives us a ceiling of 45% precision and recall.",
"We optimize our objective function f ( y ; x , s ) by first-choice hill climbing (FCHC, Russell and Norvig, 2016).",
"This is a stochastic optimization algorithm that proposes a candidate solution by local change at every search step.",
"The candidate is accepted if it is better than the current solution.",
"Otherwise, the algorithm keeps the current solution.",
"FCHC maximizes the objective function in a greedy fashion and yields a (possibly local) optimum.",
"Algorithm 1 shows the optimization procedure of our FCHC.",
"For each search step, a new candidate is sampled from the neighbor function q ( a (cid:48) | a ) .",
"This is accomplished by randomly swapping two actions a i and a j for a i (cid:54) = a j , i.e., replacing a word in the summary with a word from the source sentence that is not in the current summary.",
"The order of selected words is kept as in the source sentence.",
"If the candidate solution achieves a higher score, then it is accepted.",
"Otherwise, the candidate is rejected and the algorithm proceeds with the current solution.",
"Our search terminates if it exceeds a predefined budget.",
"The last solution is returned as the summary, as it is also the best-scored candidate due to our greedy algorithm.",
"One main potential drawback of hill climbing algorithms is that they may get stuck in a local optimum.",
"To alleviate this problem, we restart the algorithm with multiple random initial word selections a 0 and return the overall best solution.",
"We set the number of restarts as R ns 2 and number of search steps as T ns 2 , where R and T are controlling hyperparameters.",
"We design the formula to encourage more search for longer input sentences, but only with a tractable growth: linear for input length and quadratic for summary length.",
"As the summary length is usually much smaller than the input length, quadratic search is possible.",
"Increasing the number of restarts (and search steps) monotonically improves the scoring function, and thus in practice can be set according to the available search budget.",
"Other discrete optimization algorithms can be explored for sentence generation, such as simulated annealing (Liu et al., 2020) and genetic algorithms.",
"Our analysis on short sentences (where exhaustive search is tractable) showed that hill climbing with restarts achieves ROUGE scores similar to exhaustive search (Section 5.4).",
"In this section, we will describe the datasets, evaluation metrics, and a widely used baseline (called Lead).",
"Additionally, we report the observation that the commonly used evaluation metric, ROUGE F1, is sensitive to summary length, preferring longer summaries.",
"Thus, we propose to group models with similar output length during evaluation for fair comparison.",
"We evaluate our models on the dataset provided for DUC2004 Task 1 (Over and Yen, 2004) and a headline generation corpus 5 (Rush et al., 2015), both widely adopted in the summarization literature.",
"The DUC2004 dataset is designed and used for testing only.",
"It consists of 500 news articles, each paired with four human written summaries.",
"We follow Rush et al. (2015) and adopt DUC2004 for sentence summarization by using only the first sentence of an article as input.",
"The reference summaries are around 10 words long on average.",
"The headline generation dataset (Rush et al., 2015) is derived from the Gigaword news corpus.",
"Each headline/title is viewed as the reference summary of the first sentence of an article.",
"The dataset contains 3.8M training instances and 1951 test instances.",
"The average headline contains 8 words; the average source sentence contains 30 words.",
"We use 500 held-out validation instances for hyperparameter tuning.",
"Note that the training set is only used to train a language model and sent2vec embeddings.",
"The summarization process itself is not trained in our approach.",
"Lead baselines are a strong competitor that extracts the first few characters or words of the input sentence.",
"The DUC2004 shared task includes a Lead baseline, which extracts the first 75 characters as the summary.",
"We call it Lead-C-75.",
"For the Gigaword dataset, the reference has 8 words on average, and it is common to compare with a Lead variant that chooses the first 8 words.",
"We call this baseline Lead-Nn when we choose n words.",
"For fair comparison with previous work (Baziotis et al., 2019; Fevry and Phang, 2018) in Section 5.2, we further introduce a new variant that returns the first p percent of source words as the summary.",
"We denote this baseline by Lead-Pp .",
"Summarization systems are commonly evaluated by ROUGE scores (Lin, 2004).",
"The ROUGE -1 (or ROUGE -2) score computes the unigram (or bigram) overlap of a generated summary and the reference.",
"ROUGE-L calculates the longest common subsequence.",
"Depending on the dataset, either ROUGE Recall or ROUGE F1 variant is adopted.",
"Since the ROUGE Recall metric is not normalized with regard to length, DUC2004 standard evaluation truncates the summary at 75 characters.",
"This procedure was also adopted by Rush et al. (2015) for the headline generation task, but later Chopra et al. (2016) proposed to report the more balanced ROUGE F1 metric for the Gigaword headline generation dataset and abandoned truncation.",
"We follow previous work and use ROUGE F1 for headline generation and truncated ROUGE Recall for DUC2004.",
"As mentioned, ROUGE F1 was introduced to the evaluation of sentence summarization to better compare models with different output lengths (Chopra et al., 2016; Nallapati et al., 2016).",
"To investigate the effect of summary length on ROUGE F1, we calculate ROUGE F1 scores for the Lead-Nn and Lead-Pp baselines with different length parameters.",
"Figure 2 shows that ROUGE F1 peaks at n 18 or p 50 .",
"The difference between the maximum performance at n 18 and the widely adopted baseline (Lead-N8 ) is large: 4.2 ROUGE -1 F1 points.",
"A similar effect is observed by Sun et al. (2019) for document summarization.",
"This shows that ROUGE F1 is still sensitive to summary length, and this effect should be 10 20 30 40 50 60 70 Lead-N n 0 5 10 15 20 25 30 R o u g e F 1 s c o r e Rouge-1 Rouge-2 Rouge-L 20 40 60 80 100 Lead-P p 0 5 10 15 20 25 30 R o u g e F 1 s c o r e Rouge-1 Rouge-2 Rouge-L Figure 2: ROUGE F1 scores on the test set of headline generation for Lead-N and Lead-P baselines with different number n and percentage p of leading words.",
"considered during evaluation.",
"We propose to report the average output length of a model and only compare models in the same length bracket.",
"We conduct experiments with two settings, dependent on how the scorers f LM and f SIM are trained.",
"In the first setting, we train the language model and sent2vec embeddings on the source (article) side of the Gigaword headline generation dataset.",
"This complies with Fevry and Phang (2018) and Baziotis et al. (2019).",
"In the second setting, we train the language model and sent2vec embeddings on the target (title) side like Zhou and Rush (2019).",
"In both settings, we do not need parallel source-target pairs.",
"For output length, our headline generation experiment sets the desired target length as 8 words, 10 words, and 50% of the input, as these mirror either the average reference summary length or the average output lengths of our competitors (Wang and Lee, 2018; Zhou and Rush, 2019; Fevry and Phang, 2018; Baziotis et al., 2019).",
"For DUC2004, the desired summary length is set to 13 words, because the standard evaluation script truncates after the first 75 characters (roughly 13 words) in the summary.",
"Our forward and backward language models use long short term memory units (Hochreiter and Schmidhuber, 1997) and are optimized for 50 epochs by stochastic gradient descent.",
"Embeddings and hidden sizes are set to 1024 dimensions.",
"We tune hyperparameters on the development data of the headline corpus, and set the weighting parameter to 12 for all models.",
"The search steps and restarts are set to T = 0 .",
"1 and R = 0 .",
"035 , respectively.",
"We see a sharp performance improvement when we do more searching.",
"Thus, we choose T and R at the critical values due to efficiency concerns.",
"Besides the Lead baselines discussed in Section 4.2, we compare our models with state-of-the-art unsupervised sentence summarization systems.",
"Wang and Lee (2018) 6 use cycle-consistency to reconstruct source sentences from the headline generation corpus (Rush et al., 2015).",
"The latent discrete representation, learned to be similar to (non-parallel) headlines, is used as the summary.",
"Zhou and Rush (2019) optimize an objective function involving language fluency and contextual matching.",
"Their language modeling scorer is trained on headlines of the Gigaword training set; their contextual matching scorer is based on ELMo embeddings (Peters et al., 2018) trained with the Billion Word corpus (Chelba et al., 2013).",
"Their summary length is controlled by a soft length penalty during beam search.",
"Fevry and Phang (2018) 7 learn a denoising au-toencoder (Vincent et al., 2008) to reconstruct source sentences of the Gigaword training set.",
"Summary length is set to 50% of the input length and is controlled by length embeddings in the decoder.",
"Baziotis et al. (2019) 8 propose SEQ 3 that uses cycle-consistency to reconstruct source sentences from the Gigaword training set.",
"The length is also set to 50% of the input length, controlled by length embeddings in the intermediate decoder.",
"For the DUC2004 dataset, TOPIARY (Zajic et al., 2004) is the winning system in the competition.",
"They shorten the sentence by rule-based syntax-tree trimming (Dorr et al., 2003), but enhance the resulting summary with topics that are learned on 6 Generated summaries are obtained via E-Mail correspondence.",
"Scores differ because of evaluation setup.",
"7 Retrained with official code ( https://github.com/ zphang/usc_dae ) because the authors use a private test set.",
"8 Retrained with official code ( https://github.com/ cbaziotis/seq3 ), because of different test data.",
"The authors remove 54 noisy instances.",
"Our replication thus achieves slightly lower scores than theirs.",
"BOTTLESUMEX (West et al., 2019) uses the information bottleneck principle to predict the next sentence in an article.",
"Their method employs a pretrained small GPT-2 model (Radford et al., 2019).",
"Results for Headline Generation.",
"We first compare with Lead-N-8 (Group A, Table 1).",
"This is a standard baseline in previous work, because the average reference summary contains eight words.",
"Unfortunately, none of the previous papers consider output length during evaluation, making comparisons between their (longer) output summaries and the Lead-N-8 baseline unfair, as discussed in Section 4.4.",
"Our approach, which explicitly controls summary length, considerably outperforms the Lead-N-8 baseline in a fair setting.",
"Next, we compare with state-of-the-art unsupervised methods, whose output summary has roughly 10 words on average (Group B).",
"In this case, we set our hard length constraint as 10 and include the Lead-N-10 baseline for comparison.",
"Trained on the title side only, our HC title 10 model outperforms these competing methods in all ROUGE F1 scores.",
"In particular, Zhou and Rush (2019) use the target side to train the language model, plus the Billion Word Corpus to pretrain embeddings used in the contextual matching scorer.",
"With the same extra corpus to pretrain our sent2vec embeddings, our HC title+billion 10 variant achieves even better performance, outperforming Zhou and Rush (2019) by 2.32 ROUGE -1 and 1.41 ROUGE-L points.",
"The Billion Word Corpus, however, includes complete articles, which implicitly yields unaligned parallel data.",
"This could be inappropriate for an unsupervised method.",
"Thus, we further train sent2vec embeddings on the Twitter corpus by Pagliardini et al. (2018).",
"The HC title+twitter 10 also performs better than HC title 10 and other competitors.",
"In Group C, we compare with the models whose summaries have an average length of 50% of the input sentence.",
"We set our desired target length to 50% as well, and include the Lead-P-50 baseline.",
"Previous studies report a performance improvement over the Lead-N-8 baseline, but in fact, Table 1 shows that they do not outperform the appropriate Lead baseline Lead-P-50.",
"Our model is the only unsupervised summarization system that outperforms the Lead-P-50 baseline on this dataset, even though it is trained solely on the article side.",
"It is noted that our models trained on the title side ( HC title ) consistently outperform those trained on the article side ( HC article ).",
"This is not surprising because the former can generate headlines from the learned target distribution.",
"This shows the importance of learning a summary language model even if we do not have supervision of parallel source-target data.",
"Results for DUC2004.",
"Table 2 shows the results on the DUC2004 data.",
"As this dataset is for test only, we directly transfer the models HC article and HC title from the headline generation corpus with the same hyperparameters (except for length).",
"As shown in the table, we outperform all previous methods and the Lead-C-75 baseline.",
"The results are consistent with Table 1, showing the generalizability of our approach.",
"Human Evaluation.",
"We conduct human evaluation via pairwise comparison of system outputs, in the same vein as (West et al., 2019).",
"The annotator sees the source sentence along with the headline generated by our system and a competing method, presented in random order.",
"The annotator is asked to compare the fidelity and fluency of the two systems, choosing among the three options",
"(i) the first headline is better",
"(ii) the second headline is better, and",
"(iii) both headlines are equally good/bad.",
"This task is repeated for 100 instances with 5 annotators each.",
"The final label is selected by majority voting.",
"The inter-annotator agreement (Krippendorff's alpha) is 0.25 when our model is compared with Wang and Lee (2018) and 0.17 with Zhou and Rush (2019).",
"We report the aggregated score of our system in Table 3. For each sample, we count 1 point if our model wins, 0 points if it ties, -1 point if it loses.",
"The points are normalized by the number of samples.",
"The results show an advantage of our model over Wang and Lee (2018), especially in fluency.",
"Our model is also on par with Zhou and Rush (2019).",
"Note again that we achieve this with fewer data.",
"In this section, we conduct an in-depth analysis of our model, based on HC title 10 for headline generation.",
"Search Objective.",
"Table 4 provides an ablation study on our objective function.",
"It shows that both language fluency and semantic similarity play a Models Score (#wins/#ties/#loses) Fidelity Fluency HC vs. WL +0.18 (44/30/26) +0.30 (45/40/15) HC vs. ZR +0.05 (35/35/30) -0.03 (24/49/27) Table 3: Human evaluation in a pairwise comparison setting on 100 headline generation instances.",
"role in measuring the quality of a summary.",
"The bi-directional language model is also slightly better than a uni-directional language model.",
"Search Algorithm.",
"In Figure 3, we compare our FCHC with the theoretical optimum on short sentences where exhaustive search is tractable.",
"For only 3% of the instances with source sentence length between 25 and 30 words, our FCHC algorithm does not find the global optimum.",
"In 21% of those cases, the better objective score leads to a higher ROUGE-L score.",
"This shows that FCHC with restarts is a powerful enough search algorithm for word extraction-based sentence summarization.",
"Positional Bias.",
"We analyze the positional bias of each algorithm by plotting the normalized frequency of extracted words within four different areas of the source sentence.",
"As shown in Figure 4, the extraction positions of words in the reference headlines are slightly skewed towards the beginning of the source sentence.",
"Our hill-climbing algorithm performs distributed edits over the sentence, which is reflected in the flatter graph across the source sentence areas.",
"By contrast, beam search (Zhou and Rush, 2019) is more biased towards the first quarter of the source sentence.",
"Cycle consistency models (Wang and Lee, 2018; Baziotis et al., 2019) show a strong bias towards the first half of the source sentence.",
"We suspect that the reconstruction decoder is easily satisfied with the beginning of the source sentence as the discrete latent variable, 10 15 20 25 30 source sentence length 30 20 10 0 10 20 d i ff e r e n c e i n R o u g e -L s c o r e Rouge-L 0.0 0.2 0.4 0.6 0.8 1.0 d i ff e r e n c e i n o b j e c t i v e s c o r e objective score Figure 3: Orange crosses show the objective score optimized by exhaustive search minus the objective score optimized by FCHC.",
"Case Study.",
"We show example summaries generated by our system in Figure 5.",
"We see that the HC title models indeed learn the style of headlines, known as headlinese .",
"As shown, HC title often uses simple tense and drops articles (e.g., a and the).",
"The summaries generated by HC article tend to waste word slots by including an uninformative determiner.",
"It is also seen that we can control the length in an explicit way.",
"Comparing HC title with desired lengths of 8 and 10, we see that the additional two words are used to include more information, such as the day of the meeting in Example 2 or the gender of the injured person in Example 3. 1. Input: a german registered container ship ran aground at the entrance to the french port of le havre early tuesday , but authorities said there were no casualties .",
"Reference : container ship runs aground in french port HC article 10 : a container ship ran aground but there were no casualties HC title 10 : container ship ran aground at french port but no casualties HC title 8 : ship ran aground at french port no casualties 2. Input: fidel castro , cuba's president of the council of state , met with a chinese delegation here tuesday .",
"Reference: castro meets chinese official HC article 10 : fidel castro cuba 's president met with a chinese delegation HC title 10 : fidel castro cuba 's president met with chinese delegation tuesday HC title 8 : fidel castro 's president met with chinese delegation 3. Input: two grenades exploded near a national police station monday , slightly injuring one woman , news reports said .",
"Reference: two grenades explode near spanish police station HC article 10 : two grenades exploded near a police station injuring one woman HC title 10 : two grenades exploded near a police station injuring one woman HC title 8 : two grenades exploded near police station injuring one Table 5: Example summaries for headline generation test set.",
"We proposed a novel word-extraction model for sentence summarization that generates summaries by optimizing an objective function of language fluency and semantic similarity.",
"A hard length constraint is also imposed in our objective function.",
"In a controlled experiment, our model achieves better performance than strong baselines on headline generation and DUC2004 datasets.",
"We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), under grant Nos.",
"RGPIN-2019-04897, and RGPIN-2020-04465.",
"Lili Mou is also supported by AltaML, the Amii Fellow Program, and the Canadian CIFAR AI Chair Program.",
"This research was enabled in part by the support of Compute Canada ( www.computecanada.ca )."
] |
[
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"objective",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"objective",
"result",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"other",
"other",
"other",
"other"
] |
[
"Many NLP models operate over sequences of subword tokens produced by hand-crafted tokenization rules and heuristic subword induction algorithms.",
"A simple universal alternative is to represent every computerized text as a sequence of bytes via UTF-8, obviating the need for an embedding layer since there are fewer token types (256) than dimensions.",
"Surprisingly, replacing the ubiquitous embedding layer with one-hot representations of each byte does not hurt performance; experiments on byte-to-byte machine translation from English to 10 different languages show a consistent improvement in BLEU, rivaling character-level and even standard subword-level models.",
"A deeper investigation reveals that the combination of embeddingless models with decoder-input dropout amounts to token dropout, which benefits byte-to-byte models in particular.",
"1 1 Introduction Neural NLP models often operate on the subword level, which requires language-specific tokenizers (Koehn et al., 2007; Adler and Elhadad, 2006) and subword induction algorithms, such as BPE (Sen-nrich et al., 2016; Kudo, 2018).",
"Instead, working at the byte level by representing each character as a variable number of Unicode (UTF-8) bytes, does not require any form of preprocessing, allowing the model to read and predict every computerized text using a single vocabulary of 256 types.",
"While previous work found that byte-level models tend to underperform models based on subword tokens (Wang et al., 2019), byte-based models exhibit an interesting property: their vocabulary is smaller than the number of latent dimensions ( 256 < d ).",
"In this work, we demonstrate that this property allows us to remove the input and output embedding layers from byte-to-byte translation models, 1 Our code is publicly available at: https://github.",
"and in doing so, improve the models' performance consistently.",
"We replace the dense trainable embedding matrix with a fixed one-hot encoding of the vocabulary as the first and last layers of a standard transformer model.",
"Machine translation experiments on 10 language pairs show that byte-to-byte models without an embedding layer achieve higher BLEU scores than byte-based models with parameterized embeddings (+0.5 on average), thus closing the performance gap with subword and character models.",
"We observe this result consistently throughout a wide variety of target languages and writing systems.",
"The fact that removing parameters improves performance is counter-intuitive, especially given recent trends in machine learning that advocate for increasingly larger networks.",
"We further investigate why embeddingless models yield better results and find implicit token dropout (commonly referred to as word dropout) as the main source of that boost.",
"While prior work shows that randomly masking tokens from the decoder input can improve the performance of language generation models (Bowman et al., 2016), we find that this effect is amplified when operating at the byte level.",
"Overall, our results suggest that, even without additional parameters, byte-based models can compete and potentially outperform subword models, but that they may require alternative optimization techniques to achieve that goal.",
"Modern software typically represents text using Unicode strings (UTF-8), which allows one to encode virtually any writing system using a variable number of bytes per token; English characters are typically represented by a single byte, with other writing systems taking two (e.g. Arabic), three (e.g. Chinese), or four (e.g. emojis) bytes per character.",
"By treating each byte as a separate token, we can encode any natural language text using a single uni-Original Text .",
"versal vocabulary of only 256 token types.",
"Moreover, byte tokenization obviates the need for any heuristic preprocessing, such as splitting spaces, punctuation, and contractions.",
"Figure 1 illustrates subword, character, and byte tokenization.",
"Our model is based on the original transformer encoder-decoder (Vaswani et al., 2017) with one main difference: we eliminate the input and output token embedding layers.",
"These layers typically use a common parameter matrix E R | V | d that contains a d -dimensional embedding vector for each source and target vocabulary item in V .",
"2 Instead, we use a fixed one-hot representation of our byte vocabulary.",
"For instance, the character R could be represented as a vector with 1 at dimension 82 and 0 elsewhere.",
"Since it is standard practice to use representations of more than 256 dimensions, every possible byte can be represented by such one-hot vectors.",
"To predict the next token for a decoder input of n tokens, we take the output of the last transformer decoder layer, Y R n d , and apply a softmax across each vector's dimensions.",
"Formal expressions of the input and output of our model are detailed in Figure 2.",
"Omitting the embedding layer reduces the number of parameters by a factor of O ( | V | d ) .",
"3 We do add a total of 3 parameters to scale the encoder and decoder's (one-hot) inputs and the decoder's output (before the softmax).",
"We initialize all three with d , akin to the constant scaling factor typically applied to the input embedding layer in transformers.",
"Despite the reduction in model size, memory 2 One could argue that the first layer of each transformer stack (the key, query, and value matrices) qualify as some form of multi-head multi-purpose embedding layer, where each token type is effectively represented by 3 h different vectors ( h being the number of attention heads) in the encoder and 3 h additional vectors in the decoder.",
"This is very different from the standard notion of embeddings, where each token type has a universal representation that can be shared across the encoder input, decoder input, and decoder output.",
"3 For subword tokenization, this accounts for a significant portion of the parameter budget, but for byte-based models the added parameter cost is negligible.",
"consumption increases when working on longer sequences, since the space complexity of transformers is O ( n 2 + n d ) .",
"In our case, d (512) is typically larger than n (see Table 1), entailing an increase in memory consumption that is roughly linear in the sequence length n , and a similar decrease in processing speed when compared to character and subword models.",
"In addition to replacing the embedding layers, we also remove the dropout layers on the encoder input and decoder output, since zeroing out entries of one-hot vectors is equivalent to randomly masking out input tokens or deleting significant parts of the model's predicted distribution.",
"The dropout on the decoder input (prefix of the target fed with teacher forcing) remains intact at this point and is applied throughout our main experiments.",
"Further analysis shows that decoder input dropout is in fact a significant source of performance gains, which we further investigate in Section 6.",
"We train byte-tokenized embeddingless models for machine translation and compare them to standard byte, character, and subword-based models on a diverse set of languages.",
"We adopt a standard experimental setup that was designed and tuned for the subword baseline and limits our hyperparame-ter tuning to dropout probabilities.",
"et al., 2014), selecting 10 additional languages with varying characteristics 5 (see Table 1).",
"For each one, we train translation models from English to the target language (the original direction of translation ), and also in the opposite direction for completeness.",
"We clean the training data for every language pair by first removing sentences longer than 800 bytes, and then the sentences with the largest byte-length ratio between source and target such that we remove a total of 5% of the training examples.",
"Baselines In addition to the byte-based embeddingless transformer, we train standard transformer encoder-decoder models as baselines, each one using a different tokenization scheme: subword, character, and byte.",
"For subword tokenization, we apply the Moses tokenizer (Koehn et al., 2007) followed by BPE (Sennrich et al., 2016).",
"Both character and byte tokenizations apply no additional preprocessing at all and include whitespaces as valid tokens.",
"Hyperparameters The code for our model and baselines is based on Fairseq (Ott et al., 2019) implementation of the transformer encoder-decoder model.",
"During preprocessing we use 10,000 merging steps when building the BPE vocabulary for every language pair.",
"The vocabularies and embeddings are always shared among source and target languages.",
"In every transformer we use 6 encoder and decoder layers, 4 attention heads, a hidden dimension of 512, and a feed-forward dimension of 1024.",
"We optimize with Adam (Kingma and Ba, 2014), using the inverse square root learning rate scheduler with 4000 warmup steps and a peak learn-5 While in this work we prioritized language and writing system diversity, there is room to test embedingless models on larger datasets in future work.",
"ing rate of 5 10 4 , label smoothing of 0.1, and weight decay of 1 10 4 .",
"We train each model for 50k steps and average the top 5 checkpoints according to the validation loss.",
"We tune dropout (0.2 or 0.3) on the validation set.",
"We set the batch size according to a maximum of 64,000 bytes per batch, which controls for the number of batches per epoch across different tokenization methods.",
"Evaluation We evaluate our models using Sacre-BLEU, case-sensitive, with the 13a tokenizer for all languages except Chinese (ZH tokenizer) and Japanese (MeCab tokenizer).",
"We use the raw text as the reference for all of our experiments, instead of using the default tokenized-detokenized version, which normalizes the text and gives an artificial advantage to text processed with Moses.",
"Table 2 shows our experiments' results.",
"Every row describes the test BLEU scores of our model and the three baselines trained on a different language pair.",
"We discuss the implications of these below.",
"Are embeddings essential?",
"The results show that it is indeed possible to train embeddingless machine translation models that perform competitively.",
"The performance gaps between models with different tokenization schemes are relatively small.",
"Except for Vietnamese, the difference between the embeddingless model and the best embedding-based model is always under 1 BLEU.",
"In the most controlled setting, where we compare byte-based models with and without learnable embeddings, models without embeddings consistently achieve higher BLEU scores in 19 of 20 cases (and an equal score for ru-en), with a boost of about 0.5 BLEU on average.",
"When compared to models based on character embeddings, the embeddingless byte-to-byte approach yields higher BLEU scores in 17 out of 20 cases, though the average difference is quite small in practice (0.3 BLEU).",
"Is subword tokenization superior to bytes or characters?",
"Previous work in machine translation shows that subword models consistently outperform character or byte-based models (Gupta et al., 2019; Wang et al., 2019; Gao et al., 2020).",
"However, our results indicate that this is not necessarily the case.",
"When translating from English to a foreign language, the original direction of the IWSLT dataset, embeddingless byte-to-byte models achieve performance that is equal or better than subword embedding models' in 8 out of 10 cases.",
"We observe a different trend when translating into English, where subword models surpass other models for every source language; the fact that Moses is a particularly good tokenizer for English and less so for other languages is perhaps related to this phenomenon.",
"Whereas prior work proposed closing the performance gap by adding layers to the basic architecture, under the assumption that character-based models lack capacity or expressiveness, our results show that actually removing a component from the model can improve performance under certain conditions.",
"It is possible that character and byte-based transformer models encounter an optimization issue rather than one of capacity or expressivity.",
"Why does removing the embedding matrix improve the performance of byte-based models?",
"As mentioned in Section 3, the embeddingless models do not use dropout on the encoder input and decoder output, but do apply dropout on the decoder input while training.",
"Since the embeddingless decoder's inputs are fixed one-hot vectors, using dropout implicitly drops out complete tokens.",
"In prior work, token dropout (word dropout) has been shown to have a consistently positive effect (Bowman et al., 2016).",
"We, therefore, rerun our experiments while Embedding-based Models Embed-less Subword Char Byte Byte en xx +0.33 +0.53 +0.42 +0.62 xx en +0.69 +0.67 +0.92 +0.83 Table 3: The validation set performance gain of token dropout (0.2), averaged across languages and model dropout values.",
"controlling for token dropout ( p = 0 . 2 ) to determine its effect on our results.",
"Table 3 shows that decoder-side token dropout improves the performance of all models, with a larger impact on byte-based models and embeddingless models in particular.",
"This effect is largely consistent, with only 7 out of 160 cases in which token dropout decreased performance on the validation set.",
"We suspect that dropping out target tokens softens the effects of exposure bias by injecting noise into the ground-truth prefix.",
"Given the benefits of token dropout on the baseline models, we re-evaluate the results from Section 5, while allowing for token dropout as a potential hyperparameter.",
"Table 4 shows that, when translating from the original English text to a foreign language, the different models perform roughly on par, with no single tokenization method dominating the others.",
"Furthermore, byte-level models with and without embeddings achieve almost identical results.",
"In contrast, when translating in the opposite direction, subword models consistently outperform the other methods with an average gap of 0.76 BLEU from the next best model.",
"Also, removing the embeddings from byte-based models decreases performance by an average of 0.45 BLEU when generating English.",
"This discrepancy might stem from artifacts of reverse translation, or perhaps from the English-centric nature of subword tokenization, which is based on Moses preprocessing and BPE.",
"Overall, these results suggest that despite the greater number of parameters in subword models, character and byte models can perform competitively, but may require slightly different optimization techniques to do so.",
"There is prior work on replacing language-specific tokenizers with more universal tokenization approaches.",
"Schtze (2017) shows how character n-gram embeddings can be effectively trained by segmenting text using a stochastic process.",
"Sen-Benchmark Embedding-based Models Embed-less Src Tgt Subword Char Byte Byte en zh 20.3 21.2 20.8 21.0 en es 36.7 36.8 36.8 36.8 en ar 12.7 13.1 12.7 12.9 en ru 18.5 18.2 17.7 18.2 en de 29.8 29.3 29.2 29.1 en ja 12.4 13.1 12.5 13.1 en tr 13.9 14.3 14.4 14.1 en vi 30.0 29.1 28.9 28.7 en fa 11.5 12.2 12.1 12.1 en he 26.8 27.1 27.1 26.7 zh en 17.3 17.2 16.3 16.1 es en 40.0 39.1 39.1 38.8 ar en 32.0 31.1 31.2 30.8 ru en 22.9 22.4 22.5 22.0 de en 35.6 34.9 35.0 34.5 ja en 13.5 12.8 12.3 11.2 tr en 24.3 23.3 23.7 23.3 vi en 27.4 25.9 25.9 25.3 fa en 24.5 23.2 23.3 22.6 he en 38.2 37.8 37.4 37.4 Table 4: Test BLEU scores of the baseline and embeddingless models on the IWSLT dataset, when decoder-side token dropout is considered as a potential hyperpa-rameter setting.",
"tencePiece (Kudo and Richardson, 2018) tokenizes raw Unicode strings into subwords using BPE (Sen-nrich et al., 2016) or unigram LM (Kudo, 2018).",
"Byte BPE (Wang et al., 2019) extends Senten-cePiece to operate at the byte level.",
"While this approach is indeed more language-agnostic than heuristic tokenizers, it does suffer from performance degradation when no pre-tokenization (e.g. splitting by whitespace) is applied.",
"6 Moreover, the assumption that subword units must be contiguous segments does not hold for languages with non-concatenative morphology such as Arabic and Hebrew.",
"Character and byte-based language models (Lee et al., 2017; Al-Rfou et al., 2019) treat the raw text as a sequence of tokens (characters or bytes) and do not require any form of preprocessing or word tokenization, and Choe et al. (2019) even demonstrate that byte-based language models can perform comparably to word-based language models on the billion-word benchmark (Chelba et al., 2013).",
"Although earlier results on LSTM-based machine translation models show that character tokenization can outperform subword tokenization (Cherry et al., 2018), recent literature shows that 6 https://github.com/google/ sentencepiece/blob/master/doc/experiments.md the same does not hold for transformers (Gupta et al., 2019; Wang et al., 2019; Gao et al., 2020).",
"To narrow the gap, recent work suggests using deeper models (Gupta et al., 2019) or specialized architectures (Gao et al., 2020).",
"Our work deviates from this trend by removing layers to improve the model.",
"This observation contests the leading hypothesis in existing literature that the performance gap results from reduced model capacity and suggests that the problem may be one of optimization.",
"This work challenges two key assumptions in neural machine translation models: the necessity of embedding layers, and the superiority of subword tokenization.",
"Experiments on 10 different languages show that, despite their ubiquitous usage, competitive models can be trained without any embeddings by treating text as a sequence of bytes.",
"Our investigation suggests that different tokenization methods may require revisiting the standard optimization techniques used with transformers, which are primarily geared towards sequences of English subwords.",
"This work was supported in part by Len Blavatnik and the Blavatnik Family foundation, the Alon Scholarship, and the Tel Aviv University Data Science Center."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"objective",
"result",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"method",
"other"
] |
[
"In this paper, we present a method for adversarial decomposition of text representation.",
"This method can be used to decompose a representation of an input sentence into several independent vectors, each of them responsible for a specific aspect of the input sentence.",
"We evaluate the proposed method on two case studies: the conversion between different social registers and diachronic language change.",
"We show that the proposed method is capable of fine-grained controlled change of these aspects of the input sentence.",
"It is also learning a continuous (rather than categorical) representation of the style of the sentence, which is more linguistically realistic.",
"The model uses adversarial-motivational training and includes a special motivational loss, which acts opposite to the discriminator and encourages a better decomposition.",
"Furthermore, we evaluate the obtained meaning embeddings on a downstream task of paraphrase detection and show that they significantly outperform the embeddings of a regular autoencoder.",
"Despite the recent successes in using neural models for representation learning for natural language text, learning a meaningful representation of input sentences remains an open research problem.",
"A variety of approaches, from sequence-to-sequence models that followed the work of Sutskever et al. (2014) to the more recent proposals (Arora et al., 2017; Nangia et al., 2017; Conneau et al., 2017; Logeswaran and Lee, 2018; Subramanian et al., 2018; Cer et al., 2018) share one common drawback.",
"Namely, all of them encode the input sentence into just one single vector of a fixed size.",
"One way to bypass the limitations of a single vector representation is to use an attention mechanism (Bahdanau et al., 2014; Vaswani et al., 2017).",
"We propose to approach this problem differently and design a method for adversarial decomposition of the learned input representation into multiple components.",
"Our method encodes the input sentence into several vectors, where each vector is responsible for a specific aspect of the sentence.",
"In terms of learning different separable components of input representation, our work most closely relates to the style transfer work, which has been applied to a variety of different aspects of language, from diachronic language differences (Xu et al., 2012) to authors' personalities (Lipton et al., 2015) and even sentiment (Hu et al., 2017; Fu et al., 2018).",
"The style transfer work effectively relies on the more classical distinction between meaning and form (de Saussure, 1959), which accounts for the fact that multiple surface realizations are possible for the same meaning.",
"For simplicity, we will use this terminology throughout the rest of the paper.",
"Consider encoding an input sentence into a meaning vector and a form vector.",
"This enables a controllable change of meaning or form by a simple change applied to these vectors.",
"For example, we can encode two sentences written in two different styles, then swap the form vectors while leaving the meaning vectors intact.",
"We can then generate new unique sentences with the original meaning, but written in a different style.",
"We propose a novel model for this type of decomposition based on adversarial-motivational training, GAN architecture (Goodfellow et al., 2014) and adversarial autoencoders (Makhzani et al., 2015).",
"In addition to the adversarial loss, we use a special motivator (Albanie et al., 2017), which, in contrast to the discriminator, is used to provide a motivational loss to encourage better decomposition of the meaning and the form.",
"All the code is available on GitHub 1 .",
"We evaluate the proposed methods for learning separate aspects of input representation in the following case studies:",
"1. Diachronic language change.",
"Specifically, we consider the Early Modern English (e.g. What would she have? ) and the contemporary English ( What does she want? ).",
"2. Social register (Halliday et al., 1968), i.e. subsets of language appropriate in a given context or characteristic of a certain group of speakers.",
"Social registers include formal vs informal language, the language used in different genres (e.g., fiction vs. newspapers vs. academic texts), different dialects, and literary idiostyles.",
"We experiment with the titles of scientific papers vs. newspaper articles.",
"As mentioned above, the most relevant previous work comes from research on style transfer 2 .",
"It can be divided into two groups: 1. Approaches that aim to generate text in a given form.",
"For example, the task may be to produce just any verse as long as it is in the style of the target poet.",
"2. Approaches that aim to induce a change in either the form or the meaning of an utterance.",
"For example, Good bye, Mr. Ander-son. can be transformed to Fare you well, good Master Anderson (Xu et al., 2012)).",
"An example of the first group is the work of Potash et al. (2015), who trained several separate networks on verses by different hip-hop artists.",
"An LSTM network successfully generated verses that were stylistically similar to the verses of the target artist (as measured by cosine distance on tf-idf vectors).",
"More complicated approaches use language models that are conditioned in some way.",
"For example, Lipton et al. (2015) produced product reviews with a target rating by passing the rating as an additional input at each timestep of an LSTM model.",
"Tang et al. (2016) generated reviews not only with a given rating but also for a specific product.",
"At each timestep a special context vector was provided as input, gated so as to enable the model to decide how much attention 2 The term style is not entirely appropriate here, but in NLP it is often used in work on any kind of form change while preserving meaning, from translation to changing sentiment polarity.",
"to pay to that vector and the current hidden state.",
"Li et al. (2016) used speaker vectors as an additional input to a conversational model, improving consistency of dialog responses.",
"Finally, Ficler and Goldberg (2017) performed an extensive evaluation of conditioned language models based on content (theme and sentiment) and style (pro-fessional, personal, length, descriptiveness).",
"Importantly, they showed that it is possible to control both content and style simultaneously.",
"Work from the second group can further be divided into two clusters by the nature of the training data: parallel aligned corpora, or non-aligned datasets.",
"The aligned corpora enable approaching the problem of form shift as a paraphrasing or machine translation problem.",
"Xu et al. (2012) used statistical and dictionary-based systems on a dataset of original plays by Shakespeare and their contemporary translations.",
"Carlson et al. (2017) trained an LSTM network on 33 versions of the Bible.",
"Jhamtani et al. (2017) used a Pointer Network (Vinyals et al., 2015), an architecture that was successfully applied to a wide variety of tasks (Merity et al., 2016; Gulcehre et al., 2016; Potash et al., 2017), to enable direct copying of the input tokens to the output.",
"All these works use BLEU (Papineni et al., 2002) as the main, or even the only evaluation measure.",
"This is only possible in cases where a parallel corpus is available.",
"Recently, new approaches that do not require a parallel corpora were developed in both computer vision (CV) (Zhu et al., 2017) and NLP.",
"Hu et al. (2017) succeeded in changing tense and sentiment of sentences with a two steps procedure based on a variational auto-encoder (VAE) (Kingma and Welling, 2013).",
"After training a VAE, a discriminator and a generator are trained in an alternate manner, where the discriminator tries to correctly classify the target sentence attributes.",
"A special loss component forces the hidden representation of the encoded sentence to not have any information about the target sentence attributes.",
"Mueller et al. (2017) used a VAE to produce a hidden representation of a sentence, and then modify it to match the desired form.",
"Unlike Hu et al. (2017), they do not separate the form and meaning embeddings.",
"Shen et al. (2017) applied a GAN to align the hidden representation of sentences from two corpora and forced them not to have any information about the form an via adversarial loss.",
"During the decoding, similarly to Lipton et al. (2015), special style vectors are passed to the decoder at every timestep to produce a sentence with the desired properties.",
"The model is trained using the Professor-Forcing algorithm (Lamb et al., 2016).",
"Kim et al. (2017) worked directly on hidden space vectors that are constrained with the same adversarial loss instead of outputs of the generator, and use two different generators for different styles.",
"Finally, Fu et al. (2018) generate sentences with the target properties using an adversarial loss, similarly to Shen et al. (2017) and Kim et al. (2017).",
"Comparison with previous work In contrast to the proposals of Xu et al. (2012), Carlson et al. (2017), Jhamtani et al. (2017), our solution does not require a parallel corpus.",
"Unlike the model by Shen et al. (2017), our model works directly on representations of sentences in the hidden space.",
"Most importantly, in contrast to the proposals by Mueller et al. (2017), Hu et al. (2017), Kim et al. (2017), Fu et al. (2018), our model produces a representation for both meaning and form and does not treat the form as a categorical (in the vast majority of works, binary) variable 3 .",
"Treating meaning and form not as bi-nary/categorical, but continuous variables is more consistent with the reality of language use, since there are different degrees of overlap between the language used by different registers or in different diachronic slices.",
"Indeed, language change is gradual, and the acceptability of expressions in a given register also forms a continuum, so one expects a substantial overlap between the grammar and vocabulary used, for example, on Twitter and by New York Times.",
"To the best of our knowledge, this is the first model that considers linguistic form in the task of text generation as a continuous variable.",
"A significant consequence of learning a continuous representation for form is that it allows the model to work with a large, and potentially infi-nite, number of forms.",
"Note that in this case the locations of areas of specific forms in the vector form space would reflect the similarity between these forms.",
"For example, the proposed model could be directly applied to the authorship attribution problem: each author would have their own area in the form space, their proximity should mir-3 Although the form was represented as dense vectors in previous work, it is still just a binary feature, as they use a single pre-defined vector for each form, with all sentences of the same form assigned the same form vector.",
"ror the similarity in writing style.",
"Preliminary experiments on this are reported in subsection 6.4.",
"Let us formulate the problem of decomposition of text representation on an example of controlled change of linguistic form and conversion of Shakespeare plays in the original Early Modern to contemporary English.",
"Let X a be a corpus of texts x ai X a in Early Modern English f a F , and X b be a corpus of texts x bi X b in modern English f b F .",
"We assume that the texts in both X a and X b have the same distribution of meaning m M .",
"The form f , however, is different and generated from a mixture of two distributions: f i = ai p ( f a ) + bi p ( f b ) where f a and f b are two different languages (Early Modern and contemporary English).",
"Intuitively, we say that a sample x i has the form f a if ai > bi , and it has the form f b if bi > ai .",
"The goal of dissociation meaning and form is to learn two encoders E m : X M and E f : X F for the meaning and form correspondingly, and the generator G : M , F X such that j { a, b } , k { a, b } : G ( E m ( x k ) , E f ( x j )) X j The form of a generated sample depends exclusively on the provided f j and can be in the same domain for two different m u and m v from two samples from different domains X a and X b .",
"Note that, in contrast to the previous proposals, the form f is not a categorical variable but a continuous vector.",
"This enables fine-grained controllable change of form: the original form f i is changed to reflect the form of the specific target sentence f j with its own unique a and b while preserving the original meaning m i .",
"An important caveat concerns the core assump-tion of the similar meaning distribution in the two corpora, which is also made in all other works reviewed in Section 2. It limits the possible use of this approach to cases where the distributions are in fact similar (i.e. comparable corpora are available; note that they do not have to be parallel).",
"It does not apply to many cases that could be analyzed in terms of meaning and form.",
"For example, books for children and scholarly papers are both registers, they have their own form (i.e. specific subsets of linguistic means and structure conventions) but there is little overlap in the content.",
"Inspired by Makhzani et al. (2015), Kim et al. (2017), and Albanie et al. (2017), we propose ADNet, a new model for adversarial decomposition of text representation (Figure 1).",
"Our solution is based on a widely used sequence-to-sequence framework (Sutskever et al., 2014) and consists of four main parts.",
"The encoder E encodes the input sequence x into two latent vectors m and f which capture the meaning and the form of the sentence correspondingly.",
"The generator G then takes these two vectors as the input and produces a reconstruction of the original input sequence x .",
"The encoder and generator by themselves will likely not achieve the dissociation of the meaning and form.",
"We encourage this behavior in a way similar to Generative Adversarial Networks (GANs) (Goodfellow et al., 2014), which had an overwhelming success the past few years as a way to enforce a specific distribution and characteristics on the output of a model.",
"Inspired by the work of Albanie et al. (2017) and the principle of carrot and stick (Safire, 1995), in contrast to the majority of work that promotes purely adversarial approach (Goodfel-low et al., 2014; Shen et al., 2017; Fu et al., 2018; Zhu et al., 2017), we propose two additional components, the discriminator D and the motivator M to force the model to learn the dissociation of the meaning and the form.",
"Similarly to a regular GAN model, the adversarial discriminator D tries to classify the form f based on the latent meaning vector m , and the encoder E is penalized to make this task as hard as possible.",
"Opposed to such vicious behaviour, the motivator M tries to classify the form based on the latent form vector f , as it should be done, and encourages the encoder E to make this task as simple as possible.",
"We could apply the adversarial approach here as well and force the distribution of the form vectors to fit a mixture of Gaussians (in this particular case, a mixture of two Guassians) with another discriminator, as it is done by Makhzani et al. (2015), but we opted for the dualistic path of two complimentary forces.",
"Both the encoder E and the generator G are neural networks.",
"Gated Recurrent Unit (GRU) (Chung et al., 2014) is used for E to encode the input sentence x into a hidden vector h = GRU ( x ) The vector h then passes through two different fully connected layers to produce the latent vectors of the form and the meaning of the input sentence: m = tanh( W m h + b m ) f = tanh( W f h + b f ) We use E to denote the parameters of the encoder E : W m , b m , W f , b f , and the parameters of the GRU unit.",
"The generator G is also modelled with a GRU unit.",
"The generator takes as input the meaning vector m and the form vector f , concatenates them, and passes trough a fully-connected layer to obtain a hidden vector z that represents both meaning and form of the original input sentence: z = tanh( W z [ m ; f ] + b m ) After that, we use a GRU unit to generate the output sentence as a probability distribution over the vocabulary tokens: p ( x ) = T (cid:89) t =1 p ( x t | z , x 1 , . . . , x t 1 ) We use G to denote the parameters of the generator G : W z , b m , and the parameters of the used GRU.",
"The encoder and generator are trained using the standard reconstruction loss: L rec ( E , G ) = E x X a [ log p ( x | x )] + E x X b [ log p ( x | x )] 4.2 Discriminator The representation of the meaning m produced by the encoder E should not contain any information about the form f .",
"We achieve this by using an adversarial approach.",
"First, we train a discriminator D , consisting of several fully connected layers with ELU activation function (Clevert et al., 2015) between them, to predict the form f of a sentence by its meaning vector: f D = D ( m ) where f is the score (logit) reflecting the probability of the sentence x to belong to one of the form domains.",
"Motivated by the Wasserstein GAN (Arjovsky et al., 2017), we use the following loss function instead of the standard cross-entropy: LD ( D ) = E x X a [ D ( E m ( x ))] E x X b [ D ( E m ( x ))] Thus, a successful discriminator will produce negative scores f for sentences from X a and positive scores for sentences from X b .",
"This discriminator is then used in an adversarial manner to provide a learning signal for the encoder and force dissociation of the meaning and form by maximizing LD : L adv ( E ) = adv LD where adv is a hyperparameter reflecting the strength of the adversarial loss.",
"Our experiments showed that the discriminator D and the adversarial loss L adv by themselves are sufficient to force the model to dissociate the form and the meaning.",
"However, in order to achieve a better dissociation, we propose to use a motivator M (Albanie et al., 2017) and the corresponding motivational loss.",
"Conceptually, this is the opposite of the adversarial loss, hence the name.",
"As the discriminator D , the motivator M learns to classify the form f of the input sentence.",
"However, its input is not the meaning vector but the form vector: f M = M ( f ) The motivator has the same architecture as the discriminator, and the same loss function.",
"While the adversarial loss forces the encoder E to produce a meaning vector m with no information about the form f , the motivational loss encourages E to encode this information in the form vector by minimizing LM : L motiv ( E ) = motiv LM 4.4 Training procedure The overall training procedure follows the methods for training GANs (Goodfellow et al., 2014; Arjovsky et al., 2017) and consists of two stages: training the discriminator D and the motivator M , and training the encoder E and the generator G .",
"In contrast to Arjovsky et al. (2017), we do not train the D and M more than the E and the G .",
"In our experiments we found that simple training in two stages is enough to achieve dissociation of the meaning and the form.",
"Encoder and generator are trained with the following loss function that combines reconstruction loss with the losses from the discriminator and the motivator: L total ( E , G ) = L rec + L adv + L motiv 5 Experimental setup 5.1 Evaluation Similarly to the evaluation of style transfer in CV (Isola et al., 2017), evaluation of this task is difficult.",
"We follow the approach of Isola et al. (2017); Shen et al. (2017) and recently proposed by Fu et al. (2018) methods of evaluation of transfer strength and content preservation.",
"The authors showed that the proposed automatic metrics correlate with human judgment to a large degree and can serve as a proxy.",
"Below we give an overview of these metrics.",
"Transfer Strength.",
"The goal of this metric is to capture whether the form has been changed successfully.",
"To do that, a classifier C is trained on the two corpora, X a and X b to recognize the linguistic form typical of each of them.",
"After that a sentence, for which the form/meaning has been changed, is passed to the classifier.",
"The overall accuracy reflects the degree of success of changing the form/meaning.",
"This approach is widely used in CV (Isola et al., 2017), and was applied in NLP as well (Shen et al., 2017).",
"In our experiments we used a GRU unit followed by four fully-connected layers with ELU activation functions between them as the classifier.",
"Content preservation Note that the transfer strength by itself does not capture the overall quality of a changed sentence.",
"A extremely overfitted model that produces the most characteristic sentence of one corpus all the time would have a high score according to this metric.",
"Thus, we need to measure how much of the meaning was preserved while changing the form.",
"To do that, Fu et al. (2018) proposed to use a cosine similarity based metric using pretrained word embeddings.",
"First, a sentence embedding is computed by concatenation of max, mean, and average pooling over the timesteps: v = [max( v 1 ,..., v T );min( v 1 ,..., v T );mean( v 1 ,..., v T )] Next, the cosine similarity score s i between the embedding v si of the original source sentence and the target sentence with the changed form v ti is computed, and the scores across the dataset are averaged to obtain the total score s .",
"The metrics described above treat the form as a categorical (in most cases, even binary) variable.",
"This was not a problem in previous work since the change of form could be done by simply inverting the form vector.",
"Since we treat the form as a continuous variable, we cannot just use the proposed metrics directly.",
"To enable a fair comparison, we propose the following procedure.",
"For each sentence s as in the test set from the corpus X a we sample k = 10 random sentences from the corpus X b of the opposite form.",
"After that, we encode them into the meaning m i and form f i vectors, and average the form vectors to obtain a single form vector f avg = 1 k k (cid:88) i =1 f i We then generate a new sentence with its original meaning vector m s and the resulting form vector f avg , and use it for evaluation.",
"This process enables a fair comparison with the previous approaches that treat form as a binary variable.",
"We evaluated the proposed method on several datasets that reflect different changes of meaning and form.",
"Changing form: register.",
"This experiment is conducted with a dataset of titles of scientific papers and news articles published by Fu et al. (2018).",
"This dataset (referred to as Headlines) contains titles of scientific articles crawled from online digital libraries, such as ACM Digital Li-brary and arXiv.",
"The titles of the news articles are taken from the News Aggregator Data Set from UCI Machine Learning Repository (Dheeru and Karra Taniskidou, 2017) Changing form: language diachrony.",
"Diachronic language change is explored with the dataset composed by Xu et al. (2012).",
"It includes the texts of 17 plays by William Shakespeare in the original Early Modern English, and their translations into contemporary English.",
"We randomly permuted all sentences from all plays and sampled the training, validation, and test sets.",
"Note that this dataset is much smaller than the Headlines dataset.",
"The most recent and similar to our work is the model proposed by Fu et al. (2018), in particular the style-embedding model.",
"We implemented this model to provide a baseline for comparison.",
"The classifier used in the transfer strength metric achieves high accuracy (0.832 and 0.99 for the Shakespeare and Headlines datasets correspond-ingly).",
"These results concur with the results of Shen et al. (2017) and Fu et al. (2018), and show that the two corpora are significantly different.",
"Following Fu et al. (2018), we show the result of different configuration of the size of the form and meaning vectors on Figure 2. Namely, we report combinations of 64 and 256-dimensional vectors.",
"Note that the sizes of the form vector are important.",
"If the form vector is larger, the transfer strength is gre,ta erbut the content preservation is lessened.",
"This is consistent with Fu et al. (2018), where they observed a similar behaviour.",
"It is clear that the proposed method achieves significantly better transfer strength than the previously proposed model.",
"It also has a lower content preservation score, which means that it repeats fewer exact words from the source sentence.",
"Note that a low transfer strength and very high (~0.9) content preservation score means that the model was not able to successfully learn to transfer the form and the target sentence is almost identical to the source sentence.",
"The Shakespeare dataset is the hardest for the model in terms of transfer strength, probably because it is the smallest dataset, but the proposed method performs consistently well in transfer of both form and meaning and, in contrast to the baseline.",
"storing the meaning in the form vector, except from the size limitations, which would ensure that storing non-form-related information elsewhere would improve model performance.",
"Figure 2 shows that as the meaning vectors get smaller, and the form vectors larger, the higher is transfer strength and the lower is content preservation.",
"If the model would store meaning in the form vector, then the reduction in size of the meaning vector would not have negative impact on content preservation.",
"This shows that the model tends to not store the meaning in the form vector.",
"Nevertheless, to force this behaviour we experimented with adding one more discriminator D f .",
"This discriminator works on the form vector f in the same manner as the discriminator D works on the meaning vector m .",
"Namely, during the training it tries to predict the meaning of a sentence from its form vector: u = D f ( f ) .",
"Note that the vectors u and m are completely different.",
"m is the meaning of a sentence for the purpose of the model, whereas u are pre-defined meaning of a sentence for training of the discriminator.",
"In the simplest case, u can be a multi-hot representation of the input sentence, with the exception of pre-defined style words, which would always have 0 in the corresponding dimension, as it is done by John et al. (2018).",
"We, however, take a different approach.",
"First, we find the form dimensions in the used word embeddings by taking the argmax of the difference between averaged word embeddings of the sentences from two forms (i.e. Early Modern English and contemporary English).",
"Next, for a given sentence we discard the topk tokens with the maximum and minimum values in those dimensions.",
"Finally, we average word embeddings of the remaining tokens in the sentence to get the vector u .",
"Fluency of generated sentences Note that there is no guarantee that the generated sentences would be coherent after switching the form vector.",
"In order to estimate how this switch affects the flu-ency of generated sentences, we trained a language model on the Shakespeare dataset and calculated the perplexity of the generated sentences using the original form vector and the average of form vectors of k random sentences from the opposite form (see subsubsection 5.1.1).",
"While the perplexity of such sentences does go up, this change is not big (6.89 vs 9.74).",
"To investigate the impact of the motivator, we visualized form and meaning embeddings of 1000 random samples from the Headlines dataset using t-SNE algorithm (Van Der Maaten, 2014) with the Multicore-TSNE library (Ulyanov, 2016).",
"The result is presented in Figure 3. There are three important observations.",
"First, there is no clear separation in the meaning embeddings, which means that any accurate form transfer is due to the form embeddings, and the dissociation of form and meaning was successful.",
"Second, even without the motivator the model is able to produce the form embeddings that are clustered into two groups.",
"Recall from section 4 that without the motivational loss there are no forces that influence the form embeddings, but nevertheless the model learns to separate them.",
"However, the separation effect is much more pronounced in the presence of motivator.",
"This explains why the motivator consistently improved transfer strength of ADNet, as shown in Figure 2. 6.2 Qualitative evaluation Table 1 and Table 2 show several examples of successful form/meaning transfer achieved by ADNet.",
"Table 1 presents the results of an experiment that to some extent replicates the approach taken by the authors who treat linguistic form as a binary variable (Shen et al., 2017; Fu et al., 2018).",
"The sentences the original Shakespeare plays were averaged to get the typical Early Modern English form vector.",
"This averaged vector was used to decode a sentence from the modern English translation back into the original.",
"The same was done in the opposite direction.",
"Table 2 illustrates the possibilities of ADNet on fine-grained transfer applied to the change of register.",
"We encoded two sentences in different registers from the Headlines dataset to produce form and meaning embeddings, and then decoded the first sentence with the meaning embedding of the second, and vice versa.",
"Table 2 shows that the model correctly captures the meaning of sentences and decodes them using the form of the source sentences, preserving specific words and the structure of the source sentence.",
"Note that in the first example, the model decided to put the colon after the crisis management, as the source form sentence has this syntactic structure (A review:).",
"This is not possible in the previously proposed models, as they treat form as just a binary variable.",
"We conducted some experiments to test the as-sumption that the derived meaning embeddings should improve performance on downstream tasks",
"BoW Seq2Seq InferSent Fu et al. (2018) ADNet 80.82 74.68 83.17 78.88 81.38 Table 3: F1 scores on the task of paraphrase detection using the SentEval toolkit (Conneau et al., 2017) that require understanding of the meaning of the sentences regardless of their form.",
"We evaluated embeddings produced by the ADNet, trained in the Headlines dataset, on the paraphrase detection task.",
"We used the SentEval toolkit (Conneau et al., 2017) and the Microsoft Research Paraphrase Corpus (Dolan et al., 2004).",
"The F1 scores on this task for different models are presented in Table 3. Note that all models, except InferSent, are unsupervised.",
"The InferSent model was trained on a big SNLI dataset, consisting of more than 500,000 manually annotated pairs.",
"ADNet achieves the the highest score among the unsupervised systems and far outperforms the regular sequence-to-sequence autoencoder.",
"In order to go beyond just two different forms, we experimented with training the model on a set of literature novels from six different authors from Project Gutenberg 4 written in two different time periods.",
"A t-SNE visualization of the resulting meaning and form embeddings is presented in Figure 4. Note how form embeddings create a six-pointed star.",
"After further examination, we observed that common phrases (for example, Good morning or Hello!) were embedded into the center of the star, whereas the most specific sentences from a given author were placed into the rays of the star.",
"In particular, some sentences included character names, thus further research is required to mitigate this problem.",
"Stamatatos (2017) 4 http://www.gutenberg.org/ provides a promising direction for solving this.",
"We presented ADNet, a new model that performs adversarial decomposition of text representation.",
"In contrast to previous work, it does not require a parallel training corpus and works directly on hidden representations of sentences.",
"Most importantly, it does not treat the form as a binary variable (as done in most previously proposed mod-els), enabling a fine-grained change of the form of sentences or specific aspects of meaning.",
"We evaluate ADNet on two tasks: the shift of language register and diachronic language change.",
"Our solution achieves superior results, and t-SNE visualizations of the learned meaning and form embeddings illustrate that the proposed motivational loss leads to significantly better separation of the form embeddings."
] |
[
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"objective",
"abstain",
"abstain",
"method",
"objective"
] |
[
"Generating texts which express complex ideas spanning multiple sentences requires a structured representation of their content (docu-ment plan), but these representations are prohibitively expensive to manually produce.",
"In this work, we address the problem of generating coherent multi-sentence texts from the output of an information extraction system, and in particular a knowledge graph.",
"Graphical knowledge representations are ubiquitous in computing, but pose a significant challenge for text generation techniques due to their non-hierarchical nature, collapsing of long-distance dependencies, and structural variety.",
"We introduce a novel graph transforming encoder which can leverage the relational structure of such knowledge graphs without imposing linearization or hierarchical constraints.",
"Incorporated into an encoder-decoder setup, we provide an end-to-end trainable system for graph-to-text generation that we apply to the domain of scientific text.",
"Automatic and human evaluations show that our technique produces more informative texts which exhibit better document structure than competitive encoder-decoder methods.",
"1 1 Introduction Increases in computing power and model capacity have made it possible to generate mostly-grammatical sentence-length strings of natural language text.",
"However, generating several sentences related to a topic and which display overall coherence and discourse-relatedness is an open challenge.",
"The difficulties are compounded in domains of interest such as scientific writing.",
"Here the variety of possible topics is great (e.g. topics as diverse as driving, writing poetry, and picking stocks are all referenced in one subfield of 1 Data and code available at https://github.com/ rikdz/GraphWriter Our Model outperforms HMM models by 15% on this data . used-for comparison We present a CRF Model for Event Detection . CRF Model Event Detection SemEval2011Task 11 u s e d f o r We evaluate this model on SemEval2010 Task 11 evaluatefor evaluate-for e v a l ua t e f o r e v a l u a t e f o r HMM Models c o m pa r i s o n Title : Event Detection with Conditional Random Fields Abstract Graph Figure 1: A scientific text showing the annotations of an information extraction system and the corresponding graphical representation. Coreference annotations shown in color. Our model learns to generate texts from automatically extracted knowledge using a graph encoder decoder setup. one scientific discipline).",
"Additionally, there are strong constraints on document structure, as scientific communication requires carefully ordered explanations of processes and phenomena.",
"Many researchers have sought to address these issues by working with structured inputs.",
"Data-to-text generation models (Konstas and Lapata, 2013; Lebret et al., 2016; Wiseman et al., 2017; Pudup-pully et al., 2019) condition text generation on table-structured inputs.",
"Tabular input representations provide more guidance for producing longer texts, but are only available for limited domains as they are assembled at great expense by manual annotation processes.",
"The current work explores the possibility of using information extraction (IE) systems to automatically provide context for generating longer texts (Figure 1).",
"Robust IE systems are available and have support over a large variety of textual domains, and often provide rich annotations of relationships that extend beyond the scope of a single sentence.",
"But due to their automatic na-ture, they also introduce challenges for generation such as erroneous annotations, structural variety, and significant abstraction of surface textual features (such as grammatical relations or predicate-argument structure).",
"To effect our study, we use a collection of abstracts from a corpus of scientific articles (Ammar et al., 2018).",
"We extract entity, coreference, and relation annotations for each abstract with a state-of-the-art information extraction system (Luan et al., 2018), and represent the annotations as a knowledge graph which collapses co-referential entities.",
"An example of a text and graph are shown in Figure 1. We use these graph/text pairs to train a novel attention-based encoder-decoder model for knowledge-graph-to-text generation.",
"Our model, GraphWriter, extends the successful Transformer for text encoding (Vaswani et al., 2017) to graph-structured inputs, building on the recent Graph Attention Network architecture (Velickovic et al., 2018).",
"The result is a powerful, general model for graph encoding which can incorporate global structural information when contextualizing vertices in their local neighborhoods.",
"The main contributions of this work include: 1. We propose a new graph transformer encoder that applies the successful sequence transformer to graph structured inputs.",
"2. We show how IE output can be formed as a connected unlabeled graph for use in attention-based encoders.",
"3. We provide a large dataset of knowledge-graphs paired with scientific texts for further study.",
"Through detailed automatic and human evaluations, we demonstrate that automatically extracted knowledge can be used for multi-sentence text generation.",
"We further show that structuring and encoding this knowledge as a graph leads to improved generation performance compared to other encoder-decoder setups.",
"Finally, we show that GraphWriter's transformer-style encoder is more effective than Graph Attention Networks on the knowledge-graph-to-text task.",
"Our work falls under the larger scope of concept-to-text generation.",
"Barzilay and Lapata (2005) introduced a collective content selection model for generating summaries of football games from tables of game statistics.",
"Liang et al. (2009) jointly learn to segment and align text with records, reducing the supervision needed for learning.",
"Kim and Mooney (2010) improve this technique by learning a semantic parse to logical forms.",
"Konstas and Lapata (2013) focus on the generation objective, jointly learning planning and generating using a rhetorical (RST) grammar induction approach.",
"These earlier works often focused on smaller record generation datasets such as WeatherGov and RoboCup, but recently Mei et al. (2016) showed how neural models can achieve strong results on these standards, prompting researchers to investigate more challenging domains such as ours.",
"Lebret et al. (2016) tackles the task of generating the first sentence of a Wikipedia entry from the associated infobox.",
"They provide a large dataset of such entries and a language model conditioned on tables.",
"Our work focuses on a multi-sentence task where relations can extend beyond sentence boundaries.",
"Wiseman et al. (2017) study the difficulty of applying neural models to the data-to-text task.",
"They introduce a large dataset where a text summary of a basketball game is paired with two tables of relevant statistics and show that neural models struggle to compete with template based methods over this data.",
"We propose generating from graphs rather than tables, and show that graphs can be effectively encoded to capture both local and global structure in the input.",
"We show that modeling knowledge as a graph improves generation results, connecting our work to other graph-to-text tasks such as generating from Abstract Meaning Representation (AMR) graphs.",
"Konstas et al. (2017) provide the first neural model for this task, and show that pretraining on a large dataset of noisy automatic parses can improve results.",
"However, they do not directly model the graph structure, relying on linearization and sequence encoding instead.",
"Current works improve this through more sophisticated graph encoding techniques.",
"Marcheggiani and Perez-Beltrachini (2018) encode input graphs directly using a graph convolution encoder (Kipf and Welling, 2017).",
"Our model extends the graph attention networks of Velickovic et al. (2018), a direct descendant of the convolutional approach which offers more modeling power and has been Title Abstract KG Vocab 29K 77K 54K Tokens 413K 5.8M 1.2M Entities -518K Avg Length 9.9 141.2 Avg #Vertices -12.42 Avg #Edges -4.43 Table 1: Data statistics of our AGENDA dataset.",
"shown to improve performance.",
"Song et al. (2018) uses a graph LSTM model to effect information propagation.",
"At each timestep, a vertex is represented by a gated combination of the vertices to which it is connected and the labeled edges connecting them.",
"Beck et al. (2018) use a similar gated graph neural network.",
"Both of these gated models make heavy use of label information, which is much sparser in our knowledge graphs than in AMR.",
"Generally, AMR graphs are denser, rooted, and connected, whereas the knowledge our model works with lacks these characteristics.",
"For this reason, we focus on attention-based models such as Velickovic et al. (2018), which impose fewer constraints on their input.",
"Finally, our work is related to Wang et al. (2018) who offer a method for generating scientific abstracts from titles.",
"Their model uses a gated rewriter network to write and revise several draft outputs in several sequence-to-sequence steps.",
"While we operate in the same general domain as this work, our task setup is ultimately different due to the use of extracted information as input.",
"We argue that our setup improves the task defined in Wang et al. (2018), and our more general model can be applied across tasks and domains.",
"We consider the problem of generating a text from automatically extracted information ( knowledge ).",
"IE systems can produce high quality knowledge for a variety of domains, synthesizing information from across sentence and even document boundaries.",
"Generating coherent text from knowledge requires a model which considers global characteristics of the knowledge as well as local characteristics of each entity.",
"This feature of the task motivates our use of graphs for representing knowledge, where neighborhoods localize important information and paths through the graph build connections between distant nodes through intermediate ones.",
"An example knowledge graph can be seen in Figure 1. We formulate our problem as follows: given the title of a scientific article and a knowledge graph constructed by an automatic information extraction system, the goal is to generate an abstract that",
"a) is appropriate for the given title and",
"b) expresses the content of the knowledge graph in natural language text.",
"To evaluate how well a model accomplishes this goal, we introduce the Abstract GENeration DAtaset (AGENDA), a dataset of knowledge graphs paired with scientific abstracts.",
"Our dataset consists of 40k paper titles and abstracts from the Semantic Scholar Corpus taken from the proceedings of 12 top AI conferences (Ammar et al., 2018).",
"For each abstract, we create a knowledge graph in two steps.",
"First, we apply the SciIE system of Luan et al. (2018), a state-of-the-art science-domain information extraction system.",
"This system provides named entity recognition for scientific terms, with entity types Task, Method, Metric, Material, or Other Scientific Term.",
"The model also produces co-reference annotations as well as seven relations that can obtain between different entities (Compare, Used-for, Feature-of, Hyponym-of, Evaluate-for, and Conjunction).",
"For example, in Figure 1, the node labeled SemEval 2011 Task 11 is of type Task', HMM Models is of type Model', and there is a Evaluate-For' relation showing that the models are evaluated on the task.",
"We form these annotations into knowledge graphs.",
"We collapse co-referential entities into a single node associated with the longest mention (on the assumption that these will be the most in-formative).",
"We then connect nodes to one another using the relation annotations, treating these as labeled edges in the graph.",
"The result is a possibly unconnected graph representation of the SciIE annotations for a given abstract.",
"Statistics of the AGENDA dataset are available in Table 1. We split the AGENDA dataset into 38,720 training, 1000 validation, and 1000 test datapoints.",
"We offer standardized data splits to facilitate comparison.",
"Following most work on neural generation we adopt an encoder-decoder architecture, shown in",
"Figure 3, which we call GraphWriter.",
"The input to GraphWriter is a title and a knowledge graph which are encoded respectively with a bidirectional recurrent neural network and a novel Graph Transformer architecture (to be discussed in Section 4.1).",
"At each decoder time step, we attend on encodings of the knowledge graph and document title using the decoder hidden state h t R d .",
"The resulting vectors are used to select output w t either from the decoder's vocabulary or by copying an entity from the knowledge graph.",
"Details of our decoding process are described in Section 4.2.",
"The model is trained end-to-end to minimize the negative log likelihood of the mixed copy and vocabulary probability distribution and the human authored text.",
"The AGENDA dataset contains a knowledge graph for each datapoint, but our model requires unlabeled, connected graphs as input.",
"To encode knowledge graphs with this model, we restructure each graph as an unlabeled connected graph, preserving label information by the method described below and sketched in Figure 2. Graph Preparation We convert each graph to an unlabeled connected bipartite graphs following a similar procedure to Beck et al. (2018).",
"In this process, each labeled edge is replaced with two vertices: one representing the forward direction of the relation and one representing the reverse.",
"These new vertices are then connected to the entity vertices so that the directionality of the former edge is maintained.",
"This restructures the original knowledge graph as an unlabeled directed graph where all vertices correspond to entities and relations in the SciIE annotations without loss of infor-Graph Transformer Title Encoder Text Generation From Knowledge Graphs Attention Layers = Copy Mechanism Vocab Softmax w t h t h t+1 w t-1 c t Figure 3: GraphWriter Model Overview mation.",
"To promote information flow between disconnected parts of the graph, we add a global vertex which connects all entity vertices.",
"This global vertex will be used to initialize the decoder, analogously to the final encoder hidden state in a traditional sequence to sequence model.",
"The final result of these restructuring operations is a connected, unlabeled graph G = ( V, E ) , where V is a list of entities, relations, and a global node and E is an adjacency matrix describing the directed edges.",
"Graph Transformer Our model is most similar to the Graph Attention Network (GAT) of Velickovic et al. (2018), which computes the hidden representations of each node in a graph by attending over its neighbors following a self-attention strategy.",
"The use of self-attention in GAT addresses the shortcomings of prior methods based on graph convolutions (Defferrard et al., 2016; Kipf and Welling, 2017), but limits vertex updates to information from adjacent nodes.",
"Our model allows for a more global contextualization of each vertex through the use of a transformer-style architecture.",
"The recently proposed Transformer (Vaswani et al., 2017) addresses the inherent sequential computation shortcoming of recurrent neural networks, enabling efficient and paralleled computation by invoking a self-attention mechanism for global context modeling.",
"These models have shown promising results in a variety of text processing tasks (Radford et al., 2018).",
"Our Graph Transformer encoder starts with self-Norm & Add Norm & Add Feedforward Network Graph Attention Input: V (cid:2)(cid:1) -1 V Output: (cid:2) V L Block Network v i (cid:2) -1 N i !",
"attention of local neighborhoods of vertices; the key difference with GAT is that our model includes additional mechanisms for capturing global context.",
"This additional modeling power allows the Graph Transformer to better articulate how a vertex should be updated given the content of its neighbors, as well as to learn global patterns of graph structure relevant to the model's objective.",
"Specifically, V is embedded in a dense continuous space by the embedding process described at the end of this section, resulting in matrix V 0 = [ v i ] , v i R d which will serve as input to the graph transformer model shown in Figure 4. Each vertex representation v i is contextualized by attending over the other vertices to which v i is connected in G .",
"We use an N -headed self attention setup, where N independent attentions are calculated and concatenated before a residual connection is applied: v i = v i + N (cid:110) n =1 (cid:88) j N i nij W nV v j (1) nij = a n ( v i , v j ) (2) Here, (cid:107) denotes the concatenation of the N attention heads, N i denotes the neighborhood of v i in G , W nV R d d , and where a n are attention mechanisms parameterized per head.",
"In this work, we use attention functions of the following form: a ( q i , k j ) = exp(( WK k j ) (cid:62) WQ q i ) (cid:80) z N i exp(( WK k z ) (cid:62) WQ q i ) (3) Each a learns independent transformations WQ , WK R d d of q and k respectively, and the resulting product is normalized across all connected edges.",
"To reduce the tendency of these dot products to impede gradient flow, we scale them by 1 d , following Vaswani et al. (2017).",
"The Graph Transformer then augments these multi-headed attention layers with block networks.",
"Each block applies the following transformations: v i = LayerNorm ( v (cid:48) i + LayerNorm ( v i )) (4) v (cid:48) i = FFN ( LayerNorm ( v i )) (5) Where FFN ( x ) is a two layer feedforward network with a non-linear transformation f between layers i.e. f ( xW 1 + b 1 ) W 2 + b 2 .",
"Stacking multiple blocks allows information to propagate through the graph.",
"Blocks are stacked L times, with the output of layer l 1 taken as the input to layer l , so that v li = v l 1 i .",
"The resulting vertex encodings VL = [ v Li ] represent entities, relations, and the global node contextualized by their relationships in the graph structure.",
"We refer to the resulting encodings as graph contextualized vertex encodings .",
"Embedding Vertices, Encoding Title As stated above, the vertices of our graph correspond to entities and relations from the SciIE annotations.",
"Because each relation is represented as both a forwardand backward-looking vertex, we learn two embeddings per relation as well as an initial embedding for the global node.",
"Entities correspond to scientific terms which are often multi-word expressions.",
"To produce a single d dimensional embedding per phrase, we use the last hidden state of a bidirectional RNN run over embeddings of each word in the entity phrase, i.e. BiRNN ( x 1 . . . x m ) for dense embeddings x and phrase length m .",
"The output of our embedding step is a collection V 0 of d -dimensional vectors representing each vertex in V .",
"The title input is also a short string, and so we encode it with another BiRNN to produce T = BiRNN ( x (cid:48) 1 . . . x (cid:48) m ) for title word embedding x (cid:48) .",
"We decode with an attention-based decoder with a copy mechanism for copying input from the knowledge graph and title.",
"At each decoding timestep t we use decoder hidden state h t to compute context vectors c g and c s for the graph and title sequence respectively.",
"c g is computed using multi-headed attention contextualized by h t : c g = h t + N (cid:110) n =1 (cid:88) j V nj W nG v L j (6) j = a ( h t , v L j ) (7) for a as described in Equation (1) by attending over the graph contextualized encodings VL .",
"c s is computed similarly, attending over the title encoding T .",
"We then construct the final context vector by concatenation, c t = [ c g (cid:107) c s ] .",
"We use an input-feeding decoder (Luong et al., 2015) where both h t and c t are passed as input to the next RNN timestep.",
"The final next-token probability distribution is:",
"Where the probability distribution copy over entities and input tokens is computed as copyj = a ([ h t (cid:107) c t ] , x j ) for x j V (cid:107) T .",
"The remaining 1 p probability is given to vocab , which is calculated by scaling [ h t (cid:107) c t ] to the vocabulary size and taking a softmax.",
"Evaluation Metrics We evaluate using a combination of human and automatic evaluations.",
"For human evaluation, participants were asked to compare abstracts generated by various models and those written by the authors of the scientific articles.",
"We used Best-Worst Scaling (BWS; (Louviere and Woodworth, 1991; Louviere et al., 2015)), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2016).",
"Participants were presented with two or three abstracts and asked to decide which one was better and which one was worse in order of grammar and fluency (is the abstract written in well-formed English?), coherence (does the abstract have an introduction, state the problem or task, describe a solution, and discuss evaluations or results?), and informativeness (does the abstract relate to the provided title and make use of appropriate scientific terms?).",
"We provided examples of good and bad abstracts and explain how they succeed or fail to meet the defined criteria.",
"Because our dataset is scientific in nature, evaluations must be done by experts and we can only collect a limited number of these high quality datapoints.",
"2 The study was conducted by 15 experts (i.e. computer science students) who were familiar with the abstract writing task and the content of the abstracts they judged.",
"To supplement this, we also provide automatic metrics.",
"We use BLEU (Papineni et al., 2002), an n-gram overlap measure popular in text generation tasks, and METEOR (Denkowski and Lavie, 2014), a machine translation with paraphrase and language-specific considerations.",
"Comparisons We compare our GraphWriter against several strong baselines.",
"In GAT, we replace our Graph Transformer encoder with a Graph Attention Network of (Velickovic et al., 2018).",
"This encoder consists of PReLU activations stacked between 6 self-attention layers.",
"To determine the usefulness of including graph relations, we compare to a model which uses only entities and title (EntityWriter).",
"Finally, we compare with the gated rewriter model of Wang et al. (2018) (Rewriter).",
"This model uses only the document title to iteratively rewrite drafts of its output.",
"3 Implementation Details Our models are trained end-to-end to minimize the negative joint log likelihood of the target text vocabulary and the copied entity indices.",
"We use SGD optimization with momentum (Qian, 1999) and warm restarts, a cyclical regiment that reduces the learning rate from 0.25 to 0.05 over the course of 5 epochs, then resets for the following epoch.",
"Models are trained for 15 epochs with early stopping (Prechelt, 1998) based on the validation loss, with most models stopping between 8 and 13 epochs.",
"We use single-layer LSTMs (Hochreiter and Schmidhuber, 1997) as recurrent networks.",
"We use dropout (Srivas-tava et al., 2014) in self attention layers set to 0.3.",
"Hidden states and embedding dimensions are fixed at 500 and attentions learn 500 dimen-2 Attempts to crowd source this evaluation failed.",
"3 Due to the larger size and greater variety of our dataset and accompanying vocabularies compared to theirs, we were unable to train this model with the reported batch size of 240.",
"We use batch size 24 instead, which is partially responsible for the lower performance.",
"sional projections.",
"In Block layers, the feedforward network has an intermediate size of 2000, and we use a PReLU activation function (He et al., 2015).",
"GraphWriter and GAT use L = 6 layers.",
"The number of attention heads is set to 4. In all models, for both inputs and output, we replace words occurring fewer than 5 times with < unk > tokens.",
"In each abstract, we replace all mentions in a coreference chain in the abstract with the canonical mention used in the graph.",
"We decode with beam search (Graves, 2012; Sutskever et al., 2014) with a beam size of 4. A post-processing step deletes repeated sentences and repeated coordinated clauses.",
"A comparison of all systems in terms of automatic metrics is shown in Table 2. Our GraphWriter model outperforms other methods.",
"We see that models which leverage title, entities, and relations (GraphWriter and GAT) outperform models which use less information (EntityWriter and Rewriter).",
"We see that GraphWriter outperforms GAT across metrics, indicating that the global contextualization provided by GraphWriter improves generation.",
"To verify the performance gap between GraphWriter and GAT, we report the average test metrics for 4 training runs of each model along with their variances.",
"We see that the variance of the different models is non-overlapping, and in fact all training runs of GraphWriter outperformed all runs of GAT on these metrics.",
"Does Knowledge Help?",
"To evaluate the value of knowledge in the generation task we compare our GraphWriter model to a model which does not generate from knowledge.",
"We provide expert annotators with 50 randomly-selected paper titles from the test set and ask them for a single judg-ment according to the criteria described in Section 5. We pair each paper title with the generated abstracts produced by GraphWriter (a knowledge-informed modes), Rewriter (a knowledge-agnostic model), and the gold abstract (with canonicalized Best Worst Rewriter (No knowledge) 12% 64% GraphWriter (Knowledge) 24% 36% Human Authored 64% 0% Table 3: Does knowledge improve generation?",
"Results of this comparison can be seen in Table 3. We see that GraphWriter is selected as Best more often than Rewriter, and is less often selected as Worst, attesting to the value of including knowledge in the text generation process.",
"We see that sometimes generated texts are preferred to human authored text, which is due in part to the disfluencies introduced by canonicalization of entity mentions.",
"To further understand the advantages of using knowledge graphs, we provide a more detailed comparison of the GraphWriter and EntityWriter models.",
"We select 30 additional test datapoints and ask experts to provide per-criterion judgments of the outputs of the two systems.",
"Since both models make use of extracted entities, we show this list along with the title for each datapoint, and modify the description of Informativeness to include making use of the provided entities.",
"Results of this evaluation are shown in Table 4. Here we see that including structured knowledge in the form of a graph improves abstract generation compared to generating from an unstructured collection of entities.",
"The largest gains are made in terms of document structure and grammar, indicating that the structure of the input knowledge is being translated into the surface form.",
"Generating from Title The Rewriter model (Wang et al., 2018) considers the task of generating an abstract with only the paper's title as input.",
"We compare against this model because it is among the first end-to-end systems to attempt to write scientific abstracts.",
"However, the task setup used in Wang et al. (2018) differs significantly from the task introduced in this work.",
"In order Title Block and Group Regularized Sparse Modeling for Dictionary Learning Knowledge (dictionary learning, CONJUNCTION, sparse coding) ; (optimization problems, USED-FOR, dictionary learning) ; (optimization problems, USED-FOR, sparse coding)...",
"to make a fair comparison, we construct a variant of our model which is only provided with a title as input.",
"We develop a model that predicts entities from the title, and then uses our knowledge-aware model to generate the abstract.",
"For this comparison we use the EntityWriter model with a collection of entities inferred from the title alone (Infer-EntityWriter).",
"To infer relevant entities, we learn to embed titles and entities extracted from the corresponding abstract in a shared dense vector space by minimizing their cosine distance.",
"We use negative sampling to provide definition to this vector space.",
"At test time, we use the title embedding to infer the K = 12 closest entities to feed into the InferEntityWriter model.",
"Results are shown in Table 6, which shows that InferEntityWriter achieves bet-BLEU METEOR Rewriter 1.05 8.38 InferEntityWriter 3.60 12.2 Table 6: Comparison of generation without knowledge and with Inferred Knowledge (InferEntityWriter) ter results than Rewriter, indicating that the intermediate entity prediction step is helpful in abstract generation.",
"Table 5 shows examples of various system outputs for a particular test instance.We see that GraphWriter makes use of more entities from the input, arranged with more articulated textual context.",
"It demonstrates less repetition than GAT.",
"Both GraphWriter and GAT show much better coherence than EntityWriter, which copies entities from the input into unreasonable contexts.",
"Rewriter, while fluent and grammatical, jumps from topic to topic, failing to relate as strongly to the input as the knowledge-aware models.",
"To determine the shortcomings of our model, we calculate rough error statistics over the outputs of the GraphWriter on the test set.",
"We notice that 40% of entities in the knowledge graphs do not appear in the generated text.",
"Future work should address this coverage problem, perhaps through modifications to the inference procedure or a coverage loss (Tu et al., 2016) modified to the specifics of this task.",
"We find that 18% of all sentences generated by our model repeat sentences or clauses and are subjected to the post-processing pruning mentioned in Section 5. While this step is a simple solution to improve generated outputs, a more advanced solution is required.",
"We have studied the problem of generating multi-sentence text from the output of automatic information extraction systems, and have shown that incorporating knowledge as graphs improves performance.",
"We introduced GraphWriter, featuring a new attention model for graph encoding, and demonstrated its utility through human and automatic evaluation compared to strong baselines.",
"Lastly, we provide a new resource for the generation community, the AGENDA dataset of abstracts and knowledge.",
"Future work could address the problem of repetition and entity coverage in the generated texts.",
"This research was supported by the Office of Naval Research under the MURI grant N00014-18-1-2670, NSF (IIS 1616112, III 1703166), Allen Distinguished Investigator Award, Samsung GRO and gifts from Allen Institute for AI, Google, Amazon, and Bloomberg.",
"We gratefully acknowledge the support of the European Research Council (Lap-ata; award number 681760).",
"We also thank the anonymous reviewers and the UW-NLP group for their helpful comments."
] |
[
"abstain",
"method",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"abstain",
"abstain",
"objective",
"result",
"method",
"objective",
"result",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"The problem of learning to translate between two vector spaces given a set of aligned points arises in several application areas of NLP.",
"Current solutions assume that the lexicon which defines the alignment pairs is noise-free.",
"We consider the case where the set of aligned points is allowed to contain an amount of noise, in the form of incorrect lexicon pairs and show that this arises in practice by analyzing the edited dictionaries after the cleaning process.",
"We demonstrate that such noise substantially degrades the accuracy of the learned translation when using current methods.",
"We propose a model that accounts for noisy pairs.",
"This is achieved by introducing a generative model with a compatible iterative EM algorithm.",
"The algorithm jointly learns the noise level in the lexicon, finds the set of noisy pairs, and learns the mapping between the spaces.",
"We demonstrate the effectiveness of our proposed algorithm on two alignment problems: bilingual word embedding translation, and mapping between diachronic embedding spaces for recovering the semantic shifts of words across time periods.",
"We consider the problem of mapping between points in different vector spaces.",
"This problem has prominent applications in natural language processing (NLP).",
"Some examples are creating bilingual word lexicons (Mikolov et al., 2013), machine translation (Artetxe et al., 2016, 2017a,b, 2018a,b; Conneau et al., 2017), hypernym generation (Yamane et al., 2016), diachronic embeddings alignment (Hamilton et al., 2016) and domain adaptation (Barnes et al., 2018).",
"In all these examples one is given word embeddings in two different vector spaces, and needs to learn a mapping from one to the other.",
"The problem is traditionally posed as a supervised learning problem, in which we are given two sets of vectors (e.g.: word-vectors in Italian and in English) and a lexicon mapping the points between the two sets (known word-translation pairs).",
"Our goal is to learn a mapping that will correctly map the vectors in one space (e.g.: English word embeddings) to their known corresponding vectors in the other (e.g.: Italian word embeddings).",
"The mapping will then be used to translate vectors for which the correspondence is unknown.",
"This setup was popularized by Mikolov et al. (2013).",
"The supervised setup assumes a perfect lexicon.",
"Here, we consider what happens in the presence of training noise , where some of the lexicon's entries are incorrect in the sense that they don't reflect an optimal correspondence between the word vectors.",
"We are given two datasets, X = x 1 , ..., x m and Y = y 1 , ..., y n , coming from d -dimensional spaces X and Y .",
"We assume that the spaces are related, in the sense that there is a function f ( x ) mapping points in space X to points in space Y .",
"In this work, we focus on linear mappings, i.e. a d d matrix Q mapping points via y i = Qx i .",
"The goal of the learning is to find the translation matrix Q .",
"In the supervised setting, m = n and we assume that i f ( x i ) y i .",
"We refer to the sets X and Y as the supervision .",
"The goal is to learn a matrix Q such the Frobenius norm is minimized: Q = arg min Q (cid:107) QX Y (cid:107) 2 F .",
"Gradient-based The objective in (1) is convex, and can be solved via least-squares method or via stochastic gradient optimization iterating over the",
"et al. (2016) and Smith et al. (2017) argued and proved that a linear mapping between sub-spaces must be orthogonal.",
"This leads to the modified objective: Q = arg min Q,s.t : QTQ = I (cid:107) QX Y (cid:107) 2 F (2) Objective (2) is known as the Orthogonal Procrustes Problem .",
"It can be solved algebraically by using a singular value decomposition (SVD).",
"Schnemann (1966) proved that the solution to 2 is: Q = UVT s.t. U VT is the SVD of Y XT .",
"The OP method is used in Xing et al. (2015); Artetxe et al. (2016, 2017a,b, 2018a,b); Hamilton et al. (2016); Conneau et al. (2017); Ruder et al. (2018).",
"The supervised alignment problem can be expended to the semi-supervised (Artetxe et al., 2017b; Lample et al., 2017; Ruder et al., 2018) or unsupervised (Zhang et al., 2017; Conneau et al., 2017; Artetxe et al., 2018b; Xu et al., 2018; Alvarez-Melis and Jaakkola, 2018) case, where a very small lexicon or none at all is given.",
"In iterative methods, the lexicon is expended and used to learn the alignment, later the alignment is used to predict the lexicon for the next iteration and so on.",
"In adversarial methods, a final iterative step is used after the lexicon is built to refine the result.",
"We will focus on the supervised stage in the unsupervised setting, meaning estimating the alignment once a lexicon is induced.",
"The previous methods assume the supervision set X, Y is perfectly correct.",
"However, this is often not the case in practice.",
"We consider the case where a percentage p of the pairs in the supervision set are noisy: applying the gold transformation to a noisy point x j will not result in a vector close to y j .",
"The importance of the quality of word-pairs selection was previously analyzed by Vulic and Korhonen (2016).",
"Here, we equate bad pairs to noise, and explore the performance in the presence of noise by conducting a series of synthetic experiments.",
"We take a set of points X , a random transformation Q and a gold set Y = QX .",
"We define error as (cid:107) Y Y (cid:107) 2 F where Y = QX is Figure 1: Noise influence.",
"the prediction according to the learned transform Q .",
"Following the claim that linear transformations between word vector spaces are orthogonal, we focus here on orthogonal transformations.",
"Low Dimensional Synthetic Data We begin by inspecting a case of few 2-dimensional points, which can be easily visualized.",
"We compare a noise-free training to the case of a single noisy point.",
"We construct X by sampling n = 10 points of dimension d = 2 from a normal distribution.",
"We take nine points and transformed them via an orthogonal random transform Q .",
"We then add a single noisy pair which is generated by sampling two normally distributed random points and treating them as a pair.",
"The error is measured only on the nine aligned pairs.",
"When no noise is applied, both Gradient-based and Procrustes methods are aligned with 0 error mean and variance.",
"Once the noisy condition is applied this is no longer the case.",
"Figure 1(A) shows the noisy condition.",
"Here, the red point (true) and box (prediction) represent the noisy point.",
"Green dots are the true locations after transformation, and the blue boxes are the predicted ones after transformation.",
"Both methods are affected by the noisy sample: all ten points fall away from their true location.",
"The effect is especially severe for the gradient-based methods.",
"High Dimensional Embeddings The experiment setup is as before, but instead of a normal distribution we use (6B, 300d) English Glove Embeddings (Pennington et al., 2014) with lexicon of size n = 5000 .",
"We report the mean error for various noise levels on an unseen aligned test set of size 1500.",
"In Figure 1(B) we can see that both methods are effected by noise.",
"As expected, as the amount of noise increases the error on the test set increases.",
"We can again see that the effect is worse with gradient-based methods.",
"Having verified that noise in the supervision severely influences the solution of both methods, we turn to proposing a noise-aware model.",
"The proposed model jointly identifies noisy pairs in the supervision set and learns a translation which ignores the noisy points.",
"Identifying the point helps to clean the underlying lexicon (dic-tionary) that created the supervision.",
"In addition, by removing those points our model learns a better translation matrix.",
"Generative Model We are given x R d and we sample a corresponding y R d by first sampling a Bernoulli random variable with probability : z Bernoulli ( ) y (cid:40) N ( y , 2 y I ) z = 0 (noise') N ( Qx, 2 I ) z = 1 (aligned') The density function y is a mixture of two Gaus-sians: f ( y | x ) = (1 ) N ( y , y 2 I ) + N ( Qx, 2 I ) .",
"The likelihood function is: L ( Q, , y , y ) = (cid:88) t log f ( y t | x t ) EM Algorithm We apply the EM algorithm (Dempster et al., 1977) to maximize the objective in the presence of latent variables.",
"The algorithm has both soft and hard decision variants.",
"We used the hard decision one which we find more natural, and note that the posterior probability of z t was close to 0 or 1 also in the soft-decision case.",
"It is important to properly initialize the EM algorithm to avoid convergence to a local optima.",
"We initialize Q by applying OP on the entire lexicon (not just the clean pairs).",
"We initialize the variance, , by calculating 2 = 1 n d (cid:80) t =1 (cid:107) Qx t y t (cid:107) 2 .",
"We initialize, y , y by taking the mean and variance of the entire dataset.",
"Finally, we initialize to 0 .",
"5 .",
"The (hard version) EM algorithm is shown in Algorithm box 1.",
"The runtime of each iteration is dominated by the OP algorithm (matrix multiplication and SVD on a d d matrix).",
"Each iteration contains an additional matrix multiplication and few simple vector operations.",
"Figure 1(B) shows it obtains perfect results on the simulated noisy data.",
"Experiment Setup This experiment tests the noise-aware solution on an unsupervised translation problem.",
"The goal is to learn the translation matrix, which is a transformation matrix between two languages by building a dictionary.",
"We can treat the unsupervised setup after retrieving a lexicon as an iterative supervised setup where some of the lexicon pairs are noisy.",
"We assumes the unsupervised setting will contain higher amount of noise than the supervised one, especially in the first iterations.",
"We follow the experiment setup in Artetxe et al. (2018b).",
"But instead of using OP for learning the translation matrix, we used our Noise-Aware Alignment (NAA), meaning we jointly learn to align and to ignore the noisy pairs.",
"We used the En-It dataset provided by Dinu and Baroni (2014) and the extensions: En-De, En-Fi and En-Es of Artetxe et al. (2018a, 2017b).",
"Experiment Results In Table 1 we report the best and average precision@1 scores and the average number of iterations among 10 experiments, for different language translations.",
"Our model improves the results in the translation tasks.",
"In most setups our average case is better than the former best case.",
"In addition, the noise-aware model is more stable and therefore requires fewer iterations to converge.",
"The accuracy improvements are small but consistent, and we note that we consider them as a lower-bound on the actual improvements as the current test set comes from the same distribution of the training set, and also contains similarly noisy pairs.",
"Using the soft-EM version results in similar results, but takes roughly 15% more iterations to converge.",
"Table 2 lists examples of pairs that were kept and discarded in En-It dictionary.",
"The algorithm learned the pair (dog dog) is an error.",
"Another example is the translation (good santo) which is a less-popular word-sense than (good buon / buona).",
"When analyzing the En-It cleaned dictionary we see the percentage of potentially misleading pairs (same string, numbers and special characters) is reduced from 12.1% to 4.6%.",
"Experiment Setup The goal is to align English word-embedding derived from texts from different",
"different time periods, in order to identify which words changed meaning over time.",
"The assumption is that most words remained stable, and hence the supervision is derived by aligning each word to itself.",
"This problem contains noise in the lexicon by definition.",
"We follow the exact setup fully described in Hamilton et al. (2016), but replace the OP algorithm with our Noise-aware version 1 .",
"We project 1900s embeddings to 1990s embeddings vector-space.",
"The top 10 distant word embeddings after alignment are analyzed by linguistic experts for semantic shift.",
"Experiment Results 45.5% of the input pairs were identified as noise.",
"After the post processing of removing the non-frequent words as described in the experiment setup we end up with 121 noisy words.",
"Our algorithm successfully identifies all the top-changing words in Hamilton et al. (2016) as noise, and learns to ignore them in the alignment.",
"In addition, we argue our method provides better alignment.",
"Table 3 shows the Nearest Neighbor (NN) of a 1990s word, in the 1900s vector-space after projection.",
"We look at the top 10 changed words in Hamilton et al. (2016) and 3 unchanged words.",
"We compare the alignment of the OP projection to the Noise-aware Alignment (NAA).",
"For example, with our solution the word actually whose meaning shifted from in fact to express emphasize or surprise, is correctly mapped to really instead of believed .",
"The word gay shifted from cheerful to homosexual , yet is still mapped to gay with NAA.",
"This happens because the related embeddings ( homosexual , lesbian and so on) are empty embeddings in 1900s, leaving gay as the next-best candidate, which we argue is better than OP's society .",
"The words car, driver, eve whose meaning didn't change, were incorrectly aligned with OP to cab, stepped, anniversary instead of to themselves.",
"1 Pre-possessing: removing proper nouns, stop words and empty embeddings.",
"Post-processing: removing words whose frequency is below 10 5 in either years.",
"We introduced the problem of embedding space projection with noisy lexicons, and showed that existing projection methods are sensitive in the presence of noise.",
"We proposed an EM algorithm that jointly learns the projection and identifies the noisy pairs.",
"The algorithm can be used as a drop-in replacement for the OP algorithm, and was demonstrated to improve results on two NLP tasks.",
"We provide code at https://github.com/NoaKel/Noise-Aware-Alignment.",
"The work was supported by The Israeli Science Foundation (grant number 1555/15), and by the Israeli ministry of Science, Technology and Space through the Israeli-French Maimonide Cooperation program.",
"We also, thank Roee Aharoni for helpful discussions and suggestions."
] |
[
"abstain",
"abstain",
"result",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"method",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"other",
"other",
"other"
] |
[
"Evaluating image captions is very challenging partially due to the fact that there are multiple correct captions for every single image.",
"Most of the existing one-to-one metrics operate by penalizing mismatches between reference and generative caption without considering the intrinsic variance between ground truth captions.",
"It usually leads to over-penalization and thus a bad correlation to human judgment.",
"Recently, the latest one-to-one metric BERTScore can achieve high human correlation in system-level tasks while some issues can be fixed for better performance.",
"In this paper, we propose a novel metric based on BERTScore that could handle such a challenge and extend BERTScore with a few new features appropriately for image captioning evaluation.",
"The experimental results show that our metric achieves state-of-the-art human judgment correlation.",
"Image captioning is one of the key visual-linguistic tasks that asks for generated captions with specific images.",
"Researchers look forward to inexpensive evaluation metrics that closely resemble human judgment, which remains a challenging task since most of the metrics can hardly get close to human judgment.",
"Image captioning is a one-to-many task since each image can correspond to many possible captions.",
"Different captions may focus on different parts of the image; this not only creates a challenge for generating the captions (Dai et al., 2017; Venu-gopalan et al., 2017), but also for evaluating them.",
"Most of the existing one-to-one evaluation metrics, however, overlook such a challenge.",
"These one-to-one metrics (Lin, 2004; Vedantam et al., 2015; Zhang et al., 2019) ignore other reference captions since the score is computed by comparing the candidate capture with one single reference caption.",
"When there are multiple reference captions, prior works compute individual scores for each reference caption and pool these scores together afterward.",
"Intrinsic variance exists in a set of ground truth captions for an image, since different captions may have different concerns or descriptions.",
"It's challenging to find a remedy for such over-penalization if the metric looks at only one single reference caption.",
"BERTScore (Zhang et al., 2019) is the latest one-to-one metric that computes token-level cosine similarity between two sentences by contextual embeddings of pre-trained models, and greedily picks and adds up cosine values as a score.",
"It reaches high performance in machine translation tasks and a system-level image captioning evaluation task.",
"In one-to-one evaluation, although it is hard to consider all references directly, it is possible to combine references into a single one using contextual embedding from the pre-trained language model.",
"In this work, we propose a metric where all of the references are combined as a new comprehensive embedding by detecting the mismatches between two contextual embeddings.",
"To achieve this goal, we add the concept of mismatch into cosine similarity by a threshold for mismatch detection and proper penalization.",
"Also, our metric considers the importance of different words, and our research shows that adding a stop word list is an efficient way.",
"Using various image captioning evaluation datasets with human annotations like Microsoft COCO (Lin et al., 2014), Flickr8k (Hodosh et al., 2013), COMPOSITE (Aditya et al., 2015) and PASCAL-50S (Vedantam et al., 2015), the experimental results show that our metric achieves state-of-the-art correlation in several tasks, especially in caption-level tasks.",
"Our main contribution is a novel metric that can detect mismatches among captions, build a combined caption with multi-references, and achieve high human correlation in image captioning evaluation tasks.",
"The code for our metric is released at here 1 .",
"For captions evaluation, a traditional method is scoring by human experts, which is a precise but expensive way.",
"Current image captioning models are evaluated by automatic metrics, which compute the similarity between generated captions and ground truth captions.",
"Currently, most widely used caption metrics are n-gram matching metrics such as BLEU, METEOR, ROUGE, CIDEr.",
"BLEU (Papineni et al., 2002) is a precision-based n-gram overlap matching metric that counts the number of overlap n-grams among all of references and the candidate.",
"Several modifications can be applied to improve BELU, such as different n-gram (e.g. n=1,2,3,4), 1 https://github.com/ck0123/improved-bertscore-for-image-captioning-evaluation brevity penalty for a short candidate, and geometrical average.",
"BLEU is a fast, low-cost metric but has a low correlation with human judgment.",
"METEOR (Denkowski and Lavie, 2014) computes both precision and recall in unigram, and consider more factors such as word stems, synonyms, and paraphrases.",
"ROUGE (Lin, 2004) is a package of measures for automatic text summaries evaluation: ROUGE-N uses n-gram co-occurrence statistics; ROUGE-L uses the longest common subsequence; ROUGE-W uses weighted longest common sub-sequence; ROUGE-S uses skip-bigram co-occurrence statistics.",
"CIDEr (Vedantam et al., 2015) represents a sentence as an n-grams vector with tf-idf (term frequency-inverse document frequency), and compute the cosine similarity between reference and candidate.",
"LEIC (Cui et al., 2018) uses a trained neural model to predict whether a caption is generated by humans.",
"LEIC is trained with COCO images data and uses data augmentation, which helps to achieve a high human correlation.",
"However, LEIC suffers from high computational cost to train in the COCO data.",
"SPICE (Anderson et al., 2016) computes F1 score according to the scene graph created by captions.",
"SPICE reaches a high correlation with human judgment while suffers from long repetitive sentence evaluation (Liu et al., 2017).",
"Thanks to the development of a pre-trained language model, better sentence representation can be used in diverse kinds of NLP tasks.",
"Previous works mainly focus on linguistic representation such as word embedding (Mikolov et al., 2013; Pennington et al., 2014; Goldberg and Levy, 2014), which are only word-level embedding without positional information.",
"After the success of Transformer (Vaswani et al., 2017) , a series of language model approaches are proposed such as GPT (Radford et al., 2018), BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2019), XLNET (Yang et al., 2019), XLM (Lample and Conneau, 2019), RoBERTa (Liu et al., 2019).",
"These approaches learn from a huge number of unlabeled text data as a pre-trained process and can fine-tune in downstream tasks with a few epochs.",
"BERTScore is the latest one-to-one matching metric for text similarity.",
"Benefiting from the contextual embedding of the pretrained language Figure 2: This figure explains the differences between an error and different concerns when mismatches occur.",
"models, BERTScore measures two texts similarity by token-level cosine similarity computing and greedily pick strategy: (1) feed reference text and candidate text into pre-trained model, and extract two contextual embeddings r = [ r 1 , .., r n ] , c = [ c 1 , , .., c m ] ; (2) compute the cosine similarity matrix between r and c by r c (cid:107) r (cid:107)(cid:107) c (cid:107) ; (3) greedily pick the maximum value from cosine similarity matrix for each reference token as a matching value; (4) collect all the matching values with optional inverse document frequency weights.",
"Inverse document frequency (idf) computes a score for word frequency in the whole corpus.",
"Given N documents [ s 1 , .., s N ] and each word w , idf score is : idf( w ) = lg 1 NN (cid:88) i =1 I [ w s ] where I [ ] is an indicator function and lg is the base 10 logarithm.",
"The recall of BERTScore (BS for short) is : BS = (cid:80) r i r idf( r i )max c j c r (cid:62) i c j (cid:80) r i r idf( r i ) BERTScore can adequately deal with different descriptions by knowledge from a pre-trained model and achieves high performance in both machine translation, image captioning evaluation tasks.",
"However, as a one-to-one metric approach, it still suffers from different-concerns problems.",
"Another pitfall in BERTScore comes from the strategy greedy pick: when no candidate word attends to a specific reference word, this reference word still gets value by picking a maximum cosine value greedily, which causes under-penalization.",
"Inspired by BERTScore, our metric treats the mismatches between captions carefully, and try to give a proper score for the similarity between captions.",
"Proper scoring for generated captions should consider the information about multi-references and avoid the wrong penalization.",
"In this section, we provide the idea about references combination and fix some under or over penalization issues for cosine similarity-based metrics.",
"Token-level mismatches lead to two kinds of problems: different descriptions and different concerns.",
"We introduce these two concepts in Figure 2.",
"Some methods are available for description problems like thesaurus or similarity with contextual embedding, while few of methods handle the different-concerns problem in multi-references cases.",
"The common ways for one-to-one text metrics to deal with multi-references cases are pooling the results by some strategies like average or maximum.",
"Maximum picks the maximum of results, which can get a higher score than average meanwhile ignores other references directly.",
"Average merges all the results with each reference, which can consider all references.",
"Although average slightly reduce the impact of different concerns, both of the two overpenalize the generated caption since they already Figure 3: Combination of references comes from a phenomenon that: mismatches between two ground truth captions can't be errors but different concerns.",
"regard those mismatches from different concerns as errors during the one-to-one text evaluation process.",
"Different from average and maximum strategies, the strategy of our metric is to combine reference captions.",
"The combination works based on a fact that: all of the reference captions are ground truth captions so that the mismatches between references should not be errors, but considering different concerns (cosine similarity with contextual embedding also ensures that mismatches are not from errors).",
"Once we choose a base reference caption and pick up all the mismatches among base and others, the combination among the base and mismatches contains all the information in references without duplicate.",
"After that, the evaluation between the candidate caption and the combined caption does not suffer from the problems from inter references variance any more.",
"It is hard to define the differences between captions clearly.",
"To simplify the problem, we regard mismatches in token-level between two embeddings as differences between two captions.",
"Mismatch is a concept from n-gram overlap matching metrics like BLEU.",
"We find a mismatch when a word from one sentence cannot be found in the other sentence.",
"Although mismatch is a clear concept to word-level comparison, overlap-based mismatch results in some problems like synonyms.",
"Meanwhile, cosine similarity-based metrics like BERTScore can address this problem quite well.",
"BERTScore uses a pre-trained language model's contextual embedding and regard cosine similarity between two tokens as their similarity.",
"Therefore, the match values change from overlap's discrete value (0 or",
"1) to cosine's continuous value (0 to",
"1) with semantic and positional information, which make similarity values more precise.",
"However, a weakness of cosine similarity is that we cannot distinguish match and mismatch directly since the concept of mismatch does not exist in cosine similarity.",
"To achieve references combination, we simply set a threshold function for distinguish the mismatch: when the cosine value is bigger than the threshold, we keep it; otherwise, we set it to 0, which is shown as follows.",
"( x, ) = (cid:40) x x > 0 .",
"0 x (1) where x is the cosine value and is the threshold value.",
"S is the improved greedy pick function for each r i reference token with threshold: S ( r i , c, ) = (max c j c r (cid:62) i c j , ) (2) where r = [ r 1 , .., r n ] and c = [ c 1 , .., c m ] are contextual embedding.",
"We call this process cut for removing low cosine values.",
"The standard cosine & greedy pick process is a case when the threshold value equals to 0.",
"Then we can get the greedy recall similarity with threshold indicator: R = (cid:80) r i r idf( r i ) S ( r i , c, ) (cid:80) r i r idf( r i )sgn( S ( r i , c, )) (3) where sgn is the sign function.",
"With a threshold indicator, our metric acquires the ability to detect mismatches.",
"Furthermore, since we cut all the low cosine value, the bad impact of greedy pick (mentioned in Section",
"2) will be eliminated, which means our metric provides a more reasonable similarity for each token pair.",
"Empirically for a pre-trained language model, the threshold value in different tasks is similar due to the same architecture and the same widely pretraining process in an ample amount of text data.",
"In this work, we use the threshold value 0.4 for BERT (base) and 0.83 for RoBERTa (large) as the recommended settings.",
"Contextual embeddings are extracted from the pretrained model.",
"Since the inputs of the model contain both token embedding and position embedding, contextual embedding for each token also contains its semantic and positional information.",
"Therefore, the change of tokens' position does not change the inner positional information for each token.",
"For example, [ embed A , embed B ] is the contextual embedding sentence generated from [ word A , word B ].",
"Both [ embed A , embed B ] and [ embed B , embed A ] (only switch tokens' position) still provide same positional information.",
"Using this characteristic, we can now easily combine all of the references with the following steps: (1) choose a random reference caption embedding as a base, A ; (2) compute the similarity between A and another reference B with a threshold; (3) collect those tokens from B that mismatch comparing with A , B (cid:48) ; (4) concatenate A and B (cid:48) as a new base caption A ; (5) repeat steps above until used all the references; R comb computes the recall score for combined reference and candidate.",
"For proper scoring, our metric also focuses on a problem that token-level matching sometimes does not mean similarity between captions.",
"A bird stand-ing on the blue handrail and A bird flying on the blue sky are describing different images with only two words different but five words the same.",
"The meaning of a caption is sensitive to the replacement of essential components like subject, predicate, object, while some replacement (like a the ) are not.",
"The problem is: in matching metric, we only focus on the match and mismatch while ignoring the importance of each word in the sentence.",
"It is hard to provide optimal importance with each word and pick the important ones; in contrast, the removal of unimportant words is more comfortable to achieve.",
"In this work, our metric removes all the stop words and computes an another greedy cosine score as an additional score without idf weight, R rm : R rm = (cid:80) r i r (cid:48) S ( r i , c (cid:48) , ) | r (cid:48) | (5) where r (cid:48) and c (cid:48) are embeddings without stop words and | r (cid:48) | means the length of sentence r (cid:48) .",
"Although taking idf weight into consideration is convenient, using the stop word removal additionally is still necessary.",
"The definition of idf points out that idf is an indicator of frequency, while frequency does not equate to importance.",
"Take COCO caption corpus as an example: all the idf weights of common subjects are low such as man , dog , girl , etc; while those of playfully , sleepy are high.",
"However, there is no doubt that mismatches occur in these common subjects will change the meaning dramatically.",
"In this section, we discussed the mismatches between references, under-penalization of greedy pick, and the importance of words.",
"Moreover, we showed our idea about captions combination, greedy recall similarity with threshold indicator, and stop word removal.",
"Including all of formulas above, the final expression of our metric is the product of R comb and R rm : Score = R comb R rm (6) Type Metric M1 M2 Task-agnostic ROUGE-L 0.062 (0.846) 0.215 (0.503) BLEU-1 0.029 (0.927) 0.165 (0.607) BLEU-4 0.236 (0.459) 0.380 (0.222) CIDEr 0.440 (0.151) 0.539 (0.071) METEOR 0.703 (0.011) 0.701 (0.011) BS (BERT-base) 0.807 (0.001) 0.735 (0.006) BS (RoBERTa-large) 0.873 (0.000) 0.841 (0.000) Ours (BERT) 0.875 (0.000) 0.797 (0.002) Ours (RoBERTa) 0.932 (0.000) 0.869 (0.000) Task-specific SPICE 0.715 (0.009) 0.688 (0.013) LEIC 0.939 * (0.000) 0.949 * (0.000) Table 1: Pearson correlation of system level metrics scores with human judgment in 2015 COCO Captioning Challenge.",
"The most convincing way for metric evaluation is the human correlation in caption-level and system-level tasks.",
"In this section, we evaluate our metric in four typical image captioning evaluation datasets with standard metrics.",
"We also consider the impact of each part in our metric by ablation experiment and key part replacements.",
"Microsoft COCO 2014 COCO dataset contains 123,293 images with 82,783 images in training set, 40,504 images in the validation set and 40,775 images in the test set.",
"Each image has five human-annotated captions as ground truth captions.",
"In 2015 COCO Captioning Challenge (Chen et al., 2015), submissions of the challenge are evaluated by human judgments with five kinds of metrics: M1, percentage of captions that are evaluated as better or equal to human caption; M2, percentage of captions that pass the Turing Test; M3, average correctness of the captions on a scale 1-5 (incorrect correct); M4, the average amount of detail of the captions on a scale 1-5 (lack of details very de-tailed); M5, percentage of captions that are similar to human description.",
"Flickr 8K Flickr 8K dataset contains 8,092 images with five human-generated captions for each image.",
"Flickr 8K provides an annotation called Expert Annotation, and each row contains one image, one candidate caption from Flickr 8K dataset (it may matches this image or not), and three expert scores for the image-caption pair.",
"Scores range from 1: indicating that the caption does not describe the image at all to 4: indicating that the caption describes the image.",
"COMPOSITE The COMPOSITE dataset contains 11985 human judgments from Flickr 8K, Flickr 30K, and COCO captions re-coined.",
"Candidate captions come from human and two caption models scoring by Amazon Mechanical Turk (AMT) workers.",
"All the captions score a 5-point scale from 1 (The description has no relevance to the image) to 5 (The description relates perfectly to the image).",
"PASCAL-50S PASCAL-50S dataset has 1000 images from UIUC PASCAL Sentence Dataset, and each image has 50 reference captions annotated by AMT worker.",
"PASCAL-50S includes over 4000 candidate captions pair with human judgments.",
"Different from COCO and Flickr format, PASCAL-50S consists of the triplet: (cid:104) A, B, C (cid:105) .",
"A is the reference sentence from an image, and B , C are two candidate sentences.",
"AMT workers are asked Which of the two sentences, B or C , is more similar to A ?",
".",
"This kind of question is more accessible for workers to judge than provide correct scores.",
"Candidate sentences come from human-written, or model generated, and four kinds of paired ways: human-correct (HC), human-incorrect (HI), human-model (HM), and model-model (MM).",
"For comparison, we use common standard metrics in our scoring tasks, such as BLEU-1, ROUGE-L, METEOR, CIDEr, and SPICE.",
"All these metrics are implemented in MS COCO evaluation tool.",
"2 We also use the original BERTscore to check the improvement of our metrics.",
"To be more convincing, we compare with the current SOTA training-based approach LEIC in COCO captioning 2015 and Flickr 8K.",
"Two metrics are implemented as baselines: (1) unigram overlap matching metric and (2) references concatenation metric with BERT.",
"Unigram overlap matching metric is implemented for verifying the importance of contextual embedding from the pretrained language model.",
"References concatenation metric with BERT is implemented for verifying the importance of references combination.",
"Unigram overlap matching metric .",
"In our unigram overlap matching metric, we remove contextual embedding from the pre-trained language model and only use unigram overlap matching.",
"Different from continuous value methods like BERTScore, it is easy for overlap matching to distinguish the match and mismatch (1 or 0).",
"In com-2 https://github.com/tylin/coco-caption Flickr 8K COMPOSITE BLEU-1 0.318 0.282 BLEU-4 0.140 0.199 ROUGE-L 0.323 0.313 BS (RoBERTa) 0.367 0.392 BS (BERT) 0.393 0.399 METEOR 0.436 0.381 CIDEr 0.447 0.387 SPICE 0.458 0.418 LEIC 0.466* Ours (RoBERTa) 0.451 0.449 Ours (Unigram) 0.471 0.420 Ours (BERT) 0.481 0.423 Inter-human 0.736* Table 3: In caption-level experiments, we compute the Kendall correlation between human judgments and scores of metrics.",
"To reduce the impact of unimportant words, we remove stop words from the combined caption directly.",
"References concatenation .",
"We also regard the concatenation of references as another baseline comparing with our combination method.",
"The concatenation of references combines all the information from references as well.",
"The difference between concatenation and our combination is the duplicate tokens in majority references.",
"In this metric, we follow all the steps of our metric with BERT, except the combination.",
"In system-level evaluation, we use twelve teams of human judgment results from COCO 2015 Captioning Challenge.",
"We use data from Karpathy splits (Karpathy and Fei-Fei, 2015), which contains 113,287 train images, 5000 test images, and 5000 validation images.",
"Each image has 5 references human captions.",
"Following prior works (An-derson et al., 2016; Cui et al., 2018), we compute the Pearson correlation with human judgment.",
"In the pre-trained model selection for BERTScore, we choose BERT (base), which is the most common model in the set of transformer language models, and RoBERTa (large), which is an optimized ver-HC HI HM MM All BLEU-1 53.1 94.7 90.9 56.9 73.9 BLEU-4 53.3 92.8 85.2 60.5 73.0 ROUGE-L 55.6 95.1 93.3 57.7 75.4 METEOR 61.4 97.2 94.9 64.5 79.5 CIDEr 55.0 98.0 91.0 64.6 77.2 SPICE 57.7 96.1 88.3 65.3 76.9 Ours (RBT) 62.5 97.7 95.0 59.4 78.7 BS (BERT) 64.4 97.9 96.6 59.0 79.5 Ours (BERT) 65.4 98.1 96.4 60.3 80.1 Table 4: In PASCAL-50S, candidate sentences come from human written or model generated.",
"The experimental results in Table 1 show that our metrics with both BERT and with RoBERTa perform better than BERTScore and other standard metrics.",
"What is more, our metric with RoBERTa can reach a high correlation of 0.932 with human judgment, which is even close to the training-based task-specific metric LEIC with image features.",
"To check the influence of each part, we provide both ablation study and replacement study in 2015 COCO Captioning dataset.",
"The results are showed in Table 2.",
"In ablation study, we use our metric with BERT and remove remove , combine and cut one by one.",
"The result shows that each part of our metric is useful, and combine is the most influential part.",
"In the replacement study, we compare our metric with the unigram metric and concatenation metric to check the influence of contextual embedding and combination.",
"The comparison between Ours (Unigram) and Ours (BERT+TBR) shows that contextual embedding is better than unigram matching in the system-level correlation task.",
"The comparison between Ours (BERT+T+cat+R) and Ours (BERT+TBR) shows that the combination process is better than concatenation directly.",
"Furthermore, we also show the comparison between concatenation and average in some standard metrics.",
"In caption-level evaluation tasks, we compute Kendall's correlation (Kendall, 1938) between metrics results and expert judgments.",
"In Flickr 8K, we use Expert Annotation with 5822 samples, including 158 correct image-caption pairs where the candidate caption equals one of captions in references set.",
"Following the prior work (Anderson et al., 2016), we use 5664 samples and exclude those correct image-caption pairs.",
"In COMPOSITE, captions are estimated by two kinds of standards: correctness and throughness, and we only focus on correctnesss in this work.",
"The experimental results in Table 3 show that our metric is quite suitable for caption-level evaluation in image captioning.",
"Our metric outperforms other metrics (including training-based metric LEIC in Flickr 8K).",
"Another interesting fact is that the unigram metric also has high performance in caption-level correlation tasks.",
"In COMPOSITE, our unigram metric is comparable to our metric with BERT.",
"In PASCAL-50S, we use five references for metrics computation, which is comparable with previous experiments.",
"The results in Table 4 show that in four kinds of caption pairs, our metric performs better than others in human-correct (HC), human-incorrect (HI), human-model (HM) classification.",
"In table 5, We evaluate some current state-of-the-art image captioning models reported from Codalab competition: Meshed-Memory-Transformer (Cor-nia et al., 2020), AoAnet (Huang et al., 2019).",
"3 Some of models in 2015 COCO Captioning Chal-3 https://competitions.codalab.org/competitions/3221 lenge are listed for comparison: (1) Show, Attend and Tell (Xu et al., 2015); (2) CNN+LSTM (Vinyals et al., 2015); (3) NeuralTalk (Karpathy and Fei-Fei, 2015).",
"The result shows that: on our metric, current models perform better than previous models.",
"It is worth noting that different judgments exist between AoAnet and M2-Transformer on our metric and CIDEr-D.",
"According to our observation, several captions (1558/5000) generated by M2-Transformer are incomplete, like a bedroom with a bed and a tv in a or a wooden door with a skateboard on a .",
"It may explain why M2-Transformer is a little worse than AoAnet on our metric.",
"In this work, we study the intrinsic variance among ground truth captions in image captioning evaluation.",
"We propose an improved matching metrics based on BERTScore, which can combine all of the references for taking full advantage of multi-references.",
"Our metric also benefits from stop word removal by reducing the impact of stop words.",
"The experimental results show that our metric can reach state-of-the-art human correlation in several evaluation tasks."
] |
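Since the thresholded matching described in the sentences above is compact, it can be sketched directly. The following is a minimal NumPy sketch of eqs. (1)-(3), not the authors' implementation: it assumes token embeddings are precomputed (e.g., extracted from BERT) and row-wise L2-normalized, and all function names are illustrative.

```python
import numpy as np

def cut(x, tau):
    # Eq. (1): keep cosine values above the threshold tau, zero out the rest.
    return np.where(x > tau, x, 0.0)

def greedy_recall(ref_emb, cand_emb, tau, idf=None):
    """Thresholded greedy recall, eqs. (2)-(3).

    ref_emb:  (n, d) reference token embeddings, rows L2-normalized.
    cand_emb: (m, d) candidate token embeddings, rows L2-normalized.
    idf:      optional (n,) idf weights; uniform if omitted.
    """
    sims = ref_emb @ cand_emb.T               # (n, m) cosine similarity matrix
    matched = cut(sims.max(axis=1), tau)      # greedy pick per reference token
    if idf is None:
        idf = np.ones(ref_emb.shape[0])
    denom = (idf * np.sign(matched)).sum()    # sgn keeps only matched tokens
    return float((idf * matched).sum() / max(denom, 1e-12))
```

Because sgn drops unmatched tokens from the denominator as well as the numerator, a reference token whose best cosine falls below tau neither earns nor dilutes score; this is how the cut removes the spurious values that plain greedy pick would assign.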
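The reference combination and the final score of eq. (6) can then be layered on top of greedy_recall, continuing the sketch above. This is again hedged: the text leaves a few details open (for instance, which caption R_rm is computed on), so those choices are marked as assumptions in the comments.

```python
def combine_references(ref_embs, ref_idfs, tau):
    # Steps (1)-(5): grow a base reference by appending, from each further
    # reference, only the tokens that mismatch the current base.
    base_emb, base_idf = ref_embs[0], ref_idfs[0]
    for emb, idf in zip(ref_embs[1:], ref_idfs[1:]):
        mismatch = cut((emb @ base_emb.T).max(axis=1), tau) == 0.0
        if mismatch.any():
            base_emb = np.vstack([base_emb, emb[mismatch]])
            base_idf = np.concatenate([base_idf, idf[mismatch]])
    return base_emb, base_idf

def final_score(ref_embs, ref_idfs, ref_stop, cand_emb, cand_stop, tau=0.4):
    # R_comb: idf-weighted thresholded recall against the combined reference.
    comb_emb, comb_idf = combine_references(ref_embs, ref_idfs, tau)
    r_comb = greedy_recall(comb_emb, cand_emb, tau, comb_idf)
    # R_rm, eq. (5): recall without idf after stop word removal; computing it
    # on the first reference (boolean masks mark stop words) is an assumption.
    r_rm = greedy_recall(ref_embs[0][~ref_stop[0]], cand_emb[~cand_stop], tau)
    return r_comb * r_rm                      # eq. (6): Score = R_comb * R_rm
```

The default tau = 0.4 mirrors the recommended BERT (base) setting quoted in the text; for RoBERTa (large) one would pass 0.83 instead.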
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"result",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"result",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"result"
] |
[
"Active Learning (AL) has been successfully applied to Deep Learning in order to drastically reduce the amount of data required to achieve high performance.",
"Previous works have shown that lightweight architectures for Named Entity Recognition (NER) can achieve optimal performance with only 25% of the original training data.",
"However, these methods do not exploit the sequential nature of language and the heterogeneity of uncertainty within each instance, requiring the labelling of whole sentences.",
"Additionally, this standard method requires that the annotator has access to the full sentence when labelling.",
"In this work, we overcome these limitations by allowing the AL algorithm to query subsequences within sentences, and propagate their labels to other sentences.",
"We achieve highly efficient results on OntoNotes 5.0, only requiring 13% of the original training data, and CoNLL 2003, requiring only 27%.",
"This is an improvement of 39% and 37% compared to querying full sentences.",
"The availability of large datasets has been key to the success of deep learning in Natural Language Processing (NLP).",
"This has galvanized the creation of larger datasets in order to train larger deep learning models.",
"However, creating high quality datasets is expensive due to the sparsity of natural language, our inability to label it efficiently compared to other forms of data, and the amount of prior knowledge required to solve certain annotation tasks.",
"Such a problem has motivated the development of new Active Learning (AL) strategies which aim to efficiently train models, by automatically identifying the best training examples from large amounts of Code is made available on: https://github.com/ puria-radmard/RFL-SBDALNER unlabeled data (Wei et al., 2015; Wang et al., 2017; Tong and Koller, 2002).",
"This tremendously reduces human annotation effort as much fewer instances need to be labeled manually.",
"To minimise the amount of data needed to train a model, AL algorithms iterate between training a model, and querying information rich instances to human annotators from a pool of unlabelled data (Huang et al., 2014).",
"This has been shown to work well when the queries are atomic'a single annotation requires a unit labour, and describes entirely the instance to be annotated.",
"Conversely, each instance of structured data, such as sequences, require multiple annotations.",
"Hence, such query selection methods can result in a waste of annotation budget (Settles, 2011).",
"For example, in Named Entity Recognition (NER), each sentence is usually considered an instance.",
"However, because each token has a separate label, annotation budgeting is typically done on a token basis (Shen et al., 2017).",
"Budget wasting may therefore arise from the heterogeneity of uncertainty across each sentence; a sentence can contain multiple subsequences (of tokens) of which the model is certain on some and uncertain on others.",
"By making the selection at a sentence level, although some budget is spent on annotating uncertain subsequences, the remaining budget may be wasted on annotating subsequences for which an annotation is not needed.",
"It can therefore be desirable for annotators to label subsequences rather than the full sentences.",
"This gives a greater flexibility to AL strategies to locate information rich parts of the input with improved efficiency and reduces the cognitive demands required of annotators.",
"Annotators may in fact perform better if they are asked to annotate shorter sequences, because longer sentences can cause boredom, fatigue, and inaccuracies (Rzeszo-tarski et al., 2013).",
"In this work , we aim to improve upon the efficiency of AL for NER by querying for subsequences within each sentence, and propagating labels to unseen, identical subsequences in the dataset.",
"This strategy simulates a setup in which annotators are presented with these subsequences, and do not have access to the full context, ensuring that their focus is centred on the tokens of interest.",
"We show that AL algorithms for NER tasks that use subsequences, allowing training on partially labelled sentences, are more efficient in terms of budget than those that only query full sentences.",
"This improvement is furthered by generalising existing acquisition functions ( 4.1) for use with sequential data.",
"We test our approaches on two NER datasets, OntoNotes 5.0 and CoNLL 2003.",
"On OntoNotes 5.0, Shen et al. (2017) achieve state-of-the-art performance with 25% of the original dataset querying full sentences, while we require only 13% of the dataset querying subsequences.",
"On CoNLL 2003, we show that the AL strategy of Shen et al. (2017) requires 50% of the dataset to achieve the same results as training on the full dataset, while ours requires only 27%.",
"Contributions of this paper are: 1. Improving the efficiency of AL for NER by allowing querying of subsequences over full sentences; 2. An entity based analysis demonstrating that subsequence querying AL strategies tend to query more relevant tokens (i.e., tokens belonging to entities); 3. An uncertainty analysis of the queries made by both full sentence and subsequence querying methods, demonstrating that querying full sentences leads to selecting more tokens to which the model is already certain.",
"AL algorithms aim to query information rich data points to annotators in order to improve the performance of the model in a data efficient way.",
"Traditionally these algorithms choose data points which lie close to decision boundaries (Pinsler et al., 2019), where uncertainty is high, in order for the model to learn more useful information.",
"This measure of uncertainty, measured through acquisition functions, are therefore vital to AL.",
"Key functions include predictive entropy (MaxEnt) (Gal et al., 2017), mutual information between model posterior and predictions (BALD) (Houlsby et al., 2011; Gal et al., 2017), or the certainty of the model when making label predictions (here called LC) (Mingkun Li and Sethi, 2006).",
"These techniques ensure all instances used for training, painstakingly labelled by experts, have maximum impact on model performance.",
"There has been exploration of uncertainty and deep learning based AL for NER (Chen et al., 2015; Shen et al., 2017; Settles and Craven, 2008; Fang et al., 2017).",
"These approaches however, treat each sentence as a single query instead of a collection of individually labelled tokens.",
"In these methods, the acquisition functions that score sentences aggregate token-wise scores (through summation or averaging).",
"Other works forgo this aggregation, querying single tokens at a time (Tomanek and Hahn, 2009; Wanvarie et al., 2011; Marcheggiani and Arti`eres, 2014).",
"These works show that AL for NER can be improved by taking the single token as a unit query, and use semi-supervision (Reddy et al., 2018; Is-cen et al., 2019) for training on partially labelled sentences (Muslea et al., 2002).",
"However, querying single-tokens is inapplicable in practise because, either",
"a) annotators have access to the full sentence when queried but can only label one token, which would lead to frustration as they are asked to read the full sentence but only annotate a single token, or",
"b) annotators only have access to the token of interest, which means that they would not have enough information to label tokens differently based on their context, leading to annotators labeling any unique token with the same label.",
"Moreover, if the latter approach was somehow possible, we would be able to reduce the annotation effort to the annotation of only the unique tokens forming the dataset, its dictionary.",
"Furthermore, all of these past works use Conditional Random Fields (CRFs) (Lafferty et al., 2001), which have since been surpassed as the state-of-the-art for NER (and most NLP tasks) by deep learning models (Devlin et al., 2019).",
"In this work we follow the approach where annotators only have access to subsequences of multiple tokens.",
"However, instead of making use of single tokens, we will query more than one token, providing enough context to the annotators.",
"This allows the propagation of these annotations to identical subsequences in the dataset, further reducing the total annotation effort.",
"Most AL strategies are based on a repeating score, query and fine-tune cycle.",
"After initially training an NER model with a small pool of labelled examples, the following is repeated: (1) score all unlabelled instances, (2) query the highest scoring instances and add them to training set, and, (3) fine-tune the model using the updated training set (Huang et al., 2014).",
"To describe this further, notation and proposed training process is introduced, with details in following sections.",
"First, the sequence tagging dataset, denoted by D = { ( x ( n ) , y ( n ) ) } Nn =1 , consists of a collection of sentence and ground truth labels.",
"The i -th token of the n -th sentence ( y ( n ) i ) has a label y ( n ) i = c with c belonging to C = { c 1 , ..., c K } .",
"We also differentiate between the labelled and unlabelled datasets, DL and DU , which initially are empty and equal to D .",
"Finally, we fix A as the total number of tokens queried in each iteration.",
"Instances in the unlabelled pool are queried using an acquisition function.",
"This function aims to quantify the uncertainty of the model when generating predictive probabilities over possible labels for each instance.",
"Instances with the highest predictive uncertainty are deemed as the most informative for model training.",
"Previously used acquisition functions such as Least Confidence (LC) and Maximum Normalized Log-Probability (MNLP) (Shen et al., 2017; Chen et al., 2015) are generalised for variable length sequences.",
"Letting y ( n ) <i be the history of predictions prior to the i -th input, the next output probability will be p ( n ) i,c = P ( y ( n ) i = c | y ( n ) <i , x ( n ) ) .",
"Then, we define the token-wise LC score as: LC ( n ) i = max c C log p ( n ) i,c .",
"Note that this is similar to LC except for the normalization factor 1 /(cid:96) .",
"The formulation above can be applied to other types of commonly used acquisition functions such as Maximum Entropy (MaxEnt) (Gal et al., 2017) by simply defining: ME ( n ) i = (cid:88) c C p ( n ) i,c log p ( n ) i,c , (4) as the token score.",
"Given the task of quantifying uncertainty amongst the unlabelled pool of data, both of these metrics LC and MaxEnt provide intuitive interpretations.",
"eq.",
"(1) scores highly tokens for which the predicted label has lowest confidence, while eq.",
"(4) scores highly tokens for which the whole probability mass function has higher entropy.",
"Both of these therefore score more highly uniform predictive distributions, which indicates underlying uncertainty.",
"Finally, given the similarity of performance between MNLP and Bayesian Active Learning by Disagreement (BALD) (Houlsby et al., 2011) in NER tasks (Shen et al., 2017), and the computational complexity required to calculate BALD with respect to the other activation functions, we will not compare against BALD.",
"In this section we describe how we build on past works, and the core contribution of this paper.",
"Our work forms a more flexible AL algorithm that operates on subsequences, as opposed to full sentences (Shen et al., 2017).",
"This is achieved by generalising acquisition functions for subsequences ( 4.1) scoring and querying subsequences within sentences ( 4.2), and performing label propagation on unseen sentences to avoid the multiple annotations of repeated subsequences ( 4.3).",
"Since this work focuses on the querying of subsequences, from the previously defined LC and MNLP we generalize them to define a family of acquisition functions applicable for both full sentences and subsequences:",
"Special cases are when = 0 and = 1 which return the original definitions of LC in eq.",
"(2) and MNLP in eq.",
"(3).",
"As noted by Shen et al. (2017), 4313 LC for sequences biases acquisition towards longer sentences.",
"The tuneable normalisation factor in eq.",
"(5) over the sequence of scores mediates the bal-ance of shorter and longer subsequences selected.",
"This generalisation can be applied to other types of commonly used acquisition functions such as MaxEnt and BALD by modifying the token-wise score.",
"Each sentence x ( n ) can be broken into a set of subsequences S ( n ) = { ( x ( n ) i , ..., x ( n ) j ) | i < j } where all elements s S ( n ) can be efficiently scored by first computing the token scores, then aggregating as required.",
"Once this has been done for all sentences in DU , a query set SQ n S ( n ) of non-overlapping (mutually disjoint) subsequences is found.",
"The requirement of non-overlapping subsequences avoids the problem of relabelling tokens, but disallows simply choosing the highest scoring subsequences (since these can overlap).",
"Instead at each round of querying, we perform a greedy selection, repeatedly choosing the highest scoring subsequence that does not overlap with previously selected subsequences.",
"Adjustments can be made to reflect practical needs, such as restricting the length (cid:96) of the viable subsequences to [ (cid:96) min , (cid:96) max ] .",
"This is because longer subsequences are easier to label, while shorter subsequences are more efficient in querying uncertain tokens, and so the selection is only allowed to operate within these bounds.",
"Additionally, it is easy to imagine a scenario in which a greedy selection method does not select the maximum total score that can be generated from a sentence.",
"This scenario is illustrated in Table 1 where lengths are restricted to (cid:96) min = (cid:96) max = 3 for simplicity.",
"Note that tokens can become unse-lectable in future rounds because they are not inside a span of unlabelled tokens of at least size (cid:96) min .",
"When the algorithm has queried all subsequences of this size range, it starts to query shorter subsequences by relaxing the length constraint.",
"However in practise, model performance on the validation set converges before all subsequences of valid range have been exhausted.",
"Nonetheless, when choosing subsequences of size [ (cid:96) min , (cid:96) max ] = [4 , 7] these will be exhausted when roughly 90% and 80% of tokens have been labelled for the OntoNotes 5.0 and CoNLL 2003 datasets.",
"Since a subsequence querying algorithm can result in partially labelled sentences, it raises the question of how unlabelled tokens should be handled.",
"In previous work based on the use of CRFs (Tomanek and Hahn, 2009; Wanvarie et al., 2011; Marcheggiani and Arti`eres, 2014) this was solved by using semi-supervision on tokens for which the model showed low uncertainty.",
"However, for neural networks, the use of model generated labels could lead to the model becoming over-confident, harming performance and biasing (Arazo et al., 2020) uncertainty scores.",
"Hence, we ensure that backpropagation only occurs from labelled tokens.",
"Our final contribution to the AL algorithm is the use of another semi-supervision strategy where we propagate uniquely labelled subsequences in order to minimise the number of annotations needed.",
"When queried for a subsequence, the annotator (in this case an oracle) is not given the contextual tokens in the remainder of the sentence.",
"For this reason, given an identical subsequence, a consistent annotator will provide the same labels.",
"Therefore, the proposed algorithm maintains a dictionary that maps previously queried subsequences to their provided labels.",
"Once a queried subsequence and its label are added to the dictionary, all other matching subsequences in the unlabelled pool are given the same, but temporary, labels.",
"The tokens retain these temporary labels until they are queried themselves.",
"After scoring and ranking members of S , the algorithm will disregard sequences that match exactly members of this dictionary, which is updated during the querying round.",
"However, if tokens belonging to these previously seen subsequences are encountered in a different context, meaning as part of a different subsequence, they may also be queried.",
"For example, in Table 1, if the subsequence shop to buy had been previously queried elsewhere in the dataset, the red subsequence will not be considered for querying, as it retains its temporary labels.",
"Instead, the green subsequence could be queried, in which case the temporary labels of tokens 6 and 7 will be overwritten by new, permanent labels.",
"Therefore, the value of (cid:96) min becomes a trade-off between the improved resolution of the acquisition function, and the erroneous propagation of shorter, more frequent label subsequences to identical ones in different contexts.",
"Finally, we summarise the AL algorithm proposed.",
"Given a set of unlabelled data DU , we initially randomly select a proportion of sentences from DU , label them, and add these to DL .",
"A dictionary B is also initialised.",
"Using these labelled sentences we train a model.",
"Then, the following proposed training cycle is repeated until DU is empty (or an early stopping condition is reached): 1. Find all consecutive unlabelled subsequences in DU , and score them using a pre-defined acquisition function.",
"2. Select the top scoring non-overlapping subsequences SQ that do not appear in B , such that the number of tokens in SQ is A , and query them to the annotators.",
"Update DL and DU .",
"As each sequence is selected, add it to B , mapping it to its true labels.",
"3. Provide all occurrences of the keys of B in DU with their corresponding temporary labels.",
"These will not be included in DL as these are temporary.",
"4. Finetune the model on sentences with any label, temporary and permanent.",
"Repeat this process until convergence.",
"OntoNotes 5.0.",
"This is a dataset used to compare results with the full sentence querying baseline (Weischedel, Ralph et al., 2013), and comprising of text coming from: news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, and talk shows.",
"This is a BIO formatted dataset with a total of K = 37 classes and 99,333 training sentences, with an average sentence length of 17.8 tokens in its training set.",
"CoNLL 2003.",
"This is a dataset, also in BIO format, with only 4 entity types (LOC, MISC, PER, ORG) resulting in K = 9 labels (Tjong Kim Sang and De Meulder, 2003).",
"This dataset is made from a collection of news wire articles from the Reuters Corpus (Lewis et al., 2004).",
"The average sentence length is 12.6 tokens in its training set.",
"A full list of class types and entity lengths and frequencies for both datasets can be found in the Appendix.",
"Following the work of Shen et al. (2017), a CNN-CNN-LSTM model for combined letterand token-level embeddings was used; see Appendix for an overview of the model and hyperparameters setting and validation.",
"Furthermore, the AL algorithm used in (Shen et al., 2017) will serve as one of the baselines following the same procedure.",
"This represents an equivalent algorithm to that proposed, but which can only query full sentences, and does not use label propagation.",
"As the evaluation measure we use the F 1 score.",
"After the first round of random subsequence selection, the model is trained.",
"After subsequent selections the model is finetuned training is resumed from the previous round's parameters.",
"In all cases, the model training was stopped either after 30 epochs were completed, or if the F 1 score for the valida-4315 0 5 10 15 20 25 30 Percentage of tokens manually labelled 0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 F 1 FS, =1 FS, =0 FS, =0.7 SUB, =0.1 FS, random SUB, random No AL",
"(b) LC for CoNLL 2003 NER dataset Figure 1: F 1 score on test set achieved each round using round-optimal model parameters.",
"All subsequence experiments here use (cid:96) min = 4 , (cid:96) max = 7 .",
"Each curve is averaged over 10 runs.",
"tion set had monotonically decreased for 2 epochs.",
"This validation set is made up of a randomly selected 1% of sentences of the original training set.",
"After finetuning, the model reloads its parameters from the round-optimal epoch, and its performance is evaluated on the test set.",
"Furthermore, the AL algorithms were also stopped after all hyperparameter variations using that dataset and acquisition function family had converged to the same best F 1 , which we denote with F 1 .",
"For the OntoNotes 5.0 dataset, F 1 value was achieved after 30% of the training set was labelled, and for the CoNLL 2003 dataset after 40%.",
"We choose (cid:96) min = 4 to give a realistic context to the annotator, and to avoid a significant propagation of common subsequences.",
"The upper bound of (cid:96) max = 7 was chosen to ensure subsequences were properly utilised, since the average sentence length of both datasets is roughly twice this size.",
"For the OntoNotes 5.0 dataset, every round A = 10 , 000 tokens are queried, whereas for the CoNLL 2003 dataset A = 2 , 000 tokens.",
"These represent roughly 0.5% and 1% of the available training set.",
"We evaluate the efficacy and efficiency of the tested AL strategies in three ways.",
"First, model performance over the course of the algorithm was evaluated using end of round F 1 score on the test set.",
"We compare the proportion of the dataset's tokens labelled when the model achieves 99% of the F 1 score ( F 1 = 0 . 99 F 1 ).",
"We also quantify the rate of improvement of model performance during training using the normalised Area Under the Curve (AUC) score of each F 1 test curve.",
"The normalisation ensures that the resulting AUC score is in the range [0 , 1] , and it is achieved by dividing the AUC score by the size of the dataset.",
"This implies that methods that converge faster to their best performance will have a higher normalized AUC.",
"Second, we consider how quickly the algorithms can locate and query relevant tokens (named entities).",
"Third, we finally evaluate their ability to extract the most uncertain tokens from the unlabelled pool.",
"Figure 1 shows the LC performance curves for = 0 , = 1 and the best performing value for each acquisition class (based on the normalised AUC score, Table 3) for full sentence querying (FS), and only the best performing values for subsequence querying (SUB).",
"The figure also shows the performance of training on the complete training set (No AL), and when the both sentences and subsequences are random selected by the acquisition function.",
"The equivalent figures for MaxEnt are available in Appendix, and follow similar trends.",
"Then, the performance of each curve, quantified in terms of the normalised AUC is summarised in Table 3. Table 2 shows further analysis of the best results in Figure 1, with best referring to acquisition function and optimal .",
"These results first show that subsequence querying methods are more efficient than querying full sentences, achieving their final F 1 with substantially less annotated data, and with higher normalised AUC scores.",
"For OntoNotes 5.0, querying subsequences reduces final proportion required by 38.8%.",
"For CoNLL 2003, this reduction is 36.6%.",
"Altogether, subsequence querying holds improved efficiency over the full sentence querying baseline.",
"As a point of interest, full sentence querying can be easily improved by optimising alone.",
"For the OntoNotes 5.0 dataset, using LC 1 , 24.2% of tokens are required to achieve F 1 .",
"This however, can be improved by 9.33% to only requiring 22.0% by choosing = 0 .",
"7 .",
"For CoNLL 2003, using LC 1 for full sentences, 50.0% of the dataset was required, but when using LC 0 .",
"7 , it was 40.7% of the tokens.",
"This section and the next aim to understand some of the underlying mechanisms that allow the subsequence querying methods to achieve results substantially better than a full sentence baseline.",
"Namely, the ability of the different methods to extract the tokens for which the model is the most uncertain about.",
"Given that the majority of tokens in both datasets have the same label O, signifying no entity it is likely that tokens belonging to entities, particularly rarer classes, trigger higher model uncertainty.",
"Querying full sentences at a time, the AL algorithm will spend much of its token budget for that round labelling non-entity tokens while attempting to locate the more informative entities.",
"Subsequence querying methods, not faced with this wasteful behaviour, allow the AL algorithm to query entity tokens quicker, locating and labelling the majority of entity tokens faster over the course of training.",
"The proportion of tokens belonging to entities that the AL algorithm has queried against the round number is plotted in Figure 2 for OntoNotes 5.0.",
"For both datasets, the random querying methods contain a distribution of token classes that reflect the dataset at large, producing roughly linear curves for this figure.",
"Curves for all methods that employ Figure 2: Proportion of tokens that belong to entities labelled, against the round number.",
"an uncertainty based acquisition function are concave, and the AUC reflects the ranking of model performance for each querying method.",
"This relation suggests that shortly after initialisation, better performing algorithm variations query entity tokens faster.",
"In later stages of finetuning this rate is reduced, likely because after labelling a large proportion of them, the remaining entity tokens cause little uncertainty for the model.",
"In a practical setting where querying may have to be stopped before model performance has converged (i.e. due to accumulated cost of annotations), it is greatly ben-eficial to ensure that the model is exposed to a high number of relevant tokens, because this increases the likelihood of locating entity tokens belonging to underrepresented classes at an early stage.",
"Finally, this section compares the scores of tokens in the queried set SQ for each querying method.",
"Comparing the distribution and development of these scores provides a direct insight to the core assumptions of why full sentence querying is outperformed.",
"Figure 3 shows the difference in score distributions for sentence versus subsequence querying, against querying round number, for rounds preceding model performance convergence.",
"First, it is seen that decreasing the individual query size (full sentence to subsequence) increases the median uncertainty extracted at the earlier rounds.",
"Second, Figure 3 provides evidence for the mechanism suggested earlier: aggregating the token scores across full sentences means querying both the highly uncertain tokens, and the tokens that provide little uncertainty.",
"Querying high scoring sentences like this can cause a distribution with two peaks as seen in 4317 Dataset AcquisitionFunction Full Sentence Subsequence = 0 = 1 Optimal ( ) = 0 = 1 Optimal ( ) OntoNotes5.0 LC 0.794 0.802 0.804 (0.7) 0.817 0.812 0.818 (0.1) MaxEnt 0.791 0.803 0.803 (1.0) 0.815 0.813 0.816 (0.5) Random 0.734 0.769 CoNLL2003 LC 0.857 0.875 0.879 (0.7) 0.885 0.883 0.892 (1.0) MaxEnt 0.841 0.882 0.882 (1.0) 0.881 0.883 0.891 (0.9) Random 0.824 0.859 Table 3: Normalised AUC scores for model performance (F1 score on test set) for = 0 , 1 , and its optimal value in each case.",
"the figure.",
"As the model becomes increasingly certain about its predictions, high scores are localised within smaller subsequences, and the coarse sensitivity of full sentence querying means it forfeits all the higher scoring tokens.",
"These differences were also observed when comparing subsequence querying methods with sub-optimal .",
"This figure only analyses behaviour of up to 9% of the training set's tokens have been queried.",
"Instead, Figure 4 show how the mean of token-wise scores evolve for different querying methods for the OntoNotes 5.0 dataset until convergence.",
"This clearly shows that subsequence querying methods converge faster over the full course of the algorithm compared to full sentence querying.",
"This is consistent with Figure 1 in terms of initial rate and final time of model performance convergence, namely that model performance plateaus alongside the uncertainty score.",
"training on a very semantically specific corpus, there may not be enough fully labelled sentences to build a test set.",
"In that case, observing the rate progress of score convergence can be used as an early stopping method for the AL algorithm (Zhu et al., 2010).",
"In this study we have employed subsequence querying methods for improving the efficiency of AL for NER tasks.",
"We have seen that these methods outperform full sentence querying in terms of annotations required for optimal model performance, requiring 38.8% and 36.6% fewer tokens for the OntoNotes 5.0 and CoNLL 2003 datasets.",
"Optimal results for subsequence querying (and full sentence querying) were achieved by generalising previously used AL acquisition functions, defining a larger family of acquisition functions for sequential data.",
"The analysis of 6.3 suggests that a full sentence querying causes noisy acquisition functions due to the tokens in the queried sentences that were not 4318 highly scored.",
"This added noise reduces the budget efficiency, and a subsequence querying method eliminates a large part of this effect.",
"This efficiency also translated into a faster recall of named entities in the dataset to be queried ( 6.2).",
"Limitations and future work: Limitations of this study are largely centred on the use of an oracle to provide tokens with their labels.",
"With human annotators, the cropped context of subsequence queries may make them produce more inaccuracies than when annotating full sentences.",
"such studies will help reveal how context affects label accuracy, how this, in turn, affects optimal hyperparameters in the subsequence selection process (such as optimal query length), further accommodations that must be made to effectively optimise worker efficiency, and how to deal with unreliable labels.",
"We leave to future work the evaluation of these querying methods with human annotators.",
"There are also ways to incorporate model generated labelling methods for more robust semi-supervision into our framework that we leave to future work.",
"Finally, there are examples of other tasks for structured data, such as audio, video, and image segmentation, where the part of an instance may be queried.",
"A generalisation of the strategy demonstrated for the NER case may allow for more efficient active learning querying methods for these other types of data."
] |
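To make the acquisition machinery concrete, here is a minimal NumPy sketch of the token-level scores and the length-normalized aggregation family reconstructed above. It is an illustration, not the authors' released code: probs is assumed to be an (l, K) array of per-token class probabilities from any NER model, and the sum / l^alpha form of eq. (5) is inferred from the stated special cases alpha = 0 (LC) and alpha = 1 (MNLP).

```python
import numpy as np

def lc_token_scores(probs):
    # Token-wise least confidence, eq. (1): negative log-probability
    # of the model's most likely label for each token.
    return -np.log(probs.max(axis=1))

def maxent_token_scores(probs):
    # Token-wise predictive entropy, eq. (4).
    return -(probs * np.log(probs)).sum(axis=1)

def subsequence_score(token_scores, alpha=0.7):
    # Eq. (5) as reconstructed: aggregate a (sub)sequence of length l by
    # sum / l**alpha. alpha = 0 gives summed LC (biased towards long spans),
    # alpha = 1 gives the MNLP-style per-token average.
    l = len(token_scores)
    return float(token_scores.sum() / (l ** alpha))
```

For example, subsequence_score(lc_token_scores(probs)[2:5], alpha=0.7) would score the span covering tokens 2-4 of a sentence.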
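The greedy non-overlapping selection and the label-propagation dictionary can be sketched in the same spirit. Everything here is a hypothetical skeleton rather than the paper's implementation: candidate spans are assumed to be pre-scored, the budget A counts tokens, and seen plays the role of the dictionary B, mapping a tuple of token strings to the labels an annotator provided.

```python
def select_queries(candidates, budget, seen):
    """Greedy selection of high-scoring, non-overlapping spans (Section 4.2).

    candidates: list of (score, sent_id, start, end, tokens) tuples,
                with tokens a tuple of token strings.
    seen:       dict mapping tokens -> labels from earlier queries (B).
    """
    chosen, used = [], set()
    for score, sid, start, end, toks in sorted(candidates, reverse=True):
        if toks in seen:                       # already labelled elsewhere:
            continue                           # propagate instead of querying
        span = {(sid, i) for i in range(start, end)}
        if span & used:                        # overlaps an earlier pick
            continue
        chosen.append((sid, start, end, toks))
        used |= span
        budget -= end - start
        if budget <= 0:
            break
    return chosen

def propagate_labels(seen, sentences):
    """Give every occurrence of a previously labelled subsequence its
    temporary labels (Section 4.3); direct queries later override these."""
    temp = {}
    for sid, toks in sentences.items():        # toks: tuple of token strings
        for key, labels in seen.items():
            L = len(key)
            for i in range(len(toks) - L + 1):
                if toks[i:i + L] == key:
                    for j, lab in enumerate(labels):
                        temp[(sid, i + j)] = lab
    return temp
```

Skipping candidates whose tokens are already in seen mirrors the paper's rule that exact matches of queried subsequences receive propagated labels and are disregarded during the querying round.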
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain"
] |
[
"State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model.",
"However, such modules are trained separately for each task and thus do not enable sharing information across tasks.",
"In this paper, we show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks, which condition on task, adapter position, and layer id in a transformer model.",
"This parameter-efficient multi-task learning framework allows us to achieve the best of both worlds by sharing knowledge across tasks via hypernetworks while enabling the model to adapt to each individual task through task-specific adapters.",
"Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0 .",
"29% parameters per task.",
"We additionally demonstrate substantial performance improvements in few-shot domain generalization across a variety of tasks.",
"Our code is publicly available in https://github.com/ rabeehk/hyperformer .",
"Transfer learning from pretrained large-scale language models yields state-of-the-art results in a variety of tasks (Devlin et al., 2019; Radford et al., 2018; Liu et al., 2019b).",
"As a highly expressive and abstract framework, Raffel et al. (2020) explored the landscape of transfer learning by converting text-based natural language processing (NLP) problems into a sequence-to-sequence format to train a unified model on several tasks simultaneously.",
"Multi-task learning with pretrained language models (Ruder, 2017) is appealing for multiple reasons: 1) Training individual models per task results in higher computational costs, which hinders deployment and maintenance.",
"These costs are substantially reduced by training a single Work done while the author was at Google.",
"model.",
"2) Fine-tuning the model across multiple tasks allows sharing information between the different tasks and positive transfer to other related tasks.",
"Specifically, when target datasets have limited training data, multi-task learning improves the performance compared to individually trained models (Liu et al., 2019a; Ratner et al., 2018).",
"However, multi-task fine-tuning can result in models underperforming on high-resource tasks due to constrained capacity (Ari-vazhagan et al., 2019; McCann et al., 2018).",
"An additional issue with multi-task fine-tuning is the potential for task interference or negative transfer , where achieving good performance on one task can hinder performance on another (Wang et al., 2019c).",
"As an alternative to fine-tuning (Howard and Ruder, 2018), adapter layers (Houlsby et al., 2019) insert a small number of additional parameters per task into the model.",
"During fine-tuning, only the adapter modules, layer normalizations, and parameters of the final classification layer are updated, while the original pretrained model parameters remain frozen.",
"Such task-specific adapters eliminate negative task interference by encapsulating task-specific information (Pfeiffer et al., 2020).",
"However, so far there has not been an effective and parameter-efficient way to share information across multiple adapters to enable positive transfer to low-resource and related tasks.",
"To address this problem and to enable sharing information across tasks while reaping the benefits of adapter layers, as depicted in Figure 1, we propose HYPERFORMER ++ , which employs a compact hypernetwork (Ha et al., 2017; Oswald et al., 2020) shared across tasks and layers.",
"The hypernetwork learns to generate task and layer-specific adapter parameters, conditioned on task and layer id embeddings.",
"The hypernetwork is jointly learned between all tasks and is thus able to share information across them, while negative interference is minimized by generating separate adapter layers for each task.",
"For each new task, our model only requires learning an additional task embedding, reducing the number of trained parameters.",
"We use the encoder-decoder T5 model (Raffel et al., 2020) as the underlying model for our experiments and evaluate on the standard GLUE benchmark (Wang et al., 2019b).",
"We achieve strong gains over both the T5 BASE model as well as adapters (Houlsby et al., 2019).",
"To our knowledge, this is the first time that adapters have been successfully integrated into a state-of-the-art encoder-decoder model beyond machine translation (Bapna and Firat, 2019), demonstrating that our method effectively balances sharing information across tasks while minimizing negative transfer.",
"In summary, we make the following contributions: (1) We propose a parameter-efficient method for multitask fine-tuning based on hypernetworks and adapter layers.",
"(2) We demonstrate that our method scales more efficiently than prior work.",
"(3) We provide empirical results on GLUE demonstrating the effectiveness of the proposed method on multi-task learning.",
"(4) We perform extensive few-shot domain transfer experiments, which reveal that the captured shared knowledge can positively transfer to unseen in-domain tasks.",
"We release our code to facilitate future work.",
"In this section, we present our HYPERFORMER model, which integrates hyper network-based adapter layers into a multi-task trans former model.",
"In 2.4, we introduce a parameter-efficient variant of this model, called HYPERFORMER ++ .",
"Problem formulation: We consider a general multi-task learning problem, where we are given the data from a set of tasks {D } T =1 , where T is the total number of tasks and D = { ( x i ,y i ) } N i =1 shows the training data for -th task with N samples.",
"We assume we are also given a large-scale pretrained language model f ( . ) parameterized by that computes the output for input x i .",
"Standard multi-task fine-tuning minimizes the following loss on the training set: L ( , {D } T =1 )= T (cid:88) =1 (cid:88) ( x i ,y i ) D w l (cid:16) f ( x i ) ,y i (cid:17) , (1) where l is typically the cross-entropy loss, and w shows the sampling weight for -th task.",
"Our goal is to finetune the pretrained model in a multi-task learning setup efficiently, while allowing sharing information across tasks and at the same time, enabling the model to adapt to each individual task.",
"The key idea of our approach, depicted in Figure 1, is to learn a parametric task embedding { I } T =1 for each task, and then feed these task embeddings to hypernetworks parameterized by that generate the task-specific adapter layers (Houlsby et al., 2019).",
"We insert adapter modules within the layers of a pretrained model, making the final model of X ( x i , , I ) parameterized by that computes the output for input x i .",
"During training, we only train hypernetwork parameters , task embeddings { I } T =1 , and layer normalizations in f ( . ) , while the rest of the pretrained model parameters are fixed: L ( , { I } Ti =1 , {D } T =1 )= T (cid:88) =1 (cid:88) ( x i ,y i ) D w l (cid:16) X ( x i , , I ) ,y i (cid:17) , (2) The hypernetworks capture the shared information across tasks in a multi-task learning model enabling positive transfer between related domains and transferable tasks, while adapters are reducing negative interference, encapsulating task-specific information.",
"Base model: All of our models are built on top of the state-of-the-art T5 transformer model (Raffel et al., 2020).",
"This model frames text-based language tasks as sequence-to-sequence problems.",
"T5 consists of an encoder-decoder Transformer (Vaswani et al., 2017) with minor modifications (Raffel et al., 2020).",
"The model is trained simultaneously on multiple tasks, obtaining state-of-the-art performance across a diverse set of tasks.",
"We use the T5 framework as it enables training a universal model that interfaces with many language tasks.",
"Our model has three main components: 1) task conditional adapter layers; 2) task conditional layer normalizations; and 3) hypernetworks that generate task-specific parameters.",
"We next describe these components.",
"Prior work has shown that fine-tuning all parameters of the model can result in a sub-optimal solution, particularly for resource-limited datasets (Peters et al., 2019).",
"As an alternative to fine-tuning all the model's parameters, prior work (Houlsby et al., 2019; Rebuffi et al., 2018; Stickland and Murray, 2019) inserted small modules called adapter layers within layers of a pretrained model, as shown in Figure",
"1. Adapters introduce no change to the structure or parameters of the original model.",
"In this work, we propose conditional adapter modules, in which we generate the adapters weights based on input task embeddings using shared hypernetworks (Ha et al., 2017), which capture information across tasks that can be used to positively transfer to other relevant tasks.",
"Each layer of a transformer model consists of an attention block and a feed-forward block, each followed by a skip connection.",
"Following Houlsby et al. (2019), as depicted in Figure 1, we introduce a conditional adapter layer after each block before the skip connection.",
"The conditional adapter layer A l for layer l consists of a down-projection, D l R h d , GeLU non-linearity (Hendrycks and Gimpel, 2016), and up-projection U l R d h , where h is the input dimension, and d is the bottleneck dimension for the adapter layer, mathematically defined as: A l ( x )= LN l (cid:16) U l ( GeLU ( D l ( x ))) (cid:17) + x , (3) where x is the input hidden state and LN l is the conditional layer norm defined in the next section.",
"We generate adapter weights ( U l , D l ) through a hypernetwork described in 2.3.",
"where (cid:12) is the element-wise multiplication between two vectors, and l and l are learnable parameters with the same dimension as x i .",
"Values of and show the mean and standard deviation of training data for the -th task.",
"To allow the layer normalization inside adapters to adapt to each task, inspired by Perez et al. (2018); De Vries et al. (2017), we generate l , l via a hypernetwork as a function of task embeddings (2.3).",
"In order to have a model that can share information while being able to adapt to each individual task, we generate the parameters of task conditional adapter layers and layer normalization using hypernetworks.",
"A hypernetwork is a network that generates the weights of another network (Ha et al., 2017).",
"The hypernetworks capture the shared information, while the generated task conditional adapters and layer normalization allow the model to adapt to each individual task to reduce negative task interference.",
"Learned task embedding: We first compute a task embedding I R t for each individual task using a task projector network h I ( . ) , which is a multi-layer perceptron consisting of two feed-forward layers and a ReLU non-linearity: I = h I ( z ) , (5) where z R t (cid:48) can be a learnable parameter or any pretrained task features (Vu et al., 2020), and the task projector network h I ( . ) learns a suitable compressed task embedding from input task features.",
"In this work, we consider a parametric z to allow end-to-end training which is convenient in practice.",
"1 Removing task prefixes: The T5 model prepends task-specific prefixes to the input sequence for conditioning.",
"For instance, when training on CoLA (Warstadt et al., 2019), cola sentence: is prepended to each sample.",
"Instead, we remove task prefixes and use task embeddings for conditioning.",
"Task conditioned hypernetworks: We consider simple linear layers as hypernetworks that are functions of input task embeddings I .",
"We introduce these hypernetworks in each layer of the transformer.",
"We define hypernetwork h lA ( . ) that generates task conditional adapter weights ( U l , D l ): 1 We ran some pilot experiments with pretrained task embeddings (Vu et al., 2020), but did not observe extra benefits.",
"( U l , D l ):= h lA ( I )= (cid:16) WU l , WD l (cid:17) I , (6) where WU l R ( d h ) t and WD l R ( h d ) t are the respective hypernetwork parameters.",
"We additionally define the hypernetwork h lLN ( . ) that computes the layer normalization parameters: ( l , l ):= h lLN ( I )= (cid:16) W l , W l (cid:17) I , (7) where W l R h t and W l R h t .",
"A downside of introducing a separate hypernetwork in each layer of the Transformer is that it increases the overall number of parameters.",
"We, therefore, propose to share hypernetworks across transformer layers.",
"By having a shared hypernetwork that is reusable, this strategy results in a substantial reduction in the number of parameters.",
"However, reapplying the same hypernetwork across all the layers introduces weight sharing across target parameters, which may not be desirable.",
"To allow for a flexible parameterization of task conditional adapters/layer normalization, for a transformer of L layers, we introduce a set of layer id embeddings I = { l i } Li =1 , and adapter position embeddings P = { p j } 2 j =1 , which specify the position of adapter layers in each transformer block (after the attention layer or feed-forward layer), which are used as additional inputs to the hypernetworks.",
"For simplicity, we consider l i R t , p j R t , and z R t .",
"We feed a concatenation of ( z , l i , p j ) to a similar task projector network h (cid:48) I as in Eq.",
"(5): I = h (cid:48) I ( z , l i , p j ) , (8) which is then followed by a shared layer normalization to compute final task embeddings I R t to the hypernetwork.",
"This way, the hypernetwork is able to produce distinct weights for each task, adapter position, and layer of a transformer.",
"Furthermore, layer id and adapter position embeddings are parameters that are learned via back-propagation, allowing us to train the whole model end-to-end conveniently.",
"Datasets: Following Raffel et al. (2020), we evaluate the performance of the models on the GLUE benchmark (Wang et al., 2019b).",
"This benchmark covers multiple tasks of paraphrase detection (MRPC, QQP), sentiment classification (SST-2), natural language inference (MNLI, RTE, QNLI), and linguistic acceptability (CoLA).",
"2 The original test 2 Following Raffel et al. (2020); Devlin et al. (2019), as a common practice, due to the adversarial nature of WNLI with respect to the training set, we do not experiment with WNLI.",
"sets are not publicly available, and following Zhang et al. (2021), for datasets fewer than 10K samples (RTE, MRPC, STS-B, CoLA), we divide the original validation set in half, using one half for validation and the other for the test.",
"For the other larger datasets, we split 1k samples from the training set as our validation data and test on the original validation set.",
"Experimental details: We use the HuggingFace implementation (Wolf et al., 2020a) of the T5 model (Raffel et al., 2020).",
"We fine-tune all models with a constant learning rate of 0 .",
"0003 and following Raffel et al. (2020), we use 2 18 =262144 steps in all experiments.",
"We save a checkpoint every 1000 steps for all models (see also A).",
"Raffel et al. (2020) report the results based on the best checkpoint for each task independently.",
"In contrast, we focus on the more realistic setting where we report the results on a single checkpoint with the highest average validation performance across all tasks.",
"The hyperparameters are selected in the same manner.",
"In contrast to prior work (Houlsby et al., 2019), we do not learn a separate output layer for each task but instead share a frozen output layer for all the tasks, which makes our setting more parameter-efficient than prior work and is an advantage of multi-task learning with encoder-decoder models.",
"3 Baselines: We compare to the strong adapter baseline (Houlsby et al., 2019).",
"Following Houlsby et al. (2019), we add adapters modules for each task after the two feed-forward modules in each transformer block of the T5 model.",
"As suggested in Houlsby et al. (2019), we train the layer normalization parameters inside the T5 model, per task.",
"We refer to this method as Adapters .",
"We additionally propose a variant of this model, in which we share all layer normalization parameters (T5 and adapters) across all tasks.",
"We refer to this model as Adapters .",
"We compare our models to the state-of-the-art T5 model, in which we fine-tune all parameters of the model on all tasks.",
"We refer to this method as T5 SMALL /T5 BASE in experiments.",
"Sampling tasks: During training, we sample tasks with conventional temperature-based sampling with temperature T =10 for all methods.",
"We sample different tasks proportional to p 1 /T where p = N (cid:80) Ti =1 N and N is the number of training samples for the th task.",
"We did not experiment with more complex sampling strategies (Raffel et al., 2020) or tuning of T .",
"Table 1 shows the results on GLUE for single-task and multi-task training.",
"We experiment with reduction factors of r = { 8 , 16 , 32 } for all adapter-based methods, where r = hd .",
"We report the results both with T5 SMALL (6 layers and 60M parameters) and T5 BASE models (12 layers and 222M parameters).",
"Overall, our proposed HYPERFORMER ++ obtains strong gains over Adapters (82.51 versus 79.53 for T5 SMALL and 86.48 versus 84.88 for T5 BASE ) while being more parameter-efficient.",
"Our variant of Adapters , which shares layer norms across tasks, outperforms prior work (Houlsby et al., 2019), which does not share such information (80.85 versus 79.53 for T5 SMALL and 85.83 versus 84.88 for T5 BASE ).",
"This demonstrates that in encoder-decoder models such as T5 more sharing of information across tasks is beneficial.",
"Our proposed HYPERFORMER obtains consistent improvement over our proposed Adapters method.",
"We attribute this improvement to the ability to learn the shared information across tasks through our hypernetworks.",
"Interestingly, HYPERFORMER ++ obtains similar performance as HYPERFORMER while being more than an order of magnitude more parameter-efficient.",
"Adapter modules thus seem to be similar enough so that much of their information can be modeled by a single, appropriately conditioned network.",
"parameters, our methods on average improve the results by 0.45 for T5 SMALL and 1.81 for T5 BASE with substantial improvement on low-resource datasets like CoLA (63.73 versus 54.85) and RTE (75.36 versus 67.39) due to shared hypernetworks that capture the shared information and enable positive transfer effects.",
"We also report the total number of parameters and trainable parameters for all methods in Table",
"1. For adapter-based methods, the number of parameters varies based on the adapter size (we report all numbers with r =32 ).",
"The multiple in terms of the number of parameters of HYPERFORMER ++ BASE with regard to T5 BASE is 1 .",
"02 with only 0 .",
"29% trainable parameters per task.",
"Note that by keeping the output layer frozen for Adapters SMALL and Adapters BASE , they require 5 .",
"51 and 2 .",
"53 fewer parameters respectively compared to a direct application of prior work (Houlsby et al., 2019).",
"Despite using more efficient baselines, compared to Adapters BASE , HYPERFORMER ++ BASE requires 3 fewer trainable parameters.",
"Finally, we assess how well a trained HYPERFORMER can generalize to new tasks.",
"We evaluate performance on 5 tasks and 7 datasets.",
"In particular, we consider 1) the natural language inference (NLI) datasets SciTail (Khot et al., 2018), and CB (De Marneffe et al., 2019) from SuperGLUE (Wang et al., 2019a) 2) the question answering (QA) dataset BoolQ (Clark et al., 2019a); 3) the sentiment analysis datasets IMDB (Maas et al., 2011) and Yelp Polarity (Zhang et al., 2015); and 4) the paraphrase detection dataset PAWS (Baldridge et al., 2019); 5) the question classification dataset TREC (Li and Roth, 2002).",
"For CB and BoolQ, since test sets are not available, we divide the validation sets in half, using one half for validation and the other for testing.",
"For Yelp polarity, TREC, and IMDB, since validation sets are not available, we similarly divide the test sets to form validation sets.",
"For the rest, we report on the original test sets.",
"We consider the models trained on GLUE reported in Table 1 and evaluate them on the test set after the few-shot fine-tuning on each target training data.",
"For Adapters and our method, we use the adapter and the task embedding respectively trained on the most similar GLUE task for initialization, i.e. MNLI for NLI, QNLI for QA, SST-2 for sentiment analysis, and QQP for paraphrase detection.",
"Following prior evidence of positive transfer from NLI to other tasks (Conneau and Kiela, 2018; Yin et al., 2020; Phang et al., 2018), we initialize the out-of-domain TREC from MNLI.",
"We show the results of full fine-tuning of all model's parameters, Adapters , and HYPERFORMER ++ 4 in Table",
"2. Our method significantly surpasses the baselines on the majority of settings.",
"Given that our model HYPERFORMER ++ BASE has substantially fewer trainable parameters than T5 BASE , we investigate whether it generalizes better in a low-resource setting.",
"We subsample each individual task in GLUE for varying training sizes.",
"We train the models for 15,000 steps, which we found to be 4 We finetune hypernetworks and task embeddings parameters.",
"We also tried only fine-tuning the task embedding but found that this achieves lower performance in the few-shot setting and comparable performance with more samples.",
"trained on GLUE averaged across 5 seeds.",
"We compute accuracy for all datasets.",
"sufficient to allow them to converge.",
"Figure 2 shows the results.",
"HYPERFORMER ++ BASE substantially improves results with limited training data, indicating more effective fine-tuning in this regime.",
"Adapters parameters: The standard setting (Houlsby et al., 2019) employs two adapters per layer for each task.",
"Each adapter layer has 2 hd parameters for projection matrices ( U l and D l ) and 2 h parameters for the layer normalization.",
"The total number of parameters for Adapters for L Transformer layers in both an encoder and a decoder across T tasks is, therefore, 4 TL (2 hd +2 h ) , which scales linearly with the number of tasks times the number of layers.",
"HYPERFORMER ++ parameters: Our approach learns a task feature embedding per task, consisting of Tt parameters.",
"We additionally employ layer id and adapter position embeddings in the encoder and decoder, which require 2(2+ L ) t parameters, with a fixed embedding size of t for all these feature embeddings.",
"We consider a separate task projector networks h (cid:48) I for encoder and decoder, which is in both cases a two-layer MLP, consisting of a total of 2(3 te + et ) parameters, where e = 128 is the hidden dimension for the task-projector network.",
"Our hypernetwork for adapters in encoder/decoder consists of 2(2 thd ) parameters and our layer normalization hypernetwork consists of 2(2 th ) parameters.",
"In total, this results in t ( T +4+2 L ) (cid:124) (cid:123)(cid:122) (cid:125) Task features + 8 te +2 t (2 hd +2 h ) (cid:124) (cid:123)(cid:122) (cid:125) Hypernetworks parameters.",
"The total number of parameters for hypernetworks remains constant, while the task feature parameters scale with the number of tasks or layers times t , where t =64 in our experiments.",
"In settings with a large number of layers and a large number of tasks, since t (cid:28) 2 hd +2 h and T + L (cid:28) TL , our method is much more parameter-efficient compared to Adapters.",
"In the current setting, the term hd is the largest term, and the factor 2 TL for Adapters is larger than the factor t for HYPERFORMER ++ .",
"While our HYPERFORMER ++ is more parameter-efficient than the baselines, the number of parameters of HYPERFORMER per task is higher compared to Adapters .",
"To confirm that the improvements of Model GLUE #Total params #Trained params/task Adapters SMALL 80.97 1.83x 10.44% HYPERFORMERSMALL 82.47 1.45x 5.80 % Adapters BASE 85.84 2.02x 12.73% HYPERFORMERBASE 86.58 1.54x 6.86% Table 3: Averaged test results on GLUE for HYPERFORMER and Adapters , where Adapters has a higher number of parameters compared to HYPERFORMER .",
"HYPERFORMER are due to its capability of sharing information across tasks and not the number of parameters, as an ablation, we run the Adapters with r = { 2 , 4 } and choose the model performing the best on the validation set.",
"This allows Adapters to have a higher number of parameters compared to HYPERFORMER .",
"We report the results in Table 3 and compare them with results of HYPERFORMER in Table",
"1. The results demonstrate that even with an increased number of parameters, Adapters is not able to reach the performance of HYPERFORMER , and HYPERFORMER performs substantially better.",
"We investigate the impact of the components of our framework including: (1) task conditional adapter blocks; (2) task conditional layer normalization; (3) task projection network; (4) fine-tuning of layer normalizations in the T5 model; (5) task conditional layer normalization in adapter modules and fine-tuning of layer normalizations inside the T5 model.",
"We consider our small model of Table 1 and train different variants of it.",
"Table 4 shows the results on GLUE, demonstrating that each component of the model contributes positively to its final performance.",
"To analyze what HYPERFORMER ++ BASE has learned about the relations between different tasks, we visualize the learned task embeddings for the models trained",
"with the largest number of samples in Table 1 and",
"2. Figure 3 illustrates the 2D vector projections of task embeddings using PCA (Wold et al., 1987).",
"Interestingly, the observed groupings correspond to similar tasks.",
"This shows that learned task embeddings by HYPERFORMER ++ BASE are meaningful.",
"For CB, an NLI dataset despite being initialized from MNLI, after few-shot training the task embedding is closest to RTE, another NLI dataset.",
"This is plausible as premises and hypotheses in both the discourse-based CB and the news and Wikipedia-based RTE are more complex compared to MNLI.",
"The sentence similarity dataset STS-B is grouped close to the MRPC paraphrase dataset.",
"CoLA, which focuses on linguistic acceptability is very different from other tasks and is not grouped with any of the observed task embeddings.",
"In addition, the task embeddings for 1) all the sentiment analysis datasets namely SST-2, Yelp polarity, and IMDB; 2) the two large-scale NLI datasets namely MNLI and SciTail; 3) question answering datasets, i.e. BoolQ and QNLI; and 4) paraphrase datasets namely QQP and PAWS are each grouped together.",
"Multi-task learning: Multi-task learning, i.e., learning a unified model to perform well on multiple different tasks, is a challenging problem in NLP.",
"It requires addressing multiple challenges such as catastrophic forgetting, and handling disproportionate task sizes resulting in a model overfitting in low-resource tasks while underfitting in high-resource ones (Arivazhagan et al., 2019).",
"Liu et al. (2019a) proposed Multi-Task Deep Neural Network (MTDNN) for learning from multiple NLU tasks.",
"Although MTDNN obtains impressive results on GLUE, it applies multi-task learning as a form of pretraining followed by task-specific fine-tuning.",
"Concurrently with us, Tay et al. (2021) propose a multi-task learning method by training task-conditioned hyper networks; however, their method is 43x less parameter efficient compared to ours.",
"In another line of research, Clark et al. (2019b) proposed to learn multi-task models with knowledge distillation.",
"Houlsby et al. (2019) trained adapters for each task separately, keeping the model fixed.",
"Stickland and Murray (2019) share the model parameters across tasks and introduce task-specific adapter parameters, which is more parameter-inefficient than our method.",
"Hypernetworks and contextual parameter generation: Our work is closely related to hypernetworks (Ha et al., 2017).",
"In a continual learning setup, where tasks are learned sequentially, Oswald et al. (2020) proposed a task-conditioned hypernetwork to generate all the weights of the target model.",
"Our method is substantially more efficient as we do not generate all the weights of the target model but a very small number of parameters for adapter modules to allow the model to adapt to each individual task efficiently.",
"Similarly, Jin et al. (2020) generate the full model from task-specific descriptions in different domains whereas we efficiently generate only small adapter modules for each task.",
"Prior work also proposed meta-learning or Bayesian approaches to generate softmax layer parameters for new settings (Bansal et al., 2020; Ponti et al., 2020).",
"Meta-learning approaches are notoriously slow to train.",
"In addition, generating softmax parameters requires a substantially higher number of parameters, leaves the method unable to adapt the lower layers of the model, and restricts their application to classification tasks.",
"In contemporaneous work, Ustun et al. (2020) proposed a multilingual dependency parsing method based on adapters and contextual parameter generator networks (Platanios et al., 2018) where they generate adapter parameters conditioned on trained input language embeddings.",
"Their study is limited to multilingual dependency parsing, while our work studies multi-task learning and applies to several tasks thanks to the general sequence-to-sequence nature of our model.",
"Moreover, their number of trainable parameters is 2 .",
"88 larger than their base model since they employ a contextual parameter generator in each layer.",
"In contrast, we use a single compact hypernetwork allowing us to efficiently condition on multiple tasks and layers of a transformer model.",
"We propose a parameter-efficient method for multi-task fine-tuning.",
"Our approach is to train shared hypernetworks to generate task-specific adapters conditioned on the task, layer id, and adapter position embeddings.",
"The shared hypernetworks capture the knowledge across tasks and enable positive transfer to low-resource and related tasks, while task-specific layers allow the model to adapt to each individual task.",
"Extensive experiments show that our method obtains strong improvement over multi-task learning on the GLUE benchmark, and substantially improves the in-domain task generalization.",
"We are grateful to Dani Yogatama, Neil Houlsby, and Colin Raffel for feedback on a draft of this paper.",
"We would like to also thank Adam Paszke, Jamie Kiros, and George Dahl for useful comments and discussions."
] |
[
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"objective",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"objective",
"objective",
"objective",
"result",
"abstain",
"method",
"abstain",
"result",
"method",
"other",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"other",
"method",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"objective",
"method",
"abstain",
"result",
"other",
"other"
] |
[
"Recent work in Natural Language Processing has focused on developing approaches that extract faithful explanations, either via identifying the most important tokens in the input (i.e. post-hoc explanations) or by designing inherently faithful models that first select the most important tokens and then use them to predict the correct label (i.e. select-then-predict mod-els).",
"Currently, these approaches are largely evaluated on in-domain settings.",
"Yet, little is known about how post-hoc explanations and inherently faithful models perform in out-of-domain settings.",
"In this paper, we conduct an extensive empirical study that examines: (1) the out-of-domain faithfulness of post-hoc explanations, generated by five feature attribution methods; and (2) the out-of-domain performance of two inherently faithful models over six datasets.",
"Contrary to our expectations, results show that in many cases out-of-domain post-hoc explanation faithfulness measured by sufficiency and comprehensiveness is higher compared to in-domain.",
"We find this misleading and suggest using a random baseline as a yardstick for evaluating post-hoc explanation faithfulness.",
"Our findings also show that select-then predict models demonstrate comparable predictive performance in out-of-domain settings to full-text trained models.",
"1 1 Introduction An explanation or rationale 2 , typically consists of a subset of the input that contributes more to the prediction.",
"Extracting faithful explanations is important for studying model behavior (Adebayo et al., 2020) and assisting in tasks requiring human decision making, such as clinical text classification (Chakrabarty et al., 2019), misinformation detection (Popat et al., 2018; Mu and Aletras, 2020) and legal text classification (Chalkidis et al., 2019, 1 Code available at: https://github.com/ GChrysostomou/ood_faith 2 We use these terms interchangeably throughout our work. 2021).",
"A faithful explanation is one which accurately represents the reasoning behind a model's prediction (Jacovi and Goldberg, 2020) Two popular methods for extracting explanations are through feature attribution approaches (i.e. posthoc explanation methods) or via inherently faithful classifiers (i.e. select-then-predict models).",
"The first computes the contribution of different parts of the input with respect to a model's prediction (Sundararajan et al., 2017; Ribeiro et al., 2016; Shrikumar et al., 2017).",
"The latter consists of using a rationale extractor to identify the most important parts of the input and a rationale classifier, a model trained using as input only the extractor's rationales (Bastings et al., 2019; Jain et al., 2020; Guerreiro and Martins, 2021).",
"3 Figure 1 illustrates the two approaches with an example.",
"Currently, these explanation methods have been mostly evaluated on in-domain settings (i.e. the train and test data come from the same distribution).",
"However, when deploying models in real-world applications, inference might be performed on data from a different distribution, i.e. out-of-domain (Desai and Durrett, 2020; Ovadia et al., 2019).",
"This can create implications when extracted explanations (either using post-hoc methods or through select-then-predict models) are used for assisting human decision making.",
"Whilst we are aware of the limitations of current state-of-the-art models in out-of-domain predictive performance (Hendrycks et al., 2020), to the best of our knowledge, how faithful out-of-domain post-hoc explanations are has yet to be explored.",
"Similarly, we are not aware how inherently faithful select-then-predict models generalize in out-of-domain settings.",
"Inspired by this, we conduct an extensive empirical study to examine the faithfulness of five 3 We refer to the rationale generator (i.e. generating a rationale mask) from Bastings et al. (2019) and Jain et al. (2020) as a rationale extractor, to avoid any confusion between these approaches and free-text rationales (Wiegreffe et al., 2021).",
"feature attribution approaches and the generalizability of two select-then-predict models in out-of-domain settings across six dataset pairs.",
"We hypothesize that similar to model predictive performance, post-hoc explanation faithfulness reduces in out-of-domain settings and that select-then-predict performance degrades.",
"Our contributions are as follows: To the best of our knowledge, we are the first to assess the faithfulness of post-hoc explanations and performance of select-then-predict models in out-of-domain settings.",
"We show that post-hoc explanation sufficiency and comprehensiveness show misleading increases in out-of-domain settings.",
"We argue that they should be evaluated alongside a random baseline as yardstick out-of-domain.",
"We demonstrate that select-then-predict classifiers can be used in out-of-domain settings.",
"They lead to comparable predictive performance to models trained on full-text, whilst offering inherent faithfulness.",
"Given a model M , we are interested in explaining why M predicted y for a particular instance x X .",
"An extracted rationale R , should therefore represent as accurately as possible the most important subset of the input ( R x ) which contributed mostly towards the model's prediction y .",
"Currently, there are two popular approaches for extracting rationales.",
"The first consists of using feature attribution methods that attribute to the input tokens an importance score (i.e. how important an input token is to a model's M prediction y ).",
"We can then form a rationale R , by selecting the K most important tokens (independent or contiguous) as indicated by the feature attribution method.",
"The second select-then-predict approach focuses on training inherently faithful classifiers by jointly training two modules, a rationale extractor and a rationale classifier , trained only on rationales produced by the extractor (Lei et al., 2016; Bastings et al., 2019; Treviso and Martins, 2020; Jain et al., 2020; Guerreiro and Martins, 2021).",
"Recent studies have used feature attribution approaches as part of the rationale extractor (Jain et al., 2020; Treviso and Martins, 2020), showing improved classifier predictive performance.",
"Having extracted R , we need to evaluate the quality of the explanation (i.e. how faithful that explanation is for a model's prediction).",
"Typically, post-hoc explanations from feature attribution approaches are evaluated using input erasure (Serrano and Smith, 2019; Atanasova et al., 2020; Madsen et al., 2021).",
"This approach masks segments of the input to observe if the model's prediction changed.",
"DeYoung et al. (2020) proposed measuring the comprehensiveness and sufficiency of rationales as faithfulness metrics.",
"A comprehensive rationale is one which is influential to a model's prediction, while a sufficient rationale that which is adequate for a model's prediction (DeYoung et al., 2020).",
"The term fidelity is also used for jointly referring to comprehensiveness and sufficiency (Carton et al., 2020).",
"Carton et al. (2020) suggested normalizing these metrics using the predictions of the model with a baseline input (i.e. an all zero embedding 6921 vector), to account for baseline model behavior.",
"Select-then-predict models are inherently faithful, as their classification component is trained only on extracted rationales (Jain et al., 2020).",
"A good measure for measuring rationale quality is by evaluating the predictive performance of the classifier trained only on the rationales (Jain et al., 2020; Treviso and Martins, 2020).",
"A higher score entails that the extracted rationales are better when compared to those of a classifier with lower predictive performance.",
"Given model M trained on an end-task, we typically evaluate its out-of-domain predictive performance on a test-set that does not belong to the same distribution as the data it was trained on (Hendrycks et al., 2020).",
"Similarly, the model can also extract explanations R for its out-of-domain predictions.",
"Camburu et al. (2018) studied whether generating explanations for language inference match human annotations (i.e. plausible explanations).",
"They showed that this is challenging in-domain and becomes more challenging in out-of-domain settings.",
"In a similar direction, Rajani et al. (2019) and Kumar and Talukdar (2020) examined model generated explanations in out-of-domain settings and find that explanation plausibility degrades compared to in-domain.",
"Kennedy et al. (2020) proposed a method for detecting model bias towards group identity terms using a post-hoc feature attribution approach.",
"Then, they use them for regularizing models to improve out-of-domain predictive performance.",
"Adebayo et al. (2020) have studied feature attribution approaches for identifying out-of-distribution images.",
"They find that importance allocation in out-of-domain settings is similar to that of an in-domain model and thus cannot be used to detect such images.",
"Feder et al. (2021) finally argued that explanations can lead to errors in out-of-distribution settings, as they may latch onto spurious features from the training distribution.",
"These studies indicate that there is an increasing need for evaluating post-hoc explanation faithfulness and select-then-predict performance in out-of-domain settings.",
"To the best of our knowledge, we are the first to examine these.",
"We employ a pre-trained BERT -base and fine-tune it on in-domain training data.",
"We then extract posthoc rationales for both the in-domain test-set and two out-of-domain test-sets.",
"We compute input importance using five feature scoring methods and a random baseline: Random (RAND ): Random allocation of token importance.",
"Attention ( ): Token importance corresponding to normalized attention scores (Jain et al., 2020).",
"Scaled Attention ( ): Attention scores i scaled by their corresponding gradients i = y i (Serrano and Smith, 2019).",
"InputXGrad ( x x ): Attributes input importance by multiplying the input with its gradient computed with respect to the predicted class, where x i = y x i (Kindermans et al., 2016; Atanasova et al., 2020).",
"Integrated Gradients ( IG ): Ranking words by computing the integral of the gradients taken along a straight path from a baseline input (zero embedding vector) to the original input (Sundararajan et al., 2017).",
"DeepLift: Ranking words according to the difference between the activation of each neuron and a reference activation (zero embedding vector) (Shrikumar et al., 2017).",
"HardKuma: An end-to-end trained model, where the rationale extractor uses Hard Ku-maraswamy variables to produce a rationale mask z , which the classifier uses to mask the input (Bastings et al., 2019).",
"Model training takes advantage of reparameterized gradients compared to REINFORCE style training employed by Lei et al. (2016) and has shown improved performance (Guerreiro and Martins, 2021).",
"FRESH: We compute the predictive performance of a classifier trained on rationales extracted with feature attribution metrics (see 6922 3.1) using FRESH, following a similar approach to Jain et al. (2020).",
"We extract rationales from an extractor by (1) selecting the topk most important tokens (TOPK) and (2) selecting the span of length k with the highest overall importance (CONTIGUOUS ).",
"We use BERT -base for the extraction and classification components of FRESH similar to Jain et al. (2020).",
"However, for HardKuma we opt using a biLSTM (Hochreiter and Schmidhuber, 1997) as it provides comparable or improved performance over BERT variants (Guerreiro and Martins, 2021), even after hyperparameter tuning.",
"4 4 Experimental Setup 4.1 Datasets For evaluating out-of-domain model explanation, we consider the following datasets (see Table 1 and Appendix A for details): SST: Stanford Sentiment Treebank (SST) consists of sentences tagged with sentiment on a 5-point-scale from negative to positive (Socher et al., 2013).",
"We remove sentences with neutral sentiment and label the remaining sentences as negative or positive if they have a score lower or higher than 3 respectively (Jain and Wallace, 2019).",
"IMDB: The Large Movie Reviews Corpus consists of movie reviews labeled either as positive or negative (Maas et al., 2011; Jain and Wallace, 2019).",
"Yelp: Yelp polarity review texts.",
"Similar to Zhang et al. (2015) we construct a binary classification task to predict a polarity label by considering one and two stars as negative, and three and four stars as positive.",
"Amazon Reviews: We form 3-way classification tasks by predicting the sentiment (negative, neutral, positive) of Amazon product reviews across 3 item categories: (1) Digital Music ( AmazDigiMu ); (2) Pantry ( AmazPantry ); and (3) Musical Instruments ( AmazInstr ) (Ni et al., 2019).",
"Post-hoc Explanations: We evaluate post-hoc explanations using:",
"Normalized Sufficiency (NormSuff) measures the degree to which the extracted rationales are adequate for a model to make a prediction (DeYoung et al., 2020).",
"Following Carton et al. (2020), we bind sufficiency between 0 and 1 and use the reverse difference so that higher is better: Suff ( x , y, R ) = 1 max (0 , p ( y | x ) p ( y |R )) NormSuff ( x , y, R ) = Suff ( x , y, R ) Suff ( x , y, 0) 1 Suff ( x , y, 0) (1) where Suff ( x , y, 0) is the sufficiency of a baseline input (zeroed out sequence) and y the model predicted class using the full text x as input.",
"Normalized Comprehensiveness (Norm-Comp) measures the influence of a rationale to a prediction (DeYoung et al., 2020).",
"Similarly to Carton et al. (2020), we bind this metric between 0 and 1 and normalize it: Comp ( x , y, R ) = max (0 , p ( y | x ) p ( y | x \\R )) NormComp ( x , y, R ) = Comp ( x , y, R ) 1 Suff ( x , y, 0) (2) To measure sufficiency and comprehensiveness across different explanation lengths we compute the Area Over the Perturbation Curve\" (AOPC) following DeYoung et al. (2020). We therefore compute and report the average normalized sufficiency and comprehensiveness scores when keeping (for sufficiency) or masking (for comprehensiveness) the top 2%, 10%, 20% and 50% of tokens extracted by an importance attribution function. 5 5 We also present results for each of these rationale lengths in Appendix F. 6923 Train Test Full-text Normalized Sufficiency Normalized Comprehensiveness F1 Rand DeepLift x x IG Rand DeepLift x x IG SST SST 90.1 0.38 0.51 0.42 0.42 0.40 0.41 0.19 0.39 0.22 0.25 0.26 0.26 IMDB 84.3 0.31 0.53 0.39 0.32 0.31 0.32 0.23 0.54 0.34 0.27 0.27 0.28 Yelp 87.9 0.32 0.56 0.40 0.35 0.33 0.34 0.21 0.48 0.28 0.24 0.24 0.25 IMDB IMDB 91.1 0.32 0.55 0.46 0.36 0.36 0.36 0.16 0.48 0.31 0.25 0.23 0.24 SST 85.8 0.24 0.35 0.28 0.28 0.27 0.27 0.27 0.46 0.32 0.33 0.33 0.33 Yelp 91.0 0.35 0.48 0.41 0.36 0.36 0.36 0.21 0.45 0.32 0.26 0.26 0.26 Yelp Yelp 96.9 0.23 0.32 0.31 0.29 0.24 0.25 0.12 0.20 0.14 0.16 0.15 0.16 SST 86.8 0.41 0.45 0.43 0.44 0.41 0.41 0.17 0.24 0.18 0.21 0.22 0.22 IMDB 88.6 0.18 0.34 0.32 0.25 0.22 0.22 0.19 0.34 0.29 0.23 0.23 0.24 AmazDigiMu AmazDigiMu 70.6 0.34 0.56 0.34 0.31 0.41 0.39 0.13 0.32 0.14 0.10 0.16 0.17 AmazInstr 61.2 0.29 0.54 0.32 0.31 0.33 0.32 0.19 0.47 0.23 0.19 0.22 0.23 AmazPantry 64.6 0.33 0.55 0.33 0.31 0.37 0.36 0.21 0.46 0.22 0.17 0.23 0.25 AmazPantry AmazPantry 70.2 0.25 0.46 0.36 0.19 0.28 0.27 0.20 0.42 0.31 0.15 0.25 0.25 AmazDigiMu 59.5 0.24 0.47 0.37 0.19 0.27 0.26 0.19 0.41 0.32 0.15 0.23 0.24 AmazInstr 64.5 0.17 0.42 0.30 0.15 0.20 0.20 0.24 0.52 0.40 0.23 0.30 0.30 AmazInstr AmazInstr 71.5 0.16 0.34 0.18 0.21 0.18 0.17 0.26 0.52 0.26 0.29 0.28 0.29 AmazDigiMu 61.3 0.21 0.38 0.21 0.22 0.24 0.22 0.23 0.46 0.20 0.22 0.24 0.25 AmazPantry 68.2 0.22 0.39 0.21 0.23 0.24 0.23 0.27 0.51 0.22 0.25 0.27 0.29 Table 2: AOPC Normalized Sufficiency and Comprehensiveness (higher is better) in-domain and out-of-domain for five feature attribution approaches and a random attribution baseline. We omit from our evaluation the Remove-and-Retrain method (Madsen et al., 2021) as it requires model retraining. Whilst this could be applicable for in-domain experiments where retraining is important, in this work we evaluate explanation faithfulness in zero-shot out-of-domain settings. Select-then-Predict Models: We first train select-then-predict models in-domain and then measure their predictive performance on the in-domain test-set and on two out-of-domain test-sets (Jain et al., 2020; Guerreiro and Martins, 2021). Our out-of-domain evaluation is performed without retraining (zero-shot). Similar to full-text trained models, we expect that predictive performance deteriorates out-of-domain. However, we assume that explanations from a select-then-predict model should generalize better in out-of-domain settings when the predictive performance approaches that of the full-text trained model. We do not conduct human experiments to evaluate explanation faithfulness, since that is only relevant to explanation plausibility (i.e. 
"5 Results",
"5.1 Post-hoc Explanation Faithfulness",
"Table 2 presents the normalized comprehensiveness and sufficiency scores for post-hoc explanations on in-domain and out-of-domain test-sets, using five feature attribution methods and a random baseline.",

| Train | Test | Full-text F1 | NS Rand | NS α∇α | NS α | NS DeepLift | NS x∇x | NS IG | NC Rand | NC α∇α | NC α | NC DeepLift | NC x∇x | NC IG |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| SST | SST | 90.1 | 0.38 | 0.51 | 0.42 | 0.42 | 0.40 | 0.41 | 0.19 | 0.39 | 0.22 | 0.25 | 0.26 | 0.26 |
| SST | IMDB | 84.3 | 0.31 | 0.53 | 0.39 | 0.32 | 0.31 | 0.32 | 0.23 | 0.54 | 0.34 | 0.27 | 0.27 | 0.28 |
| SST | Yelp | 87.9 | 0.32 | 0.56 | 0.40 | 0.35 | 0.33 | 0.34 | 0.21 | 0.48 | 0.28 | 0.24 | 0.24 | 0.25 |
| IMDB | IMDB | 91.1 | 0.32 | 0.55 | 0.46 | 0.36 | 0.36 | 0.36 | 0.16 | 0.48 | 0.31 | 0.25 | 0.23 | 0.24 |
| IMDB | SST | 85.8 | 0.24 | 0.35 | 0.28 | 0.28 | 0.27 | 0.27 | 0.27 | 0.46 | 0.32 | 0.33 | 0.33 | 0.33 |
| IMDB | Yelp | 91.0 | 0.35 | 0.48 | 0.41 | 0.36 | 0.36 | 0.36 | 0.21 | 0.45 | 0.32 | 0.26 | 0.26 | 0.26 |
| Yelp | Yelp | 96.9 | 0.23 | 0.32 | 0.31 | 0.29 | 0.24 | 0.25 | 0.12 | 0.20 | 0.14 | 0.16 | 0.15 | 0.16 |
| Yelp | SST | 86.8 | 0.41 | 0.45 | 0.43 | 0.44 | 0.41 | 0.41 | 0.17 | 0.24 | 0.18 | 0.21 | 0.22 | 0.22 |
| Yelp | IMDB | 88.6 | 0.18 | 0.34 | 0.32 | 0.25 | 0.22 | 0.22 | 0.19 | 0.34 | 0.29 | 0.23 | 0.23 | 0.24 |
| AmazDigiMu | AmazDigiMu | 70.6 | 0.34 | 0.56 | 0.34 | 0.31 | 0.41 | 0.39 | 0.13 | 0.32 | 0.14 | 0.10 | 0.16 | 0.17 |
| AmazDigiMu | AmazInstr | 61.2 | 0.29 | 0.54 | 0.32 | 0.31 | 0.33 | 0.32 | 0.19 | 0.47 | 0.23 | 0.19 | 0.22 | 0.23 |
| AmazDigiMu | AmazPantry | 64.6 | 0.33 | 0.55 | 0.33 | 0.31 | 0.37 | 0.36 | 0.21 | 0.46 | 0.22 | 0.17 | 0.23 | 0.25 |
| AmazPantry | AmazPantry | 70.2 | 0.25 | 0.46 | 0.36 | 0.19 | 0.28 | 0.27 | 0.20 | 0.42 | 0.31 | 0.15 | 0.25 | 0.25 |
| AmazPantry | AmazDigiMu | 59.5 | 0.24 | 0.47 | 0.37 | 0.19 | 0.27 | 0.26 | 0.19 | 0.41 | 0.32 | 0.15 | 0.23 | 0.24 |
| AmazPantry | AmazInstr | 64.5 | 0.17 | 0.42 | 0.30 | 0.15 | 0.20 | 0.20 | 0.24 | 0.52 | 0.40 | 0.23 | 0.30 | 0.30 |
| AmazInstr | AmazInstr | 71.5 | 0.16 | 0.34 | 0.18 | 0.21 | 0.18 | 0.17 | 0.26 | 0.52 | 0.26 | 0.29 | 0.28 | 0.29 |
| AmazInstr | AmazDigiMu | 61.3 | 0.21 | 0.38 | 0.21 | 0.22 | 0.24 | 0.22 | 0.23 | 0.46 | 0.20 | 0.22 | 0.24 | 0.25 |
| AmazInstr | AmazPantry | 68.2 | 0.22 | 0.39 | 0.21 | 0.23 | 0.24 | 0.23 | 0.27 | 0.51 | 0.22 | 0.25 | 0.27 | 0.29 |

"Table 2: AOPC Normalized Sufficiency (NS) and Comprehensiveness (NC), higher is better, in-domain and out-of-domain for five feature attribution approaches and a random attribution baseline.",
"For reference, we include the averaged F1 performance across 5 random seeds of a BERT-base model fine-tuned on the full text and evaluated in- and out-of-domain (Full-text F1).",
"We report predictive performance for all models and standard deviations in the Appendix.",
"In-domain results show that feature attribution performance varies largely across datasets.",
"This is in line with the findings of Atanasova et al. (2020) and Madsen et al. (2021) when masking rationales (i.e. comprehensiveness).",
"We find the only exception to be α∇α, which consistently achieves the highest comprehensiveness and sufficiency scores across all in-domain datasets.",
"For example, α∇α evaluated on in-domain AmazDigiMu results in sufficiency of 0.56, compared to the second best of 0.39 with IG.",
"Contrary to our expectations, results show that post-hoc explanation sufficiency and comprehensiveness are in many cases higher on out-of-domain test-sets compared to in-domain.",
"For example, using DeepLift, comprehensiveness for the in-domain test-set in Yelp (0.16) is lower compared to the out-of-domain test-sets (0.21 for SST and 0.23 for IMDB).",
"This is also observed when measuring sufficiency with α∇α, scoring 0.32 when tested in-domain on Yelp and 0.45 for the out-of-domain SST test-set.",
"Apart from increased sufficiency and comprehensiveness scores in out-of-domain post-hoc explanations, we also observe increased scores obtained by our random baseline.",
"In fact, the random baseline outperforms several feature attribution approaches in certain cases in out-of-domain settings.",
"As an example, consider the case where the model has been trained on AmazInstr and tested on AmazPantry.",
"Our random baseline achieves a comprehensiveness score of 0.27, while α, DeepLift and x∇x perform similarly or lower (0.22, 0.25 and 0.27 respectively).",
"Similarly, using a model trained on Yelp and tested on SST, the random baseline produces rationales equally sufficient to x∇x and IG, with all of them achieving 0.41 normalized sufficiency.",
"A glaring exception to this pattern is α∇α, which consistently outperforms both the random baseline and all other feature attribution approaches in in- and out-of-domain settings, suggesting that it produces the most faithful explanations.",
"For example, with out-of-domain AmazPantry test data, using a model trained on AmazInstr results in sufficiency scores of 0.39 with α∇α.",
"This is a 0.15 point increase compared to the second best (x∇x with 0.24).",
"We recommend considering a feature attribution for producing faithful explanations out-of-domain only if it scores above a random attribution baseline.",
"We suggest that the higher the deviation from the random baseline, the more faithful an explanation is.",
"5.2 Select-then-predict Model Performance",
"HardKuma: Table 3 presents the F1-macro performance of HardKuma models (Bastings et al., 2019) and the average rationale lengths (the ratio of selected tokens to the length of the entire sequence) selected by the model.",
"For reference, we also include the predictive performance of a full-text trained biLSTM.",
"Results are averaged across 5 runs, including standard deviations in brackets.",
"As expected, the predictive performance of HardKuma models degrades when evaluated on out-of-domain data.",
"Surprisingly, though, we find that their performance is not significantly different (t-test; p-value > 0.05) from that of the full-text LSTM in 9 out of the 12 out-of-domain dataset pairs.",
"For example, evaluating the out-of-domain performance of a HardKuma model trained on AmazDigiMu on the AmazPantry test-set, we record on average a score of 54.3 F1, compared to 55.3 with an LSTM classifier trained on full text.",

| Train | Test | Full-text F1 | HardKuma F1 | L (%) |
|---|---|---|---|---|
| SST | SST | 81.7 | 77.6 | 56.8 |
| SST | IMDB | 71.9 | 65.7 | 39.5 |
| SST | Yelp | 68.7 | 67.7 | 32.7 |
| IMDB | IMDB | 87.4 | 82.0 | 1.9 |
| IMDB | SST | 77.5 | 73.6 | 16.8 |
| IMDB | Yelp | 41.0 | 47.2 | 3.1 |
| Yelp | Yelp | 96.0 | 92.4 | 7.4 |
| Yelp | SST | 80.4 | 72.4 | 14.1 |
| Yelp | IMDB | 84.5 | 73.3 | 4.7 |
| AmazDigiMu | AmazDigiMu | 67.6 | 66.8 | 18.4 |
| AmazDigiMu | AmazInstr | 54.2 | 53.3 | 25.8 |
| AmazDigiMu | AmazPantry | 55.3 | 54.7 | 27.8 |
| AmazPantry | AmazPantry | 67.9 | 66.6 | 18.9 |
| AmazPantry | AmazDigiMu | 50.9 | 51.0 | 11.2 |
| AmazPantry | AmazInstr | 55.9 | 57.4 | 18.2 |
| AmazInstr | AmazInstr | 67.2 | 66.7 | 19.2 |
| AmazInstr | AmazDigiMu | 54.3 | 53.7 | 13.9 |
| AmazInstr | AmazPantry | 61.1 | 59.5 | 24.4 |

"Table 3: F1 macro performance (five runs) for HardKuma models and the selected rationale length (L); bold in the original table denotes no significant difference between HardKuma and full-text (t-test; p > 0.05), and F1 scores with standard deviations are included in Appendix C.",
"We also observe that HardKuma models trained on SST and IMDB generalize comparably to models trained on full-text when evaluated on Yelp; however, the opposite does not apply.",
"Our assumption is that HardKuma models trained on Yelp learn more domain-specific information due to the large training corpus (compared to training on IMDB and SST), so they fail to generalize well out-of-domain.",
"Results also show that the length of the rationales selected by HardKuma models depends on the source domain, i.e. training HardKuma on a dataset which favors shorter rationales also leads to selecting shorter rationales out-of-domain.",
"For example, in-domain test-set explanation lengths are on average 56.8% of the full-text input length for SST.",
"In comparison, training a model on Yelp and evaluating on SST results in rationale lengths of 14.1%.",
"We observe that in certain cases, HardKuma models maintain the number of words, not the ratio to the sequence, in out-of-domain settings.",
"For example, in-domain Yelp test-set rationales are about 11 tokens long, which is similar to the length selected when evaluating on IMDB using a model trained on Yelp.",
"This is also observed where in-domain AmazInstr test-set rationales are on average 5 tokens long, which is the same rationale length as when evaluating on AmazDigiMu using a model trained on AmazInstr.",

| Train | Test | Full-text | α∇α | α | DeepLift | x∇x | IG |
|---|---|---|---|---|---|---|---|
| SST (20%) | SST | 90.1 | 87.7 | 81.1 | 84.4 | 76.3 | 76.8 |
| SST (20%) | IMDB | 84.3 | 81.8 | 52.6 | 64.0 | 55.0 | 56.3 |
| SST (20%) | Yelp | 87.9 | 88.1 | 72.6 | 75.4 | 59.6 | 63.9 |
| IMDB (2%) | IMDB | 91.1 | 87.9 | 80.4 | 87.2 | 59.8 | 59.7 |
| IMDB (2%) | SST | 85.8 | 80.9 | 71.8 | 70.1 | 69.6 | 70.7 |
| IMDB (2%) | Yelp | 91.0 | 87.8 | 82.0 | 79.4 | 69.0 | 69.1 |
| Yelp (10%) | Yelp | 96.9 | 94.0 | 90.4 | 93.6 | 70.5 | 71.9 |
| Yelp (10%) | SST | 86.8 | 59.3 | 69.8 | 67.2 | 67.7 | 69.3 |
| Yelp (10%) | IMDB | 88.6 | 78.0 | 64.5 | 66.6 | 53.0 | 55.8 |
| AmazDigiMu (20%) | AmazDigiMu | 70.6 | 66.1 | 63.4 | 65.8 | 51.9 | 65.8 |
| AmazDigiMu (20%) | AmazInstr | 61.2 | 58.0 | 57.2 | 57.4 | 46.0 | 57.2 |
| AmazDigiMu (20%) | AmazPantry | 64.6 | 59.1 | 56.5 | 56.5 | 44.8 | 44.8 |
| AmazPantry (20%) | AmazPantry | 70.2 | 67.3 | 62.6 | 67.2 | 48.6 | 48.7 |
| AmazPantry (20%) | AmazDigiMu | 59.5 | 57.7 | 54.6 | 56.2 | 41.2 | 57.7 |
| AmazPantry (20%) | AmazInstr | 64.5 | 63.8 | 58.0 | 63.6 | 40.1 | 40.3 |
| AmazInstr (20%) | AmazInstr | 71.5 | 69.8 | 62.1 | 69.7 | 45.6 | 48.6 |
| AmazInstr (20%) | AmazDigiMu | 61.3 | 60.0 | 53.2 | 57.8 | 43.8 | 60.0 |
| AmazInstr (20%) | AmazPantry | 68.2 | 64.5 | 56.3 | 63.1 | 44.6 | 47.6 |

"Table 4: Average F1 macro performance of FRESH models (five runs), with the a priori defined rationale length in brackets; bold in the original table denotes no significant difference between FRESH and full-text (t-test; p > 0.05).",
"For clarity, we present F1 scores with standard deviations in Appendix D.",
"In general, our findings show that in the majority of cases, using HardKuma on out-of-domain data results in comparable performance to their full-text model counterparts.",
"This suggests that HardKuma models can be used in out-of-domain settings without significant sacrifices in predictive performance, whilst also offering faithful rationales.",
"FRESH: Table 4 shows the averaged F1-macro performance across 5 random seeds for FRESH classifiers in- and out-of-domain using TOPK rationales, with the a priori defined rationale length in parentheses and the predictive performance of the full-text model for reference.",
"For clarity, we include standard deviations and CONTIGUOUS results in Appendix D.",
"When evaluating out-of-domain, we use the average rationale length of the dataset we evaluate on; this makes the FRESH experiments comparable with those of HardKuma.",
"We first observe that in-domain predictive performance varies across feature attribution approaches, with the attention-based metrics (α, α∇α) outperforming the gradient-based ones (x∇x, IG), largely agreeing with Jain et al. (2020).",
"We also find that α∇α and DeepLift are the feature attribution approaches that lead to the highest predictive performance across all datasets.",
"As we initially hypothesized, the performance of FRESH generally degrades when testing on out-of-domain data, similarly to the behavior of models trained using the full text.",
"The only exceptions are when using x∇x and IG in IMDB.",
"We argue that this is due to these feature attribution methods not being able to identify the tokens relevant to the task using a rationale length of 2% of the original input.",
"Increasing the rationale length to 20% (SST) and 10% (Yelp) also increases the performance.",
"Results also suggest that α∇α and DeepLift outperform the rest of the feature attributions, with α∇α being the best performing one in the majority of cases.",
"In fact, when using α∇α or DeepLift, the out-of-domain performance of FRESH is not significantly different from that of models trained on full text (t-test; p-value > 0.05) in 5 cases.",
"For example, a FRESH model trained on AmazPantry and evaluated on AmazInstr records 63.6 F1 macro (using DeepLift), compared to 64.5 obtained by a full-text model.",
"However, this does not apply to the other feature attribution methods (α; x∇x; IG).",
"To better understand this behavior, we conduct a correlation analysis between the importance rankings using any single feature attribution from (1) a model trained on the same domain as the evaluation data; and (2) a model trained on a different domain (an out-of-domain trained model).",
"High correlations would suggest that if a feature attribution from an out-of-domain trained model produces importance distributions similar to those of an in-domain model, it will also lead to high predictive performance out-of-domain.",
"Contrary to our initial assumption, we found that the lower the correlation, the higher the predictive performance with FRESH.",
"Results show low correlations when using α∇α and DeepLift (highest FRESH performance).",
"Surprisingly, IG and x∇x (lowest FRESH performance) showed consistently strong correlations across all dataset pairs.",
"Thus, we conclude that lower correlation scores indicate lower attachment to spurious correlations learned during training.",
"We expand our discussion and show results for the correlation analysis in Appendix E.",
Our findings therefore suggest that using FRESH in out-of-domain settings, can result to comparable performance with a model trained on full-text. However this highly depends on the choice of the feature attribution method . HardKuma vs. FRESH: We observe that HardKuma models are not significantly different compared to models trained on the full text in out-of-domain settings in more cases, when compared to FRESH (9 out of 12 and 5 out of 12 respectively). However, FRESH with or DeepLift records higher predictive performance compared to HardKuma models (both inand out-of-domain) in all cases . We attribute this to the underlying model architectures, as FRESH uses BERT and HardKuma a biLSTM . As we discussed in 3.2, we attempted using BERT for HardKuma models in the extractor and classifier similar to Jain et al. (2020). However, the performance of HardKuma with BERT is at most comparable to when using a biLSTM similar to findings of Guerreiro and Martins (2021). 5.3 Correlation between Post-hoc Explanation Faithfulness and FRESH Performance We hypothesize that a feature attribution with high scores for sufficiency and comprehensiveness, should extract rationales that result in high FRESH predictive performance. We expect that if our hypothesis is valid, faithfulness scores can serve as early indicators of FRESH performance, both on in-domain and out-of-domain settings. Table 5 shows the Spearman's ranking correlation ( ) between FRESH F1 performance (see Table 4) and comprehensiveness and sufficiency (see Table 2). Correlation is computed using all feature scoring methods for each dataset pair. Results show that only 4 cases achieve statistically significant correlations (p-value < 0.05) with only 3 out-of-domain and mostly between sufficiency and FRESH performance. We do not observe Train Test FRESH Sufficiency Comprehen. SST SST 0.97 0.15 IMDB 0.36 0.21 Yelp 0.90 0.56 IMDB IMDB 0.69 0.87 SST 0.65 0.23 Yelp 0.92 0.92 Yelp Yelp 0.82 0.55 SST -0.67 -0.67 IMDB 0.87 0.56 AmazDigiMu AmazDigiMu -0.11 0.22 AmazInstr 0.23 0.69 AmazPantry 0.11 0.11 AmazPantry AmazPantry 0.16 0.16 AmazDigiMu 0.05 0.41 AmazInstr 0.16 0.16 AmazInstr AmazInstr 0.79 0.55 AmazDigiMu 0.24 0.67 AmazPantry 0.21 0.20 Table 5: Spearman's ranking correlation ( ) between FRESH performance and comprehensiveness, sufficiency across all feature attribution approaches. Bold denotes statistically significant (p-value 0.05) correlations. high correlations with comprehensiveness which is expected, as comprehensiveness evaluated the rationale's influence towards a model's prediction. Our findings refute our initial hypothesis and suggest that there is no clear correlation across all cases, between post-hoc explanation faithfulness and FRESH predictive performance. Therefore, sufficiency and comprehensiveness scores cannot be used as early indicators of FRESH predictive performance. 6 Qualitative Analysis Table 6 presents examples from a qualitative analysis we performed, aimed at better understanding out-of-domain post-hoc explanations. Rows with highlighted text in blue are from a model trained in the same domain as the presented example (ID), whilst those with highlighted text in red are from models trained on a different domain. Importance scores are computed using scaled attention ( ). In Example (1), we observe that models trained on two closely related tasks (AmazInstr and AmazDigiMu) place more importance to the phrase sound good.",
"For an explanation to be highly comprehensive, the model's prediction after masking the rationale should have a large difference compared to the model's prediction using the full text.",
"On the contrary, the model trained on AmazPantry which has not encountered such phrases during training, mostly focuses on Work great.",
"This is expected as the term sound is not typical of pantry reviews.",
"Similarly, in Example 6927 M Trained On Example (1) AmazInstr (ID) Work great and sound good AmazDigiMu Work great and sound good AmazPantry Work great and sound good (2) AmazPantry (ID) Delicious and at a good price .",
"(2) from the AmazPantry dataset, the in-domain model focuses on a domain-specific word deli-cious.",
"On the contrary, the two models trained on music-related tasks focus on more generic terms such as good and would recommend.",
"In Example (3) the model trained on Yelp focuses mostly on the word behavior, a term we consider more relevant to restaurant reviews rather than movie reviews.",
"In comparison, the other models which are both trained on movie reviews focus both on the term funny.",
"In Example (4), again the two movie-review models focus on more generic terms (i.e. amazing) compared to must taste that the model trained in-domain (i.e. Yelp) identifies as important.",
"Overall, results show that rationales from models applied to a different domain (other than that they were trained for), comprise of terms that are mostly present within the domain they were trained for.",
"This can partly explain the performance of out-of-domain FRESH classifiers.",
"Our assumption, similar to (Adebayo et al., 2020), is that a model's inability to generalize to other domains, is based on the model latching on to specific features from the training dataset.",
"We conducted an extensive empirical study to assess the faithfulness of post-hoc explanations (i.e. using feature attribution approaches) and performance of select-then-predict (i.e. inherently faithful)",
"faithful) models in out-of-domain settings.",
"Our findings highlight, that using sufficiency and comprehensiveness to evaluate post-hoc explanation faithfulness out-of-domain can be misleading.",
"To address this issue, we suggest comparing faithfulness of post-hoc explanations to a random attribution baseline for a more robust evaluation.",
"We also show that select-then-predict models, which are inherently faithful, perform surprisingly well in out-of-domain settings.",
"Despite performance degradation, in many cases their performance is comparable to those of full-text trained models.",
"In future work, we aim to explore methods for improving the evaluation of faithfulness for out-of-domain post-hoc explanations.",
"We would like to thank Katerina Margatina and Tulika Bose for their insightful feedback in a preliminary version of this paper.",
"GC and NA are supported by EPSRC grant EP/V055712/1, part of the European Commission CHIST-ERA programme, call 2019 XAI: Explainable Machine Learning-based Artificial Intelligence."
] |
[
"abstain",
"abstain",
"abstain",
"method",
"result",
"result",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"result",
"method",
"objective",
"objective",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"result",
"method",
"result",
"abstain",
"objective",
"other",
"other"
] |
[
"We consider a task based on CVPR 2018 challenge dataset on advertisement (Ad) understanding.",
"The task involves detecting the viewer's interpretation of an Ad image captured as text.",
"Recent results have shown that the embedded scene-text in the image holds a vital cue for this task.",
"Motivated by this, we fine-tune the base BERT model for a sentence-pair classification task.",
"Despite utilizing the scene-text as the only source of visual information, we could achieve a hit-or-miss accuracy of 84.95% on the challenge test data.",
"To enable BERT to process other visual information, we append image captions to the scene-text.",
"This achieves an accuracy of 89.69%, which is an improvement of 4.7%.",
"This is the best reported result for this task.",
"The advertisement understanding challenge dataset of CVPR 2018 collected textual inputs from a set of viewers to capture their interpretations of Ad images (Hussain et al., 2017).",
"The task is to rank the given valid and negatively sampled invalid interpretations of an image.",
"Initial approaches to the problem tried capturing the visual semantics with a combination of object proposal features and relationships of objects with common symbolism (Doshi and Hinthorn, 2018; Ye and Kovashka, 2018; Ahuja et al., 2018).",
"Recently, Dey et al. (2019a,b) have obtained a significant improvement in performance by utilizing the text embedded in the image (termed as the scene-text) as another channel of information.",
"These approaches do not evaluate the validity of an interpretation by using attention to associate the words and phrases of the interpretation to fragments of textual and visual cues in the image.",
"For example, in the Ad of the car company in Figure 1, the words and phrases from a viewer's input I should buy this car because it would add some Figure 1: Ads Dataset: Textual and Visual Cues excitement to my life ' can be associated with the object car' in the image and the phrase add spark to life' in the scene-text.",
"To capture these mappings, we need a model that can simultaneously pay attention to the image and the interpretations at various levels of granularity.",
"The recently proposed BERT pre-trained language model (Devlin et al., 2019) has provided excellent performance on several NLP tasks.",
"The underlying attention-based transformer architecture (Vaswani et al., 2017) allows BERT to capture contextual representations.",
"We leverage the pre-trained base BERT model to capture a contextual representation of the viewer's interpretation with respect to the visual and textual cues in the image.",
"One of the challenges we face is to provide information on visual cues to the BERT model.",
"We overcome this challenge by extracting dense-cap captions(densecaps) (Johnson et al., 2016) to provide textual information about the image objects, their properties, and interactions.",
"This is motivated by the approaches of Visual Question Answering (VQA) (Li et al., 2019; Hudson and Manning, 2019), question generation (Zhang et al., 2017), which talk about leveraging more abstract text or concept-level information instead of pixel-level information of an image.",
"We fine-tune BERT for the sentence-pair classification task, where the scene-text and the densecaps form the first sentence, and the viewer's interpretation forms the second sentence.",
"With this approach, we achieve an accuracy of 89.69% and recall@3 of 2.411, which is by far the best reported result for this task.",
"The challenge dataset (Hussain et al., 2017) has 64,028 images.",
"Every image has 3 to 5 interpretations in terms of Action-Reason Pairs (ARPs), which are the answers provided by a set of crowd-workers to the questions, viz.",
"What should I do according to this ad?' and Why should I do it?' respectively.",
"These form the valid set of ARPs.",
"For every image, 10 to 12 ARPs are randomly negatively sampled, forming the invalid set of ARPs.",
"The challenge has provided 51223 images for training and 12805 images for testing.",
"The dataset providers have taken care to ensure that there is no information leakage between these partitions by constraining the negative sampling to be from within each partition.",
"The challenge contributors such as VSE++, ADVISE, and Cyberagent, have reported results on the test set (ref. Table 1).",
"Other prior works (Ye and Kovashka, 2018; Ahuja et al., 2018; Dey et al., 2019a,b) may have random partitions of the training images to obtain a validation (VAL) set, and report the results on some split of such a VAL set.",
"A random (80-20) train-val split of the training images causes approximately 98% of the val split ARPs to overlap with the train ARPs.",
"Unless they have taken care to partition the images first, and conduct negative sampling from only within each partition, this can lead to a possible information leakage.",
"However, such sampling amounts to changing the training data split provided by the challenge, making it hard for the community to replicate the results.",
"To provide a comprehensive comparison, we provide results on both the test set and the VAL split by considering a 5-fold split of the provided training images.",
"The challenge dataset also provides the annotations for advertisement strategies, sentiments, top-ics, symbolism, etc.",
"In this work, we do not utilize these annotations.",
"However, one can derive a potential benefit by including these annotations as additional channels of visual information.",
"For example, previous works have included the symbol' annotations provided, as an additional stream.",
"These annotations are image regions depicting symbol objects.",
"A symbol object signifies an abstract concept.",
"For instance, blood represents danger; muscle represents strength, etc. 2.2 Ranking Metrics The task is to rank the validity of the ARPs concerning an image.",
"We have considered various metrics to measure the quality of the ranking: Accuracy: Percentage of images having any one of the valid ARPs with rank one.",
"Rank: Rank of the highest-ranked valid ARP, averaged over all images.",
"Rank Average: Average of ranks of all valid ARPs of an image, further averaged over all images.",
"Recall@3: Number of valid ARPs ranked in the top-3, averaged over all images.",
"Hussain et al. (2017) introduced the CVPR challenge Ads dataset and established the baseline by modeling the task as VQA.",
"In their proposed approach, a two-layer LSTM encodes the questions, and the last hidden layer output of VGGNet encodes the image.",
"They convert the ARPs to a one-word answer by considering the word with the highest TF-IDF score, and the model predicts the word using a softmax layer.",
"Symvise (Doshi and Hinthorn, 2018) uses an extension of the top-down, bottom-up attention approach (Anderson et al., 2018) by adding a symbol stream using the symbol' annotations provided by the dataset.",
"ADVISE (Ye and Kovashka, 2018) is the first paper that claims to take knowledge' into account for the given task and adapts (Hussain et al., 2017) for the ranking task.",
"They use two branches, viz.",
"(i) The main branch, which uses attention mechanism to represent an image as a weighted combination of object regions,",
"(ii) The knowledge branch, which provides symbol' distribution for the image by making use of densecaps (Johnson et al., 2016) to map image to the symbol' labels.",
"The embeddings received from both the branches are added to get the image embedding.",
"They use triplet loss to learn an embedding space that keeps images closer to the valid ARPs.",
"Ahuja et al. (2018) proposes a weakly supervised learning algorithm that uses a multi-hop co-attention mechanism to iteratively refine the attention map that associates image proposals with symbol labels, thereby aggregating information from both modalities.",
"They use max-margin loss to get Figure 2: BERT Sentence-Pair Classification the image-symbol embedding closer to the valid ARPs.",
"Dey et al. (2019a) is the first approach that has considered scene-text as one of the inputs, along with the visual features.",
"Their algorithm and training is similar to Ye and Kovashka (2018).",
"We draw the following learnings from the literature:",
"(i) scene-text carries a strong signal (Dey et al., 2019b),",
"(ii) densecaps can be used to embed external knowledge (Ye and Kovashka, 2018),",
"(iii) capturing associations between modalities using co-attention mechanism is effective for the given task (Ahuja et al., 2018).",
"Thus, in this paper, we leverage the pre-trained language model BERT (De-vlin et al., 2019), which allows to learn contextual representations that capture associations between words and phrases of an ARP, and image inputs, using self-attention mechanism.",
"To abstract concepts from the pixel stream, we extract densecaps 1 (Johnson et al., 2016) of the image.",
"We use Google Vision API 2 to extract scene-text from the image.",
"We append the densecaps to the extracted scene-text to form a composite textual signal.",
"This text is paired with an ARP to form sentence pairs, that are served as inputs to BERT, as shown in Figure 2.",
"147.",
"For the samples for which the sentence-pair token length goes beyond the maximum allowed length (512 tokens) of the base BERT model, we truncate the length of the composite textual signal of the image.",
"To avoid a significant information loss due to the truncation, we arrange the densecaps in decreasing order of their confidence score.",
"BERT (Devlin et al., 2019) has been pre-trained to use the [CLS] pin output for sentence-pair classification.",
"Hence, we use the [CLS] pin output and fine-tune (BERT FT ST+C) (ref. Table 1) for the binary classification task to determine the validity of a candidate ARP with reference to the textual and visual cues of the image.",
"We collect and rank the softmax outputs of all the ARPs concerning an image, to obtain their relative validity.",
"We fine-tune BERT with only scene-text ARP pairs as input (BERT FT ST) and only densecaps ARP pairs as input (BERT FT C) to understand the contribution of the different inputs.",
"To understand the role of BERT's pretraining, we use BERT purely as a feature extractor (BERT FE ST+C) by training only a dense classifier layer over the [CLS] pin output.",
"For training all of the above models, we use the batch size of 6, a learning rate of 2e-5, and 3 epochs.",
"Most of the prior work has considered an information retrieval setting in which the learned embedding of an image is matched with the learned embedding of an ARP.",
"To compare specifically with such a setting, we have performed a sentence-pair matching task by using BERT in a siamese setting (Reimers and Gurevych, 2019).",
"We extract sentence representations by mean-pooling the word vectors and use mean-squared-error loss over co-sine similarity of the sentence vectors.",
"We fine-tune siamese BERT (SBERT FT ST+C) as well as use it as a feature extractor (SBERT FE ST+C).",
"We use a batch size of 16, a learning rate of 2e-5, and 4 epochs for its training.",
"There have been recent proposals for transformer-based cross-modal encoders such as LXMERT (Tan and Bansal, 2019) and ViLBERT (Lu et al., 2019), showing promising performance on VQA.",
"To evaluate the efficacy of these models on the Ads dataset, we fine-tune them for a binary classification task that determines the validity of an ARP with reference to the object proposals obtained from an Ad image.",
"We retain Method Image TEST Data VAL Data** Input Accu Rank Rank Recall Accu Rank Rank Recall -racy Avg @3 -racy Avg @3 VSE++ O 62% --66.6% -3.858 Symvise* O 57.11% 1.998 4.227 1.601 59.73% 1.931 4.049 1.683 LXMERT O 50.00% 2.262 5.000 1.410 53.22% 2.159 4.860 1.470 VilBERT O 61.76% 1.860 4.19 1.710 64.13% 1.760 4.028 1.790 ADVISE O + K 69% --72.84% -3.552 cyberagent ST + O 82% ---VS (v1) ST + O --88.70% -VS (v1)* ST + O 86.84% 1.264 3.072 2.259 89.28% 1.213 2.889 2.356 VS (v3) ST + O --90.90% -3.090 SBERT FE ST + C 37.31 % 2.870 6.515 1.024 37.59 % 2.847 6.472 1.025 BERT FE ST + C 81.94% 1.496 3.854 2.078 84.10% 1.423 3.744 2.141 SBERT FT ST + C 84.54% 1.334 3.123 2.310 87.87% 1.269 2.993 2.413 BERT FT C 60.09 % 2.175 4.489 1.667 62.81% 2.012 4.284 1.743 BERT FT ST 84.95% 1.884 3.622 2.271 87.53% 1.774 3.502 2.353 BERT FT ST + C 89.69% 1.230 2.982 2.411 91.56% 1.189 2.830 2.487 Table 1: Results on CVPR 2018 Challenge Data (FE: Feature Extractor, FT: Fine-Tuned, ST: Scene-Text, C: Dense-cap Captions, O: Object-Proposals, K: Knowledge) Symvise (Doshi and Hinthorn, 2018), VS(v1):Visual Semantics version 1 (Dey et al., 2019a), VS(v3): Visual Semantics version 3 (Dey et al., 2019b), LXMERT (Tan and Bansal, 2019), VilBERT (Lu et al., 2019), BERT (Devlin et al., 2019), SBERT: Siamese BERT (Reimers and Gurevych, 2019), * Our implementation , ** Results on their respective VAL splits, our results are on 5-fold train-val split, Results from challenge leaderboard (https://evalai.cloudcv.org/web/challenges/challenge-page/86/evaluation), Results from ADVISE github page (https://github.com/yekeren/ADVISE-Image ads understanding) April 2020.",
"the hyper-parameters provided in LXMERT and ViLBERT, except for a reduced learning rate of 4e-7.",
"In this section, we compare the performance of the models as presented in Table 1, draw empirical observations, and attempt to provide a rationale for the performances observed.",
"We also provide qualitative insights for some failure cases by manually inspecting the data.",
"We first make a broad observation that the performance of all the techniques on the test data is inferior as compared to the VAL data.",
"Information leakage can be one of the reasons for observing better performance on the VAL data.",
"Hence, we limit most of the discussion to the test set, but one can observe that the comparative performance of the models is similar on the VAL set.",
"VS(v3) has been published simultaneously to our work; hence we were unable to create results for the test data for this model.",
"Nevertheless, we observe that (BERTFT ST+C) could give better performance on VAL Data**.",
"Our proposed (BERT FT ST+C) model achieves the best performance on all the metrics amongst the considered models.",
"We observe that just using scene-text (BERT FT ST) gives an accuracy of 84.95%, which is within 1.89% of VS(v1)*.",
"Furthermore, the performance of BERT with just densecaps as input (BERT FT C) is competitive with other models that use just the visual cues as input.",
"We compare (BERT FT C) and (BERT FT ST), and observe that the contribution of scene-text in the accuracy is higher, compared to densecaps.",
"This validates the primary observation of Dey et al. (2019a).",
"In Table 2, we compare the BERT models with different inputs in terms of the number of misses of one model that are converted to hits by another.",
"This represents the potential advantage that a model can get by adding or removing an information channel.",
"We observe that for the misses of the (BERT FT ST+C), (BERT FT ST) was able to make correct inference for 2.31% of the images, whereas (BERT FT C) could infer correctly for 4.02%.",
"This leads us to the conclude that, for some images, scene-text and densecaps do not combine well, blocking cor-ST+C ST C ST+C 0 7.76% 34.33% ST 2.31% 0 33.36% C 4.02% 8.50% 0 Table 2: Cell-(i,",
"This is further validated, when we observe that for the misses of the (BERT FT ST) model, (BERT FT ST+C) was able to make correct inference for 7.76% of the images which is 51.58% of the misses of (BERT FT ST), whereas (BERT FT C) could infer correctly for 8.50% which is 56.51% of the misses.",
"The performance of (BERT FT C) is inferior to VilBERT, and ADVISE that directly operate on object proposals, implying a loss of information.",
"Comparing VilBERT and (BERT FT C), we observe that VilBERT could give 18.5% unique hits.",
"However, after the addition of scene-text (BERT FT ST+C), the unique hits of VilBERT have dropped to 4.8%.",
"This shows that adding an object proposal stream to (BERT FT ST+C) could contribute only a low additional advantage.",
"We make a similar comparison of VS(v1)* with (BERT FT ST+C) and observed that only 5% of the images get converted to hits by VS(v1)*.",
"Note that this number is in the same range as 4.02% obtained for (BERT FT C).",
"We observe that, BERT without any fine-tuning (BERT FE ST+C) has achieved an accuracy of 81.94% by itself.",
"Fine-tuning BERT (BERT FT ST+C) results in an improvement of only 7.75%.",
"This shows that BERT's pre-training has played a significant role in achieving this accuracy.",
"However, the performance of matching BERT features (SBERT FE ST+C), which does not use attention between the ARP and the composite textual signal of the image, achieves only 37.31% in comparison to (BERT FE ST+C).",
"This substantiates our argument that using attention to associate words and phrases in the ARPs to textual and visual cues in the image helps the task.",
"Nevertheless, after fine-tuning, (SBERT FT ST+C) achieves an accuracy of 84.54%, which, though inferior to 89.69% (BERTFT ST+C), is within 2.3% of VS(v1)*.",
"We wanted to evaluate the indirect inference BERT has to conduct.",
"Towards this, we analyze the syntax matches of densecaps and scene-text with the ARPs concerning an image on the test data.",
"Two sentences are said to have a syntax match if there is atleast one word common between them.",
"We remove non-alphanumeric characters and additionally perform stemming on ARPs and densecaps.",
"We perform POS tagging on densecaps and ARPs and consider only Nouns, Pronouns, Adjectives, and Adverbs POS Tags for syntax match analysis.",
"We observe that 14.12%, 56.73%, and 62.46% of samples show syntax matches between valid ARPs and inputs of (BERT FT C), (BERT FT ST) and (BERT FT ST+C), respectively.",
"Meanwhile, the corresponding numbers for the invalid ARPs are 6.32%, 10.58%, and 15.93%.",
"This establishes that syntax matches are a major discriminating factor.",
"However, a comparison with Table 1 shows that the performance of these models cannot be entirely attributed to syntax matches.",
"We manually inspect 900 randomly sampled images from the test dataset and made the following qualitative observations on the errors/limitations of the scene-text extractor and densecaps.",
"We observe that for 82.6% of the images, at least some scene-text was not detected.",
"We also notice that spelling errors were substantial.",
"The causes for these could be the usage of a non-standard font, poor resolution, curvy or rotated text, non-English language, or overlapping with an object.",
"We observe several spurious and false-positive dense captions.",
"In future, the captions could be more helpful if they capture",
"(i) additional object classes, e.g., cigarettes, ice-cream, etc.,",
"(ii) semantic attributes such as age or emotions,",
"(ii) object parts or fine-granular classification, e.g., ketchup bottle or perfume,",
"(iii) object interactions,",
"(iv) scene or situation depicted in the image such as office, fight, romance, etc. 6 Conclusion and Future work The scene-text holds vital information and can be used to achieve good accuracy on this task.",
"Syntax matches play a vital role in achieving the accuracy, but are not entirely the reason behind it.",
"Although the conversion of visual cues to captions cause a loss of information, the addition of scene-text mitigates most of the loss.",
"Using attention to associate the ARPs with the textual and visual cues is helping the task.",
"Better emotion, scene, scene-text, object detection and captions might lead to further improvement of performance."
] |
[
"method",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Previous work on review summarization focused on measuring the sentiment toward the main aspects of the reviewed product or business, or on creating a textual summary.",
"These approaches provide only a partial view of the data: aspect-based sentiment summaries lack sufficient explanation or justification for the aspect rating, while textual summaries do not quantify the significance of each element, and are not well-suited for representing conflicting views.",
"Recently, Key Point Analysis (KPA) has been proposed as a summarization framework that provides both textual and quantitative summary of the main points in the data.",
"We adapt KPA to review data by introducing Collective Key Point Mining for better key point extraction; integrating sentiment analysis into KPA; identifying good key point candidates for review summaries; and leveraging the massive amount of available reviews and their metadata.",
"We show empirically that these novel extensions of KPA substantially improve its performance.",
"We demonstrate that promising results can be achieved without any domain-specific annotation, while human supervision can lead to further improvement.",
"With their ever growing prevalence, online opinions and reviews have become essential for our everyday decision making.",
"We turn to the wisdom of the crowd before buying a new laptop, choosing a restaurant or planning our next vacation.",
"However, this abundance is often overwhelming: reading hundreds or thousands of reviews on a certain business or product is impractical, and users typically have to rely on aggregated numeric ratings, complemented by reading a small sample of reviews, which may not be representative.",
"The vast majority of available information is left unexploited.",
"Opinion summarization is a long-standing challenge, which has attracted a lot of research interest over the past two decades.",
"Early works (Hu and Liu, 2004; Gamon et al., 2005; Snyder and Barzi-lay, 2007; Blair-goldensohn et al., 2008; Titov and McDonald, 2008) aimed to extract, aggregate and quantify the sentiment toward the main aspects or features of the reviewed entity (e.g., food , price , service , and ambience for restaurants).",
"Such aspect-based sentiment summaries provide a high-level, quantitative view of the summarized opinions, but lack explanations and justifications for the assigned scores (Ganesan et al., 2010).",
"An alternative line of work casts this problem as multi-document summarization, aiming to create a textual summary from the input reviews (Carenini et al., 2006; Ganesan et al., 2010; Chu and Liu, 2019; Brazinskas et al., 2020b).",
"While such summaries provide more detail, they lack a quantitative view of the data.",
"The salience of each element in the summary is not indicated, making it difficult to evaluate their relative significance.",
"This is particularly important for the common case of conflicting opinions.",
"In order to fully capture the controversy, the summary should ideally indicate the proportion of favorable vs. unfavorable reviews for the controversial aspect.",
"Recently, Key Point Analysis (KPA) has been proposed as a novel extractive summarization framework that addresses the limitations of the above approaches (Bar-Haim et al., 2020a,b).",
"KPA extracts the main points discussed in a collection of texts, and matches the input sentences to these key points (KPs) .",
"The salience of each KP corresponds to the number of its matching sentences.",
"The set of key points is selected out of a set of candidates short input sentences with high argumentative quality, so that together they achieve high coverage, while aiming to avoid redundancy.",
"The resulting summary provides both textual and quantitative Positive Key Points % Reviews Negative Key Points % Reviews Amazingly helpful and friendly staff.",
"views of the data, as illustrated in Table 1. Table 2 shows a few examples of matching sentences to KPs.",
"Originally developed for argument summarization, KPA has also been applied to user reviews and municipal surveys, using the same supervised models that were only trained on argumentation data, and was shown to perform reasonably well.",
"However, previous work only used KPA out-of-the-box, and did not attempt to adapt it to different target domains.",
"In this work we propose several improvements to KPA, in order to make it more suitable to review data, and in particular to large-scale review datasets: 1. We show how the massive amount of reviews available in datasets like Amazon and Yelp, as well as their meta-data, such as numeric rating, can be leveraged for this task.",
"2. We integrate sentiment classification into KPA, which is crucial for analyzing reviews.",
"3. We improve key point extraction by introducing Collective Key Point Mining : extracting a large, high-quality set of key points from a large collection of businesses in a given domain.",
"4. We define the desired properties of key points in the context of user reviews, and develop a classifier that detects such key points.",
"We show empirically that these novel extensions of KPA substantially improve its performance.",
"We demonstrate that promising results can be achieved without any domain-specific annotation, while human supervision can lead to further improvement.",
"Overall, this work makes a dual contribution: first, it proposes a new framework for review summarization.",
"Second, it advances the research on KPA, by introducing novel methods that may be applied not only to user reviews, but to other use cases as well.",
"KPA was initially developed for summarizing large argument collections (Bar-Haim et al., 2020a).",
"KPA matches the given arguments to a set of key points (KPs) , defined as high-level arguments.",
"The set of KPs can be either given as input, or automatically extracted from the data.",
"The resulting summary includes the KPs, along with their salience, represented by the number (or fraction) of matching arguments.",
"The user can also drill down from each KP to its associated arguments.",
"Bar-Haim et al. (2020b) proposed the following method for automatic extraction of KPs from a set of arguments, opinions or views, which they refer to as comments : 1. Select short, high quality sentences as KP candidates .",
"2. Map each comment to its best matching KP, if the match score exceeds some threshold t match .",
"3. Rank the candidates according to the number of their matches.",
"4. Remove candidates that are too similar to a higher-ranked candidate 1 .",
"5. Re-map the removed candidates and their matched comments to the remaining candidates.",
"6. Re-sort the candidates by the number of matches and output the topk candidates.",
"Given a set of KPs and a set of comments, a summary is created by mapping each comment to its best-matching KP, if the match score exceeds t match .",
"The above method relies on two models: a matching model that assigns a match score for a (comment, KP) pair, and a quality model , that assigns a quality score for a given comment.",
"The matching model was trained on the ArgKP dataset, which contains 24K (argument, KP) pairs labeled as matched/unmatched.",
"The quality model was trained on the IBM-ArgQ-Rank-30kArgs dataset, which contains quality scores for 30K arguments (Gretz et al., 2020) 2 .",
"The arguments in both datasets support or contest a variety of common controversial topics (e.g., We should abolish capital punishment ), and were collected via crowdsourcing.",
"Bar-Haim et al. showed that models trained on argumentation data not only perform well on arguments, but also achieve reasonable results on other domains, including survey data and sentences taken from user reviews.",
"However, they did not attempt to adapt KPA to these domains.",
"In the following sections we look more closely at applying KPA to business reviews.",
"In this work we apply KPA to business reviews from the Yelp Open Dataset 3 .",
"The dataset contains about 8 million reviews for 200K businesses.",
"Each business is classified into multiple categories.",
"1 That is, their match score with that candidate exceeds the threshold t match .",
"RESTAURANTS is by far the most common category, comprising the majority of the reviews.",
"Besides restaurants, the dataset contains a wide variety of other business types, from NAILSALONS to DENTISTS .",
"We focus on two business categories in our experiments: RESTAURANTS (4.9M reviews) and HOTELS (258K reviews).",
"We will henceforth refer to these business categories as domains .",
"Each review includes, in addition to the review text, several other attributes, most relevant for our work is the star rating on a 1-5 scale.",
"We filtered and split the dataset as follows.",
"First, we removed reviews with more than 15 sentences (10% of the reviews).",
"Second, we removed businesses with less than 50 reviews.",
"The remaining businesses were split into Train, Development (Dev) and Test set, as detailed in Table 3. Our goal is to create a summary of the reviews for a given business.",
"The summary would list the top k positive and top k negative KPs, and indicate for each KP its salience in the reviews, represented by the percentage of reviews that match the KP.",
"A review is matched to a KP if at least one of its sentences is matched to that KP.",
"An example of such summary is given in Table 1. Table 2 shows a few examples of matching sentences to KPs.",
"Our system employs several classification models: in addition to the matching and argument quality models discussed in Section 2, in this work we add a sentiment classification model and a KP quality model, to be discussed in the next sections.",
"All four classifiers were trained by fine-tuning a RoBERTa-large model (Liu et al., 2019).",
"Prior to the fine-tuning of each classifier, we adapted the model to the business reviews domain, by pretraining on the Yelp dataset.",
"We performed Masked LM pertraining (Devlin et al., 2019; Liu et al., 2019) on 1.5 million sentences sampled from the train set with a length filter of 20-150 characters per sentence.",
"The following parameters were used: learning rate 1e-5; 2 epochs.",
"Training took two days on a single v100 GPU.",
"The matching model was then obtained by fine-tuning the pre-trained model on the ArgKP dataset, with the parameters specified by Bar-Haim et al. (2020b).",
"The quality model was fine-tuned following the procedure described by Gretz et al. (2020), except for using RoBERTa-large instead of BERT-base, with learning rate of 1e-5.",
"Previous work on KPA has ignored the issue of sentiment (or stance) altogether.",
"When applied to argumentation data, it was assumed that the stance of the arguments is known, and KPA was performed separately for pro and con arguments.",
"Accordingly, the ArgKP dataset only contains (argument, KP) pairs having the same stance.",
"There are, however, several advantages for incorporating sentiment into KPA, in particular when analyzing reviews: 1. Separating positive KPs from negative ones makes the summaries more readable.",
"2. Filtering neutral sentences, which are mostly irrelevant, may improve KPA quality.",
"3. Attempting to match only sentences and KPs with the same polarity may reduce both matching errors and run time.",
"We developed a sentence-level sentiment classifier for Yelp data by leveraging the abundance of available star ratings for short reviews.",
"We extracted from the entire train set reviews having at most 3 sentences and 64 tokens.",
"Reviews with 1-2, 3 and 4-5 star rating were labeled as negative ( NEG , 20% of the reviews), neutral ( NEUT , 11%) and positive ( POS , 69%), respectively.",
"The reviews were divided into a training set, comprising 235,481 reviews, and a held-out set, comprising 26,166 reviews.",
"The sentiment classifier was trained by fine-tuning the pre-trained model on the above training data, for 3 epochs.",
"The first two rows in Table 4 show the classifier's performance on the held-out set.",
"Since we ultimately wish to apply the classifier to individual sentences, we also annotated a small sentence-level benchmark of 158 reviews from the held-out set, which contain 952 sentences.",
"We selected a minimal threshold t s for predicting POS or NEG sentiment.",
"If both POS and NEG predictions are below this threshold, the sentence is predicted as NEUT .",
"The threshold was selected so that the recall of both POS and NEG is at least 70%, while POS NEG NEUT Reviews P 0.96 0.86 0.58 R 0.97 0.91 0.47 Sentences P 0.82 0.81 0.48 R 0.88 0.70 0.47 Table 4: Sentiment classification results on held-out data.",
"aiming to maximize precision 4 .",
"Sentence-level performance on the benchmark using this threshold is shown in the last two rows of Table 4. Almost all the errors involved neutral labels confusion between positive and negative labels was very rare.",
"We integrate sentiment into KPA as follows.",
"We extract positive KPs from a set of sentences classified as positive, and likewise for negative KPs.",
"In order to further improve precision, positive (neg-ative) sentences are only selected from positive (negative) reviews.",
"When matching sentences to the extracted KPs we filter out neutral sentences and match sentences only to KPs with the same polarity.",
"However, at this stage we do not filter by the review polarity, since we would like to allow matching positive sentences in negative reviews and vice versa, as well as positive and negative sentences in neutral reviews.",
"KPA is an extractive summarization method: KPs are selected from the review sentences being summarized.",
"When generating a summary for a business with just a few dozens of reviews, the input reviews may not have enough good KP candidates short sentences that concisely capture salient points in the reviews.",
"This is a common problem for extractive summarization methods, where it is often difficult to find sentences that fit into the summary in their entirety.",
"We propose to address this problem by mining KPs collectively for the whole domain (e.g., restaurants or hotels).",
"The extracted set of domain KPs is then matched to the review sentences of each analyzed business.",
"This method can extract KPs from reviews of thousands of businesses, rather than from a single business, and therefore is much more robust.",
"It overcomes a fundamental limitation of extractive summarization limited selection of candidate sentences, while sidestepping the com-4 The chosen threshold was 0.79.",
"plexity of sentence generation that exists in abstractive summarization.",
"Using the same set of KPs for each business makes it easy to compare different businesses.",
"For example, we can rank businesses by the prevalence of a certain KP of interest.",
"For each domain, we sampled 12,000 positive reviews and 12,000 negative reviews from the train set, from which positive and negative KPs were extracted, respectively 5 .",
"We extracted positive and negative sentences from the reviews using the sentiment classifier, as described in the previous section.",
"We filtered sentences with less than 3 tokens or more than 36 tokens (not including punctuation), as well as sentences with less than 10 characters.",
"The number of positive and negative sentences obtained for each domain is detailed in Table 5. We ran the KP extraction algorithm described in Section 2 separately for the positive and negative sentences in each domain.",
"We used a matching threshold t match = 0 .",
"99 .",
"The length of KP candidates was constrained to 3-5 tokens, and their minimal quality score was t quality =0.42 6 .",
"For each run, we selected the resulting top 70 candidates.",
"The number of RoBERTa predictions required by the algorithm is O ( #KP-candidates #sentences ) .",
"While the input size in previous work was up to a few thousands of sentences, here we deal with 50K-60K sentences per run.",
"In order to maintain reasonable run time, we had to constrain both the number of sentences and the number of KP candidates.",
"We selected the top 25% sentences with the highest quality score.",
"The maximal number of KP candidates was 1 .",
"5 N s , where N s is the number of input sentences, and the highest-quality candidates were selected.",
"Each run took 3.5-4.5 hours using 10 v100 GPUs.",
"Previous work did not attempt to explicitly define the desired properties KPs should have, or to de-5",
"de-5 To ensure diversity over the businesses, we employed a",
"two-step sampling process: first sampled a business and then sampled a review for the business.",
"6 The threshold was selected by inspecting a sample of the training data.",
"velop a model that identifies good KP candidates.",
"Instead, KP candidates were selected based on their length and argument quality, using the quality model of Gretz et al. (2020).",
"This quality model, however, is not ideally suited for selecting KP candidates for review summarization: first, it is trained on crowd-contributed arguments, rather than on sentences extracted from user reviews.",
"Second, quality is determined based on whether the argument should be selected for a speech supporting or contesting a controversial topic, which is quite different from our use case.",
"We fill this gap by defining the following requirements from a KP in review summarization: 1. VALIDITY : the KP should be a valid, understandable sentence.",
"This would filter out sentences such as It's rare these days to find that! .",
"2. SENTIMENT : it should have a clear sentiment (either positive or negative).",
"This would exclude sentences like I came for a company event .",
"3. INFORMATIVENESS : it should discuss some aspect of the reviewed business.",
"Statements such as Love this place or We were very disappointed , which merely express an overall sentiment should be discarded, as this information is already conveyed in the star rating.",
"The KP should also be general enough to be relevant for other businesses in the domain.",
"A common example of sentences that are too specific is mentioning the business name or a person's name ( Byron at the front desk is the best! ).",
"4. SINGLEASPECT : it should not discuss multiple aspects (e.g., Decent price, respectable portions, good flavor ).",
"As we show in Section 8, the method presented in the previous sections extracts many KPs that do not meet the above criteria.",
"In order to improve this situation, we developed a new KP quality classifier.",
"We created a labeled dataset for this task, as follows.",
"We sampled from the restaurant and hotel reviews in the train set 2,000 sentences comprising 3-8 tokens and minimal argument quality of t quality .",
"each sentence was annotated for each of the above criteria 7 by 10 crowd annotators, using the Appen platform 8 .",
"We took several measures 7 The guidelines are included in the appendix.",
"8 https://appen.com/ to ensure annotation quality, following Gretz et al. (2020) and Bar-Haim et al. (2020b).",
"First, the annotation was performed by trusted annotators, who performed well on previous tasks.",
"Second, we employed the Annotator score (Toledo et al., 2019), which measures inter annotator agreement, and removed annotators whose annotator was too low.",
"The details are provided in the appendix.",
"For each sentence and each criterion, the fraction of positive annotations was taken to be its confidence.",
"The final dataset was created by setting upper and lower thresholds on the confidence value of each of the four criteria.",
"Sentences that matched all the upper thresholds were considered positive.",
"Sentences that matched any of the lower thresholds were considered negative.",
"The rest of the sentences were discarded.",
"The threshold values we used are given in the appendix.",
"Overall, the dataset contains 404 positive examples and 1,291 negative examples.",
"We trained a KP quality classifier by fine-tuning the pretrained RoBERTa model (cf. Section 4) on the above dataset (4 epochs, learning rate: 1e-05).",
"Figure 1 shows that this classifier (denoted KP quality FT ) performs reasonably well on the dataset, in a 4-fold cross-validation experiment.",
"Unsurprisingly, the argument quality classifier trained on argumentation data is shown to perform poorly on this task.",
"The classifier was used to filter bad KP candidates, as part of the KP mining algorithm (Sec-tion 6).",
"Candidates that passed this filtering were filtered and ranked by the argument quality model as before.",
"We selected a threshold of 0.4 for the classifier, which corresponds to keeping 32% of the candidates, with precision of 0.62 and recall of 0.82.",
"Our evaluation follows Bar-Haim et al. (2020b), while making the necessary changes for our setting.",
"Let D be a domain, K a set of positive and negative KPs for D , and B a sample of businesses in D .",
"Applying KPA to a business b B using the set of KPs K and a matching threshold t match creates a mapping from sentences in b 's reviews, denoted R b , to KPs in K .",
"By modifying t match we can explore the tradeoff between precision (fraction of correct matches) and coverage .",
"Bar-Haim et al. performed KPA over individual sentences, and correspond-0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 Recall 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 P r e c i s i o n Argument quality KP quality FT Figure 1: KP Quality Precision vs. Recall.",
"ingly defined coverage as the fraction of matched sentences.",
"We are more interested in review-level coverage, since not all the sentences in the review are necessarily relevant for the summary.",
"Given KPA results for B , K and t match , we can compute the following measures: 1. Review Coverage : the fraction of reviews per business that are matched to at least one KP, macro-averaged over the businesses in B .",
"2. Mean Matches per Review : the average number of matched KPs per review, macro-averaged over the businesses in B .",
"Computing precision requires a labeled sample.",
"We create a sample S by repeating the following procedure until N samples are collected: 1. Sample a business b B ; a review r R b and a sentence s r .",
"2. Let the KP k K be the best match of s in K with match score m .",
"The ( s, k ) pairs in S are annotated as cor-rect/incorrect matches.",
"We can then compute the precision for any threshold t match > t min by considering the corresponding subset of the sample.",
"We sampled for each domain 40 businesses from the test set, where each business has between 100 and 5,000 reviews.",
"For each domain, and each evaluated set of KPs, we labeled a sample of 400 pairs.",
"We experimented with several configurations of KPA adapted to Yelp reviews, as described in the previous sections.",
"These configurations are denoted by the prefix RKPA .",
"Each configuration only differs in the method it employs for creating the set of domain KPs ( K ): RKPA-BASE : This configuration filters KP candidates according to their length and quality, using the quality model trained on argumentation data.",
"In each domain, the top 30 mined KPs for each polarity were selected.",
"RKPA-FT: This configuration applies the fine-tuned KP quality model as an additional filter for KP candidates.",
"As with the previous configuration, we take the top 30 KPs for each polarity, in each domain.",
"RKPA-MANUAL : We also experimented with an alternative form of human supervision, where the set of automatically-extracted KPs obtained by the RKPA-BASE configuration is manually reviewed and edited.",
"KPs may be rephrased, redundancies are removed and bad KPs are filtered out.",
"While this kind of task is less suitable for crowdsourcing, it can be completed fairly quickly about an hour per domain.",
"The task was performed by two of the authors, each working on one domain and reviewing the results for the other domain.",
"The final set includes: 18 positive and 15 negative KPs for restaurants; 20 positive and 20 negative KPs for hotels.",
"9 In addition to the above configurations, we also experimented with a vanilla KPA configuration (denoted KPA), which replicates the system of Bar-Haim et al. (2020b), without any of the adaptations and improvements introduced in this work.",
"No Yelp data was used for pretraining or fine-tuning the models; key points were extracted independently for each business in the test set; and no sentiment analysis was performed.",
"Instead of taking the top 30 KPs for each polarity, we took the top 60 KPs.",
"Sample labeling.",
"Similar to the KP quality dataset, the eight samples of 400 pairs (two domains four configurations) were annotated in the Appen crowdsourcing platform.",
"The annotation guidelines are included in the appendix.",
"Each instance was labeled by 8 trusted annotators, and annotators with Annotator < 0 .",
"05 were removed (cf. Section 7).",
"We set a high bar for labeling correct matches: at least 85% of the annotators had to agree that the match is correct, otherwise it was labeled as incorrect.",
"We verified the annotations consistency by sampling 250 pairs, and annotating each pair by 16 annotators.",
"Annotations for each pair were randomly split into two sets of 8 annotations, and a binary label was derived from each set, as described above.",
"The two sets of labels for the sample agreed on 85.2% of the pairs, with Cohen's Kappa of 0.6 10 .",
"Figure 2 shows the precision/coverage curves for the four configurations, where coverage is measured either as Review Coverage (left) or as Mean Matches per Review (right).",
"We first note that all three configurations developed in this work outperform vanilla KPA by a large margin.",
"The RKPA-BASE configuration, which is only trained on previously-available data, already achieves reasonable performance.",
"For example, the precision at Review Coverage of 0.8 is 0.77 for hotels and 0.83 for restaurants.",
"Applying human supervision for improving the set of key points, either by training a KP quality model on crowd labeling (RKPA-FT), or by employing a human-in-the loop approach (RKPA-MANUAL ) leads to substantial improvement in both domains.",
"While both alternatives perform well, RKPA-FT achieves better precision at higher coverage rates.",
"Table 6 shows, for each configuration in the restaurants domain, the top 10 KPs ranked by their number of matches in the sample.",
"The matching threshold for each configuration corresponds to Review Coverage of 0.75.",
"For the RKPA-BASE configuration, we can see examples of KPs that discuss multiple aspects (rows 3, 4), are too general (row 8) or too specific (row 9).",
"These issues are much improved by applying the KP quality classifier, as illustrated by the top 10 KPs for the RKPA-FT configuration.",
"Table 7 provides a more systematic comparison of the KP quality in both configurations, based on the top 30 KPs for each polarity in each domain (120 in total per configuration).",
"For each domain and configuration, the table shows the fraction of KPs that conform to our guidelines (Section 7).",
"In both domains, KP quality is much improved for the RKPA-FT configuration.",
"Error Analysis: By analyzing the top matching errors of both domains, we found several systematic patterns of errors.",
"The most common type of 10 This result is comparable to (Bar-Haim et al., 2020b), who reported Cohen's Kappa of 0.63 in a similar experiment.",
"error consisted of a KP and a sentence making the same claim towards different targets, e.g. We had to refill our own wine and ask for refills of soda. was matched to Coffee was never even refilled. .",
"This usually stemmed from a too specific KP and was more common in the restaurants domain.",
"In some cases, a sentence was matched to an unrelated KP with a shared concept or term.",
"For example, Cheap, easy, and filling was matched to Ordering is quick and easy .",
"Polarity errors were rare but present, e.g. However she wasn't the friendliest when she came to help us and The waitress was friendly though. .",
"Previous work on review summarization was dominated by two paradigms: aspect-based sentiment summarization and multi-document opinion summarization.",
"line of work aims to create structured summaries that assign an aggregated sentiment score or rating to the main aspects of the reviewed entity (Hu and Liu, 2004; Gamon et al., 2005; Snyder and Barzi-lay, 2007; Blair-goldensohn et al., 2008; Titov and McDonald, 2008).",
"Aspects typically comprise 1-2 words (e.g., service, picture quality ), and are either predefined or extracted automatically.",
"A core sub-task in this approach is Aspect-Based Sentiment Analysis: identification of aspect mentions in the text, which may be further classified into high-level aspect categories, and classification of the sentiment towards these mentions.",
"Recent examples are (Ma et al., 2019; Miao et al., 2020; Karimi et al., 2020).",
"The main shortcoming of such summaries is the lack of detail, which makes it difficult for a user to understand why an aspect received a particular rating (Ganesan et al., 2010).",
"Although some of these summaries include for each aspect a few supporting text snippets as evidence, these examples may be considered anecdotal rather than representative.",
"Multi-document opinion summarization.",
"This approach aims to create a fluent textual summary from the input reviews.",
"A major challenge here is the limited amount of human-written summaries available for training.",
"Recently, several abstractive neural summarization methods have shown promising results.",
"These models require no summaries for training (Chu and Liu, 2019; Brazinskas et al., 2020b; Suhara et al., 2020), or only a handful of them (Brazinskas et al., 2020a).",
"As discussed in the previous section, textual summaries provide more detail than aspect-based sentiment summaries, but lack a quantitative dimension.",
"In addition, the assessment of such summaries is known to be difficult.",
"As demonstrated in this work, KPA can be evaluated using straightforward measures such as precision and coverage.",
"We introduced a novel paradigm for summarizing reviews, based on KPA.",
"KPA addresses the limitations of previous approaches by generating summaries that combine both textual and quantitative views of the data.",
"We presented several extensions to KPA, which make it more suitable for large-scale review summarization: collective key point mining for better key point extraction; integrating sentiment analysis into KPA; identifying good key point candidates for review summaries; and leveraging the massive amount of available reviews and their metadata.",
"We achieved promising results over the Yelp dataset without requiring any domain-specific annotations.",
"We also showed that performance can be substantially improved with human supervision.",
"While we focused on user reviews, the methods introduced in this work may improve KPA performance in other domains as well.",
"In future work we would like to generate richer summaries by combining domain level key points with local key points, individually extracted per business.",
"It would also be interesting to adapt current methods for unsupervised abstractive summarization to generate key points.",
"Our use of the Yelp dataset has been reviewed and approved by both the data acquisition authority in our organization and the Yelp team.",
"We do not store or use any user information from the Yelp dataset.",
"We ensured fair compensation for crowd annotators as follows: we set a fair hourly rate according to our organization's standards, and derived the payment per task from the hourly rate by estimating the expected time per task based on our own experience.",
"Regarding the potential use of the proposed method one of the advantages of KPA is that it is transparent, verifiable and explainable the user can drill down from each key point to it matched sentences, which provide justification and supporting evidence for its inclusion in the summary."
] |
[
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"objective",
"objective",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"result",
"result",
"result",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain"
] |
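The annotation procedure described in the first sentence list above derives a binary match label from 8 annotator votes (at least 85% must agree) and verifies consistency by splitting 16 votes per pair into two sets of 8, deriving a label from each, and comparing the two label sets with Cohen's Kappa. The sketch below is an illustrative reconstruction of that procedure, not the authors' code; the function names (`match_label`, `cohen_kappa`, `split_half_consistency`), the random split, and the default seed are assumptions.

```python
import random

def match_label(votes, threshold=0.85):
    # A pair is labeled a correct match only if at least `threshold` of the
    # annotators judged it correct (with 8 votes and 0.85, that means 7 of 8).
    return int(sum(votes) / len(votes) >= threshold)

def cohen_kappa(y1, y2):
    # Cohen's Kappa for two binary label sequences of equal length.
    n = len(y1)
    p_o = sum(a == b for a, b in zip(y1, y2)) / n            # observed agreement
    p_e = (sum(y1) / n) * (sum(y2) / n) \
        + (1 - sum(y1) / n) * (1 - sum(y2) / n)              # chance agreement
    return (p_o - p_e) / (1 - p_e)

def split_half_consistency(votes_per_pair, seed=0):
    # Each element of `votes_per_pair` is a list of 16 binary votes for one pair.
    # Randomly split the votes into two sets of 8, derive a label from each set,
    # and report percent agreement and Cohen's Kappa between the two label sets.
    rng = random.Random(seed)
    labels_a, labels_b = [], []
    for votes in votes_per_pair:
        shuffled = list(votes)
        rng.shuffle(shuffled)
        labels_a.append(match_label(shuffled[:8]))
        labels_b.append(match_label(shuffled[8:]))
    agreement = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
    return agreement, cohen_kappa(labels_a, labels_b)
```

On a sample like the one described above (250 pairs with 16 annotations each), this returns the percent agreement and Kappa that the text reports as 85.2% and 0.6.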
[
"We focus on improving name tagging for low-resource languages using annotations from related languages.",
"Previous studies either directly project annotations from a source language to a target language using cross-lingual representations or use a shared encoder in a multitask network to transfer knowledge.",
"These approaches inevitably introduce noise to the target language annotation due to mismatched source-target sentence structures.",
"To effectively transfer the resources, we develop a new neural architecture that leverages multilevel adversarial transfer: (1) word-level adversarial training, which projects source language words into the same semantic space as those of the target language without using any parallel corpora or bilingual gazetteers, and (2) sentence-level adversarial training, which yields language-agnostic sequential features.",
"Our neural architecture outperforms previous approaches on CoNLL data sets.",
"Moreover, on 10 low-resource languages, our approach achieves up to 16% absolute F-score gain over all high-performing baselines on cross-lingual transfer without using any target-language resources.",
"1 1 Introduction Low-resource language name tagging is an important but challenging task.",
"An effective solution is to perform cross-lingual transfer, by leveraging the annotations from high-resource languages.",
"Most of these efforts achieve cross-lingual annotation projection based on bilingual parallel corpora combining with automatic word alignment (Yarowsky et al., 2001; Wang et al., 2013; Fang and Cohn, 2016; Ehrmann et al., 2011; Ni et al., 2017), bilingual gazetteers (Feng et al., 2017; Zirikly 1 Our programs will be released at https://github. com/wilburOne/AdversarialNameTagger and Hagiwara, 2015), cross-lingual word embedding (Fang and Cohn, 2017; Wang et al., 2017; Huang et al., 2018), or cross-lingual Wikifica-tion (Kim et al., 2012; Nothman et al., 2013; Tsai et al., 2016; Pan et al., 2017), but these resources are still only available for dozens of languages.",
"Recent efforts on multi-task learning model each language as one single task while all the tasks share the same encoding layer (Yang et al., 2016, 2017; Lin et al., 2018).",
"These methods can transfer knowledge via the shared encoder without using bilingual resources.",
"However, different languages usually have different underlying sequence structures, as shown in Figure",
"1. Without an explicit constraint, the encoder is not guaranteed to extract language-independent sequential features.",
"Moreover, when the size of annotated resources is not balanced, the encoder is likely to be biased toward the resource-dominant language.",
"NED: ENG: ESP: The European Union 5 ' s competition policy 3 has been of central importance 4 since European integration 2 began 1 .",
"La poltica de competencia 3 de la Unin Europea 5 ha sido de central importancia 4 desde que se inici 1 la integracin europea 2 .",
"Sedert het begin 1 van de Europese integratie 2 is het mededingingsbeleid 3 van groot belang 4 voor de Europese Unie 5 .",
"Considering these challenges, we develop a new neural architecture which can effectively transfer resources from source languages to improve target language name tagging.",
"Our neural architecture is built upon a state-of-the-art sequence tagger: bi-directional long short-term memory as input to conditional random fields (Bi-LSTM-CRF) (Lam-ple et al., 2016; Huang et al., 2015; Ma and ... ... Target Language Source Language ... Linear Projection Word Discriminator Sequence Feature Encoder ContextEncoder CRF Name Tagger SequenceDiscriminator C onvo l u ti on a l N e u r a l N e t w o r k s B-PER I-PER O ... O B-GPE Figure 2: Architecture overview. Hovy, 2016), integrated with multi-level adversarial transfer: (1) word level adversarial transfer, similar to Conneau et al. (2017), applying a projection function on the source language and a discriminator to distinguish each word of the target language from that of the source language, resulting in a bilingual shared semantic space; (2) sentence-level adversarial transfer, where a discriminator is trained to distinguish each sentence of the target language from that of the source language, 2 and a sequence encoder is applied to each sentence of both languages to prevent the discriminator from correctly predicting the source of each sentence, yielding language-agnostic sequential features.",
"These features can better facilitate the resource transfer from the source language to the target language.",
"Our contributions are twofold: (1) without requiring any parallel corpora or bilingual gazetteers, the multi-level adversarial approach can efficiently transfer annotated resources from the source language to the target language and improve target language name tagging; (2) In addition to outperforming previous high-performing baselines on CoNLL data sets, we also evaluate cross-lingual name tagging on 10 low-resource languages and achieve up to 16% absolute F-score gain over all baselines when there is no annotated resource for the target language.",
"Figure 2 shows the overview of our neural architecture.",
"It consists of three components: 2 For the name tagging task, sequence' always means sentence.' Cross-lingual word embedding learning with adversarial training: Given pre-trained monolingual word embeddings for a target language t and a source language s , we first apply a mapping function to each word representation from s , then feed both the projected source word representations and the target word representations to a word discriminator to predict the language of each word.",
"If the discriminator cannot distinguish the language of t from the projection of s , then we consider t and the projection of s to be in a shared space.",
"Language-agnostic sequential feature extraction: For each sentence of t and s , we apply a sequence encoder to extract sequential features, and a Convolutional Neural Network (CNN) (Krizhevsky et al., 2012) based sequence discriminator to predict the language source of each sentence.",
"The sequence encoder is trained to prevent the sequence discriminator from correctly predicting the language of each sentence, such that it finally extracts language-agnostic sequential features.",
"Language-independent name tagger The language-agnostic sequential features from both t and s are further fed into a context encoder to better capture and refine contextual information and a conditional random field (CRF) (Lafferty et al., 2001) based name tagger.",
"To better leverage the resources from the source language, our first step is to construct a shared semantic",
"semantic space where the words from the source and target languages are semantically aligned.",
"Without requiring any bilingual gazetteers, recent efforts (Zhang et al., 2017b; Conneau et al., 2017; Chen and Cardie, 2018) explore unsupervised approaches to learn cross-lingual word embeddings and achieve comparable performance to supervised methods.",
"Following these studies, we perform word-level adversarial training to automatically align word representations from s and t .",
"Formally, assume we are given pretrained monolingual word embeddings V t = f v t 1 ; v t 2 ; :::; v tN g 2 RN (cid:2) d t for t , and V s = f v s 1 ; v s 2 ; :::; v sM g 2 RM (cid:2) d s for s , where v ti and v sj are the vector representations of words w ti and w si from t and s , N and M denote the vocabulary sizes, d t and d s denote the embedding dimensionality of t and s respectively.",
"We then apply a mapping function f to project s into the same semantic space as t : e V s = f ( V s ) = V s U (1) where U 2 R d s (cid:2) d t is the transformation matrix.",
"e V s 2 RM (cid:2) d t are the projected word embeddings for s , and (cid:2) f = f (cid:18) f g denotes the set of parameters to be optimized for f .",
"Similar to Xing et al. (2015), Conneau et al. (2017), and Chen and Cardie (2018), we constrain the transformation matrix U to be orthogonal with singular value decomposition (SVD) to reduce the parameter search space: U = AB ; with A (cid:6) B = SVD ( e V s V s ) (2) To automatically optimize the mapping function f without using extra bilingual signals, we introduce a multi-layer perceptron D as a word discriminator, which takes word embeddings of t and projected word embeddings of s as input features and outputs a single scalar.",
"D ( w (cid:3) i ) represents the probability of w (cid:3) i coming from t .",
"The word discriminator is trained by minimizing the binary cross-entropy loss: L wdis = (cid:0) 1 I t ; s (cid:1) I t ; s i =0 ( y i (cid:1) log ( D ( w (cid:3) i )) + (1 (cid:0) y i ) (cid:1) log (1 (cid:0) D ( w (cid:3) i )) ) ; y i = (cid:14) i (1 (cid:0) 2 ) + ; where (cid:14) i = 1 when w (cid:3) i is from t and (cid:14) i = 0 otherwise.",
"I t ; s represents the number of words sampled from the vocabulary of t and s together.",
"is a smoothed value added to the positive and negative labels.",
"(cid:2) dis = f (cid:18) D g is the parameter set.",
"The mapping function f and word discriminator D are two adversarial players, thus we flip the word labels and optimize f by minimizing the following loss: L wf = (cid:0) 1 I t ; s (cid:1) I t ; s i =0 ( (1 (cid:0) y i ) (cid:1) log ( D ( w (cid:3) i )) + y i (cid:1) log (1 (cid:0) D ( w (cid:3) i )) ) ; y i = (cid:14) i (1 (cid:0) 2 ) + Following the standard training procedures of deep adversarial networks (Goodfellow et al., 2014), we train the word discriminator and the mapping function successively with stochastic gradient descent (SGD) (Bottou, 2010) to minimize L wdis and L wf .",
"Similar to Conneau et al. (2017), after word-level adversarial training, we also adopt a refinement step to construct a bilingual dictionary for the topk most frequent words in the source language 3 based on e V s and V t , and further optimize U with Equation 2 in a supervised way.",
"Once s is projected into the same semantic space as t , we can regard both sentences as coming from one unified language and directly project annotations from s to t .",
"However, name tagging not only relies on word level features, but also on sequential contextual features for entity type classification.",
"Without constraints, the sequence encoder can only extract sequential features for both t and s based on their final training signals while these features are not necessarily beneficial to the target language.",
"Thus, we further design sentence level adversarial transfer to encourage the encoder to extract language-agnostic sequential features.",
"Given a sentence x t = f w t 1 ; w t 2 ; ::: g from t and a sentence x s = f w s 1 ; w s 2 ; ::: g from s , we first use V t and e V s to initialize a vector representation for each w ti and w si .",
"We also apply a character-based CNN (denoted as CharCNN) (Kim et al., 2016) for each language to compose a word representation from its characters.",
"For each word, we 3 We set k=15,000 in our experiment.",
"concatenate its word representation and character based representation.",
"Then we feed the sequence of vector representations into a weight sharing Bi-LSTM encoder E to obtain sequential features H t = f h t 1 ; h t 2 ; ::: g and H s = f h s 1 ; h s 2 ; ::: g for x t and x s respectively.",
"The parameter set of optimizing both language-dependent CharCNN and the sequence encoder can be denoted as (cid:2) e = f (cid:18) CharCNN t ; (cid:18) CharCNN s ; (cid:18) E g .",
"Based on these sequential features, we use a sequence discriminator to predict the language source of each sentence.",
"Given a sentence x (cid:3) and its sequential features H = f h (cid:3) 1 ; h (cid:3) 2 ; ::: g from E , we first apply a language-independent CNN with max-pooling to get an overall vector representation for x (cid:3) , then feed it into another multi-layer perceptron, ~ D , to predict the probability that x (cid:3) comes from language t .",
"The sequence discriminator is trained by minimizing the following binary cross-entropy loss: L xdis = (cid:0) 1 ~ I t ; s (cid:1) ~ I t ; s i =0 ( ~ y i (cid:1) log ( ~ D ( x (cid:3) i )) + (1 (cid:0) ~ y i ) (cid:1) log (1 (cid:0) ~ D ( x (cid:3) i )) ) ; ~ y i =~ (cid:14) i (1 (cid:0) 2 (cid:17) ) + (cid:17) ; where ~ (cid:14) i = 1 if the sentence x (cid:3) i is from t and ~ (cid:14) i = 0 otherwise.",
"~ I t ; s represents the number of sentences sampled from the whole data set of t and s .",
"(cid:17) is another smoothed value for sequence labels.",
"(cid:2) f dis = f (cid:18) CNN ; (cid:18) ~ D g denotes the parameter set for optimizing the sequence discriminator.",
"The sequence encoder E and the sequence discriminator ~ D are two adversarial players and E is optimized by trying to fool ~ D to correctly predict the language source of each sentence.",
"Thus we flip the sequence labels and optimize E by minimizing the following loss: L xe = (cid:0) 1 ~ I t ; s (cid:1) ~ I t ; s i =0 ( (1 (cid:0) ~ y i ) (cid:1) log ( ~ D ( x (cid:3) i )) + ~ y i (cid:1) log (1 (cid:0) ~ D ( x (cid:3) i )) ) ; ~ y i =~ (cid:14) i (1 (cid:0) 2 (cid:17) ) + (cid:17) 2.4 Name Tagger Training With the language-agnostic sequential features from E , we can directly combine all annotated Algorithm 1 Multi-level Adversarial Training for Improving Target Language Name Tagging Input: Monolingual pre-trained word embeddings V t for target language t , and V s for source language s .",
"Annotated sentence set t for t and s for related language s .",
"1. for iter = 1 to word _ epoch do",
"2. for a = 1 to word _ dis _ steps do",
"3. sample a batch of words b t (cid:24) V t , b s (cid:24) V s 4. loss = L wdis ([ b t ; f ( b s )])",
"5. update (cid:2) dis to minimize loss",
"6. sample a batch of words b t (cid:24) V t , b s (cid:24) V s 7. loss = L wf ([ b t ; f ( b s )])",
"8. update (cid:2) f to minimize loss",
"9. build a parallel dictionary with V t and f ( V s ) and refine projected word embeddings e V s = f ( V s ) 10. for iter = 1 to seq _ epoch do",
"11. sample a batch of sentences ~ b t (cid:24) t , ~ b s (cid:24) s 12. extract sequential features from ~ b t , ~ b s with",
"E 13. loss = L xdis ([ E (~ b t ) ; E (~ b s )])",
"14. update (cid:2) e , (cid:2) g dis to minimize loss",
"15. for g = 1 to seq _ tagger _ steps do",
"16. sample a batch of sequences ~ b t (cid:24) t , ~ b s (cid:24) s 17. loss = L xe ([ E (~ b t ) ; E (~ b s )]) + L crf ([~ b t ; ~ b s ])",
"18. update (cid:2) e , (cid:2) c to minimize loss training data from both t and s to train the name tagger for t .",
"To do so, we feed the sequential features from E to another Bi-LSTM encoder E c to refine the context information for each token, and use a CRF output layer to render predictions for each token, which can effectively capture dependencies among name tags (e.g., an inside-organization token cannot follow a beginning-person token).",
"Specifically, given an input sentence x = f w 1 ; w 2 ; :::w n g , we extract language-agnostic sequential features with E , and further obtain a new sequence of contextual features e H = f ~ h 1 ; ~ h 2 ; :::; ~ h n g with E c .",
"Then we a apply a linear layer to further convert each ~ h i to a score vector y i , in which each dimension denotes the predicted score for a tag (the starting, inside or outside of a name mention with a pre-defined entity type).",
"Then we feed the sequence of score vectors Y = f y 1 ; y 2 ; :::; y n g into the CRF layer.",
"The score of a sequence of tags Z = f z 1 ; z 2 ; :::; z n g is defined as: Score ( x; Y ; Z ) = n i =1 ( R z i (cid:0) 1 ;z i + Y i;z i ) where R is a transition matrix and R p;q denotes the binary score of transitioning from tag p to tag q .",
"Y i;z represents the unary score of assigning tag z to the i -th word.",
"Given the annotated sequence of tags Z , the CRF loss is: L crf = log Z 2 ~ Z e Score ( x; Y ; Z ) (cid:0) Score ( x; Y ; Z ) where ~ Z is the set of all possible tagging paths.",
"The parameter set for optimizing the name tagger can be denoted as (cid:2) c = f (cid:18) E c ; (cid:18) ; (cid:18) CRF g .",
"We jointly optimize the sequence encoder E , the context encoder E c and the CRF together by minimizing the loss L = L xe + L crf , and successively minimize L xdis and L with SGD.",
"The end-to-end training for our neural architecture is described in Algorithm",
"1. 3 Experiment 3.1 Data and Experimental Setup We evaluate our methods from multiple settings.",
"We first evaluate our architecture on 10 low-resource languages from the DARPA LORELEI project.",
"The annotations are released by the Linguistic Data Consortium (LDC).",
"4 Each dataset has four predefined name types: person (PER), organization (ORG), location (LOC) and geo-political entity (GPE).",
"For each target low-resource language, we choose a source language if they are from the same language family or use the same script.",
"To show the impact of resource transfer between distinct languages, we also use English as a source language for each target low-resource language.",
"We create the English annotated resource by combining the TAC-KBP 2015 English Entity Discovery and Linking (Ji et al., 2015) data set and the Automatic Content Extraction (ACE2005) data set.",
"5 To avoid the impact of parameter initialization, we perform 5-fold cross validation.",
"For each experiment, we run twice and get the averaged F-score.",
"Table 1 shows the statistics of each data set.",
"We also evaluate our approach on high-resource languages.",
"We use Dutch (nl) and Spanish (es) data sets from the CoNLL 2002 (Tjong Kim Sang, 2002) shared task as target languages, and use English (en) data from the CoNLL 2003 (Tjong Kim Sang and De Meulder, 2003) shared task as 4 The annotations are from: am (LDC2016E87), ti (LDC2017E39), ar (LDC2016E89), fa (LDC2016E93), om (LDC2017E27), so (LDC2016E91), sw (LDC2017E64), yo (LDC2016E105), ug (LDC2016E70), uz (LDC2016E29) 5 The data sets are LDC2015E103 and LDC2006T06 Language # of Sents # of Tokens # of Names Amharic (am) 4,770 71,399 3,891 Tigrinya (ti) 5,023 95,364 6,201 Arabic (ar) 4,781 80,715 4,937 Farsi (fa) 3,855 72,629 3,966 Oromo (om) 2,987 52,876 4,985 Somali (so) 3,453 78,400 5,571 Swahili (sw) 4,155 96,902 6,044 Yoruba (yo) 1,599 46,084 2,016 Uyghur (ug) 3,961 60,999 2,575 Uzbek (uz) 11,135 177,816 10,937 English (en) 17,936 388,120 23,938 Table 1: Data set statistics for each low-resource language.",
"the source language.",
"All the data sets have four pre-defined name types: PER, ORG, LOC and miscellaneous (MISC).",
"Table 2 shows the statistics of these data sets.",
"For fair comparison, we use the same pretrained word embeddings of English, Dutch and Spanish as Lin et al. (2018), while for each low-resource language we train their word embeddings using the documents from their LDC packages with FastText.",
"6 Table 3 lists the key hyper-parameters we used in our experiments.",
"We compare our methods with three categories of baseline methods: 7",
"",
"Monolingual Name Tagging Using monolingual annotations only, the current state-of-the-art name tagging model is the Bi-LSTM-CRF network (Huang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016).",
"8 Multi-task Learning Lin et al. (2018) apply multi-task learning to boost name tagging performance by introducing additional annotations from source languages using a weight sharing context encoder across multiple languages.",
"Language Universal Representations We apply word adversarial transfer only to project the source language into the same semantic space as the target language, then train the name tagger on the annotations of source and target languages.",
"Word-Adv 1 refers to the approach which is directly trained on the combination of the anno-6 https://fasttext.cc/ 7 All the baselines are trained for 100 epochs 8 For each word, we also combine its word embedding with a CharCNN based representation.",
"tations, while Word-Adv 2 refers to the baseline that is first trained on the target language annotations and then further tuned on the related language annotations.",
"We first evaluate our approach on a cross-lingual transfer setting without using any annotated training data from the target language.",
"We conduct experiments on 8 low-resource languages.",
"Among those, some pairs, such as Amharic (am) and Tigrinya (ti), Oromo (om) and Somali (so), or Yoruba (yo) and Swahili (sw), are from the same language family and are closely related, while some are not, such as Arabic (ar) and Farsi (fa).",
"Since our approach requires some unlabeled sentences from the target language to train the sentence-level discriminator, we entirely remove the annotations from the annotated data set of the target language.",
"Table 4 presents the results.",
"Our approach significantly outperforms the previous methods on all languages.",
"Specifically, compared with the Word-Adv 1 baseline, which only performs word-level adversarial transfer, our approach achieves 10% absolute F-score gain on average, which demonstrates the effectiveness of the sentence-level adversarial transfer.",
"In addition, compared with Lin et al. (2018), who only apply a shared context-encoder to transfer the knowledge, our approach not only includes a language-sharing target Cross-lingual Multitask Our (source) Word-Adv 1 Learning Approach am (ti) 15.19 19.72 26.86 ti (am) 16.20 9.06 29.36 ar (fa) 1.53 3.52 13.83 fa (ar) 2.59 0.91 11.14 om (so) 4.66 3.40 14.14 so (om) 4.12 2.98 20.02 sw (yo) 7.20 5.60 18.25 yo (sw) 13.07 6.14 23.73 Table 4: Cross-lingual transfer when the target language has no resources (F-score %).",
"encoder, but also performs multi-level adversarial training to encourage the semantic alignment of words from both languages and a sequence encoder to extract language-agnostic sequential features.",
"Here we use some Arabic (Farsi) examples to further show the effectiveness of each level of adversarial training in our architecture.",
"Without using any annotated training data from Arabic, both our approach and the Word-Adv 1 baseline successfully identify ( French ) as a GPE from the Arabic (ar) sentence in Figure 3, since with word-level adversarial training, the semantics of is well aligned with the GPE names in Farsi annotated data, such as ( France ), ( Russia ) and ( Germany ).",
"However, both the Word-Adv 1 and Lin et al. (2018) baselines fail to identify ( Algerian ) as a GPE since its top ranked similar words in Farsi include ( negotiations ), ( Doha ) and ( agreement ).",
"With sentence-level adversarial training, our approach successfully captures language-agnostic sequential features, such as ( or ) usually connects two names with the same type , thus our approach successfully identifies ( Algerian ) as a GPE name.",
"We also investigate the impact of cross-lingual transfer when the target languages have some annotated resources.",
"For each target low-resource language, we explore the use of a related low-resource language vs. using the high-resource En-target Monolingual Cross-lingual Embedding Multitask Our Approach (related) Bi-LSTM-CRF Word-Adv 1 Word-Adv 2 Learning Multi-Adversarial am (ti) 72.23 72.15 72.01 72.35 73.98 ti (am) 74.68 74.43 74.83 74.71 74.93 ar (fa) 48.92 48.37 47.90 47.53 49.76 fa (ar) 64.35 63.93 64.43 63.21 65.09 om (so) 76.37 76.43 76.19 76.18 77.19 so (om) 77.63 77.31 77.13 77.99 78.15 sw (yo) 77.01 77.31 77.85 77.86 76.28 yo (sw) 68.97 68.89 69.62 70.12 70.59 ug (uz) 68.73 68.53 68.29 68.39 69.46 uz (ug) 74.59 74.21 74.74 74.56 75.37 am (en) 72.23 72.43 71.63 72.22 73.35 ti (en) 74.68 74.61 74.69 74.68 74.80 ar (en) 48.92 48.50 47.91 47.40 50.08 fa (en) 64.35 64.04 64.25 63.44 63.92 om (en) 76..27 76.68 76.53 76.2 77.29 so (en) 77.63 76.67 77.88 77.88 78.21 sw (en) 77.01 77.52 76.84 77.89 77.01 yo (en) 68.97 69.21 69.46 70.43 70.88 ug (en) 68.73 68.14 68.79 68.69 69.06 uz (en) 74.59 73.95 74.46 74.48 74.75 Table 5: Cross-lingual transfer when the target language has resources (F-score %).",
"AR: 1 2 3 .",
"EN: The deputy prosecutor has ruled that the evidence against those with French 3 or 2 Algerian 1 nationality is mostly sufficient.",
"glish as our source language.",
"Table 5 shows the performance on 10 low-resource languages.",
"Comparing cross-lingual embedding based baselines to the monolingual baseline, we observe that for most low-resource languages, directly adding the annotations from the source language to the target language slightly hurts the model.",
"This suggests that when the training data for the target language is not enough, the model will be very sensitive to noise.",
"The multitask learning based baseline (Lin et al., 2018) performs better than the monolingual baseline only when the target and source languages are very close, such as Amharic (am) and Tigrinya (ti), or Swahili (sw) and Yoruba (yo).",
"By introducing annotated training data from English, the performance of all the baselines becomes worse than the monolingual baseline.",
"Since the script and sequence structure of English is very different from these low-resource languages, the addition of English to the limited target language training data yields a considerably noisy corpus.",
"However, by forcing the sequence encoder to extract language-agnostic features, our approach still achieves better performance than the monolingual baseline for most languages.",
"All of these experiments demonstrate that our approach is more effective in leveraging annotations from other languages to improve target language name tagging.",
"We finally investigate the results when both the source and target languages are all high-resource languages.",
"Table 6 presents the performance on Dutch and Spanish while using English as the source language.",
"Our approach significantly outperforms all the other approaches even when the size of the annotated training data for the target language is huge.",
"We notice that our approach achieves larger improvement on Dutch than Spanish.",
"The reason may be that, compared with Spanish, Dutch is much closer to English (Cutler and Pasveer, 2006).",
"Both English and Dutch are from the same West Germanic branch of the Indo-European language family while Spanish is from the Italic branch.",
"We use Amharic as the target language and Tigrinya as the source language to show the impact of the size of their annotations.",
"Specifically, to explore the impact of the size of target language annotations, we use 0, 10%, 50%, or 100% annotated training data from Amharic.",
"Similarly, to show the effect of the size of source language annotations, for each experiment, we also gradually add 0, 20%, 50%, or 100% annotated training data from Tigrinya.",
"For all experiments, we use the same dev and test set of Amharic.",
"As Figure 4 shows, as we gradually add annotations from the source or target language, the performance can always be improved.",
"When the size of target language annotations is small, such as 400 sentences, we can achieve 5%-30% F-score gain by adding about 4,000 sentences from the source language.",
"When the size of target language annotations is over 2,000 sentences, the improvement is about 2% if we add in about 4,000 sentences from source language annotations.",
"Name tagging methods based on sequence labeling have been widely studied in recent years.",
"Huang et al. (2015) and Lample et al. (2016) propose an effective Bi-LSTM-CRF architecture; the Bi-LSTM encodes previous and following contexts, and the CRF is used for tag prediction.",
"Other studies incorporate a character-level CNN (Ma and Hovy, 2016), global contexts (Zhang et al., 2018), or language models (Liu et al., 2018; Peters et al., 2017, 2018; Devlin et al., 2018) to improve name tagging.",
"In addition, several approaches (Zhang et al., 2016a, 2017a; Al-Badrashiny et al., 2017) attempt to incorporate hand-crafted linguistic features into a Bi-LSTM-CRF to improve low-resource name tagging performance.",
"Recent attempts on cross-lingual transfer for name tagging can be divided into two categories: the first projects annotations from a source language to a target language via parallel corpora (Yarowsky et al., 2001; Wang and Manning, 2013; Wang et al., 2013; Zhang et al., 2016b; Fang and Cohn, 2016; Ehrmann et al., 2011; En-ghoff et al., 2018; Ni et al., 2017), a bilingual gazetteer (Feng et al., 2017; Zirikly and Hagiwara, 2015), Wikipedia anchor links (Kim et al., 2012; Nothman et al., 2013; Tsai et al., 2016; Pan et al., 2017), and language universal representations, including Unicode bytes (Gillick et al., 2016) and cross-lingual word embeddings (Fang and Cohn, 2017; Wang et al., 2017; Huang et al., 2018; Xie et al., 2018).",
"The second is based on multitask learning via a weight sharing encoder (Yang et al., 2016, 2017; Lin et al., 2018).",
"Compared to these studies, our approach not only automatically learns cross-lingual word embeddings without requiring any parallel resources, but also carefully extracts language-agnostic sequential features, yielding better performance.",
"Adversarial training has also been extensively studied and applied for cross-lingual and cross-domain transfer.",
"Several studies (Barone, 2016; Zhang et al., 2017c,b; Conneau et al., 2017; Chen and Cardie, 2018) explore adversarial training to automatically induce bilingual and multilingual word representations without using any parallel corpora or bilingual gazetteers.",
"Adversarial training is also applied to extract language-agnostic (Chen et al., 2016; Zou et al., 2018; Wang and Pan, 2018; Kim et al., 2017a; Muis et al., 2018; Cao et al., 2018) and domain-agnostic features (Kim et al., 2017b; Ganin et al., 2016; Tzeng et al., 2017; Chen et al., 2017; Li et al., 2017; Fu et al., 2017; Bousmalis et al., 2016; Shi et al., 2018) for cross-lingual and cross-domain adaptation.",
"Compared with these methods, our approach combines both word-level and sentence-level adversarial training.",
"We design a new neural architecture which integrates multi-level adversarial transfer into a Bi-LSTM-CRF to improve low-resource name tagging.",
"With word-level adversarial training, it can automatically project the source language into a shared semantic space with the target language without requiring any comparable data or bilingual gazetteers.",
"Moreover, considering the different underlying sequential structures among various languages, we further design a sentence-level adversarial transfer to encourage the sequence encoder to extract language-agnostic features.",
"The experiments show that our approach achieves the state-of-the-art on both CoNLL data sets and 10 low-resource languages.",
"In the future, we will further explore selecting the feature-consistent annotations from the source language and add to the target language, and explore unsupervised pretrained cross-lingual language models (Peters et al., 2018; Radford et al., 2018; Devlin et al., 2018; Lample and Conneau, 2019) for cross-lingual low resource name tagging.",
"This research is based upon work supported in part by U.S. DARPA LORELEI Program # HR0011-15-C-0115, and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via contract # FA8650-17-C-9116, and ARL NS-CTA No.",
"W911NF-09-2-0053.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ODNI, IARPA, or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] |
[
"result",
"abstain",
"abstain",
"objective",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"result",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"method",
"objective",
"abstain",
"method",
"result",
"abstain",
"other",
"other",
"other",
"other"
] |
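The word-level adversarial transfer in the second sentence list above alternates between a word discriminator loss and a flipped-label mapping loss with smoothed labels, re-orthogonalizing the projection matrix via SVD; the sentence-level transfer follows the same adversarial pattern over Bi-LSTM features. The PyTorch sketch below illustrates one word-level round under stated assumptions: the embedding sizes, smoothing value, 512-unit discriminator, and function names are illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn as nn

d_s, d_t, eps = 300, 300, 0.1              # assumed embedding sizes and smoothing value

mapping = nn.Linear(d_s, d_t, bias=False)  # the mapping f(V_s) = V_s U
discriminator = nn.Sequential(             # word discriminator D -> P(word comes from t)
    nn.Linear(d_t, 512), nn.LeakyReLU(),
    nn.Linear(512, 1), nn.Sigmoid(),
)
bce = nn.BCELoss()

def smoothed(delta):
    # y_i = delta_i * (1 - 2*eps) + eps, the smoothed labels from the text
    return delta * (1 - 2 * eps) + eps

def discriminator_loss(b_t, b_s):
    # L^w_dis: train D to separate target words from projected source words.
    # The projection is detached because only D's parameters are updated here.
    x = torch.cat([b_t, mapping(b_s).detach()])
    delta = torch.cat([torch.ones(len(b_t)), torch.zeros(len(b_s))])
    return bce(discriminator(x).squeeze(-1), smoothed(delta))

def mapping_loss(b_t, b_s):
    # L^w_f: the same loss with flipped labels, so stepping the mapping fools D
    # (in practice only the mapping's optimizer is stepped on this loss).
    x = torch.cat([b_t, mapping(b_s)])
    delta = torch.cat([torch.zeros(len(b_t)), torch.ones(len(b_s))])
    return bce(discriminator(x).squeeze(-1), smoothed(delta))

def orthogonalize():
    # Keep U near the orthogonal manifold via SVD, in the spirit of Equation 2.
    with torch.no_grad():
        u, _, vh = torch.linalg.svd(mapping.weight, full_matrices=False)
        mapping.weight.copy_(u @ vh)
```

Following the structure of Algorithm 1, one would sample word batches b_t from V_t and b_s from V_s, step the discriminator on `discriminator_loss`, then step the mapping on `mapping_loss`, repeating several discriminator steps per epoch before the dictionary-based refinement.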
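The CRF score and loss in that same list (Score(x, Y, Z) and L_crf) are the standard linear-chain CRF formulation, with the partition function over all tagging paths computed by the forward algorithm. The sketch below is a generic, unbatched reconstruction under stated assumptions (one sentence at a time, no dedicated start/stop transitions), not the authors' implementation.

```python
import torch

def crf_nll(emissions, transitions, tags):
    # emissions:   (n, T) unary scores Y[i, z] for n words and T tags
    # transitions: (T, T) binary scores R[p, q] for moving from tag p to tag q
    # tags:        (n,)  gold tag indices Z
    # Returns L_crf = log sum_{Z'} exp(Score(x, Y, Z')) - Score(x, Y, Z).
    n, _ = emissions.shape
    # Score of the gold path: Y[0, z_0] + sum_{i>=1} (R[z_{i-1}, z_i] + Y[i, z_i])
    gold = emissions[0, tags[0]]
    for i in range(1, n):
        gold = gold + transitions[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    # Forward algorithm: alpha[q] = log-sum of scores of all prefixes ending in tag q
    alpha = emissions[0]
    for i in range(1, n):
        # alpha'[q] = logsumexp_p(alpha[p] + R[p, q]) + Y[i, q]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[i]
    log_partition = torch.logsumexp(alpha, dim=0)
    return log_partition - gold
```

At decoding time the same recursion supports Viterbi: replacing the logsumexp with a max (plus back-pointers) recovers the highest-scoring tag path, which is how the tagger produces its predictions.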
[
"Monolingual word alignment is important for studying fine-grained editing operations (i.e., deletion, addition, and substitution) in text-to-text generation tasks, such as paraphrase generation, text simplification, neutralizing biased language, etc.",
"In this paper, we present a novel neural semi-Markov CRF alignment model, which unifies word and phrase alignments through variable-length spans.",
"We also create a new benchmark with human annotations that cover four different text genres to evaluate monolingual word alignment models in more realistic settings.",
"Experimental results show that our proposed model outperforms all previous approaches for monolingual word alignment as well as a competitive QA-based baseline, which was previously only applied to bilingual data.",
"Our model demonstrates good generalizability to three out-of-domain datasets and shows great utility in two downstream applications: automatic text simplification and sentence pair classification tasks.",
"1 1 Introduction Monolingual word alignment aims to align words or phrases with similar meaning in two sentences that are written in the same language.",
"It is useful for improving the interpretability in natural language understanding tasks, including semantic textual similarity (Li and Srikumar, 2016) and question answering (Yao, 2014).",
"Monolingual word alignment can also support the analysis of human editing operations (Figure 1) and improve model performance for text-to-text generation tasks, such as text simplification (Mad-dela et al., 2021) and neutralizing biased language (Pryzant et al., 2020).",
"It has also been shown to be helpful for data augmentation and label projection 1 Our code and data will be available at: https:// github.com/chaojiang06/neural-Jacana (cid:70) Authors contributed equally.",
"paraphrase generation.",
"One major challenge for automatic alignment is the need to handle not only alignments between words and linguistic phrases (e.g., a dozen more than 10 ), but also non-linguistic phrases that are semantically related given the context (e.g., tensions relations being strained in Figure 3).",
"In this paper, we present a novel neural semi-Markov CRF alignment model, which unifies both word and phrase alignments though variable-length spans, calculates span-based semantic similarities, and takes alignment label transitions into consideration.",
"We also create a new manually annotated benchmark, Multi -Genre M onolingual W ord A lignment (MultiMWA), which consists of four datasets across different text genres and is large enough to support the training of neural-based models (Table 1).",
"It addresses the shortcomings of existing datasets for monolingual word alignment: MTReference (Yao, 2014) was annotated by crowd-sourcing workers and contains many obvious errors (more details in 4); iSTS (Agirre et al., 2016) and SPADE/ESPADA (Arase and Tsujii, 2018, 2020) were annotated based on chunking and parsing results, which may restrict the granularity and flexibility of the alignments.",
"Our experimental results show that the proposed semi-Markov CRF model achieves state-of-the-art performance with higher precision, in comparison to the previous monolingual word alignment models (Yao et al., 2013a,b; Sultan et al., 2014), as well as another very competitive span-based neural model (Nagata et al., 2020) that had previously only applied to bilingual data.",
"Our model exceeds 90% F1 in the in-domain evaluation and also has very good generalizability on three out-of-domain datasets.",
"We present a detailed ablation and error analysis to better understand the performance gains.",
"Finally, we demonstrate the utility of monolingual word alignment in two downstream applications, namely automatic text simplification and sentence pair classification.",
"Word alignment has a long history and was first proposed for statistical machine translation.",
"The most representative ones are the IBM mod-els(Brown et al., 1993), which are a sequence of unsupervised models with increased complexity and implemented the GIZA++ toolkit (Och and Ney, 2003).",
"Many more works followed, such as FastAlign (Dyer et al., 2013).",
"Dyer et al. (2011) also used a globally normalized log-linear model for discriminative word alignment.",
"Bansal et al. (2011) proposed a hidden semi-Markov model to handle both continuous and noncontinuous phrase alignment.",
"These statistical methods promoted the development of monolingual word alignment (MacCartney et al., 2008; Thadani and McKe-own, 2011; Thadani et al., 2012).",
"Yao et al. (2013a) proposed a CRF aligner following (Blun-som and Cohn, 2006), then extended it to a semi-CRF model for phrase-level alignments (Yao et al., 2013b).",
"Sultan et al. (2014) designed a simple system with heuristic rules based on word similarity and contextual evidence.",
"Neural methods have been explored in the past decade primarily for bilingual word alignment.",
"Some early attempts (Yang et al., 2013; Tamura et al., 2014) did not match the performance of GIZA++, but recent Transformer-based models started to outperform.",
"Garg et al. (2019) proposed a multi-task framework for machine translation and word alignment, while Zenkel et al. (2020) designed an alignment layer on top of Transformer for machine translation.",
"Both can be trained without word alignment annotations but rely on millions of bilingual sentence pairs.",
"As for supervised methods, Stengel-Eskin et al. (2019) extracted representations from the Transformer-based MT system, then used convolutional neural network to incorporate neighboring words for alignment.",
"Nagata et al. (2020) proposed a span prediction method and formulated bilingual word alignment as a SQuAD-style question answering task, then solved it by fine-tuning multilingual BERT.",
"We adapt their method to monolingual word alignment as a new state-of-the-art baseline ( 5.1).",
"Some monolingual neural models have different settings from this work.",
"Ouyang and McKe-own (2019) introduced pointer networks for long, sentenceor clause-level alignments.",
"Arase and Tsujii (2017, 2020) utilized constituency parsers for compositional and non-compositional phrase alignments.",
"Culkin et al. (2021) considered span alignment for FrameNet (Baker et al., 1998) annotations and treated each span pair as independent prediction.",
"In this section, we first describe the problem formulation for monolingual word alignment, then present the architecture of our neural semi-CRF word alignment model (Figure 2).",
"We formulate word alignment as a sequence tagging problem following previous works (Blunsom and Cohn, 2006; Yao et al., 2013b).",
"Given a source sentence s and a target sentence t of the same language, the span alignment a consists of a sequence of tuples ( i, j ) , which indicates that span s i in the source sentence is aligned with span t j in the target sentence.",
"More specifically, a i = j means source span s i is aligned with target span t j .",
"We consider all spans up to a maximum length of D words.",
"Given a source span s i of d ( d D ) words [ s wb i , s wb i +1 , ..., s wb i + d 1 ] , where b i is the beginning word index, its corresponding label a i means every word within the span s i is aligned to the target span t a i .",
"That is, the word-level alignments a wb i , a wb i +1 , ..., a wb i + d 1 have the same value j .",
"We use a w to denote the label sequence of alignments between words and s wb i to denote the b i th word in the source sentence.",
"There might be cases where span s i is not aligned to any words in the target sentence, then a i = [NULL] .",
"When D 2 , the Markov property would no longer hold for word-[NULL] S t o c k s Wall street stocks fell sharply wall_street fell_sharply stocks_fell_sharply s l u m p on w a ll s t r ee t Wall street stocks fell sharply [SEP] Stocks slump on wall street P re t r a i n e d Sp a n BERT E n c o d er v ( s i , t j ) Span Interaction Alignment Label Transition Bidirectional Training T a r g e t ( ) t S ou r ce ( ) s a s 2 t = a r g m a x a P ( a | s , t ) Span Representation [NULL] W a ll Stocks slump on wall street stocks_slump wall_street on_wall_street s t r ee t s t o c k s f e ll s h a r p l y a t 2 s = a r g m a x a P ( a | t , s ) Figure 2: Illustration of our neural semi-CRF word alignment model.",
"level alignment labels, but for span-level labels.",
"That is, a i depends on a wb i 1 , the position in the target sentence where the source span (with ending word index b i 1 ) that precedes the current span s i is aligned to.",
"We therefore design a discriminative model using semi-Markov conditional random fields (Sarawagi and Cohen, 2005) to segment the source sentence and find the best span alignment, which we present below.",
"One unique aspect of our semi-Markov CRF model is that it utilizes a varied set of labels for each sentence pair.",
"The conditional probability of alignment a given a sentence pair s and t is defined as follows:",
"where i denotes the indices of a subset of source spans that are involved in the alignment a ; a represents the gold alignment sequence at span-level.",
"The potential function consists of three elements, of which the first two compose negative log-likelihood loss: the span interaction function , which accounts the similarity between a source span and a target span; the Markov transition function , which models the transition of alignment labels between adjacent source spans; the cost is implemented with Hamming loss to encourage the predicted alignment sequence to be consistent with gold labels.",
"Function and are implemented as two neural components which we describe below.",
"Span Representation Layer.",
"First, source and target sentences are concatenated together and encoded by the pre-trained SpanBERT (Joshi et al., 2020) model.",
"The hidden representations in the last layer of the encoder are extracted for each WordPiece token, then averaged to form the word representations.",
"Following previous work (Joshi et al., 2020), the span is represented by a self-attention vector computed over the representations of each word within the span, concatenated with the Transformer output states of two endpoints.",
"Span Interaction Layer.",
"The semantic similarity score between source span s i and target span t j is calculated by a 2-layer feed-forward neural network FF sim with Parametric Relu (PReLU) (He et al., 2015), 2 after applying layer normalization to each span representation: ( s i , t j ) = FF sim ([ h si ; h tj ; | h si h tj | ; h si h tj ]) (3) where [; ] is concatenation and is element-wise multiplication.",
"We use h si and h tj to denote the representation of source span s i and target span t j , respectively.",
"Markov Transition Layer.",
"Monolingual word alignment moves along the diagonal direction in most cases.",
"To incorporate this intuition, we propose a scoring function to model the transition between the adjacent alignment labels a wb i 1 and a i .",
"The main feature we use is the distance between the beginning index of current target span and the 2 We also compared ReLU and GeLU, and found PReLU works slightly better.",
"end index of the target span that the prior source span is aligned to.",
"The distance is binned into 1 of 13 buckets with the following boundaries [-11, -6, -4, -3, -2, -1, 0, 1, 2, 3, 5, 10], and each bucket is encoded by a 128-dim randomly initialized embedding.",
"It is then transformed into a real-value score by a 1-layer feed forward neural network.",
"Training and Inference.",
"During training, we minimizes the negative log-likelihood of the gold alignment a , and the model is trained from both directions (source to target, target to source): (cid:80) ( s , t , a ) log p ( a s 2 t | s, t ) log p ( a t 2 s | t, s ) (4) where a s 2 t and a t 2 s represent the gold alignment labels from both directions.",
"During inference, we use the Viterbi algorithm to find the optimal alignment.",
"There are different strategies to merge the outputs from two directions, including intersection, union, grow-diag (Koehn, 2009), bidi-avg (Nagata et al., 2020), etc.",
"It can be seen as a hyper-parameter and decided based on the dev set.",
"In this work, we use intersection in our semi-CRF model for all experiments.",
"We implement our model in PyTorch (Paszke et al., 2017).",
"We use the Adam optimizer and set both the learning rate and weight decay as 1e-5.",
"We set the maximum span size to 3 for our neural semi-CRF model, which can converge within 5 epochs.",
"The neural semi-CRF model has 2 hour training time per epoch for MultiMWA-MTRef, measured on a single GeForce GTX 1080 Ti GPU.",
"In this section, we present the manually annotated Multi -genre M onolingual W ord A lignment (Mul-tiMWA) benchmark that consists of four datasets of different text genres.",
"As summarized in Table 1, our new benchmark is the largest to date and of higher quality compared to existing datasets.",
"In contrast to iSTS (Agirre et al., 2016) and SPADE/ESPADA (Arase and Tsujii, 2018, 2020), our annotation does not rely on external chunking or parsing that may introduce errors or restrict the granularity and flexibility.",
"Our benchmark contains both token alignments and a significant portion of phrase alignments as they are semantically equivalent as a whole.",
"Our benchmark also contains a large portion of semantically similar but not strictly equivalent sentence pairs, which are common in text-to-text generation tasks and thus important for evaluating the monolingual word alignment models under this realistic setting.",
"For all four datasets, we closely follow the standard 6-page annotation guideline 3 from (Callison-Burch et al., 2006) and further extend it to improve the phrase-level annotation consistency (more details in Appendix B.1).",
"We describe each of the four datasets below.",
"MultiMWA-MTRef.",
"We create this dataset by annotating 3,998 sentence pairs from the MTReference (Yao, 2014), which are human references used in a machine translation task.",
"The original labels in MTReference were annotated by crowd-sourcing workers on Amazon Mechanical Turk following the guideline from (Callison-Burch et al., 2006).",
"In an early pilot study, we discovered that these crowd-sourced annotations are noisy and contain many obvious errors.",
"It only gets 73.6/96.3/83.4 for Precision/Recall/F 1 on a random sample of 100 sentence pairs, when compared to the labels we manually corrected.",
"To address the lack of reliable annotation, we hire two in-house annotators to correct the original labels using GoldAlign 4 (Gokcen et al., 2016), an annotation tool for monolingual word alignment.",
"Both annotators have linguistic background and extensive NLP annotation experience.",
"We provide a three-hour training session to the the annotators, during which they are asked to align 50 sentence pairs and discuss until consensus.",
"Following previous work, we calculate the inter-annotator agreement as 84.2 of F 1 score for token-level nonidentical alignments by comparing one annotator's annotation against the other's.",
"The alignments between identical words are usually easy for human annotators.",
"After merging the the labels from both annotators, we create a new split of 2398/800/800 for train/dev/test set.",
"To ensure the quality, an adjudicator further exams the dev and test sets and constructs the final labels.",
"corpus (Xu et al., 2015b) consists of 1,932 English news articles and their simplified versions written by",
"professional editors.",
"It has been widely used in text simplification research (Xu et al., 2016; Zhang and Lapata, 2017; Zhong et al., 2020).",
"We randomly select 500 complex-simple sentence pairs from the test set of Newsela-Auto (Jiang et al., 2020), 5 which is the newest sentence-aligned version of Newsela.",
"214 of these 500 pairs contain sentence splitting.",
"An in-house annotator 6 labels the word alignment by correcting the outputs from GIZA++ (Och and Ney, 2003).",
"MultiMWA-arXiv.",
"The arXiv 7 is an open-access platform that stores more than 1.7 million research papers with their historical versions.",
"It has been used to study paraphrase generation (Dong et al., 2021) and statement strength (Tan and Lee, 2014).",
"We first download the LATEX source code for 750 randomly sampled papers and their historical versions, then use OpenDetex 8 package to extract plain text from them.",
"We use the trained neural CRF sentence alignment model (Jiang et al., 2020) to align sentences between different versions of the papers and sample 200 nonidentical aligned sentence pairs for further annotation.",
"The word alignment is annotated in a similar procedure to that of the MultiMWA-Wiki.",
"MultiMWA-Wiki.",
"Wikipedia has been widely used in text-to-text tasks, including text simpli-5 More specifically, we sample from the exact test set used in Table 2 in Maddela et al. (2021).",
"fication (Jiang et al., 2020), sentence splitting (Botha et al., 2018), and neutralizing bias language (Pryzant et al., 2020).",
"We follow the method in (Pryzant et al., 2020) to extract parallel sentences from Wikipedia revision history dump (dated 01/01/2021) and randomly sample 4,099 sentence pairs for further annotation.",
"We first use an earlier version of our neural semi-CRF word aligner ( 3) to automatically align words for the sentence pairs, then ask two in-house annotators to correct the aligner's outputs.",
"The inter-annotator agreement is 98.1 at token-level measured by F 1 .",
"9 We split the data into 2514/533/1052 sentence pairs for train/dev/test sets.",
"In this section, we present both in-domain and out-of-domain evaluations for different word alignment models on our MultiWMA benchmark.",
"We also provide a detailed error analysis of our neural semi-CRF model and an ablation study to analyze the importance of each component.",
"We introduce a novel state-of-the-art baseline by adapting the QA-based method in (Nagata et al., 2020), which has not previously applied to monolingual word alignment but only bilingual word alignment.",
"This method treats the word alignment problem as a collection of independent predictions 9 The inter-annotator agreement is much higher compared to that of MultiMWA-MTRef, as the parallel sentences extracted from Wikipedia revision history have more overlap.",
"from every token in the source sentence to a span in the target sentence, which is then solved by fine-tuning multilingual BERT (Devlin et al., 2019) similarly as for SQuAD-style question answering task.",
"Taking the sentence pair in Figure 1 as an example, the word to be aligned is marked by in the source sentence and concatenated with the entire target sentence to form the input as With Canadian conduct his model. Lkoyd performed his model. .",
"A span prediction model based on fine-tuning multilingual BERT is then expected to extract performed from the target sentence.",
"The predictions from both directions (source to target, target to source) are symmetrized to produce the final alignment, using a probability threshold of 0.4 instead of the typical 0.5.",
"We change to use standard BERT in this model for monolingual alignment and find that the 0.4 threshold chosen by Nagata et al. (2020) is almost optimal in maximizing the F 1 score on our MultiMWA-MTRef dataset.",
"This QA-based method alone outperforms all existing models for monolingual word alignment, including: JacanaToken aligner (Yao et al., 2013a), which is a CRF model using hand-crafted features and external resources; JacanaPhrase aligner (Yao et al., 2013b), which is a semi-CRF model relying on feature templates and external resources; PipelineAligner (Sultan et al., 2014), which is a pipeline system that utilizes word similarity and contextual information with heuristic algorithms.",
"We also create a variation of our model, a Neural CRF aligner , in which all modules remain the same but the max span length is set to 1, to evaluate the ben-efits of span-based alignments.",
"Following the literature (Thadani et al., 2012; Yao et al., 2013a,b), we present results under both Sure and Sure + P oss settings for the MultiMWA-MTRef dataset.",
"Sure + P oss setting includes all the annotated alignments, and Sure only contains a subset of them which are agreed by multiple annotators.",
"We consider Sure + P oss as the default setting for all the other three datasets.",
"Table 2.",
"The neural models are working remarkably well in comparison to the non-neural methods, especially as measured by Exact Matches (EM).",
"On both MTRef and Wiki datasets, our neural semi-CRF model achieves the best F 1 and EM.",
"QA-based aligner also has competitive performance with strong recall, however, its precision is lower compared to our model.",
"It is worthy to note that our model has a modular design, and can be more easily adjusted than QA-based method to suit different datasets and downstream tasks.",
"Table 3 presents the out-of-domain evaluation results.",
"Our neural models achieve the best performance across all three datasets.",
"This demonstrates the generalization ability of our model, which can be useful in the downstream applications.",
"Table 4 shows the ablation study for our neural semi-CRF model.",
"F 1 and EM drops by 1.3 and 4.4 points respectively after replacing SpanBERT with BERT, indicating the importance of optimized pre-trained representations.",
"Markov transition layer contributes mainly to the alignment accuracy (EM).",
"We have experimented with different strategies to merge the outputs from two directions: intersection yields better precision, grow-diag and union bias towards recall.",
"Leveraging the span interaction matrix generated by our model (details in 3.2), we design a simple postprocessing rule to extend the phrasal alignment to spans that are longer than 3 tokens.",
"Adjacent target words are gradually included if they have very high semantic similarity with the same source span.",
"This rule further improves recall and achieves the best F 1 on the MultiMWA-MTRef.",
"We sample 50 sentence pairs from the dev set of MultiMWA-MTRef and analyze the errors under Sure+Poss setup.",
"10 Figure 4 shows how the performance of different alignment models would improve, if we resolve each of the 7 types of errors.",
"We discuss the categorization of errors and their breakdown percentages below: Phrase Boundary (58.6%).",
"The phrase boundary error (see 3 in Figure 3 for an example) is the most prominent error in all models, attributing 7.6 points of F 1 for JacanaPhrase, 5.7 for QA aligner, and 4.7 for neural semi-CRF aligner.",
"For another example, instead of 3x2 alignment funds for research research funding , our model captures two 1x1 alignments, funds funding and research research .",
"This is largely due to the fact that alignments are not limited to linguistic phrases (e.g., noun phrases, verb phrases, etc.), but rather, include non-linguistic phrases.",
"It could also be challenging to handle longer spans, such as keep his position protect himself from being removed (more on this in Appendix B.2).",
"Although we use SpanBERT for better phrase representation, there is still room for improvement.",
"Function Words (19.1%).",
"Function words can be tricky to align when rewording and reordering happens, such as 2 .",
"Adding on the complexity, same function word may appear more than once in one sentence.",
"This type of error is common in all the models we experiment with.",
"It attributes 4.7 points of F 1 for JacanaPhrase, 1.3 for QA aligner, and 1.5 for our neural semi-CRF aligner.",
"Content Words (14.2%).",
"Similar to function words, content words (e.g., security bureau defense ministry ) can also be falsely aligned or missed, but the difference between neural and nonneural model is much more significant.",
"This error type attributes 7.7 points of F 1 score for Jacana aligner, but only 1.1 and 0.8 for neural semi-CRF aligner and QA aligner, respectively.",
"Context Implication (5.6%).",
"Some words or phrases that are not strictly semantically equivalent can also be aligned if they appear in a similar context.",
"For example, given the source sentence 10 The strict Sure only labels exclude many alignments that are critical for certain applications, such as label projection.",
"We thus focus on the Sure+Poss labels for error analysis.",
"Gaza international airport was put into operation the day before' and the target sentence The airport began operations one day before' , the phrase pair was put into began can be aligned.",
"This type is related to 2.8 F 1 score improvement for Jacana aligner, but only 0.4 and 0.2 for neural semi-CRF and QA-based aligners, respectively.",
"Debatable Labels (1.9%).",
"Word alignment annotation can be subjective sometimes.",
"Take phrase alignment two days of a two-day for example, it can go either way to include the function word a ' in the alignment, or not.",
"Name Variations (0.6%).",
"While our neural semi-CRF model is designed to handle spelling variations or name abbreviations, it fails sometimes as shown by 1 in Figure 3 as an example.",
"Some cases can be very difficult, such as SAWS the state's supervision and control bureau of safe production , where SAWS stands for State Administration of Work Safety .",
"Skip Alignment (0.0%).",
"Non-contiguous tokens can be aligned to the same target token or phrase (e.g., owes ... to is a result of ), posing a challenging situation for monolingual word aligners.",
"However, this error is rare, as only 0.6% of all alignments in MTRef dev set are discontinuous.",
"In this section, we apply our monolingual word aligner to some downstream applications, including both generation and understanding tasks.",
"Text simplification aims to improve the readability of text by rewriting complex sentences with simpler language.",
"We propose to incorporate word alignment information into the state-of-the-art EditNTS model (Dong et al., 2019) to explicitly learn the edit operations, including addition, deletion and paraphrase.",
"The EditNTS model uses a neural programmer-interpreter architecture, which derives the ADD, KEEP and DELETE operation sequence based on the edit-distance measurements during training time.",
"We instead construct this edit sequence based on the neural semi-CRF aligner's outputs (trained on MTRef Sure + Poss ) with an additional REPLACE tag to train the EditNTS model (more details in Appendix A).",
"Table 5 presents the text simplification results on two benchmark datasets, Newsela-auto and Wikipedia-auto (Jiang et al., 2020), where we improve the SARI score (Xu et al., 2016) by 0.9 and 0.6, respectively.",
"The SARI score averages the F 1 /precision of n-grams inserted ( add ), kept ( keep ) and deleted ( del ) when compared to human references.",
"We also calculate the BLEU score with respect to the input ( s-BL ), the percentage of new words ( %new ) added, and the percentage of system outputs being identical to the input ( %eq ) to show the paraphrasing capability.",
"We manually inspect 50 sentences sampled from Newsela-auto test set and find that both models (EditNTS and EditNTS+Aligner) generate the same output for 10 sentences.",
"For the remaining 40 sentences, the original EditNTS only attempts to paraphrase 4 times (2 are good).",
"Our modified model (Edit-NTS+Aligner) is more aggressive, generating 25 paraphrases (11 are good).",
"With the help of word aligner, the modified model also produces a higher number of good deletions (20 vs.",
"13) and a lower number of bad deletions (6 vs. 12), which is consistent with the better keep and del scores.",
"Models RTE MRPC STS-B STS14 WikiQA SICK PIT URL TrecQA QQP MNLI SNLI 2.5k 3.5k 5.7k 8k 8k 10k 11k 42k 53k 363k 392k 549k Acc F 1 r / r MAP/MRR Acc max F 1 max F 1 MAP/MRR Acc Acc m/Acc mm Acc BERT 65.3 88.2 86.7/85.8 83.6 81.8/83.0 86.2 75.0 78.7 84.4/ 89.6 90.8 84.8/83.1 90.5 BERT + Aligner 67.3 88.9 86.8 / 86.0 83.7 83.2 / 84.4 87.2 75.5 78.5 85.1 /87.8 90.9 84.8/ 83.5 90.4 Table 6: Downstream applications on natural language inference (RTE, SICK, MNLI, SNLI), paraphrase identifi-cation (MRPC, PIT, URL, QQP), question answering (WikiQA, TrecQA), and semantic textual similarity (STS-B, STS14) tasks.",
"We can utilize our neural aligner in sentence pair classification tasks (Lan and Xu, 2018), adding conditional alignment probability p ( a | s, t ) as an extra feature.",
"We concatenate it with the [CLS] representation in fine-tuned BERT and apply the softmax layer for prediction.",
"We experiment with on different datsets for various tasks, including: natural language inference on SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018), SICK (Marelli et al., 2014), and RTE (Giampiccolo et al., 2007) from the GLUE benchmark (Wang et al., 2018); semantic textual similarity on STS-B (Cer et al., 2017) and STS14 (Agirre et al., 2014); question answering on WikiQA (Yang et al., 2015) and TrecQA (Wang et al., 2007); paraphrase iden-tification on MRPC (Dolan and Brockett, 2005), URL (Lan et al., 2017), PIT (Xu et al., 2015a), and QQP (Iyer et al., 2017).",
"We implement the fine-tuned BERT base model using Huggingface's library (Wolf et al., 2019).",
"Table 6 shows performance improvement on small (2k-15k) datasets, which include SICK, STS-B, MRPC, RTE, WikiQA, and PIT, but little or no improvement on large (40k-550k) datasets, such as SNLI, MNLI, and QQP.",
"We hypothesize that the Transformer model can potentially learn the latent word alignment through self-attentions, but not as effectively for small data size.",
"In this work, we present the first neural semi-CRF word alignment model which achieves competitive performance on both in-domain and out-of-domain evaluations.",
"We also create a manually annotated Multi -Genre M onolingual W ord Alignment (MultiMWA) benchmark which is the largest and of higher quality compared to existing datasets.",
"We thank Yang Chen, Sarthak Garg, and anonymous reviewers for their helpful comments.",
"We also thank Sarah Flanagan, Yang Zhong, Pa-nya Bhinder, Kenneth Kannampully for helping with data annotation.",
"This research is supported in part by the NSF awards IIS-2055699, ODNI and IARPA via the BETTER program contract 19051600004, ARO and DARPA via the SocialSim program contract W911NF-17-C-0095, and Criteo Faculty Research Award to Wei Xu.",
"The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of NSF, ODNI, IARPA, ARO, DARPA or the U.S. Government.",
"The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein."
] |
[
"abstain",
"objective",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"other",
"other",
"result",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"method",
"other",
"other",
"other",
"other",
"other"
] |
[
"Translating text into a language unknown to the text's author, dubbed outbound translation, is a modern need for which the user experience has significant room for improvement, beyond the basic machine translation facility.",
"We demonstrate this by showing three ways in which user confidence in the outbound translation, as well as its overall final quality, can be affected: backward translation, quality estimation (with alignment) and source paraphrasing.",
"In this paper, we describe an experiment on outbound translation from English to Czech and Estonian.",
"We examine the effects of each proposed feedback module and further focus on how the quality of machine translation systems influence these findings and the user perception of success.",
"We show that backward translation feedback has a mixed effect on the whole process: it increases user confidence in the produced translation, but not the objective quality.",
"When dealing with machine translation (MT) on the web, most of the attention of the research community is paid to inbound translation .",
"In this scenario, the recipients are aware of the MT process, and thus it is their responsibility to interpret and understand the translated content correctly.",
"For an MT system, it is sufficient to achieve such quality that allows a recipient to get the gist of the meaning of texts on webpages.",
"For outbound translation , it is the other way round: the responsibility to create the content in the way that it is correctly interpreted by a recipient lies on the authors of the message.",
"The main issue is that the target language might be entirely unknown to them.",
"Prototypically it is communication by email, filling in foreign language forms, or involving some other kind of interactive medium.",
"The focus in this scenario is placed not only on producing high-quality translations but also on reassuring the author that the MT output is correct.",
"One of the approaches to improving both quality and authors' confidence, first employed in this scenario by Zouhar and Bojar (2020), is to provide cues that indicate the quality of MT output as well as suggest possible rephrasing of the source.",
"They may include backward translation to the source language, highlighting of the potentially problematic parts of the input, or suggesting paraphrases.",
"Except for preliminary work by Zouhar and Novk (2020), the impact of individual cues has not yet been properly explored.",
"In this paper, we present the results of a new experiment on outbound translation.",
"Building on the previous works, the focus was expanded to investigate the influence of different levels of performance of the underlying MT systems, as well as utilizing a much greater range and diversity of participants and evaluation methods.",
"Native English speakers were tasked to produce text either in Czech or in Estonian with an outbound translation system in an e-commerce context.",
"Every user also reported a confidence score upon finishing each stimulus trial.",
"A native Czech or Estonian speaker later evaluated each final translation for fluency and adequacy.",
"The set of available cues varied for each participant from stimuli to stimuli, following a controlled experimental design, in order to determine the impact of specific combinations of cues on the self-reported confidence and the final translation quality.",
"For our study, we made use of the Ptakopet system (Zouhar, 2020).",
"This bespoke software was specifically developed to examine user behavior when testing machine translation user interfaces, especially in the context of outbound translation.",
"1 The structure of the paper is as follows.",
"After an overview of the related work in Section 2, we 1 The code for this project and also the experiment data are available as open-source.",
"present the environment for the outbound translation we used for the experiment, including the MT systems and modules that provided cues to the users, in Section 3. Section 4 describes the data that we collected during the experiment, and in Section 5 we further analyze them to reveal and discuss various aspects of our approach to outbound translation.",
"We conclude with the main findings in Section 6.",
"Despite recent advances in neural machine translation (NMT) quality, resulting in output comparable to human professionals in specific settings (Hassan et al., 2018; Popel et al., 2020), it is far from reasonable to blindly believe that the output of MT systems is perfectly accurate.",
"It should thus not be simply included in an email or another message without some means of verification.",
"Feedback in this scenario is needed, which would tell users if the translation is correct and ideally even give instructions on how to improve it.",
"A related area of interactive machine translation (IMT) focuses mainly on either post-editor scenarios (Martnez-Gmez et al., 2012; Sanchis-Trilles et al., 2014; Underwood et al., 2014; Alabau et al., 2016) or generally scenarios in which users are able to produce the translation themselves and the system only aims to speed it up or improve it (Santy et al., 2019).",
"speak the target language, and hence operates on the MT result only in a limited way.",
"The first work to deal with this task by Zouhar and Bojar (2020) focused on working with Czech-German MT in context of asking and reformulating questions.",
"A preliminary experiment on the effect of translation cues has been carried out by Zouhar and Novk (2020), but it was conducted on a much smaller scale both in terms of participants and annotators and with non-native speakers of English.",
"This may have affected the results that differ in some aspects, especially in the usefulness of the word-level quality estimation.",
"In order to test the effect of different cues, we utilized Ptakopet, a web-based tool for outbound translation.",
"The tool provides machine translation together with cues in the form of backward translation, quality estimation and paraphrasing.",
"These cues are intended to help the user arrive at a better translation and increase their confidence in the produced output.",
"The tool is modular, allowing the modules for MT and cues to be either replaced with others or turned on and off.",
"By linking a collection of sample stimuli to the tool it can also be used to conduct experiments.",
"Participants are asked to react to stimuli by formulating texts in a language known to them and producing and editing translations in a language they do not know.",
"The set of cues they are presented with may vary.",
"The users are also asked to report their confidence in the produced output.",
"In this experiment, each participant was presented with a sequence of scenes, interacting with the outbound translation system in each of them.",
"Figure 1 shows an example of a scene and user interaction.",
"In the following sections, we describe the main components of the experiment.",
"We used screenshots of web forms (real-world examples from the e-commerce domain) as stimuli.",
"Every screenshot displayed an excerpt of a web form containing a text field for open queries with a specific query already pre-filled and highlighted in a green rectangle.",
"For example, Figure 1 shows a form at hotel webpages with a pre-filled special request.",
"This query, or rather its message, is what should be translated.",
"Apart from the query, the screenshot captured elements of the webpage that should make it easier and faster for the user to understand the intended message and its context.",
"The stimuli are also accompanied by a short description of the website's domain (e.g. accommodation ) above the screenshot for the same purpose.",
"The dataset consists of 70 screenshots and corresponding pre-filled queries in English.",
"2 It was selected from a collection of 462 such screenshots, collated by six annotators.",
"3 The annotators were instructed to look for web forms with text boxes that could be filled with text which would require translation.",
"We were not interested in fields such as names, addresses, numbers or pre-defined lists of values (e.g. countries).",
"We emphasized that the collection should consist of a broad variety of domains, but the particular choice of domains and websites was up to the annotators.",
"The set of available modules (backward translation BT , quality estimation QE , paraphrasing PP ), as well as the choice of the MT system, was randomized for every user for every stimulus.",
"We denote a specific cue configuration by the modules present,",
"e.g.",
"BT PP .",
"Figure 1 shows an example of modules' outputs, given a user's rephrasing of the query from the stimulus.",
"Machine Translation.",
"We used three MT systems for Czech (differing in speed and training data size) and one for Estonian.",
"All of the systems were trained in both directions: the forward systems translate from English, whereas the opposite direction is used as a backward translation cue.",
"All the MT systems follow the Transformer model architecture (Vaswani et al., 2017) design, though student systems make use of the simplified simple recurrent unit and other modifications described in Germann et al. (2020).",
"Table 1 shows how the MT systems performed in terms of BLEU score (Pap-ineni et al., 2002) on the test set of WMT18 News task (Bojar et al., 2018).",
"The Czech 3 system is the winning MT model of CzechEnglish News Translation in WMT 2019 (Popel et al., 2019), having been trained on 58M authentic sentence pairs and 65M backtranslated monolingual sentences.",
"4 The training proposed by Germann et al. (2020) was used for a CPU-optimized student model Czech 2 .",
"It was created by the knowledge distillation (Kim and Rush, 2016) method on translations generated by Czech 3. Although it has been trained solely on synthetic data, its performance in the news domain falls behind the teacher only by 0.5 to 3.0 BLEU points, depending on the translation direction.",
"We included it mainly due to its speed as shown in Section 4. The design of the Czech 1 system is identical to Czech 3. The only difference is that the former was trained only on a subsample of 5M sentence pairs from CzEng 1.7 (Bojar et al., 2016).",
"This system was chosen to simulate performance on less resourceful language pairs.",
"The Estonian system uses the same construction procedure as Czech 2.",
"The teacher system utilized in knowledge distillation was internally trained for us by the authors of Germann et al. (2020).",
"Quality Estimation.",
"QE is the task of predicting the quality of an MT output without relying on reference translation, as opposed to traditional evaluation based on automatic metrics (BLEU, TER, etc.).",
"We have used QE to predict potential translation errors at the word-level which in turn, combined with a source-target token-level alignment algorithm, 5 enables us to identify the source words that have led to those translation errors.",
"QE suggestions are presented by red word highlighting (see Figure 1).",
"We note that word-level error annotation is a hard and costly task.",
"Thus, available data for building systems to predict word-level errors is scarce.",
"To circumvent this issue we relied on a feature-based approach which exploited information from the neural MT system (i.e. a glass-box approach to QE) and did not require large amounts of data for training.",
"Glass-box features have been successfully used for QE of statistical MT (Blatz et al., 2004; Specia et al., 2013) and have been recently shown to be effective for sentence-level QE of neural MT systems (Fomicheva et al., 2020).",
"To accommodate for the different types of MT models used in this work, including a student model Czech 2 , we did not use the full set of features from Fomicheva et al. (2020) but instead relied on simple subset of log-probability based features: Log-probability of the word Log-prob.",
"of the previous word Log-prob.",
"of the next word Average log-prob.",
"of the translated sentence Number of characters in the word We build a binary gradient boosting classifier to predict word-level quality.",
"To train the classifier we collected a small curated dataset with transla-5 It was provided by FastAlign (Dyer et al., 2013) models trained on bitext from CzEng 2.0 (Kocmi et al., 2020) and OPUS collection (Tiedemann, 2012) for English-Czech and English-Estonian, respectively.",
"Measured on 10 queries sampled from the dataset of stimuli and their translations produced by the Czech 3 and Estonian systems, the F1 score of English tokens alignment exceeds 80% in both cases.",
"tion error annotation.",
"Although the annotation is binary 6 ( OK/BAD class), the dataset is heavily imbalanced.",
"To alleviate this issue, we over-sampled the minority class ( BAD ).",
"We randomly split the data for each MT system into train (80%) and test (20%).",
"In addition to accuracy, we report F1 for each class and Matthews correlation coefficient (MCC) as proposed by Fonseca et al. (2019) for imbalanced data.",
"Table 2 shows these results for Estonian and Czech.",
"We observed that F1 for the BAD class is much lower than F1 for OK .",
"This indicates the difficulty of our QE models in correctly predicting the minority class.",
"The reasons for that are as follows.",
"First, log-probabilities might not contain enough information to predict major or critical issues.",
"In particular, critical issues concern the mistranslation of specific elements in the text (e.g. numbers or named entities), which is beyond the scope of the glass-box features used in our experiments.",
"We plan to investigate other light-weight features that could better capture this information.",
"Secondly, on average, MT quality is quite high (even for weaker models) and therefore, the vast majority of the words belong to the positive class.",
"Paraphraser.",
"This module was expected to provide users with a potential rephrasing of their inputs from which they may draw inspiration for alternative translations.",
"The paraphraser is based on pivoting, i.e. a round-trip translation via a pivot language.",
"Federmann et al. (2019) showed that pivoting is an effective way of generating diverse paraphrases, especially if done via linguistically unrelated languages.",
"A larger set of pivot languages should further increase the diversity of paraphrases.",
"Our paraphrasing system performed two-step English-to-English translation through 41 pivot languages.",
"It is based on T2T-multi-big model from Machcek et al. (2020), a multi-lingual Transformer-big (Vaswani et al., 2017) model with a shared encoder and decoder.",
"It has been trained on 231M sentence pairs sampled from the OPUS collection (Tiedemann, 2012).",
"Given a sentence, the model yielded 41 variants.",
"In order not to overwhelm users, the paraphrases are then grouped so that two paraphrases with the same bag of words 6 In addition, each translated word labeled as BAD was manually annotated with a subcategory: minor, major or critical.",
"excluding stop words end up in the same group.",
"In the end, users are presented with a list of one random representative from each group, sorted by the group size in descending order.",
"The paraphrases suggested by multiple languages should thus appear at the top.",
"To achieve reasonable response time (ca. 3s), the service has been run on a GPU.",
"Table 3 shows the performance of the paraphraser in terms of BLEU score, evaluated on a subset of the Quora Question Pairs dataset.",
"7 The subset consists of 4000 question pairs, with 2000 pairs containing real paraphrases, and 2000 containing similar sentences with a different meaning.",
"The two cases are respectively denoted by + and .",
"The produced outputs seem to be more similar to real paraphrases than to fake ones, which corresponds to what we observed for source sentences with twice as high BLEU scores.",
"Users were asked to submit their rephrased English query and its translation by reporting their confidence in the produced translation.",
"They spec-ified how much they trusted the translation on a standard Likert scale from 1 (least) to 5 (most).",
"During a single scene, the participant saw a stimulus, worked on it and then finished it either by rating their confidence or by describing the reason for skipping.",
"The participant was continuously presented with the translation output and the cues.",
"We logged all incoming data as well as requests to the modules and their responses together with timestamps.",
"In total, 52 English speaking participants joined our experiment, out of whom 49 were native 7 quoradata.quora.com/First-Quora-Dataset-Release-Question-Pairs Config.",
"speakers of English.",
"There were 70 scenes, each with a unique stimulus, prepared for every participant.",
"After filtering out the scenes which we found invalid as they contained either no input from the users or were not finished, the total number of scenes to be analyzed was 2486 .",
"The participants thus succeeded in completing 48 scenes on average.",
"As shown in Table 4, the distribution of completed scenes over different configurations appears to be balanced.",
"Since one of the goals of Ptakopet is to facilitate work with MT, we also focused on the time participants had to spent in the interface together with the number of their actions 8 needed to finish stimuli.",
"They are summarized in Table 4. It is clear that the short response times of student models (Czech MT 2 and Estonian) encourage the users to perform more actions, while still spending less time on one scene on average.",
"Having recorded the essential interactions of participants with Ptakopet, we further analyzed the collected data, especially user inputs and their translations.",
"Viable inputs.",
"Unless a participant skipped a scene, it was concluded by confirming the final input and its translation.",
"We were also interested in examining intermediate complete sentences which 8 We measure actions by the number of forward translation requests because they are present in every configuration.",
"users considered and later abandoned.",
"We call these viable intermediate inputs .",
"The collection of such inputs was possible because the Ptakopet tool continually records user's interaction.",
"We set the minimum time without any edit for an input to be sent to the forward translation module to 1000 ms. Despite this relatively long period, still many incomplete or erroneous inputs were recorded, perhaps while the user was deliberating.",
"We thus used a simple heuristic to extract the viable ones.",
"For an input to be considered viable, it had to end with a full stop, an exclamation mark, or the same token as the final input ended.",
"Furthermore, its length had to be within a 25% margin around the length of the final input without whitespaces.",
"9 Whereas each confirmed scene by design resulted in 1 final input and its translation, the number of intermediate viable inputs (non-final) was 0 .",
"62 .",
"Their average length was 98 .",
"43% of the final input.",
"Evaluation of translation quality.",
"The extracted viable inputs and their translations were rated for quality and adequacy by 12 Czech and 3 Estonian native speakers.",
"For each viable input, the annotators were shown the source, its translation and the corresponding stimulus.",
"They were asked to rate on the scale from 1 (least) to 5 (most) 9 This rule discredits inputs meant to be viable, where the very last token was later edited, though.",
"SRC-STI: The meaning of user input corresponds to what is entered in the form shown in the image.",
"TGT-SRC: The meaning of the translation corresponds to the user input.",
"TGT-STI: The meaning of the translation corresponds to what is entered in the form shown in the image.",
"Fluency: The translation is fluent (including typography, punctuation, etc.) Overall: The overall translation quality including both adequacy with respect to the stimulus and fluency is high.",
"On average, we collected 7 .",
"15 assessments per viable input.",
"The inter-rater agreement measured by Kripendorff's alpha was 0.47 and 0.48 for Czech and Estonian, respectively.",
"Data Normalization.",
"Because of data imbalance in favor of high confidence, we normalized the self-reported user confidences using the following formula: x (cid:48) = x min max min 4 + 1 .",
"The min and max values were taken individually for every participant.",
"This only affected those who never used 1 or 2 in their self-reported confidences.",
"We did not apply this normalization to the quality annotations, because the annotators used the whole scale in almost all cases.",
"The overall average of all confidence judgments decreased from 3 .",
"72 to 3 .",
"59 by this normalization.",
"To avoid strong assumptions about the underlying process, we did not normalize the data to have zero mean and standard deviation of 1 for every feature dimension.",
"This would also have made any interpretation less intuitive.",
"Results on final inputs.",
"Table 5 shows the average evaluation scores of final confirmed inputs, accompanied by average self-confidence scores across various configurations.",
"For clarity, we illustrate the same results in Figure 2.",
"Comparing the Czech MT systems, their ranking with respect to the Overall score corresponds to the results of the automatic evaluation in the news domain shown in Table 1.",
"Interestingly enough, Czech 2 received an average confidence score comparable to its teacher model Czech 3 (see in Figure 2).",
"The results of comparison across different combinations of cues suggest that configurations with backtranslation feedback enabled achieved better performance in terms of the overall quality.",
"In such cases, the users also felt more confident.",
"Unlike for overall quality, the effect of an available backward translation cue on user confidence was statistically significant by Mann-Whitney U test for 0 .",
"6 point difference ( U = 24243 . 5 , p < 0 . 0001 ).",
"Conversely, quality estimation cues appear not to be useful, which the users also noted.",
"Unfortunately, the presence of paraphrases increased user confidence, but decreased the objective translation quality.",
"These results are in contrast with the work of Zouhar and Novk (2020).",
"We attribute this difference to an insufficient number of samples and also a more homogeneous composition of partic-TGT SRC TGT STI Fluency Overall Conf.",
"ipants (all foreign PhD students studying in the Czech Republic) in their work.",
"Note that users who had knowledge of some other Slavic language (Polish or Russian) on average expressed higher confidence ( 3 . 95 ) and also produced translations of higher quality ( 4 . 44 ).",
"The effects of different modules on their work were closer to the effects described in Zouhar and Novk (2020).",
"As seen in Figure 3, a significant proportion of the scenes (~41%) received 4 or 5 on both self-reported confidence and overall translation quality.",
"Although these high scores are positive in terms of industry progress, it makes the quality-confidence dependency harder to analyze.",
"Table 6 shows expected rating behavior in terms of correlations.",
"We can see that Fluency is mostly correlated with TGT-SRC and TGT-STI adequacies and less with SRC-STI adequacy, which should affect the translation fluency only slightly.",
"10 We also see that TGT-STI adequacy and Fluency affects the Overall rating the most, which accords with its definition.",
"Self-reported user confidence correlates the least with all the rest, but slightly more with TGT-STI, TGT-SRC and Overall scores, which we consider positive.",
"MT comparison in detail.",
"Figure 4 shows the average spent time per stimulus as well as the number of forward translation requests and input length in characters with respect to the confidence and overall translation quality for submitted translations.",
"The figure is split into three graphs, each corresponding to one of the Czech MT systems.",
"Input text length does not appear to affect the overall translation quality significantly, while it seems to affect users' self-reported confidence.",
"The curves for time spent, although different in 10 In a scenario where the SRC-STI adequacy is lowered by typos in Source, which then also negatively affects the translation process and also the Fluency.",
"absolute values, peak in the middle (rating",
"3) and have the lowest values for scores of 1 and 5.",
"This may happen because the stimulus was either easy to complete, or the users did not work on this stimulus diligently.",
"It is supported by the fact that they did not report low confidences in these instances.",
"A similar trend, although less pronounced, can be seen with the number of requests.",
"We can also notice that the Czech 2 system has the lowest times despite also having a vastly higher number of executed requests.",
"The request delay was the same for all MT systems, so in this case, the users recognized that they did not have to wait so long for getting a translation back and hence sent more requests.",
"This is one of the possible explanations for why in Figure 2 the average self-reported confidence for this system is on par with its teacher model, Czech 3, despite being less performant objectively.",
"The degree of interactivity appears to be the main factor affecting these MT systems profiles.",
"The figures of Czech 1 and Czech 3 look very similar even though they vary greatly in performance and only have their speeds in common (slower than Czech 2).",
"Intermediate vs. final.",
"Having also intermediate viable inputs at our disposal, we explored how quality changes in the transition from intermediate to final inputs.",
"We excluded those scenes that contain no viable intermediate input, which accounts for almost 69%.",
"Although our heuristics can filter out most of the intermediate inputs which are not viable, some Config SRC STI TGT SRC TGT STI Fluency Overall BT QE PP -0.19 +0.10 +0.04 +0.05 +0.08 BT QE -0.14 +0.16 (cid:5) +0.03 +0.04 +0.03 BT PP -0.12 +0.14 +0.16 +0.12 +0.17 QE PP -0.20 +0.03 -0.13 +0.02 -0.07 BT -0.24 +0.33 +0.10 +0.10 +0.11 QE -0.11 -0.01 -0.10 -0.05 -0.04 PP -0.11 (cid:5) +0.09 -0.05 -0.00 -0.04 -0.02 +0.16 +0.16 +0.04 +0.06 Total -0.15 +0.11 +0.01 +0.04 +0.03 Table 7: Average difference of quality between intermediate viable and final inputs and their translations for all combinations of available cue modules.",
"They may contain a typo, artifacts of unfin-ished rephrasing or may miss important information.",
"These non-viable inputs must be excluded from the comparison, as the user would unlikely submit them or they could be easily fixed by a spell-checker.",
"We manually examined all intermediate viables and excluded the defective ones from the following statistics.",
"Table 7 shows the average difference in the quality of intermediate and corresponding final inputs and their translations.",
"The greatest improvement in the Overall score is again achieved by configurations utilizing backtranslation feedback, although the difference is not statistically significant.",
"What is significant, though, are some Inter I teach my son English with the 'Learning Time with Timmy' series on Youtube.",
"TGT-SRC scores including the BT configuration. It shows that the translation of the final input is on average more adequate to the source than the translation of the intermediate inputs. Nevertheless, the effect on the TGT-STI adequacy is marginal due to negative differences in the SRC-STI adequacy score. These can be justified by the fact that any modification of the original query in the stimulus might have been considered as a shift in meaning by the annotators, although in reality the original intention could be still understandable.",
"In Table 8, we show three examples of the intermediate and the final inputs with their translations to Czech. In the top two, the rephrasing helped to improve the translation quality: (1) by adding a word language to prevent translating English as a Czech word for Englishmen, or (2) by substituting a preposition.",
"Conversely, the replacement of the verb has expired by a phrase out of date led to a drop in translation quality.",
"This is due to a grammatical error and use of the Czech expression meaning got obsolete, which indeed sounds old-fashioned in this context.",
"In this paper, we demonstrated through an experiment the effect of three translation cues on user confidence and translation quality.",
"The backward translation cue proves to be a powerful means to enhance user confidence in MT. At the same time, it neither increase nor decrease significantly the translation quality.",
"The fact that backtranslation feedback has a marginal effect to objective quality but greatly increases user confidence is surprising because it is the most intuitive low-effort approach to outbound translation scenarios which can be done even with publicly available MT systems.",
"confidence less (compared to not being present), with no or slightly negative impact on the translation quality.",
"Without a better method to generate diverse and still adequate paraphrases, employing this cue is questionable.",
"The effect of word-level quality estimation appears to be even more questionable.",
"We attribute it mainly to the underlying word-level models, which may not be mature enough for user-facing applications.",
"Despite the loss in objective translation quality, the CPU-optimized student MT model either managed to maintain its teacher's high trustworthiness or compensated for it by its speed.",
"Future work.",
"Scores in both user confidence and overall translation quality annotation cluster together.",
"Having the distribution less concentrated by changing the underlying task with stimuli or by working with more low resource languages could reveal stronger dependencies between individual variables.",
"We limited ourselves to only three baseline solutions to help in outbound translation.",
"In the future work, inspiration could be drawn from the approaches of interactive machine translation systems and these could be adapted for the purposes of outbound translation.",
"Sincere thanks to Chris Burns and the three anonymous reviewers for their thorough review and helpful comments.",
"This project has received funding from the grants H2020-ICT-2018-2-825303 (Bergamot) of the European Union and 19-26934X (NEUREM3) of the Czech Science Foundation.",
"The work has also been supported by the Ministry of Education, Youth and Sports of the Czech Republic, Project No.",
"LM2018101 LINDAT/CLARIAH-CZ.",
"All participants were recruited online and had to complete an informed consent form using a secure Qualtrics survey before they could progress to taking part in the experiment.",
"Data was anonymized, recorded and stored in accordance with ACM protocol.",
"Ethical clearance was confirmed by the School of Informatics Ethics Committee at the University of Edinburgh (Reference RT 4058).",
"Participants were offered a 20 Amazon voucher as compensation for their time upon completion of the experiment."
] |
[
"abstain",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"We rely on arguments in our daily lives to deliver our opinions and base them on evidence, making them more convincing in turn.",
"However, finding and formulating arguments can be challenging.",
"In this work, we present the Arg-CTRL a language model for argument generation that can be controlled to generate sentence-level arguments for a given topic, stance, and aspect.",
"We define argument aspect detection as a necessary method to allow this fine-granular control and crowdsource a dataset with 5,032 arguments annotated with aspects.",
"Our evaluation shows that the Arg-CTRL is able to generate high-quality, aspect-specific arguments, applicable to automatic counter-argument generation.",
"We publish the model weights and all datasets and code to train the Arg-CTRL.",
"1 1 Introduction Language models (Bengio et al., 2003) allow to generate text through learned distributions of a language and have been applied to a variety of areas like machine translation (Bahdanau et al., 2015), summarization (Paulus et al., 2018), or dialogue systems (Wen et al., 2017).",
"A rather new field for these models is the task of producing text with argumentative content (Wang and Ling, 2016).",
"We believe this technology can support humans in the challenging task of finding and formulating arguments.",
"A politician might use this to prepare for a debate with a political opponent or for a press conference.",
"It may be used to support students in writing argumentative essays or to enrich one-sided discussions with counter-arguments.",
"In contrast to retrieval methods, generation allows to combine and stylistically adapt text (e.g. arguments) based on a given input (usually the beginning of a sen-tence).",
"Current argument generation models, however, produce lengthy texts and allow the user little 1 https://github.com/UKPLab/ controlled-argument-generation control over the aspect the argument should address (Hua et al., 2019; Hua and Wang, 2018).",
"We show that argument generation can be enhanced by allowing for a fine-grained control and limiting the argument to a single but concise sentence.",
"Controllable language models like the CTRL (Keskar et al., 2019) allow to condition the model at training time to certain control codes.",
"At inference, these can be used to direct the model's output with regard to content or style.",
"We build upon this architecture to control argument generation based solely on a given topic, stance, and argument aspect.",
"For instance, to enforce focus on the aspect of cancer for the topic of nuclear energy , we input a control code Nuclear Energy CON cancer that creates a contra argument discussing this aspect, for instance: Studies show that people living next to nuclear power plants have a higher risk of developing cancer. .",
"To obtain control codes from training data, we pre-define a set of topics to retrieve documents for and rely on an existing stance detection model to classify whether a sentence argues in favor ( pro ) or against ( con ) the given topic (Stab et al., 2018a).",
"Regarding argument aspect detection, however, past work has two drawbacks: it either uses simple rule-based extraction of verband noun-phrases (Fujii and Ishikawa, 2006) or the definition of aspects is based on target-concepts located within the same sentence (Gemechu and Reed, 2019).",
"Aspects as we require and define them are not bound to any part-of-speech tag and (1) hold the core reason upon which the conclusion/evidence is built and (2) encode the stance towards a general but not necessarily explicitly mentioned topic the argument discusses.",
"For instance: Topic : Nuclear Energy Argument : Running nuclear reactors is costly as it involves long-time disposal of radioactive waste.",
"a negative stance towards the topic of Nuclear Energy, the topic itself is not mentioned explicitly in the argument.",
"Our final controlled argument generation pipeline (see Figure 1) works as follows: (1) We gather several million documents for eight different topics from two large data sources.",
"All sentences are classified into pro-, con-, and non-arguments.",
"We detect aspects of all arguments with a model trained on a novel dataset and concatenate arguments with the same topic, stance, and aspect into training documents.",
"(2) We use the collected classified data to condition the Arg-CTRL on the topics, stances, and aspects of all gathered arguments.",
"(3) At inference, passing the control code [Topic] [Stance] [Aspect] to the model will generate an argument that follows these commands.",
"Our evaluation shows that the Arg-CTRL is able to produce aspect-specific, high-quality arguments, applicable to automatic counter-argument generation.",
"The contributions are as follows:",
"(i) We adapt and fine-tune the CTRL for aspect-controlled neural argument generation.",
"(ii) We show that detecting argument aspects and conditioning the generation model on them are necessary steps to control the model's training process and its perspective while generating.",
"(iii) We propose several methods to analyze and evaluate the quality of (controllable) argument generation models.",
"(iv) We develop a new scheme to annotate argument aspects and release a dataset with 5,032 samples.",
"and restricts aspects to nounand verb-phrases, extracted via hand-crafted rules.",
"Boltuic and na-jder (2017) extract noun-phrases and aggregate them into concepts to analyze the microstructure of claims.",
"Misra et al. (2015) introduce facets as low level issues, used to support or attack an argumentation.",
"In that, facets are conceptually similar to aspects, but not explicitly phrased and instead seen as abstract concepts that define clusters of semantically similar text-spans of summaries.",
"Bilu et al. (2019) define commonplace arguments that are valid in several situations for specified actions (e.g. ban) and topics (e.g. smoking).",
"These actions are similar to aspects, but limited in number and manually defined.",
"Gemechu and Reed (2019) detect, amongst others, concepts and aspects in arguments with models trained on expert annotations.",
"However, in their definition, aspects have to point to a target concept mentioned in the argument.",
"In our definition, aspects refer to a general topic which is not necessarily part of the sentence and our annotation scheme is applicable by non-experts.",
"The concept of framing dimensions (Boydstun et al., 2014) is close to argument aspects.",
"In the field of argument mining, Ajjour et al. (2019) recently applied frames to label argument clusters.",
"Yet, their method does not allow to detect frames.",
"Other works present methods to automatically label sentences of news articles and online discussions with frames (Hartmann et al., 2019; Naderi and Hirst, 2017).",
"These methods are, however, limited to a small set of predefined frames that represent high-level concepts.",
"Contrarily, we operate on a fine-grained span-level to detect aspects that are explicitly mentioned in arguments.",
"Argument Generation Early approaches rely on rules from argumentation theory and user preference models (Carenini and Moore, 2006; Zuker-man et al., 1998).",
"In a more recent work, Sato et al. (2015) construct rules to find arguments in a large data source, which are then filtered and ordered with a neural network based ranker.",
"Baff et al. (2019) use a clustering and regression approach to assemble discourse units (major claims, pro and con statements) to argumentative texts.",
"However, most of these approaches rely on hand-crafted features and do not generalize well.",
"Moreover, they all require permanent access to large data sources and are not able to generate new arguments.",
"Recently, research on generating arguments with language models gained more attention.",
"Hua and Wang (2019) use a sequence to sequence model (Sutskever et al., 2014) to generate argumentative text by attending to the input and keyphrases automatically extracted for the input from, for example, Wikipedia.",
"Other work focuses on generating argumentative dialogue (Le et al., 2018) and counter-arguments (Hidey and McKeown, 2019; Hua et al., 2019) based on a given input sentence, or on generating summaries from a set of arguments (Wang and Ling, 2016).",
"Contrarily, we train a language model that does not require a sentence-level input for generation and allows for direct control over the topic, stance, and aspect of the produced argument.",
"Xing et al. (2017) design a language model that attends to topic information to generate responses for chatbots.",
"Dathathri et al. (2019) train two models that control the sentiment and topic of the output of pre-trained language models at inference.",
"Gretz et al. (2020a) fine-tune GPT-2 on existing, labeled datasets to generate claims for given topics.",
"However, the latter works do not explore generation for such a fine-grained and explicit control as proposed in this work.",
"We show that argument generation requires the concept of argument aspects to shape the produced argument's perspective and to allow for diverse arguments for a topic of interest.",
"Argument aspect detection is necessary for our argument generation pipeline, as it allows for a fine-grained control over the generation process.",
"We create a new dataset, as existing approaches either rely on coarse-grained frames or cannot be applied by non-expert annonators in a scalable manner.",
"We base our new aspect detection dataset on the UKP Sentential Argument Mining Corpus (UKP-Corpus) by Stab et al. (2018b), as it already contains sentence-level arguments and two of the control codes we aim to use: topics and stance labels.",
"More precisely, it contains 25,474 manually labelled sentences for eight controversial topics in English.",
"Each sample consists of a topic and a sentence, labelled as either being supporting, attacking, or no argument towards the given topic.",
"As we are only interested in arguments, we do not consider the non-argumentative sentences.",
"Step 1: Preliminary annotations To ensure the feasibility of creating a dataset for this task, two experts (a post-doctoral researcher and an undergraduate student with NLP background) independently annotate 800 random samples (from four topics, 200 per topic) taken from the UKP-Corpus.",
"The annotations are binary and on token-level, where multiple spans of tokens could be selected as aspects.",
"The resulting inter-annotator agreement of this study is Krippendorff's u = .",
"38 .",
"While this shows that the task is generally feasible, the agreement on exact token spans is rather low.",
"Hence, in the following steps, we reduce the complexity of the annotation task.",
"Step 2: Annotation scheme Instead of free span-level annotations, we present annotators with a ranked list of aspect recommendations.",
"To generate meaningful recommendations, we train a ranking model using the preliminary annotations (Step 1).",
"Step 2a: Data preparation for ranking To create training data for the ranker, we use a simple heuristic to calculate scores between 0 and 1 for all N-grams of a sentence by dividing the number of aspect tokens within an N-gram by its length N : # aspect tokens N [0 , 1] .",
"Our analysis reveals that 96% (783 of 814) of all aspects in the preliminary annotation dataset only contain one to four tokens.",
"We thus decide to ignore all candidates with more than four tokens.",
"No other limitations or filtering mechanisms are applied.",
"Step 2b: Training the ranker We use BERT (De-vlin et al., 2019) and MT-DNN 2 (Liu et al., 2019) (base and large) to train a ranker.",
"For training, we create five splits: (1) one in-topic split using a random subset from all four topics and (2) four 2 BERT, fine-tuned on several NLP tasks via multi-task learning.",
"cross-topic splits using a leave-one-topic-out strategy.",
"The cross-topic setup allows us to estimate the ranker's performance on unseen topics of the UKP-Corpus.",
"A single data sample is represented by an argument and an 1to 4-gram of this argument, separated by the BERT architecture's [SEP] token.",
"This technique expands the 800 original samples of the dataset to around 80,336.",
"The model is trained for 5 epochs, with a learning rate of 5 10 5 , and a batch size of 8.",
"We use the mean squared error as loss and take the recall@k to compare the models.",
"The inand cross-topic results of the best-performing model (MT-DNNBASE ) are reported in Table 2.",
"All results are the average over runs with five different seeds (and over all four splits for the cross-topic experiments).",
"Step 2c: Creating the annotation data For each of the four topics that are part of the preliminary annotation dataset, we use the in-topic model to predict aspects of 629 randomly chosen, unseen arguments from the UKP-Corpus.",
"For the other four topics of the UKP-Corpus, we choose the best cross-topic model to predict aspects for the same amount of samples.",
"To keep a recall of at least 80%, we choose the ten and fifteen highest-ranked aspect candidates for samples as predicted by the in-topic and cross-topic model, respectively.",
"We remove aspect candidates that include punctuation, begin or end with stopwords, or contain digits.",
"Step 3: Annotation study We use Amazon Mechanical Turk to annotate each sample by eight different workers located in the US, paying $7.6 per hour (minimum wage is $7.25 per hour).",
"Based on a subset of 232 samples, we compute an u of .67 between crowdworkers and experts (three doctoral researchers).",
"Compared to the initial study, the new approach increases the inter-annotator agreement between experts by approx.",
"11 points (see App. A for further details on the annotation study).",
"Based on this promising result, we create a dataset of 5,032 high-quality samples that are labelled with aspects, as well as with their original stance labels from the UKP-Corpus.",
"We show the most frequent (lemmatized) aspects that appear in some topics in Table 1.",
"We create a cross-topic split with the data of two topics as test set ( gun control , school uniforms ), one topic as dev set ( death penalty ), and the remaining topics as train set and evaluate two models with it.",
"First, we use the ranking approach described in Step 2a-2b to fine-tune MT-DNNBASE on the newly generated data (Ranker).",
"At inference, we choose the top T aspects for each argument as candidates.",
"We tune T on the dev set and find T = 2 to be the best choice.",
"Second, we use BERT for sequence tagging (Wolf et al., 2020) and label all tokens of the samples with BIO tags.",
"As previously done with the ranker, we experiment with BERT and MT-DNN weights and find BERTLARGE to be the best choice (trained for 5 epochs, with a learning rate of 1 10 5 and a batch size of 32).",
"We flatten the predictions for all test samples and calculate the F 1 , Precision, and Recall macro scores.",
"All models are trained over five seeds and the averaged results are reported in Table 3.",
"BERTLARGE predicts classes B and I with an F 1 of .65 and .53, hence aspects with more than one token are less well identified.",
"A difference is to be expected, as the class balance of B's to I's is 2,768 to 2,103.",
"While the ranker performs worse based on the shown metrics, it has a slightly higher recall for class I.",
"We assume this is due to the fact that it generally ranks aspects with more than one token on top, i.e. there will often be at least one or more I's in the prediction.",
"In contrast to that, BERTLARGE focuses more on shorter aspects, which is also in accordance with the average aspect length of 1.8 tokens per aspect in the dataset.",
"This section describes the data collection and preprocessing for the argument generation pipeline.",
"We aim to train a model that is able to transfer argumentative information concisely within a single sentence.",
"We define such an argument as the combination of a topic and a sentence holding evidence with a specific stance towards this topic (Stab et al., 2018b).",
"Consequently, the following preprocessing steps ultimately target retrieval and classification of sentences.",
"To evaluate different data sources, we use a dump from Common-Crawl 3 ( CC ) and Reddit comments 4 ( REDDIT ) to fine-tune two separate generation models.",
"The CC dump is from July 2016 and contains 331M documents (3.6TB) after deduplication.",
"The REDDIT dump contains 2.5B documents (1.6TB) from December 2012 to May 2019.",
"We choose to compare these two sources, as REDDIT is focused around user discussions and CC contains mixed sources with potentially higher quality.",
"Document Retrieval We index REDDIT and CC with ElasticSearch 5 and, for both, gather up to 1.5M documents for each of the eight topics of the UKP-Corpus.",
"To increase the search results, we add synonyms (see App. B) for most topics.",
"Argument and Stance Classification We split the sentences of all documents and remove duplicates.",
"We notice that many sentences are not relevant with regard to the document's topic.",
"To enforce topic-relevance, we decide to filter out all sentences that do not contain at least one token of the respective topic or its defined synonyms (see App. B).",
"We use the ArgumenText API's 6 argument and stance classification models (Stab et al., 2018a) to classify 3 https://commoncrawl.org 4 https://files.pushshift.io/reddit/ comments/ 5 https://www.elastic.co 6 https://api.argumentsearch.com all sentences into argument or non-argument (F 1 macro = . 7384 ), and remaining arguments into pro or con with regard to the topic (F 1 macro = . 7661 ).",
"Aspect Detection We detect aspects on all remaining arguments.",
"To speed up the detection on millions of sentences, we use BERTBASE instead of BERTLARGE (see Table 3).",
"Training Document Generation We create the final training documents for the argument generation model by concatenating all arguments that have the same topic, stance, and aspect (i.e. the same control code).",
"Further, we aggregate all arguments that include an aspect with the same stem into the same document (e.g. arguments with cost and costs as aspect).",
"To cope with limited hardware resources, we restrict the total number of arguments for each topic and stance to 100,000 (i.e. 1.6M over all eight topics).",
"Also, as some aspects dominate by means of quantity of related arguments and others appear only rarely, we empirically determine an upper and lower bound of 1,500 and 15 arguments for each document, which still allows us to retrieve the above defined amount of training arguments.",
"In the following, we describe the architecture and the training process of the Arg-CTRL and analyze its performance.",
"Model The goal of a statistical language model is to learn the conditional probability of the next word given all (or a subset of) the previous ones (Bengio et al., 2003).",
"That is, for a sequence of tokens x = ( x 1 , ..., x n ) , the model learns p ( x i | x <i ) where x i is the i -th word of sequence x .",
"For this work, we use the 1.63 billion-parameter Conditional Transformer Language Model (CTRL) by Keskar et al. (2019), which is built on a transformer-based sequence to sequence architecture (Vaswani et al., 2017).",
"The CTRL has shown to produce high quality text, is general enough to be adapted for conditioning on the control codes we aim to use, and we do not need to pre-train the weights from scratch.",
"Formally, the CTRL adds an extra condition to each sequence by prepending a control code c , hence learning p ( x i | x <i , c ) .",
"The control code is represented by a single token and can then be used to direct the model output at inference.",
"We extend the model from its previous limit of a single-token control code to accept multiple tokens.",
"For cloning CON unrespectable .",
"decoding at inference, we use penalized sampling as proposed by Keskar et al. (2019).",
"It defines a near-greedy sampling strategy that uses a penalty constant, effectively lowering the probability of previously generated tokens to prevent repetitions.",
"Training The CTRL was trained on 140GB of data from several large resources like Wikipedia, sub-reddits, and news data.",
"We base our experiments on the pre-trained weights for a sequence length of 256 and fine-tune (see App. C for technical details) two models: Arg-CTRL CC (on the CC data) and Arg-CTRL REDDIT (on the REDDIT data).",
"All training documents are sampled randomly for training.",
"The respective control code is prepended to each sequence of 256 subwords of a document.",
"Generation At inference, we gather multiple generated arguments from a control code input by splitting the generated output text into sentences with NLTK (Bird et al., 2009).",
"We observe that for the first generated argument, the Arg-CTRL mostly outputs very short phrases, as it tries to incorporate the control code into a meaningful start of an argument.",
"We prevent this by adding punctuation marks after each control code (e.g. a period or colon), signaling the model to start a new sentence.",
"In this fashion, we generate pro and con -arguments up to the pre-defined training split size 7 for each topic of the UKP-Corpus, resulting in 7,991 newly generated arguments.",
"We do this with both models and use the generated arguments as a basis for the following analysis and evaluation methods.",
"Examples of generated arguments can be found in Tables 4, 6, and 7 (as part of the evaluation, see Section 7).",
"Results With no other previous work on explicit control of argument generation (to the best of our knowledge), we decide to proof our concept of aspect-controlled neural argument generation by 7 Not counting non-arguments from the splits.",
"comparing both generation models to a retrieval approach as a strong upper bound.",
"The retrieval approach returns all arguments from the classified training data (see Section 4) that match a given topic, stance, and aspect.",
"Both the retrieval and generation approaches are evaluated against reference data from debate portals and compared via METEOR (Lavie and Agarwal, 2007) and ROUGE-L (Lin, 2004) metrics.",
"The retrieval approach has an advantage in this setup, as the arguments are also of human origin and aspects are always explicitly stated within a belonging argument.",
"The reference data was crawled from two debate portals 8 and consists of proand con-paragraphs discussing the eight topics of the UKP-Corpus.",
"As the paragraphs may include non-arguments, we filter these out by classifying all sentences with the ArgumenText API into arguments and non-arguments.",
"This leaves us with 349 proand 355 con-arguments over all topics (see App. D for the topic-wise distribution).",
"Next, we detect all aspects in these arguments.",
"Arguments with the same topic, stance, and aspect are then grouped and used as reference for arguments from the",
"(a) generated arguments and",
"(b) retrieval approach arguments if these hold the same topic, stance, and aspect.",
"The results reveal that both the average METEOR and ROUGE-L scores are only marginally lower than the retrieval scores (METEOR is 0.5/1.1 points lower for the Arg-CTRL REDDIT /Arg-CTRL CC , see Table 5).",
"It not only shows the strength of the architecture, but also the success in generating sound aspect-specific arguments with our approach.",
"Overlap with Training Data We find arguments generated by the models to be genuine, i.e. demonstrating substantial differences to the training data.",
"For each of the 7,991 generated arguments, we find the most similar argument in the training data based on the cosine similarity of their BERT embeddings 8 procon.org and idebate.org Model METEOR ROUGE-L Retrieval ( CC ) 17.85 14.72 Arg-CTRL CC 16.80 11.95 Retrieval ( REDDIT ) 17.29 15.26 Arg-CTRL REDDIT 16.82 12.34 Table 5: Comparison of retrieval and generation approach with reference data from debate portals.",
"(CLS token).",
"The average cosine similarity of the most similar pairs for both the Arg-CTRL CC and Arg-CTRL REDDIT is .92.",
"However, this value is misleading, as even highly similar samples still show clear differences.",
"This is also evident when looking at the average edit distances of 343 (Arg-CTRL CC ) and 163 (Arg-CTRL REDDIT ) for the pairs with highest similarity.",
"Further comparison of these pairs for their longest common (string) overlap reveals only 9% (Arg-CTRL CC ) and 11% (Arg-CTRL REDDIT ) overlap on average, mostly consisting of stopwords.",
"For illustration, we show two examples of highly similar pairs in Table 6.",
"To show the necessity of having prior knowledge of aspects for our controlled argument generation approach, we create training data without prior knowledge of aspects, train a new generation model on it, and compare it to our previous models with prior knowledge of aspects.",
"Equally to the original Arg-CTRL CC 's procedure, we gather 100,000 sentences for each stance of a topic from the CC data.",
"As we assume to have no knowledge about the aspects of the arguments, we randomly sample arguments from the CC source documents.",
"We create training documents with numbers of arguments varying between 15 and 1,500 to mimic the data generation process of the original models and fine-tune a new generation model on them.",
"After training, we generate the same number of arguments as for the other two models by using our default control code of [Topic] [Stance] [Aspect] .",
"While the new model was only conditioned on topics and stances at training time, we make sure that all aspects used for generation appear in at least one argument of the model's training data.",
"We compare all models by verifying whether or not the aspect used for generation (including synonyms and their stems and lemmas) can be found in the generated arguments.",
"For the original models conditioned on aspects, this is true in 79% of Generated sentence : We do n't need more gun control laws when we already have enough restrictions on who can buy guns in this country .",
"Training sentence : We have some of the strongest gun laws in the country , but guns do n't respect boundaries any more than criminals do .",
"Cosine similarity / edit distance / rel.",
"overlap : 95.59 / 88 / 8% Generated sentence : The radioactivity of the spent fuel is a concern , as it can be used to make weapons and has been linked to cancer in humans .",
"Training sentence : However , it does produce radioactive waste , which must be disposed of carefully as it can cause health problems and can be used to make nuclear weapons Cosine similarity / edit distance / rel.",
"overlap : 92.40 / 99 / 17% Table 6: Training data vs. generated arguments: examples of most similar arguments.",
"the cases for Arg-CTRL REDDIT and in 74% of the cases for Arg-CTRL CC .",
"For the model that was not conditioned on aspects, however, it is only true in 8% of the cases.",
"It clearly shows the necessity to condition the model on aspects explicitly, implying the need for argument aspect detection, as the model is unable to learn generating aspect-related arguments otherwise.",
"Moreover, without prior detection of aspects, we have no means for proper aggregation over aspects.",
"We notice that for the model without prior knowledge of aspects, 79% of all aspects in the training data appear in only one argument.",
"For these aspects, the model will likely not pick up a strong enough signal to learn them.",
"We evaluate the quality (intrinsic evaluation) of the Arg-CTRL and its performance on an exemplary task (extrinsic evaluation).",
"As a basis, we use the 7,991 arguments generated in Section 5.",
"Human Evaluation We conduct an expert evaluation on a subset of generated arguments with two researchers (field of expertise is natural language processing) not involved in this paper.",
"Two aspects are evaluated: fluency and persuasiveness .",
"We consider a sentence as fluent if it is grammatically correct (Hua et al., 2019), i.e. contains neither semantic nor syntactic errors, and arrange this as a binary task.",
"To reduce subjectivity for the persuasiveness evaluation, the experts do not annotate single arguments but instead compare pairs (Haber-nal and Gurevych, 2016) of generated and reference data arguments (see Section 5.2).",
"The experts could either choose one argument as being more persuasive or both as being equally persuasive.",
"In total, the experts compared 100 (randomly sorted and ordered) argument pairs for persuasiveness and fluency (50 from both the Arg-CTRL REDDIT and the Arg-CTRL CC ).",
"A pair of arguments always had the same topic and stance.",
"For fluency, only the annotations made for generated arguments were extracted and taken into account.",
"Averaged results of both experts show that in 33% of the cases, the generated argument is either more convincing (29%) or as convincing (4%) as the reference argument.",
"Moreover, 83% of generated arguments are fluent.",
"The inter-annotator agreement (Cohen, 1960) between the two experts is Cohen's = .",
"30 (percentage agreement: .62) for persuasiveness and = .",
"43 (percentage agreement: .72) for fluency, which can be interpreted as fair and moderate agreement, respectively (Landis and Koch, 1977).",
"As we compare to high-quality, curated data, the perceived persuasiveness of the generated arguments shows the potential of the workfurther strengthened in the remainder of this section.",
"Argument Quality We introduce a novel method to evaluate generated arguments based on the argument quality detection approach proposed by Gretz et al. (2020b).",
"They create an argument quality dataset that contains around 30,000 arguments over 71 topics.",
"For each argument, annotators were asked whether or not they would recommend a friend to use the displayed argument in a speech.",
"The quality scores for each argument result from a weighted average (WA) or MACE Probability function of all annotations and range between 0 (low-est quality) and 1.0 (highest quality).",
"We use the WA -score as label, the same model (BERTBASE ) and hyperparameters as given in the original paper, and reproduce the reported correlations of .52 (Pearson) and .48 (Spearman) on the test dataset (averaged over five different seeds).",
"The model predicts an average argument quality of .71 for the Arg-CTRL REDDIT arguments, .75 for the training arguments of the UKP-Corpus, and even .76 for the Arg-CTRL CC arguments.",
"It shows that our model is able to produce arguments that are generally on a similar quality level as arguments made by humans.",
"The difference of 5 points between the Arg-CTRL CC and the Arg-CTRL REDDIT , however, shows that training on REDDIT data leads to a lower quality of generated arguments.",
"Exemplary Human: Nuclear energy produces waste that stays radioactive for thousands of years and pollutes the environment.",
"Arg-CTRL CC : (nuclear energy PRO waste .) The only thing that 's going to be left over is the spent fuel rods , which are not very radioactive and can be reprocessed [...] Arg-CTRL CC : (nuclear energy PRO radioactive .) The radioactivity of the waste is not a problem , because it will decay away in about 300 years or so [...] Arg-CTRL CC : (nuclear energy PRO environment .) The fact that the radioactive waste is not only stored in a safe manner , but also protected from any potential threat to human health or the environment for thousands of years [...] Human: School uniforms are expensive and affect the pupil's individuality.",
"Arg-CTRL CC : (school uniforms PRO expensive .) The cost of the uniform is not that much more than a regular shirt and shorts [...] Arg-CTRL CC : (school uniforms PRO individuality .) The uniform is a symbol of unity and identity , which helps to foster an environment where students can feel comfortable about expressing their own individual style without being judged [...] Table 7: Generated counter-arguments with the Arg-CTRL CC .",
"for three topics, we show the generated arguments with the highest and lowest argument quality in Table 4 (see App. E for the full table).",
"Drafting counter-arguments is an important skill for debating, to provide constructive feedback, and to foster critical thinking.",
"We lean onto the work of Wachsmuth et al. (2018) who describe a counterargument as discussing the same aspect as an initial argument, but with a switched stance.",
"Hence, given our defined control codes, our model is especially fit for counter-argument generation.",
"Unlike current models for this task, we do not require a specific dataset with argument and counterargument pairs (Hidey and McKeown, 2019; Hua et al., 2019).",
"Also, in contrast to the model by Hua and Wang (2019) that implicitly integrates input-related Keyphrases into the process of counterargument generation, our model is able to concentrate on every aspect of the input explicitly and with a separate argument, allowing for more transparency and interpretability over the process of counter-argument generation.",
"We exemplary show how the combination of aspect detection and controlled argument generation can be successfully leveraged to tackle this task.",
"For that, we manually compose initial arguments for the topics nuclear energy and school uniforms .",
"Then, we automatically detect their aspects and generate a counterargument for each aspect by passing the topic, opposite stance of the original argument, and one of the aspects into the Arg-CTRL CC .",
"For both topics, the Arg-CTRL CC produces meaningful counter-arguments based on the detected aspects (see Table 7).",
"We apply the concept of controlled neural text generation to the domain of argument generation.",
"Our Arg-CTRL is conditioned on topics, stances, and aspects and can reliably create arguments using these control codes.",
"We show that arguments generated with our approach are genuine and of high argumentative and grammatical quality in general.",
"Moreover, we show that our approach can be used to generate counter-arguments in a transparent and interpretable way.",
"We fine-tune the Arg-CTRL on two different data sources and find that using mixed data from Common-Crawl results in a higher quality of generated arguments than using user discussions from Reddit-Comments.",
"Further, we define argument aspect detection for controlled argument generation and introduce a novel annotation scheme to crowdsource argument aspect annotations, resulting in a high-quality dataset.",
"We publish the model weights, data, and all code necessary to train the Arg-CTRL.",
"Models for argument and claim generation have been discussed in our related work and are widely available.",
"Gretz et al. (2020a) suggest that, in order to allow for a fine-grained control over claim/argument generation, aspect selection needs to be handled carefully, which is what we have focused on in this work.",
"The dangers of misuse of language models like the CTRL have been extensively discussed by its authors (Keskar et al., 2019).",
"The ethical impact of these works has been weighed and deemed justifiable.",
"Argument generationand natural language generation as a wholeis subject to dual use.",
"The technology can be used to create arguments that cannot be distinguished from human-made arguments.",
"While our intentions are to support society, to foster diversity in debates, and to encourage research on this important topic, we are aware of the possibility of harmful applications this model can be used for.",
"For instance, the model could be used to generate only opposing (or supporting) arguments on one of the pretrained topics and aspects and, as such, bias a debate into a certain direction.",
"Also, bots could use the generated arguments to spread them via social media.",
"The same is true, however, for argument search engines, which can be used by malicious parties to retrieve (and then spread) potentially harmful information.",
"However, controllable argument generation can also be used to support finding and formulating (counter-)arguments for debates, for writing essays, to enrich one-sided discussions, and thus, to make discourse more diverse overall.",
"For instance, anticipating opposing arguments is crucial for critical thinking, which is the foundation for any democratic society.",
"The skill is extensively taught in school and university education.",
"However, con-firmation bias (or myside bias ) (Stanovich et al., 2013), i.e. the tendency to ignore opposing arguments, is an ever-present issue.",
"Technologies like ours could be used to mitigate this issue by, for instance, automatically providing topicand aspect-specific counter-arguments for all arguments of a given text (this has been shown for single arguments in Section 7.2).",
"We believe that working on and providing access to such models is of major importance and, overall, a benefit to society.",
"Open-sourcing such language models also encourages the work on counter-measures to detect malicious use: While many works have been published on the topic of automatic fake news detection in texts (Kaliyar et al., 2020; Reis et al., 2019; Hanselowski et al., 2018; Prez-Rosas et al., 2018), the recent emergence of large-scale language models has also encouraged research to focus on detecting the creator of these texts (Varshney et al., 2020; Zellers et al., 2019).",
"The former approaches are aimed at detecting fake news in general, i.e. inde-pendent of who (or what) composed a text, whereas the latter approaches are designed to recognize if a text was written by a human or generated by a language model.",
"We encourage the work on both types of methods.",
"Ideally, social networks and news platforms would indicate if a statement was automatically generated in addition to its factual correctness.",
"Further, we point out some limitations of the Arg-CTRL that mitigate the risks discussed before.",
"One of these limitations is that it cannot be used to generate arguments for unseen topics, which makes a widespread application (e.g. to produce fake news) rather unlikely (using an unseen topic as control code results in nonsensical repetitions of the input).",
"The analysis in Section 6 of the paper shows that the model fails to produce aspect-specific sentences in 92% of the cases if it was not explicitly conditioned on them at training time.",
"Even in case of success, the aspect has to exist in the training data.",
"Also, the model is trained with balanced classes, i.e. both supporting and opposing arguments for each topic are seen with equal frequency to prevent possible bias into one or the other direction.",
"To further restrict malicious use, we release the training data for the Arg-CTRLs with an additional clause that forbids use for any other than research purposes.",
"Also, all the training datasets for the Arg-CTRLs will be accessible only via access control (e-mail, name, and purpose of use).",
"Lastly, this work has been reviewed by the ethics committee of the Technical University of Darmstadt that issued a positive vote.",
"We thank Tilman Beck and Nandan Thakur for their support in the human evaluation (Section 7.1).",
"This work has been supported by the German Research Foundation within the project Open Argument Mining (GU 798/25-1), associated with the Priority Program Robust Argumentation Machines (RATIO) (SPP-1999)."
] |
[
"abstain",
"abstain",
"method",
"method",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"result",
"abstain",
"abstain",
"method",
"objective",
"result",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"method",
"result",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"result",
"result",
"result",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"Recent research in multilingual language models (LM) has demonstrated their ability to effectively handle multiple languages in a single model.",
"This holds promise for low web-resource languages (LRL) as multilingual models can enable transfer of supervision from high resource languages to LRLs.",
"However, incorporating a new language in an LM still remains a challenge, particularly for languages with limited corpora and in unseen scripts.",
"In this paper we argue that relatedness among languages in a language family may be exploited to overcome some of the corpora limitations of LRLs, and propose RelateLM.",
"We focus on Indian languages, and exploit relatedness along two dimensions: (1) script (since many Indic scripts originated from the Brahmic script), and (2) sentence structure .",
"RelateLM uses transliteration to convert the unseen script of limited LRL text into the script of a Related Prominent Language (RPL) (Hindi in our case).",
"While exploiting similar sentence structures, RelateLM utilizes readily available bilingual dictionaries to pseudo translate RPL text into LRL corpora.",
"Experiments on multiple real-world benchmark datasets provide validation to our hypothesis that using a related language as pivot, along with transliteration and pseudo translation based data augmentation, can be an effective way to adapt LMs for LRLs, rather than direct training or pivoting through English.",
"BERT-based pre-trained language models (LMs) have enabled significant advances in NLP (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2020).",
"Pre-trained LMs have also been developed for the multilingual setting, where a single multilingual model is capable of handling inputs from many different Authors contributed equally Figure 1: Number of wikipedia articles for top-few Indian Languages and English.",
"languages.",
"For example, the Multilingual BERT (mBERT) (Devlin et al., 2019) model was trained on 104 different languages.",
"When fine-tuned for various downstream tasks, multilingual LMs have demonstrated significant success in generalizing across languages (Hu et al., 2020; Conneau et al., 2019).",
"Thus, such models make it possible to transfer knowledge and resources from resource rich languages to Low Web-Resource Languages (LRL).",
"This has opened up a new opportunity towards rapid development of language technologies for LRLs.",
"However, there is a challenge.",
"The current paradigm for training Mutlilingual LM requires text corpora in the languages of interest, usually in large volumes.",
"However, such text corpora is often available in limited quantities for LRLs.",
"For example, in Figure 1 we present the size of Wikipedia, a common source of corpora for training LMs, for top-few scheduled Indian languages 1 and English.",
"The top-2 languages are just one-fiftieth the size of 1 According to Indian Census 2011, more than 19,500 languages or dialects are spoken across the country, with 121 of them being spoken by more than 10 thousand people.",
"This calls for the development of additional mechanisms for training multilingual LMs which are not exclusively reliant on large monolingual corpora.",
"Recent methods of adapting a pre-trained multilingual LM to a LRL include fine-tuning the full model with an extended vocabulary (Wang et al., 2020), training a light-weight adapter layer while keeping the full model fixed (Pfeiffer et al., 2020b), and exploiting overlapping tokens to learn embeddings of the LRL (Pfeiffer et al., 2020c).",
"These are general-purpose methods that do not sufficiently exploit the specific relatedness of languages within the same family.",
"We propose RelateLM for this task.",
"RelateLM exploits relatedness between the LRL of interest and a Related Prominent Language ( RPL ).",
"We focus on Indic languages, and consider Hindi as the RPL.",
"The languages we consider in this paper are related along several dimensions of linguistic typology (Dryer and Haspelmath, 2013; Littell et al., 2017): phonologically, phylogenetically as they are all part of the Indo-Aryan family, geographically, and syntactically matching on key features like the Subject-Object-Verb (SOV) order as against the Subject-Verb-Object (SVO) order in English.",
"Even though the scripts of several Indic languages differ, they are all part of the same Brahmic family, making it easier to design rule-based transliteration libraries across any language pair.",
"In contrast, transliteration of Indic languages to English is harder with considerable phonetic variation in how words are transcribed.",
"The geographical and phylogenetic proximity has lead to significant overlap of words across languages.",
"This implies that just after transliteration we are able to exploit overlap with a Related Prominent Language (RPL) like Hindi.",
"On three Indic languages we discover between 11% and 26% overlapping tokens with Hindi, whereas with English it is less than 8%, mostly comprising numbers and entity names.",
"Furthermore, the syntax-level similarity between languages allows us to generate high quality data augmentation by exploiting pre-existing bilingual dictionaries.",
"We generate pseudo parallel data by converting RPL text to LRL and vice-versa.",
"These allow us to further align the learned embeddings across the two languages using the recently proposed loss functions for aligning contextual embeddings of word translations (Cao et al., 2020; Wu and Dredze, 2020).",
"In this paper, we make the following contributions: We address the problem of adding a Low Web-Resource Language (LRL) to an existing pre-trained LM, especially when monolingual corpora in the LRL is limited.",
"This is an important but underexplored problem.",
"We focus on Indian languages which have hundred of millions of speakers, but traditionally understudied in the NLP community.",
"We propose RelateLM which exploits relatedness among languages to effectively incorporate a LRL into a pre-trained LM.",
"We highlight the relevance of transliteration and pseudo translation for related languages, and use them effectively in RelateLM to adapt a pre-trained LM to a new LRL.",
"Through extensive experiments, we find that RelateLM is able to gain significant improvements on benchmark datasets.",
"We demonstrate how RelateLM adapts mBERT to Oriya and Assamese, two low web-resource Indian languages by pivoting through Hindi.",
"Via ablation studies on bilingual models we show that RelateLM is able to achieve accuracy of zero-shot transfer with limited data (20K documents) that is not surpassed even with four times as much data in existing methods.",
"Transformer (Vaswani et al., 2017) based language models like mBERT (Devlin et al., 2019), MuRIL (Khanuja et al., 2021), IndicBERT (Kakwani et al., 2020), and XLM-R (Conneau et al., 2019), trained on massive multilingual datasets have been shown to scale across a variety of tasks and languages.",
"The zero-shot cross-lingual transferability offered by these models makes them promising for low-resource domains.",
"Pires et al. (2019) find that cross-lingual transfer is even possible across languages of different scripts, but is more effective for typologically related languages.",
"However, recent works (Lauscher et al., 2020; Pfeiffer et al., 2020b; Hu et al., 2020) have identified poor cross-lingual transfer to languages with limited data when jointly pre-trained.",
"A primary reason behind poor transfer is the lack of model's capacity to accommodate all languages simultaneously.",
"This has led to increased interest in adapting multilingual LMs to LRLs and we discuss these in the following two settings.",
"LRL adaptation using monolingual data For eleven languages outside mBERT, Wang et al. (2020) demonstrate that adding a new target language to mBERT by simply extending the embedding layer with new weights results in better performing models when compared to bilingual-BERT pre-training with English as the second language.",
"Pfeiffer et al. (2020c) adapt multilingual LMs to the LRLs and languages with scripts unseen during pre-training by learning new tokenizers for the unseen script and initializing their embedding matrix by leveraging the lexical overlap w.r.t. the languages seen during pre-training.",
"Adapter (Pfeiffer et al., 2020a) based frameworks like (Pfeiffer et al., 2020b; Artetxe et al., 2020; Ustun et al., 2020) address the lack of model's capacity to accommodate multiple languages and establish the advantages of adding language-specific adapter modules in the BERT model for accommodating LRLs.",
"These methods generally assume access to a fair amount of monolingual LRL data and do not exploit relatedness across languages explicitly.",
"These methods provide complimentary gains to our method of directly exploiting language relatedness.",
"LRL adaptation by utilizing parallel data When a parallel corpus of a high resource language and its translation into a LRL is available, Conneau and Lample (2019) show that pre-training on concatenated parallel sentences results in improved cross-lingual transfer.",
"Methods like Cao et al. (2020); Wu and Dredze (2020) discuss advantages of explicitly bringing together the contextual embeddings of aligned words in a translated pair.",
"Language relatedness has been exploited in multilingual-NMT systems in various ways (Neubig and Hu, 2018; Goyal and Durrett, 2019; Song et al., 2020).",
"These methods typically involve data augmentation for a LRL with help of a related high resource language (RPL) or to first learn the NMT model for a RPL followed by finetuning on the LRL.",
"Wang et al. (2019) propose a soft-decoupled encoding approach for exploiting subword overlap between LRLs and HRLs to improve encoder representations for LRLs.",
"Gao et al. (2020) address the issue of generating fluent Percentage Overlap of Words LRL Related Prominent Distant Prominent (Hindi) (English) Punjabi 25.5 7.5 Gujarati 23.3 4.5 Bengali 10.9 5.5 Table 1: Motivation for transliteration : % overlapping words between transliterated LRL (in Prominent Language's script) and prominent language text.",
"Xia et al. (2019) utilize data augmentation techniques for LRL-English translation using RPL-English and RPL-LRL parallel corpora induced via bilingual lexicons and unsupervised NMT.",
"Goyal et al. (2020) utilize transliteration and parallel data from related Indo-Aryan languages to improve NMT systems.",
"Similar to our approach they transliterate all the Indian languages to the Devanagri script.",
"Similarly, Song et al. (2020) utilize Chinese-English parallel corpus and transliteration of Chinese to Japanese for improving Japanese-English NMT systems via data augmentation.",
"To the best of our knowledge no earlier work has explored the surprising effectiveness of transliteration to a related existing prominent language, for learning multilingual LMs, although some work exists in NMT as mentioned above.",
"Problem Statement and Notations Our goal is to augment an existing multilingual language model M , for example mBERT, to learn representations for a new LRLL for which available monolingual corpus DL is limited.",
"We are also told that the language to be added is related to another language R on which the model M is already pre-trained, and is of comparatively higher resource.",
"However, the script of DL may be distinct from the scripts of existing languages in M .",
"In this section we present strategies for using this knowledge to BLEU Scores LRL Related Prominent Distant Prominent (Target) (Hindi) (Source) (English) (Source) Punjabi 24.6 16.5 Gujarati 20.3 12.9 Bengali 19.3 12.4 Table 2: Motivation for pseudo translation : BLEU scores between pseudo translated prominent language sentences and LRL sentences.",
"In addition to the monolingual data DR in the RPL and DL in the LRL, we have access to a limited bilingual lexicon BL R that map a word in language L to a list of synonyms in language R and vice-versa BR L .",
"We focus on the case where the RPL, LRL pairs are part of the Indo-Aryan language families where several levels of relatedness exist.",
"Our proposed approach, consists of three steps, viz., Transliteration to RPL's script, Pseudo translation, and Adaptation through Pre-training.",
"We describe each of these steps below.",
"Figure 2 presents an overview of our approach.",
"First, the scripts of Indo-Aryan languages are part of the same Brahmic script.",
"This makes it easier to design simple rule-based transliterators to convert a corpus in one script to another.",
"For most languages transliterations are easily available.",
"Example, the Indic-Trans Library 2 (Bhat et al., 2015).",
"We use DLR to denote the LRL corpus after transliterating to the script of the RPL.",
"We then propose to further pre-train the model M with MLM on the transliterated corpus DLR instead of DL .",
"Such a strategy could provide little additional gains over the baseline, or could even hurt accuracy, if the two languages were not sufficiently related.",
"For languages in the Indo-Aryan family because of strong phylogenetic and geographical overlap, many words across the two languages overlap and preserve the 2 https://github.com/libindic/ indic-trans same meaning.",
"In Table 1 we provide statistics of the overlap of words across several transliterated Indic languages with Hindi and English.",
"Note that for Hindi the fraction of overlapping words is much higher than with English which are mostly numbers, and entity names.",
"These overlapping words serve as anchors to align the representations for the non-overlapping words of the LRL that share semantic space with words in the RPL.",
"Parallel data between a RPL and LRL language pair has been shown to be greatly useful for efficient adaptation to LRL (Conneau and Lample, 2019; Cao et al., 2020).",
"However, creation of parallel data requires expensive supervision, and is not easily available for many low web-resource languages.",
"Back-translation is a standard method of creating pseudo parallel data but for low web-resource languages we cannot assume the presence of a well-trained translation system.",
"We exploit the relatedness of the Indic languages to design a pseudo translation system that is motivated by two factors: First, for most geographically proximal RPL-LRL language pairs, word-level bilingual dictionaries have traditionally been available to enable communication.",
"When they are not, crowd-sourcing creation of word-level dictionaries 3 requires lower skill and resources than sentence level parallel data.",
"Also, word-level lexicons can be created semiautomatically (Zhang et al., 2017) (Artetxe et al., 2019) (Xu et al., 2018).",
"Second, Indic languages exhibit common syntactic properties that control how words are composed to form a sentence.",
"For example, they usually follow the Subject-Object-Verb (SOV) order as against the Subject-Verb-Object (SVO) order in English.",
"We therefore create pseudo parallel data between R and L via a simple word-by-word translation using the bilingual lexicon.",
"In a lexicon a word can be mapped to multiple words in another language.",
"We choose a word with probability proportional to its frequency in the monolingual corpus DL .",
"We experimented with a few other methods of selecting words that we discuss in Section 4.4.",
"In Table 2 we present BLEU scores obtained by our pseudo translation model of three Indic languages from 3 Wiktionary is one such effort Hindi and from English.",
"We observe much high BLEU for translation from Hindi highlighting the syntactic relatedness of the languages.",
"Let ( DR , BR LR ( DR )) denote the parallel corpus formed by pseudo translating the RPL corpus via the transliterated RPL to LRL lexicon.",
"Likewise let ( DLR , BLR R ( DLR )) be formed by pseudo translating the transliterated low web-resource corpus via the transliterated LRL to RPL lexicon.",
"The union of the two pseudo parallel corpora above, collectively called P , is used for fine-tuning M using an alignment loss similar to the one proposed in (Cao et al., 2020).",
"This loss attempts to bring the multilingual embeddings of different languages closer by aligning the corresponding word embeddings of the source language sentence and the pseudo translated target language sentence.",
"Let C be a random batch of source and (pseudo translated) target sentence pairs from P , i.e. C = (( s 1 , t 1 ) , ( s 2 , t 2 ) , ..., ( s N , t N )) , where s and t are the source and target sentences respectively.",
"Since our parallel sentences are obtained via word-level translations, the alignment among words is known and monotonic.",
"Alignment loss has two terms: L = L align + L reg where L align is used to bring the contextual embeddings closer and L reg is the regularization loss which prevents the new embeddings from deviating far away from the pre-trained embeddings.",
"Each of these are defined below: L align = (cid:88) ( s , t ) C #word ( s ) (cid:88) i =1 || f ( s , l s ( i )) f ( t , l t ( i )) || 22 L reg = (cid:88) ( s , t ) C #tok ( s ) (cid:88) j =1 || ( f ( s , j ) f 0 ( s , j ) || 22 + #tok ( t ) (cid:88) j =1 || f ( t , j ) f 0 ( t , j ) || 22 where l s ( i ) is the position of the last token of i-th word in sentence s and f ( s , j ) is the learned contextual embedding of token at j -th position in sentence s , i.e, for L align we consider only the last tokens of words in a sentence, while for L reg we consider all the tokens in the sentence.",
"f 0 ( s , j ) denotes the fixed pre-trained contextual embedding of the token at j -th position in sentence s .",
"#word ( s ) and #tok ( s ) are the number of (whole) words and tokens in sentence s respectively.",
"We carry out the following experiments to evaluate RelateLM's effectiveness in LRL adaptation:",
"First, in the full multilingual setting, we evaluate whether RelateLM is capable of extending mBERT with two unseen low-resource Indic languages: Oriya (unseen script) and Assamese (seen script).",
"(Section 4.2) We then move to the bilingual setting where we use RelateLM to adapt a model trained on a single RPL to a LRL.",
"This setting allowed us to cleanly study the impact of different adaptation strategies and experiment with many RPL-LRL language pairs.",
"(Section 4.3) Finally, Section 4.4, presents an ablation study on dictionary lookup methods, alignment losses, and corpus size.",
"LM Models We take m-BERT as the model M for our multilingual experiments.",
"For the bilingual experiments, we start with two separate monolingual language models on each of Hindi and English language to serve as M .",
"For Hindi we trained our own Hi-BERT model over the 160K monolingual Hindi Wikipedia articles using a vocab size of 20000 generated using WordPiece tokenizer.",
"For English we use the pre-trained BERT model which is trained on almost two orders of magnitude Wikipedia articles and more.",
"When the LRL is added in its own script, we use the bert-base-cased model and when the LRL is added after transliteration to English, we use the bert-base-uncased model.",
"LRLs, Monolingual Corpus, Lexicon As LRLs we consider five Indic languages spanning four different scripts.",
"Monolingual data was obtained from Wikipedia as summarized in Table",
"4. We extend m-BERT with two unseen low web-resource languages: Assamese and Oriya.",
"Since it was challenging to find Indic languages with task-specific labeled data but not already in m-BERT, we could not evaluate on more than two languages.",
"For the bilingual model experiments, we adapt each of Hi-BERT and English BERT with three different languages: Punjabi, Gujarati and Bengali.",
"For these languages we simulated the LRL setting by Dataset Split Lang Number of Sentences NER POS TextC.",
"downsampling their Wikipedia data to 20K documents.",
"For experiments where we require English monolingual data for creating pseudo translations, we use a downsampled version of English Wikipedia having the same number of documents as the Hindi Wikipedia dump.",
"The addition of a new language to M was done by adding 10000 tokens of the new language generated by WordPiece tokenization to the existing vocabulary, with random initialization of the new parameters.",
"For all the experiments, we use li-bindic's indictrans library (Bhat et al., 2015) for transliteration.",
"For pseudo translation we use the union of Bilingual Lexicons obtained from CFILT 4 and Wiktionary 5 and their respective sizes for each language are summarized in Table 4 Tasks for zero-shot transfer evaluation After adding a LRL in M , we perform task-specific fine-4 https://www.cfilt.iitb.ac.in/ 5 https://hi.wiktionary.org/wiki/ LRL Adaptation Prominent Language Punjabi Gujarati Bengali NER POS TextC.",
"tuning on the RPL separately for three tasks: NER, POS and Text classification.",
"Table 3 presents a summary of the training, validation data in RPL and test data in LRL on which we perform zero-shot evaluation.",
"We obtained the NER data from WikiANN (Pan et al., 2017) and XTREME (Hu et al., 2020) and the POS and Text Classification data from the Technology Development for Indian Languages (TDIL) 6 .",
"We downsampled the TDIL data for each language to make them class-balanced.",
"The POS tagset used was the BIS Tagset (Sardesai et al., 2012).",
"For the English POS Dataset, we had to map the PENN tagset in to the BIS tagset.",
"We have provided the mapping that we used in the Appendix (B) Methods compared We contrast RelateLM with three other adaptation techniques: (1) EBERT (Wang et al., 2020) that extends the vocabulary and tunes with MLM on DL as-is, (2) RelateLM without pseudo translation loss, and (3) m-BERT when the language exists in m-BERT.",
"Training Details For pre-training on MLM we chose batch size as 2048, learning rate as 3e-5 and maximum sequence length as 128.",
"We used whole word masking for MLM and BertWordPieceTok-enizer for tokenization.",
"For pre-training Hi-BERT the duplication was taken as 5 with training done for 40K iterations.",
"For all LRLs where monolingual data used was 20K documents, the duplication factor was kept at 20 and and training was done for 24K iterations.",
"For Assamese, where monolingual data was just 6.5K documents, a duplication factor of 60 was used with the same 24K training iterations.",
"The MLM pre-training was done on Google v3-8 Cloud TPUs.",
"6 https://www.tdil-dc.in",
"maximum sequence length as 128.",
"The training was done for 10 epochs also on Google v3-8 Cloud TPUs.",
"For task-specific fine-tuning we used learning-rate 2e-5 and batch size 32, with training duration as 10 epochs for NER, 5 epochs for POS and 2400 iterations for Text Classification.",
"The models were evaluated on a separate RPL validation dataset and the model with the minimum F1-score, accuracy and validation loss was selected for final evaluation for NER, POS and Text Classification respectively.",
"All the fine-tuning experiments were done on Google Colaboratory.",
"The results reported for all the experiments are an average of 3 independent runs.",
"We evaluate RelateLM's adaptation strategy on mBERT, a state of the art multilingual model with two unseen languages: Oriya and Assamese.",
"The script of Oriya is unseen whereas the script of Assamese is the same as Bengali (already in m-BERT).",
"Table 6 compares different adaptation strategies",
"in-(a) Punjabi",
"cluding the option of treating each of Hindi and English as RPL for transliteration into.",
"For both LRLs, transliterating to Hindi as RPL provides gains over EBERT that keeps the script as-is and English transliteration.",
"We find that these gains are much more significant for Oriya than Assamese, which could be because Oriya is a new script.",
"Further augmentation with pseudo translations with Hindi as RPL, provides significant added gains.",
"We have not included the NER results for Assamese due to the absence of good quality evaluation dataset.",
"For more extensive experiments and ablation studies we move to bilingual models.",
"Table 5 shows the results of different methods of adapting M to a LRL with Hi-BERT and BERT as two choices of M .",
"We obtain much higher gains when the LRL is transliterated to Hindi than to English or keeping the script as-is.",
"This suggests that transliteration to a related language succeeds in parameter sharing between a RPL and a LRL.",
"Note that the English BERT model is trained on a much larger English corpus than the Hi-BERT model is trained on the Hindi corpus.",
"Yet, because of the relatedness of the languages we get much higher accuracy when adding transliterated data to Hindi rather than to English.",
"Next observe that pre-training with alignment loss on pseudo translated sentence pairs improves upon the results obtained with transliteration.",
"This shows that pseudo translations is a decent alternative when a parallel translation corpora is not available.",
"Overall, we find that RelateLM provides substantial gains over the baseline.",
"In many cases RelateLM is even better than mBERT which was pre-trained on a lot more monolingual data in that language.",
"Among the three languages, we obtain lowest gains for Bengali since the phonetics of Bengali Loss Dict Lookup NER POS Text C. Punjabi MSE first 62.4 80.0 77.6 MSE max 68.2 81.3 77.6 MSE root-weighted 64.9 78.9 76.9 MSE weighted 66.9 81.3 78.6 cstv weighted 68.2 80.8 79.4 Gujarati MSE first 39.2 83.3 78.6 MSE max 39.1 82.5 80.4 MSE root-weighted 39.7 82.6 79.9 MSE weighted 39.7 82.3 79.8 cstv weighted 40.2 84.0 81.6 Bengali MSE first 55.5 68.0 74.0 MSE max 56.2 70.3 79.7 MSE root-weighted 56.4 69.3 76.5 MSE weighted 57.3 71.7 78.7 cstv weighted 56.6 67.6 76.5 Table 7: Usefulness of Bilingual Dictionaries with MSE(Mean Squared Error Loss) and cstv(Contrastive Loss) evaluated on NER, POS tagging and Text Classification in RelateLM.",
"varies to some extent from other Indo-Aryan languages, and Bengali shows influence from Tibeto-Burman languages too (Kunchukuttan and Bhat-tacharyya, 2020).",
"This is also evident in the lower word overlap and lower BLEU in Table 1 and Table 2 compared to other Indic languages.",
"We further find that in case of Bengali, the NER results are best when Bengali is transliterated to English rather than Hindi, which we attribute to the presence of English words in the NER evaluation dataset.",
"Methods of Dictionary Lookups We experimented with various methods of choosing the translated word from the lexicon which may have multiple entries for a given word.",
"In Table 7 we compare four methods of picking entries: first entry at first position, max -entry with maximum frequency in the monolingual data, weighted entry with probability proportional to that frequency and root-weighted entry with probability proportional to the square root of that frequency.",
"We find that these four methods are very close to each other, with the weighted method having a slight edge.",
"Alignment Loss We compare the MSE-based loss we used with the recently proposed contrastive loss (Wu and Dredze, 2020) for L align but did not get any significant improvements.",
"We have provided the results for additional experiments in the Appendix (A) Increasing Monolingual size In Figure 3 we in-crease the monolingual LRL data used for adapting EBERT four-fold and compare the results.",
"We observe that even on increasing monolingual data, in most cases, by being able to exploit language relatedness, RelateLM outperforms the EBERT model with four times more data.",
"These experiments show that for zero-shot generalization on NLP tasks, it is more important to improve the alignment among languages by exploiting their relatedness, than to add more monolingual data.",
"We address the problem of adapting a pre-trained language model (LM) to a Low Web-Resource Language (LRL) with limited monolingual corpora.",
"We propose RelateLM, which explores relatedness between the LRL and a Related Prominent Language (RPL) already present in the LM.",
"RelateLM exploits relatedness along two dimensions script relatedness through transliteration, and sentence structure relatedness through pseudo translation.",
"We focus on Indic languages, which have hundreds of millions of speakers, but are understudied in the NLP community.",
"Our experiments provide evidence that RelateLM is effective in adapting multilingual LMs (such as mBERT) to various LRLs.",
"Also, RelateLM is able to achieve zero-shot transfer with limited LRL data (20K documents) which is not surpassed even with 4X more data by existing baselines.",
"Together, our experiments establish that using a related language as pivot, along with data augmentation through transliteration and bilingual dictionary-based pseudo translation, can be an effective way of adapting an LM for LRLs, and that this is more effective than direct training or pivoting through English.",
"Integrating RelateLM with other complementary methods for adapting LMs for LRLs (Pfeiffer et al., 2020b,c) is something we plan to pursue next.",
"We are hopeful that the idea of utilizing relatedness to adapt LMs for LRLs will be effective in adapting LMs to LRLs in other languages families, such as South-east Asian and Latin American languages.",
"We leave that and exploring other forms of relatedness as fruitful avenues for future work.",
"Acknowledgements We thank Technology Development for Indian Languages (TDIL) Programme initiated by the Ministry of Electronics Information Technology, Govt.",
"of India for providing us datasets used in this study.",
"The experiments reported in the paper were made possible by a Tensor Flow Research Cloud (TFRC) TPU grant.",
"The IIT Bombay authors thank Google Research India for supporting this research.",
"We thank Dan Gar-rette and Slav Petrov for providing comments on an earlier draft."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"objective",
"method",
"objective",
"objective",
"result",
"objective",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"objective",
"objective",
"method",
"other",
"objective",
"abstain",
"method",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"In goal-oriented dialogue systems, users provide information through slot values to achieve specific goals.",
"Practically, some combinations of slot values can be invalid according to external knowledge.",
"For example, a combination of cheese pizza (a menu item) and oreo cook-ies (a topping) from an input utterance Can I order a cheese pizza with oreo cookies on top? exemplifies such invalid combinations according to the menu of a restaurant business.",
"Traditional dialogue systems allow execution of validation rules as a post-processing step after slots have been filled which can lead to error accumulation.",
"In this paper, we formalize knowledge-driven slot constraints and present a new task of constraint violation detection accompanied with benchmarking data.",
"Then, we propose methods to integrate the external knowledge into the system and model constraint violation detection as an end-to-end classification task and compare it to the traditional rule-based pipeline approach.",
"Experiments on two domains of the MultiDoGO dataset reveal challenges of constraint violation detection and sets the stage for future work and improvements.",
"Natural language understanding (NLU) is an important component of goal-oriented dialogue systems.",
"The function of NLU is to construct a semantic frame for a user utterance by performing two tasks intent classification (IC) and slot labelling (SL) (Chen et al., 2017).",
"The former task aims to identify the intent of the user (i.e., an activity or a transaction that the user wants to accomplish), while the latter task extracts attributes of the intent.",
"For example, given an input utterance Please add one XL fries to my order in Figure 1(A), IC classifies that the user intent is AddToOrder (Adding a new menu item to the order), while SL detects one, Work performed while at Amazon AI XL, and fries as Quantity, MenuItemSize, and MenuItem, respectively.",
"These two tasks, IC & SL, could be performed either independently (Zhao and Wu, 2016; Haffner et al., 2003; Kurata et al., 2016) or jointly (Xu and Sarikaya, 2013; Li et al., 2018; Gupta et al., 2019) although recent research shows that training jointly generally leads to better results (Hakkani-Tr et al., 2016; Goo et al., 2018).",
"To make the recognition of intents and slots more reliable, NLU models require the list of all possible intents and the slots associated to each intent.",
"For instance, the intent show_flights has airline, departure_city, arrival_city, departure_date, and departure_time as its associated slots.",
"Practically, each slot has its own type.",
"Some types are domain-agnostic such as DATE for the depar-ture_date, while other types are domain-specific, such as AIRLINE for the slot airline.",
"We also refer to the latter category as custom slot types , for which custom lists of valid entities are provided.",
"Moreover, slots could be marked as either required (such as departure_city and arrival_city) or optional (such as airline and departure_time).",
"All of these details are usually defined structurally in a single document called a bot schema which guides the conversational flow of the dialogue system (Peskov et al., 2019; Rastogi et al., 2019).",
"Besides the above details, the dialogue domain may have conditions permitting or forbidding some combinations of slot values.",
"For example, for a book_flight intent which has Singapore airlines as the airline slot, not all cities are valid destinations where the airlines operate.",
"The NLU may deal with invalid combinations of slot values by just ignoring them, i.e., not detecting them in the SL task.",
"This approach will result in a deteriorated user experience as the users would not know why their attempts to provide slot values are not successful.",
"Therefore, we envision these conditions as constraints between slots, and the system should be able to detect constraint violations and (A) Input utterance : Please add one XL fries to my order.",
"request new slot combinations from the users when the violations happen.",
"However, to the best of our knowledge, we have not found any existing work formalizing the constraints between slots nor modeling detection of constraint violations.",
"In this paper, we formally represent the slot constraints which could be integrated into a bot schema and present a new task of constraint violation detection : given a bot schema with constraints, a current utterance, and a conversation history, predict whether the current state of conversation violates any constraints or not and which constraints are violated.",
"After that, we propose three approaches to solve this problem (based on a pipeline approach and an end-to-end approach) and conduct experiments with two domains of the MultiDoGO dataset (Peskov et al., 2019) augmented with constraint violation labels.",
"By design, the end-to-end approach does not suffer from error accumulation (whereas the pipeline approach does); however, it is more difficult to inject the constraint information into the end-to-end approach.",
"The experimental results reveal challenges of the violation detection task together with room for improvement.",
"We formally represent slot constraints in goal-oriented dialog system.",
"We create and release 1 two domains of the augmented MultiDoGO dataset to support the constraint violation detection task, focusing on constraints on custom slot types.",
"We experiment with three approaches for detecting constraint violations and discuss room for improvement in this task.",
"We experiment with several unsupervised methods for open entity linking (based on string similarity, natural language inference, and combinations of them) as a part of the pipeline approach.",
"The remainder of this paper is organized as follows.",
"Section 2 explains related work about natural language understanding in dialogue systems as well as entity linking.",
"Section 3 presents formal representations of the constraints.",
"Section 4 proposes the three approaches we use to detect constraint violations.",
"Section 5 explains the created datasets and the experimental results.",
"Finally, section 6 concludes the paper.",
"Goal-oriented dialogue systems allow the usage of natural language to achieve specific goals such as food ordering or travel booking.",
"Traditionally, these systems are built using a pipeline approach including user intent and slots detection (NLU), dialog management and knowledge base querying (Levin et al., 2000; Williams and Young, 2007; Young et al., 2013).",
"The ability to interface with external knowledge is essential as it constraints possible entities and their relations per application (e.g., different restaurants can have different menus) and guides the system responses.",
"Constraints detection is usually handled by a post-processing step, for example in the DSTC2 dataset (Henderson et al., 2014), the canthelp act is inferred if the database returns zero results.",
"In addition, previous work integrated knowledge base information or lists of potential slot entities into goal-oriented dialogue systems but did not model constraint violation detection (Madotto et al., 2018; Liu et al., 2018; Rastogi et al., 2019; Zhang et al., 2020).",
"In this work, we fill the gap by first formalizing the task of constraint violation detection for dialogue systems and modeling it using supervised machine learning.",
"Entity linking aims to link entity mentions (i.e., slot values) v in user utterances with their corresponding entities e E defined in the bot schema (where E is a list of all possible entities of the associated slot type).",
"According to Shen et al. (2015), an entity linking system generally consists of three modules.",
"First, candidate generation filters out irrelevant entities from E to reduce the search space, Second, candidate ranking ranks the candidates to find the entity which the mention most likely refers to.",
"Third, unlinkable mention prediction predicts whether the correct entity is really in E or not.",
"In this paper, we assume that the first module is not needed because the set E for goal-oriented dialogue systems is usually in a manageable size.",
"So, our focus is on the last two tasks.",
"Candidate ranking could be done in either a supervised way (Chen and Ji, 2011; Gupta et al., 2017; Kolitsas et al., 2018) or an unsupervised way (Cucerzan, 2007; Chen et al., 2010; Xu et al., 2018).",
"Potential features for ranking include surface names, popularity, types of the entities, and the context surrounding the mention and the entities (Shen et al., 2015).",
"Usually, it is not easy to find a large annotated dataset to train a candidate ranking model for goal-oriented dialogue systems.",
"Hence, in our approaches, we conduct unsupervised entity linking based on surface names and types of the entities.",
"Due to the same limitation, we use unsupervised methods to perform unlinkable mention prediction which are using a threshold (Ferragina and Scaiella, 2010; Gottipati and Jiang, 2011), discussed in section 4.",
"As constraint violation check must be applied to every state in the conversation, we first define dialogue states as follows.",
"Definition 1 A dialogue state d is a tuple ( d i , d s ) where d i is an intent and d s is a list of slot-value pairs (Rastogi et al., 2019).",
"Figure 1(A) shows a dialogue state d as an example.",
"Next, to represent a constraint, we define atomic formula the smallest logical condition in constraint statements.",
"Definition 2 An atomic formula f can be written as ( s, o, v ) where s is a slot variable, v is a list of values, and o { = , >, <, , , (cid:54) = , between , regexp , in , not_null } is an operator.",
"A dialogue state d satisfies f if and only if the corresponding slot value s in d s satisfies f .",
"For instance, the dialogue state d in Figure 1(A) satisfies an atomic formula f = (MenuItemSize, in, [medium', large', extra large']).",
"Definition 3 A constraint c is a triple ( c i , c S , c l ) where (1) c i is a list of intents where the constraint applies, (2) c S is a list of associated slots ( s 1 , s 2 , ..., s n ) , and (3) c l is a constraint statement defined on c S a logical formula in disjunctive normal form where each conjunction consists of n atomic formulas that correspond to n slot variables in c S .",
"Figure 1(B) shows an example of constraints between MenuItem and MenuItemSize, applying to the AddToOrder intent.",
"Basically, it specifies valid sizes of each menu item.",
"In other words, a constraint applies to a dialogue state when the dialogue state has an applicable intent and contains all the relevant slot variables.",
"In Figure 1, the constraint c is applicable to the dialogue state d but not applicable to, for instance, d (cid:48) = (AddToOrder, {Quantity: 1, MenuItem: Fries'}).",
"Definition 5 A dialogue state d violates a constraint c if and only if c is applicable to d but d does not satisfy c l .",
"For the running example, d does not violate c because the slot-value pairs {MenuItem: Fries', MenuItemSize: extra large'} of d satisfies c l .",
"Note that, in Figure 1(A), the dialogue state is a result of a single utterance.",
"However, a dialogue state in practice contains the information of the current user turn fused with the dialogue state of the previous turn.",
"So, the objective of the constraint violation detection task is checking whether any constraints defined in the bot schema are violated after the dialogue state is updated with the information of the current turn.",
"We propose three approaches to tackle this problem.",
"The overview is shown in Figure 2.",
"To detect constraint violations, the deterministic pipeline approach (DP) performs three steps.",
"First, it runs intent classification and slot labelling on the input utterance.",
"Since the detected slot values may have different surface forms from the entities defined in the bot schema and the constraints, DP conducts entity linking and updates the dialogue state using the predicted intent and the linked entities, as the second step.",
"In the third step, DP runs a deterministic satisfiability check simply on the dialogue state to detect violations.",
"To implement DP, we use JointBERT (Chen et al., 2019), with default hyper-parameters, to perform IC/SL in the first step.",
"JointBERT utilizes BERT-base (Devlin et al., 2019) as an encoder to jointly predict the intent and the slot values.",
"Following Chen et al. (2019), we add Conditional Random Fields (CRF) on top of the BERT model to leverage dependencies between slot labels.",
"The second step, entity linking, is challenging because goal-oriented dialogue systems are usually domain-specific and no training data for entity linking is provided.",
"Furthermore, a detected slot value may not correspond to any entity defined in the bot schema.",
"So, this step should predict None as an answer when the value cannot be linked.",
"These two conditions make this step become unsupervised open entity linking.",
"In this paper, we use the following methods to perform this step.",
"(1) String similarity : We link a slot value to the most similar defined entity.",
"Three methods to measure similarity are used exact match, Jaccard Index on character bigrams (so called Bijaccard metric for short) (Jaccard, 1901), and Levenshtein edit distance (Levenshtein, 1966).",
"For the exact match method, we link a slot value to an entity only if their surface forms exactly match (case-insensitive).",
"Otherwise, we return None.",
"In contrast, for Bijaccard and Levenshtein, we always answer the most similar entity.",
"So, they cannot detect unlinkable slot values.",
"(2) Natural language inference (NLI) : NLI aims to predict if a hypothesis is true (entailment), false (contradiction), or undetermined (neutral) given a premise.",
"To predict if a slot value v corresponds to an entity e , we apply a pre-trained NLI model, in particular RoBERTa (Liu et al., 2019) pre-trained on MNLI (Williams et al., 2018), to predict if v (premise) entails e (hypothesis) and return the entity that gets the highest entailment score.",
"Also, we set a threshold of 0.8 for predicting unlinkable values.",
"That means we predict None if the highest entailment probability is less than 0.8.",
"(3) Average scores of methods : We average the scores returned from the three methods (Bijaccard, Levenshtein, and NLI) to be the final entity score.",
"Bijaccard and NLI scores already stay between 0 and 1 where 1 is the best score.",
"To combine the Levenshtein edit distance with these two methods, we transform the edit distance x to be 1 x a where a is the length of the slot value v .",
"Then we return the entity with the highest average score.",
"We also have an option of returning None when the highest average score is less than a threshold of 0.5.",
"The probabilistic pipeline approach (PP) has the same three steps as the deterministic one.",
"The difference is that instead of linking one slot value to one entity, PP uses the probability distribution (i.e., the entity linking scores normalized using softmax) over the candidate entities (including None) to represent the slot value.",
"To predict whether the dialogue state violates a constraint c , we calculate the probability of each valid entity combination according to the constraint statement c l and define the violation score as 1 (cid:80) | = c l P ( ) .",
"If the violation score is larger than a threshold of 0.5, PP predicts that the dialogue state violates the constraint c .",
"We use four entity linking methods to generate the raw linking scores (before softmax) including Bijaccard, Levenshtein edit distance (normalized by the length of the slot value), NLI, and average scores of the three methods.",
"The raw score of None is set at the threshold, i.e., 0.8 and 0.5 for NLI and the average method, respectively.",
"The end-to-end approach (EE) aims to predict violations without performing intermediate steps like IC/SL or entity linking.",
"This task can be seen as multilabel classification predicting all the violations that the current dialogue state causes.",
"Hence, the number of classes equals the number of constraints defined in the bot schema.",
"We use BERT as a text encoder and apply a linear layer (with sigmoid function) on top of the embedding of the CLS token to predict violations 2 .",
"Then binary cross-entropy loss is used for optimization on the training data that maps conversations to violations.",
"This is different from the pipeline approaches which use the training data at the IC/SL step, not the violation detection step.",
"Because EE does not construct the dialogue state along the way, it needs to consider both the current turn and all the previous turns to predict violations.",
"Therefore, all the user utterances till the current turn are concatenated to be an input of the BERT model.",
"If the input length is longer than the maximum input length of BERT, we trim off the older turns to make the input meet the length limit.",
"As constraint violation detection is a novel problem, there had not been an existing dataset for this task.",
"So, we modified two domains, insurance (sentence-level annotation) and fast food (turn-level annotation), of the MultiDoGO dataset (Peskov et al., 2019), which is an English multi-domain goal-oriented IC/SL dataset, to support violation detection as follows.",
"We created a list of possible entities for each custom slot type by manually investigating and grouping slot values annotated in the dataset 3 .",
"We mapped distinct surface forms of slot values to the corresponding entities we just defined.",
"These mappings would be used as ground truths for entity linking testing.",
"We analyzed the co-occurrences of the entities and then manually wrote constraints for each intent.",
"We constructed a dialogue state for each turn in the dataset semi-automatically using the mapped entities and meaningful rules.",
"For example, entities found in the ContentOnly' 4 turn were associated to the dialogue state of the most recent domain intent.",
"We ran deterministic satisfiability check on the dialogue states and added the constraint violation results to the dataset.",
"The check here is the same as the last step of the DP approach, so we can expect that the last step of DP works perfectly if the input, obtained from the previous step (entity linking), is correct.",
"Table 1 summarizes the statistics of the augmented MultiDoGO dataset.",
"Both domains share the same set of general intents including OpeningGreeting, ClosingGreeting, Confirmation, ContentOnly, OutOfDomain, ThankYou, and Rejection.",
"The three domain intents of the insurance domain are CheckClaimStatus, GetProofOfInsurance, and ReportBrokenPhone, while the domain intents of the fast food domain concern different types of food such as OrderBreakfastIntent, OrderBurgerIntent, and OrderDessertIntent.",
"The insurance and the fast food domains have three out of nine and six out of ten custom slot types, respectively.",
"For each custom slot type, we create a closed type constraint indicating that a linked entity must be in the set of possible entities recognized by the slot type.",
"In addition, we 3 SL annotations in the public MultiDoGO dataset do not include boundaries of slot values.",
"This is problematic especially for the fast food domain where utterances usually contain multiple slot values consecutively.",
"Hence, we requested the raw fast food data from Peskov et al. and imported the slot boundaries into our modified version using the BIO schema.",
"4 ContentOnly intent is used when the user is providing details in response to a question from the agent (Peskov et al., 2019).",
"have domain-specific constraints enforcing the domain knowledge.",
"The insurance domain has only the car_model_brand constraint specifying valid car models for each car brand.",
"Among the twelve constraints of the fast food domain, eight of them specify valid menu items for the each domain intent, two of them specify valid sizes for each menu item, and the other two specify valid ingredients for each menu item.",
"Concerning conversation statistics, on average, the fast food domain has more slot values per turn than the insurance domain (because a user can mention several ingredients and menu items in one turn).",
"Besides, it has more unlinkable slot values (None), resulting in more closed type constraint violations than the insurance domain.",
"Since the fast food domain has so many constraints, only 32.8% of the conversations and 48.7% of the user turns do not have any violations.",
"The average violations per turn of 1.38 results from some turns having many violations.",
"For instance, when a user orders an unrecognized pizza menu with some unrecognized ingredients, the detected intent is OrderPizzaIntent' whereas the slots are mapped to None' entities causing closed type constraint violations for the food item (pizza) slot and the ingredient slot.",
"Moreover, they violate the constraint of valid food items for the OrderPizzaIntent' intent and another constraint of valid combinations between food items and ingredients.",
"We used PyTorch as a core framework for the three approaches.",
"External packages we used include JointBERT 5 for IC/SL, edit-distance 6 for string similarity, and transformers 7 for the BERT-base 8 (for all the three approaches) and RoBERTa (for NLI).",
"In addition, we used the softmax temperature of 0.1 to convert raw entity linking scores to probability in the probabilistic pipeline approach.",
"We first consider the performance of individual components in the pipeline approaches.",
"Table 2 shows the performance of JointBERT for intent classification and slot labelling.",
"It can be seen that JointBERT performed better on the insurance domain for both IC and SL and this trend is consistent with the results of the original MultiDoGO paper.",
"For entity linking, we used several evaluation metrics, all of which were only computed when the intents were correctly classified.",
"These include (1) Link accuracy : Given that the SL module detects the value of the correct slot type, link accuracy shows how likely the value is linked to the correct entity (including None).",
"(2) None recall : The recall of None being predicted.",
"This metric shows how often it can detect when entity mentions cannot be linked.",
"It is also related to the ability of detecting closed type constraint violations.",
"(3) Precision, Recall, F1 : Considering all the turns in the test data, compare the predicted entities to the ground truth entities.",
"(These metrics are affected by the performance of IC/SL. If the SL module incorrectly detects the slot type, this could cause low precision, recall, and F1 at the entity level here. In contrast, if the SL module does not detect the slot value, no text will be fed to the entity linker and the entity will not be predicted. This could cause low recall but would not affect the precision.) 5 https://github.com/monologg/JointBERT 6 https://pypi.org/project/ edit-distance/ 7 https://huggingface.co/transformers/ 8 BERT-base makes our models have 110 M parameters.",
"Table 3 shows the results of entity linking on two MultiDoGO domains.",
"The simplest method, exact match, yielded acceptable results for the fast food domain and surprisingly good results for the insurance domain.",
"This is because possible entities in the insurance domain (with the types car_brand, car_model, and car_year) usually have only one surface form.",
"For example, we can only say Honda to refer to the Honda car brand entity.",
"Meanwhile, the slot types of the fast food domain are much more flexible such as food_item and ingredient.",
"A user may say only meatball or meatballs to refer to the italian meatballs entity in the bot schema.",
"Besides, the difference between the two domains is partly because the IC/SL model worked better on the insurance domain and provided more accurate slot values to the entity linking step.",
"Because exact match is a very strict condition, it predicted None more often than other methods and got the highest None recall, while some other methods do not support open entity linking (including Bijaccard, Levenshtein, NLI, and Average) and got zero None recall.",
"However, applying reasonable None thresholds to NLI and Average boosted up the results for all the metrics.",
"The Average method with the threshold of 0.5 achieved the best link accuracy and F1 for both the insurance domain and the fast food domain.",
"Overall, the results highlight that using a combination of methods results in better entity linking performance.",
"This section discusses the overall constraint violation detection results with respect to the following metrics.",
"(1) Turn correct : The proportion of the turns where the violation prediction is exactly correct for all constraints.",
"(2) Turn IoU : The IoU score showing how much overlapping the predicted and the ground truth violations of a given turn are, on average.",
"Let P and G be sets of predicted and (A) User: Hi, I need 1 white top pizza Ground truth: Intent: order_pizza_intent Slots: {quantity: [1], food_item: [white top pizza]} Entities: {quantity: [1], food_item: [white top pizza]} Violations: None Deterministic pipeline approach (DP): Intent: order_pizza_intent Slots: {quantity: [1], food_item: [white top, pizza]} Entities: {quantity: [1], food_item: [None, pizza]} Violations: [closed_type_food_item] (cid:55) Probabilistic pipeline approach (PP): Violations: [closed_type_food_item] (cid:55) End-to-End approach (EE): Violations: None (cid:51) (B) User: Hai, I need bbq chicken pizza with cheese Ground truth: Intent: order_pizza_intent Slots: {food_item: [bbq chicken pizza], ingredient: [cheese]} Entities: {food_item: [bbq chicken pizza], ingredient: [cheese]} Violations: None Deterministic pipeline approach (DP): Intent: order_pizza_intent Slots: {food_item: [bbq chicken pizza], ingredient: [cheese]} Entities: {food_item: [bbq chicken pizza], ingredient: [cheese]} Violations: None (cid:51) Probabilistic pipeline approach (PP): None (cid:51) End-to-End approach (EE): Violations: [food_item-ingredient-invalid] (cid:55) Figure 3: Examples of violation predictions of the three approaches.",
"ground truth violations of a given turn, respectively.",
"IoU (Intersection over Union) of this turn equals | P G | | P G | .",
"(3) Conversation correct : The proportion of conversations where the violation predictions are correct for all the turns.",
"(4) Precision, Recall, and F1 : Consider each violation of a constraint as a positive instance, calculate precision, recall, and F1 of the violations being predicted.",
"approach (DP) with exact match as the entity linking method got the highest violation recall.",
"This is because the exact match is good at detecting unlinkable slot values (see None recall in Table 3), so it got high recall concerning violation detection of closed type constraints.",
"Conversely, entity linking methods which could not predict None (i.e., Bijaccard, Levenshtein, NLI, Average) got significantly lower violation recall and, hence, F1.",
"Furthermore, the difference between the two domains in Table 4 are more prominent than what we see for individual steps in Table 2-3.",
"There are several reasons for this.",
"First, the fast food domain has more custom slot types and more constraints.",
"So, it is more difficult to predict violations of all the constraints correctly for each turn resulting in lower conversation correct and turn correct .",
"Second, for the pipeline approaches, the errors of individual steps of the fast food domain were higher than the errors of the insurance domain; therefore, the gap became larger when the errors were accumulated in the last step.",
"An example in Figure 3(A) illustrates this case.",
"The slot labelling part of Joint BERT identified white top and pizza as two separate food items.",
"The entity linker, Average (0.5), could not map white top to any of the defined entity.",
"The system then understood that the user ordered an unknown food item and returned the closed_type_food_item violation which is incorrect.",
"However, we did not see this particular error with the end-to-end approach.",
"the probabilistic pipeline (PP) approaches, we can see that DP outperformed PP in most settings, especially in the insurance domain.",
"We believe that when the entity linking module works accurately (as in the insurance domain), switching from DP to PP probably harms the overall performance since PP adds unnecessary uncertainty to the correct entity predictions.",
"Conversely, when entity linking is a challenging step, PP with an appropriate softmax temperature could yield better results.",
"According to Table 4, the end-to-end approach (EE) clearly outperformed DP and PP in the insurance domain while being competitive to DP and PP in the fast food domain.",
"This might be because the insurance domain has only one domain-specific (binary) constraint and three closed type (unary) constraints that are easier to learn from the training data.",
"Meanwhile, the fast food domain has twelve binary and six unary constraints, respectively.",
"Without access to the constraint statements, the existing training examples may not be sufficient to teach the end-to-end model all possible cases of the constraints.",
"An example in Figure 3(B) shows that EE falsely returned the food_item-ingredient-invalid violation in response to the input Hai, I need bbq chicken pizza with cheese although this sentence in fact did not violate the constraint.",
"This error might be because the model had not seen the combination of bbq chicken pizza and cheese during training and it did not have access to the constraints defined in the bot schema.",
"Focusing on goal-oriented dialogue systems, we proposed a novel task slot constraint violation detection in NLU, together with constraint representation and three approaches to tackle this problem.",
"While the pipeline approaches apply constraints as a post-processing step after IC/SL, the end-to-end approach attempts to model constraints inside the NLU.",
"This sets the stage for future research and modeling of slot constraints and knowledge within NLU.",
"In particular, there are several ways to enhance the end-to-end approach.",
"For example, we could perform joint learning of IC, SL, and constraint violation detection to share the learned knowledge among tasks.",
"Also, injecting logical constraints into BERT is an interesting direction.",
"One way to do so is to translate constraints into violating and non-violating examples (by generating conversations with templates derived from existing training examples) and use them to train BERT together with other training examples.",
"In addition, using constraints information, one can control the training data generation and the percentage of data with constraint violations depending on expected user behavior.",
"We would like to thank Jason Krone, Yi Arshit Gupta, and anonymous reviewers for helpful comments.",
"References Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang.",
"A survey on dialogue systems: Recent advances and new frontiers.",
"Acm Sigkdd Explorations Newsletter , 19(2):2535.",
"Bert for joint intent classification and slot filling.",
"Zheng Chen, Suzanne Tamang, Adam Lee, Xiang Li, Wen-Pin Lin, Matthew G Snover, Javier Artiles, Marissa Passantino, and Heng Ji.",
"Paolo Ferragina and Ugo Scaiella.",
"2010.",
"Tagme: on-the-fly annotation of short text fragments (by wikipedia entities).",
"In Proceedings of the 19th ACM international conference on Information and knowledge management , pages 16251628.",
"Chih-Wen Goo, Guang Gao, Yun-Kai Hsu, Chih-Li Huo, Tsung-Chieh Chen, Keng-Wei Hsu, and Yun-Nung Chen.",
"2018.",
"Slot-gated modeling for joint slot filling and intent prediction.",
"In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) , pages 753757, New Orleans, Louisiana.",
"Association for Computational Linguistics.",
"Nitish Gupta, Sameer Singh, and Dan Roth.",
"2017.",
"Entity linking via joint encoding of types, descriptions, and context.",
"In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing , pages 26812690, Copenhagen, Denmark.",
"Association for Computational Linguistics.",
"P. Haffner, G. Tur, and J. H. Wright.",
"2003.",
"Optimizing svms for complex call classification.",
"In 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.",
"Proceedings.",
"(ICASSP '03).",
", volume 1, pages II.",
"Dilek Hakkani-Tr, Gkhan Tr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye-Yi Wang."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"other",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Probing neural models for the ability to perform downstream tasks using their activation patterns is often used to localize what parts of the network specialize in performing what tasks.",
"However, little work addressed potential mediating factors in such comparisons.",
"As a test-case mediating factor, we consider the prediction's context length , namely the length of the span whose processing is minimally required to perform the prediction.",
"We show that not controlling for context length may lead to contradictory conclusions as to the localization patterns of the network, depending on the distribution of the probing dataset.",
"Indeed, when probing BERT with seven tasks, we find that it is possible to get 196 different rankings between them when manipulating the distribution of context lengths in the probing dataset.",
"We conclude by presenting best practices for conducting such comparisons in the future.",
"1 1 Introduction The strong performance of end-to-end models and the difficulty in understanding their inner workings has led to extensive research aimed at interpreting their behavior (Li et al., 2016; Yosinski et al., 2015; Karpathy et al., 2015).",
"This notion has led researchers to investigate the behavioral traits of networks in general (Li et al., 2015; Haco-hen et al., 2020) and representative architectures in particular (Schlichtkrull et al., 2020).",
"Within NLP, Transformer-based pretrained embeddings are the basis for many tasks, which underscores the importance in interpreting their behavior (Belinkov et al., 2020), and especially the behavior of BERT (De-vlin et al., 2019; Rogers et al., 2020), perhaps the most widely used of Transformer-based models.",
"of particular tasks is encoded; localization is often carried out in terms of the layers most responsible for the task at hand (c.f. Tenney et al., 2019b).",
"Various works (Tenney et al., 2019a; Peters et al., 2018; Blevins et al., 2018) showed that some tasks are processed in lower levels than others.",
"We examine the extent to which potential mediating factors may account for observed trends and show that varying some mediating factors (see 2) may diminish, or even reverse, the conclusions made by Tenney et al. (T19; 2019a).",
"Specifically, despite reaffirming T19's experimental findings, we contest T19's interpretation of the results, namely that the processing carried out by BERT parallels the classical NLP pipeline.",
"Indeed, T19 concludes that lexical tasks (POS tagging) are performed by the lower layers, followed by syntactic tasks, whereas more semantic tasks are performed later on.",
"This analysis rests on the assumption that the nature of the task (lexical, syntactic, or semantic) is the driving force that determines what layer performs what analysis.",
"We show that other factors should be weighed in as well.",
"Specifically, we show that manipulating the distribution of examples in the probing dataset can lead to a variety of different conclusions as to what tasks are performed first.",
"We argue that potential mediators must be considered when comparing tasks, and focus on one such mediator the context length , which we define as the number of tokens whose processing is minimally required to perform the prediction.",
"We operationalize this notion by defining it as the maximal distance between any two tokens for which a label is predicted.",
"This amounts to the span length in tasks that involve a single span (e.g., NER), and to the dependency length in tasks that address the relation between two spans.",
"See 2.",
"Our motivation for considering context length as a mediator is grounded in previous work that presented the difficulty posed by long-distance dependencies in various NLP tasks (Xu et al., 2009; Sennrich, 2017), and particularly in previous work that indicated the Transformers' difficulty to generalize across different dependency lengths (Choshen and Abend, 2019).",
"We show that in some of the cases where one task seems to be better predicted by a higher layer than another task, controlling for context length may reverse that order.",
"Indeed we show that 196 different rankings between the seven tasks explored in T19 may be obtained with a suitable distribution over the probing datasets, namely 196 different ways to rank the tasks according to their expected layer.",
"Moreover, our results show that when context length is not taken into account, one task (e.g., dependency parsing) may seem to be processed at a higher layer than another (e.g., NER), when its expected layer (see 2) is, in fact, lower for all ranges of context lengths (3.1.1).",
"We begin by laying out the terminology and methodology we will use in the paper.",
"Edge Probing.",
"Edge probing is the method of training a classifier for a given task on different parts of the network (without fine-tuning).",
"Success in classification is interpreted as evidence that the required features for classification are somehow encoded in the examined part and are suffi-ciently easy to extract.",
"In our experiments, we follow T19 and probe BERT with Named Entity Recognition (NER), a constituent-based task (clas-sifying Non-terminals Non-term.), Semantic Role Labeling (SRL), Co-reference (Co-ref.), Semantic Proto-Roles (SPR; Reisinger et al., 2015), Relation Classification (RC) and the Stanford Dependency Parsing (Dep.; de Marneffe et al., 2006).",
"Causal considerations in interpreting probing results were also emphasized by several recent works (e.g., Kaushik et al., 2020; Vig et al., 2020; Elazar et al., 2021).",
"Localization by Expected Layer.",
"The expected layer metric (which we will henceforth refer to it as E layer ) of T19 assesses which layer in BERT is most needed for prediction: a probing classifier P ( l ) is trained on the lowest l layers.",
"Then, a differential score ( l ) is computed, which indicates the performance gain when taking into account one additional layer: ( l ) = Score ( P ( l ) ) Score ( P ( l 1) ) (1) Once all the { ( l ) } 12 l =1 are computed, we may compute E layer : E layer [ l ] = (cid:80) 12 l =1 l ( l ) (cid:80) 12 l =1 ( l ) (2) Therefore, unlike standard edge probing, which is performed on each layer individually, computing E layer takes into account all layers up to a given l .",
"Mediation Analysis.",
"Each of the explored tasks classifies one or two input sub-spans.",
"In both cases, we define the context length to be the distance between the earliest and latest span index.",
"Namely, for tasks with two spans (e.g., SPR), span 1 =[ i 1 , j 1 ] and span 2 =[ i 2 , j 2 ], where span 1 appears before span 2 , the context length is j 2 i 1 , whereas for tasks with just one span (e.g., NER), span 1 =[ i 1 , j 1 ], it is j 1 i 1 .",
"In order to examine the effect of context length on E layer , we model it as a mediating factor, namely as an intermediate variable that (partly) explains the relationship between two other variables (in this work, a task and its E layer ).",
"See Figure 1.",
"We bin each task's test set into non-overlapping bins, according to their context length ranges.",
"We use the notation i-j' to denote the bin of context lengths in the range [i,j].",
"For example, the second bin would be '3-5', denoting context lengths 3, 4, and 5.",
"In addition, given a specific task, two possible approaches exist to examine the mediation effect of context length on the task's E layer .",
"The first one bins all the task's data into sub-sets, in advance.",
"Then, this approach fine-tunes over each subset separately.",
"Alternatively, the second approach fine-tunes over the whole dataset, binning only during the test phase.",
"We follow the latter approach, as it is more computationally efficient.",
"Interestingly, in 3.1.1, we encounter a special edge case, where the aggregated average (i.e., E layer ) of one task is higher than another, whereas in each sub-set (by a given context length) it is lower.",
"This may occur when the weight of the sub-sets differs between the two aggregations.",
"We hypothesize that the context length is a mediating factor in the E layer of a task.",
"In order to test this hypothesis, we run the following experiments, aiming at isolating the context length.",
"We use the SPR1 dataset (Reisinger et al., 2015) to probe SPR, the English Web Treebank for the Dep.",
"task (Silveira et al., 2014), the SemEval 2010 Task 8 for the RC task (Hendrickx et al., 2009), and the OntoNotes 5.0 dataset (Weischedel et al., 2013) for the other tasks.",
"Configurations follow the defaults in the Jiant toolkit implementation (Wang et al., 2019).",
"In addition, we work with the BERT-base model.",
"First, we wish to confirm that context length indeed affects E layer and that the task is not a sole contributor to this.",
"Given a task and a threshold thr , we compile a dataset for the task containing the subset of examples with context lengths shorter than thr , and use it to compute E layer .",
"We do it for all tasks and for every integer threshold between 0 and a maximal threshold, which is selected separately for each task to ensure that at least 2000 instances remain in the last bin.",
"We find that context length plays an important role in the difference between the expected layers (Figure 2).",
"Most notably, the",
"Co-ref., SRL,",
"Dep., and RC tasks' E layer increases when increasing the threshold.",
"Next, we divide the data into smaller bins of non-overlapping context length ranges, in order to control for the influence of the context lengths on the expected layers of the tasks.",
"We compute E layer for sub-sets of similar lengths.",
"In choosing the size of each such range, we try to balance between informativeness (narrower ranges) and reliability (having enough examples in each range, so as to reduce noise).",
"We find that the narrowest range width that retains at least 1% of the examples in each bin is 3.",
"We thus divide the dataset for each task into context length ranges of width 3, until the maximal threshold is reached.",
"Higher context lengths are lumped into an additional bin.",
"We begin by examining two specific tasks: Dep.",
"and NER, and their E layer for each context length's range.",
"We then consider, for simplicity, a case where all the context lengths of Dep.",
"are of length 9+, while those of NER are in the range of 3-5 (Figure 3).",
"We see that when controlling for context length, Dep.",
"is computed in a lower layer than NER, regardless of the range.",
"However, depending on the distribution of context lengths in the probing dataset, the outcome may be completely different, with Dep.",
"being processed in higher layers (for a similar example of a different task-pair, see A.1).",
"These results indicate that the results of T19 do not necessarily indicate that BERT is performing a pipeline of computations (as is commonly asserted, see e.g., T19 and Blevins et al. (2018)), and that mediating factors need to be taken into account when interpreting E layer .",
"In the previous section, we observed that one task can be both higher and lower than another.",
"That depends on the distribution of context lengths in the probing dataset.",
"We next ask whether such a \"paradox\" arises in experiments when imposing the same context length distributions on the two tasks.",
"Following Pearl (2001), we employ mediation analysis and specifically concentrate on the Natural Direct Effect (NDE), which is the difference between two of the observed dependent variables (in our case E layer ), when fixing the mediator.",
"In our case, the NDE is the difference between the E layer of two tasks, while forcing the same context length distribution on both.",
"For convenience, we force the distribution of one of the examined tasks (for more details, see A.2), but any distribution is applicable.",
"In general, the equation for computing the NDE of tasks t 1 and t 2 , with the context length distribution of t 1 imposed on both, is: NDE t 1 (cid:1) t 2 = (cid:88) c [ E [ l | C = c,T = t 2 ] E [ l | C = c,T = t 1 ]] P ( C = c | T = t 1 ) (3) where T is a random variable of the tasks, and C is a random variable of the context length.",
"We apply NDE twice for every pair of tasks (once for each task's context length distribution).",
"We then compare the results to the difference between the tasks' expected layers where each task keeps its original context length distribution (un-mediated).",
"Results (Figure",
"4) show that the difference could be more than 50 times larger (change of 1.24 in absolute value) or decrease by 86% (0.73 in absolute value).",
"In some cases the order of the two tasks is reversed, namely, the task that is lower with one distribution becomes higher with another.",
"This shows that even among our examined set of seven tasks, the effect of potential mediators cannot be ignored.",
"For more results, see A.3.",
"After observing that the distribution of context length in the probing dataset may affect the relative order of the expected layers, we propose a more detailed and accurate method to compare the expected layers, which does not rely on a specific length distribution.",
"We do so by plotting the controlled effect , namely E layer for each range separately.",
"Our results (Figure",
"5) allow computing the range of possible expected layers for a task, that may result from taking any context length distribution Figure 4: Difference between unmediated E layer and NDE for NER and Co-ref.",
"(Figure 6).",
"The figure shows the wide range of possible relative behaviors of E layer for task-pairs: from notable to negligible difference in expected layers (e.g., SRL and Co-ref.), to pairs whose ordering of expected layers may be reversed (i.e., overlapping ranges, such as with SPR and RC).",
"In fact, by taking into account every possible combination of context length distribution for each of the tasks, we get as many as 196 possible rankings of the seven tasks according to their E layer .",
"One such possible order is, for example, Non-term.",
"< Dep.",
"< SRL < RC < NER < Co-ref.",
"< SPR.",
"We elaborate on this in A.4.",
"To recap, we find that the difference in E layer between some tasks may considerably change and their order may reverse, depending on the context length.",
"This finding lends further support to our claim that mediators should be taken into account.",
"We showed that when performing edge probing to identify what layers are responsible for addressing what tasks, it is imperative to take into account potential mediators, as they may be responsible",
"for much of the observed effect.",
"Specifically, we showed that context length has a significant impact on a task's E layer .",
"Our analysis shows the wide range of relative orderings of the expected layers for different tasks when assuming different context length distributions; from extreme edge cases, like the one we observed in 3.1.1, to more common, but potentially misleading ones, where the difference between expected layers may dramatically increase or decrease depending on the context length distribution.",
"Most importantly, it shows that by manipulating the context length distribution, we may get a wide range of outcomes.",
"Our work suggests that mediating factors should be taken into account when basing analysis on the E layer .",
"On a broader note, alternative hypotheses should be considered, before limiting oneself to a single interpretation.",
"Future work will consider the effect of other mediating factors.",
"The two methods we used, NDE and controlled effect, can be used to examine the impact of other mediating factors and should be adopted as part of the field's basic analysis toolkit (cf. Feder et al., 2020; Vig et al., 2020).",
"NDE should be used when several effects are examined simultaneously, as it facilitates the assessment of their effect on the tasks' complexity.",
"It is also advisable to use NDE when a more practical examination is required, i.e., when distributions of the mediators are given empirically, as it is easier to derive the mediating factors' impact using this method.",
"In contrast, the controlled effect method should be used when examining the effects of two variables (e.g., tasks and mediating factors) or when comparing several tasks with one mediating effect.",
"This work was supported by the Israel Science Foundation (grant no. 929/17).",
"We would also like to thank Amir Feder for his very insightful feedback on our paper."
] |
[
"abstain",
"abstain",
"method",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"objective",
"method",
"method",
"abstain",
"abstain",
"abstain",
"result",
"objective",
"result",
"method",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"method",
"other",
"other",
"method",
"other",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"result",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"This paper introduces the task of factual error correction: performing edits to a claim so that the generated rewrite is better supported by evidence.",
"This extends the well-studied task of fact verification by providing a mechanism to correct written texts that are refuted or only partially supported by evidence.",
"We demonstrate that it is feasible to train factual error correction systems from existing fact checking datasets which only contain labeled claims accompanied by evidence, but not the correction.",
"We achieve this by employing a two-stage distant supervision approach that incorporates evidence into masked claims when generating corrections.",
"Our approach, based on the T5 transformer and using retrieved evidence, achieved better results than existing work which used a pointer copy network and gold evidence, producing accurate factual error corrections for 5x more instances in human evaluation and a .125 increase in SARI score.",
"The evaluation is conducted on a dataset of 65,000 instances based on a recent fact verification shared task and we release it to enable further work on the task.",
"1 1 Introduction Fact verification is the task of predicting whether claims are true or false using evidence.",
"With the availability of a number of resources (Wang, 2017; Karadzhov et al., 2017; Thorne et al., 2018; Au-genstein et al., 2019; Wadden et al., 2020), the task has attracted significant attention and spawned the development of new models, architectures and approaches.",
"With potentially sensitive applications, recent works have focused on building explainable variants of fact checking (Atanasova et al., 2020; Stammbach and Ash, 2020; Kotonya and Toni, 2020).",
"Exposing the evidence source and 1 https://github.com/j6mes/ 2021-acl-factual-error-correction System Outputs Brown recluse spiders do not bite The brown recluse spider's bite sometimes requires medical attention.",
"decision making process may help the reader uncover subtle issues that cause automated systems to fail.",
"Additionally, using such evidence to continuously update news articles as facts change forms part of the vision outlined by Cohen et al. (2011) for automated newsrooms.",
"In this paper, we propose Factual Error Correction , as an explainable alternative for fact verification.",
"Rather than merely assigning a truth label, possibly accompanied by evidence, our goal is to rewrite claims so that they are better supported by the retrieved evidence.",
"For example, in Figure 1, a claim that would be REFUTED by the evidence using a fact verification system is rewritten so that it becomes supported by evidence retrieved from Wikipedia.",
"This work extends fact guided sentence modification (Shah et al., 2020), which uses short factoid claims to introduce changes to Wikipedia passages.",
"However, they assume that the claim and Wikipedia text are always incongruous and require a meaning-altering change, our proposal makes no assumptions over the veracity, and is applicable to claims both supported and refuted by evidence.",
"Additionally, we incorporate a retrieval component to select evidence for a given claim from a corpus (in our case, Wikipedia) rather than requiring gold standard evidence to be explicitly provided.",
"A challenge for factual error correction is the lack of datasets consisting of claims paired with their corrections.",
"However, with recent developments in fact checking, there is an abundance of new datasets consisting of claims paired with evidence.",
"To address this data scarcity, we make use of distant supervision to incorporate retrieved evidence into generating the corrections.",
"We release a dataset of 65,000 claims, containing the intermediate annotations from FEVER (Thorne et al., 2018).",
"These consist of factoid sentences that were used to construct the supported and refuted claims in the dataset, and use these as reference targets for automated evaluation.We further verify the findings through a final round of annotation using human raters.",
"Our evaluation finds high correlation between manual scores and the SARI metric (Xu et al., 2016) and our best performing distantly-supervised system generated corrected claims for 24% of instances when using retrieved evidence, with a SARI Final score of .419.",
"A fully-supervised system with gold evidence generated corrections for 69% of instances, indicating plenty of opportunities for future work to extend our contributions.",
"A number of related works offer methods to make corrections to sentences.",
"However, their use of external information differs.",
"This can be placed on a continuum from only using the knowledge captured during language model pre-training, to conditioning generation based on a context sentence.",
"We briefly outline key methods and approaches below.",
"Grammatical Error Correction (GEC) (Knight and Chander, 1994; Han et al., 2010; Ng et al., 2014) is the task of making meaning-preserving changes to sentences such that grammatical errors made by language learners are removed.",
"No external information is required as the sentence is undergoing a surface-level transformation where the (intended) semantic content of the sentence should remain unchanged.",
"In contrast, the semantic content of sentences undergoing factual error correction will be altered, if needed, to better align the meaning with ground truth evidence.",
"Shah et al. (2020) make meaning-altering updates to sentences in Wikipedia in a two step process that does not require reference corrections in training: salient tokens are masked and a corrector conditionally replaces the masks with ground truth evidence.",
"In this approach, token salience is predicted by querying a model that is trained to perform fact verification for a claim against evidence.",
"Cao et al. (2020) generate corrections as a post-editing step for outputs from abstractive summarization so that they are consistent with the source text.",
"Their approach uses a sequence-to-sequence model trained to restore artificially generated corruptions of a reference summary.",
"One potential way to introduce knowledge is to use information stored in the parameters of large-scale pre-trained language models (Petroni et al., 2019).",
"The language model can be used recover tokens responsible for causing factual errors that are masked out as a variant of cloze-style evaluation (Taylor, 1953).",
"While such approaches have been employed for fact verification (Lee et al., 2020), these approaches share the following limitations.",
"Without explicit control (Nie et al., 2019), the most likely token when decoded may not be factually accurate, or supported by the retrieved evidence, commonly referred to as a hallucination (Rohrbach et al., 2018; Zhou et al., 2020).",
"Furthermore, even if the information stored within language model parameters could be reliably retrieved for factual error correction, facts change over time and the need to obtain information from up-to-date sources becomes greater as the state of the world diverges from the information captured within the model parameters.",
"Recent language models augmented with a retrieval component such as REALM (Guu et al., 2020) and RAG (Lewis et al., 2020) could be applied, however, task-specific fine-tuning would still be required to condition the generation based on the factual error to mitigate hallucination.",
"Training Let a claim c be the input sentence undergoing correction to yield c (cid:48) .",
"The correction requires incorporating knowledge from retrieved evidence E ( c ) such that c (cid:48) is supported by this evidence, E ( c ) (cid:15) c (cid:48) .",
"Factual error correction is subject to the following 3 requirements: John Goodman had the lead role in The Babe.",
"R1 Intelligible Similar to other language generation tasks, our first requirement is that generated outputs are fluent and intelligible.",
"They must be free of grammatical mistakes and the meaning must be understandable without the aid of additional context or evidence so that their factual correctness can be assessed.",
"R2 Supported by Evidence The generated correction must be supported by the retrieved evidence.",
"This property follows from previous work (Thorne et al., 2018) and also requires models to condition generation on the retrieved evidence penalizing models that hallucinate (Holtzman et al., 2020).",
"R3 Error correction Specific to our task, the corrections should be targeted to the errors present in the inputted claim.",
"While this, in part, can be assessed by R2 we need to compare the correction to the inputted claim to ensure the output is not introducing new unrelated information.",
"For example, an erroneous claim: France is in South America could be supported by evidence if it were rewritten as France is a republic .",
"However, the desired correction should instead state France is in Europe .",
"The choice of supervision for the error correction system influences the task decomposition.",
"For example, with full supervision, the system can be constructed with an information retrieval module and a sequence-to-sequence module that conditionally generates a correction given the claim and evidence.",
"However, large datasets of claims paired with corrections are not available.",
"The absence of full supervision requires that we distantly-supervise our systems using fact verification datasets, which are an abundant resource.",
"Fact verification datasets contain claims labeled with evidence but do not contain corrections.",
"With this resource, we propose a task decomposition that generated corrections by training models to reconstruct claims with masked tokens using retrieved evidence.",
"Test time Corrections are generated by a two-stage process, illustrated in Figure",
"2. Tokens from the claim, c , are first masked, yielding c , and then input to the corrector c (cid:48) = Corr ( c, E ( c )) .",
"The masker, c = Mask ( c, E ( c )) , replaces a subset of tokens in the claim with a blank placeholder, conditioned on E ( c ) .",
"Its purpose is to remove tokens that are salient to the claim being supported or refuted by the evidence.",
"Using the masked claim, c , the corrector replaces the blank placeholders with tokens conditionally generated using retrieved evidence.",
"To correct errors, evidence refuting a claim ( E ( c ) (cid:50) c ) conditions generation of a correction supported by it E ( c ) (cid:15) c (cid:48) .",
"This extends the protocol Shah et al. (2020) by conditioning both the masker and corrector with multiple retrieved evidence sentences, rather than a single gold factoid.",
"Training the corrector Similar to masked language modeling, the training objective is to generate the input claim c (cid:48) = c conditioned on the masked claim c and evidence E ( c ) .",
"By training the model to generate the input claim, we expect the model to generate the input claim only if it was in complete agreement with the evidence (as-suming the masking and the evidence are correct).",
"Otherwise, the generated correction will contain evidence pertinent to the correcting the masked claim, which enables us to generate corrections satisfying requirements R2 and R3.",
"Masker When applied to factual error correction, masking the tokens from the claim acts as a proxy to which tokens need to be removed to correct an error.",
"Parallels can be drawn between masking and generating token-level explanations.",
"We briefly summarize common approaches to generating explanations in Section 5.2.",
"We use GENRE (Cao et al., 2021) and Dense Passage Retrieval (Karpukhin et al., 2020) together to retrieve evidence for claims E ( c ) .",
"Both have shown success for a number of language understanding tasks over Wikipedia (Petroni et al., 2020).",
"GENRE is a pre-trained seq2seq model, trained to predict a Wikipedia page name for a claim.",
"DPR encodes fixed length passages from Wikipedia into vectors using a BERT encoder to build a static index.",
"At test-time, the claim is encoded and the most-similar passages are returned using an inner-product search.",
"We return the topk passages returned by DPR from pages predicted by GENRE.",
"At test time, the purpose of the masker is to selectively remove tokens that contribute to the factual errors within a claim.",
"We study how the choice of masker influences the quality of corrections.",
"This considers varying levels of access to model information and different run-time complexity.",
"Both the blackand white-box methods, outlined below, require querying a model trained to classify the veracity of claims given evidence whereas the the language model masker and baselines do not.",
"Black-box masker We evaluate perturbing the input to a classifier that is trained to predict the veracity of a claim given evidence.",
"We use LIME (Ribeiro et al., 2016), a diagnostic that trains a locally linear model to score the importance of input features (in our case, tokens in the claim) with respect to the predicted labels.",
"The model under test is a BERT classifier where evidence and the claim are concatenated in the input.",
"This is referred to as black-box because the model does not undergo modification and no information about internal values or states is exposed.",
"White-box masker In contrast, to obtain white-box model explanations, the model has undergone modification to expose internal information.",
"We use the Neutrality Masker from (Shah et al., 2020) to predict which tokens, when masked, are likely to cause a label flip from supports or refuted to not enough information.",
"This masker exposes encoded input of an ESIM classifier (Chen et al., 2017), and adds a linear classifier over the hidden states to predict per-token masking probability.",
"At test time, masks can be generated through a single query to the model (unlike LIME in the black-box masker which requires multiple queries to the model), however this requires an additional step to train, using predictions from the classifier as signal.",
"Language model masker We evaluate whether it is possible to generate masks without the need for a fact verification model.",
"We use a BERT pre-trained language model (Devlin et al., 2019) to measure the surprisal of tokens in the claim.",
"Our intuition is to identify tokens which introduce misinformation under the hypothesis that the world knowledge (Petroni et al., 2019) captured in retraining would assign lower probabilities to tokens contradictory to the world state.",
"This language model has no additional task-specific fine-tuning.",
"We independently predict the cross-entropy for each token under a masked language modelling objective using BERT and return the top-k tokens.",
"Baselines We additionally consider two simple baseline maskers: random masking of a subset of tokens and also a heuristic method of masking tokens which are not in common between the claim and the retrieved evidence.",
"We train an encoder-decoder transformer model to generate corrections from masked claims and",
"evidence.",
"Our model uses a pre-trained T5 transformer (Raffel et al., 2020) which we fine-tune with the distant supervision protocol described in Section 4.1.",
"This model jointly encodes the masked claim and evidence by concatenating these two inputs in the input.",
"We also compare against a baseline model from a related task of fact guided sentence modification (Shah et al., 2020) which uses a pointer generator network (See et al., 2017).",
"Unlike our model, which captures long-range dependencies between claim and evidence through the transformer self-attention (Vaswani et al., 2017), the baseline independently encodes the evidence and masked claim using LSTMs (Hochreiter and Schmidhuber, 1997) before decoding using a pointer-copy mechanism.",
"In order to evaluate the impact of conditioning on evidence, we decode tokens from masked claims using a language model without fine-tuning or conditioning, similar to the Language Models as Knowledge Bases hypothesis introduced by Petroni et al. (2019).",
"This would consider correcting claims using the implicit knowledge stored within the model parameters rather than using external evidence.",
"We make use of FEVER (Thorne et al., 2018), a commonly used fact verification dataset, as the basis for our experiments.",
"FEVER is one of the largest resources consisting of claims paired with evidence from Wikipedia.",
"There are 185k instances with corresponding evidence sentences and a label as to whether the claim is SUPPORTED or REFUTED by it.",
"Claims where no information could be found are labeled as NOTENOUGHINFO .",
"To comprehensively evaluate the corrections generated manual evaluation is required.",
"However, this is expensive and not suitable for system development and hyper-parameter optimization.",
"To automate system evaluation or to train a seq2seq model with full supervision, a reference gold standard correction is also required.",
"For this, we release annotations from the FEVER shared task as follows.",
"The claims in FEVER were generated in a two-stage process: annotators extracted facts from Wikipedia and then performed meaning altering perturbations called mutations over these extracted facts.",
"Each claim was independently labeled using retrieved evidence.",
"Our reference corrections are the unmodified facts extracted from Wikipedia.",
"reported in Table 1.",
"The training and test splits are disjoint by entity.",
"The additional hidden shared task test set was not used.",
"The claims labelled as NOTENOUGHINFO .",
"are used for training fact verification classifiers, but they will not be used for training the error correction systems in this paper as there is no labeled evidence to make corrections from.",
"For completeness, we also release these unused NOTENOUGHINFO instances, as they have claims paired unmodified extracted facts (21934 training, 1870 development and 2037 test).",
"While it's convenient to use an automatic metric during development, these metrics compute token overlap against a single reference sentence and cannot capture the nuances required to assess the veracity of the generated corrections against evidence.",
"Thus, our primary evaluation will use human raters to label whether the model predictions meet the task requirements stated in Section",
"3. Human raters are asked three questions about system outputs to assess whether the corrections meet the requirements of intelligibility, supported by evidence, and error correction introduced in Section",
"3. For the first 2 requirements, the question has a binary answer.",
"For the third requirement of error correction, the question has 3 answer choices: (1) the information content w.r.t. the evidence improved, (2) information unrelated to the claim was added (i.e. the claim was ignored), (3) no correction was needed (i.e. the claim was already supported by evidence).",
"The raters were shown each question in this sequence without knowledge of which system generated the correction.",
"Negative answers to a question automatically assigned negative answers to subsequent ones (prescribing that an unintelligible sentence could not contain a fact supported by evidence or introduce a correction).",
"20% of the tasks are assigned to two raters to measure inter-annotator agreement.",
"We used 4 expert participants from our lab (none of them co-authors of the paper) who were familiar with fact verification, but not with error correction.",
"Responses were calibrated using a pilot study on the validation set.",
"For automated evaluation, we use SARI (Xu et al., 2016) which is a metric used for sentence sim-plification.",
"SARI considers ngrams retained from the source as well added or deleted ngrams through comparison against a reference sentence.",
"We additionally report BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) to indicate precision and recall of the correction.",
"In Section 9, we report correlation of automated metrics against our manual evaluation.",
"T5 Masker-Corrector We fine-tuned the T5-base pre-trained models released by HuggingFace (Wolf et al., 2020).",
"The number of training epochs and learning rate was selected through optimizing the overall SARI score.",
"The search space for learning rate was { 10 5 , 5 10 5 , 10 4 , 5 10 4 } .",
"We used 5 10 5 for all experiments.",
"We found diminishing returns in SARI after 4 epochs and stopped training.",
"Fully Supervised Ceiling We use this model to estimate the ceiling performance of a factual error correction system (assuming a reasonable amount of training data is available) that other methods can be compared against.",
"We fine-tune a T5-base model with supervision of the correction (see Section 6), using the same hyper-parameter choices as the T5 Masker-Corrector.",
"Automated Scoring A single reference sentence from the FEVER dataset is used for automated scoring.",
"We consider BLEU, ROUGE, and SARI.",
"SARI considers the F1 of added tokens, F1 of kept tokens, precision of deletions, and the mean of these 3 scores (denoted final ).",
"We use code made available by Xu et al. (2016).",
"Evidence Retrieval We use the Facebook implementation of DPR (Karpukhin et al., 2020) without fine-tuning and constructed an index over the Wikipedia version released with FEVER (Thorne et al., 2018), chunked into passages of 50 tokens.",
"For GENRE, the original authors' implementation was used.",
"We selected the top matching 2 passages.",
"This resulted in the highest scores on the downstream corrections; SARI was lower when using 1 or 3 passages.",
"Maskers For the white-box masker, we use the implementation provided by Shah et al. (2020) applied to our dataset retaining original hyper-parameters trained on FEVER.",
"For the black-box masker, we use the LIME implementation from (Ribeiro et al., 2016) to probe a BERT classifier (Devlin et al., 2019) fine-tuned on FEVER.",
"For the LM and random baseline maskers, where the number of masks was tunable, we masked 50% of the tokens, which was similar to the number of tokens masked by the blackand white-box maskers.",
"Language Model as Correctors?",
"We greedily decode masked tokens using a BERT-base-cased language model using the HuggingFace implementation (Wolf et al., 2020) without fine-tuning.",
"Comparison to Previous Work For comparison to previous work, we use the dual-encoder pointer network implementation from (Shah et al., 2020), retaining the original hyper-parameter choices.",
"We first report results from a manual evaluation, assessing the requirements that corrections are intelligible, supported by evidence, and improve the factuality of the claim, as listed in Section",
"3. Our evaluation considers a sample of 200 instances per system.",
"We report the results in Table",
"2. For inter-annotator agreement control, 20% of instances were annotated by two annotators: the Cohen's scores for the 3 questions are 0.92 for intelligible, 0.92 for supported, and 0.86 for corrected.",
"When using retrieved evidence, the white-box masker generated no masks for 41% of instances.",
"Without masked tokens, the T5 corrector copied the input claim to the output.",
"This fits the assumption that, if the claim is already supported well by evidence, no correction is required.",
"The fully supervised models had the highest rate of satisfactory corrections that improved the factuality of the claim (requirement 3), indicating a performance ceiling for the distantly-supervised models.",
"Incorporating retrieved evidence in these supervised models (rather than gold) reduced the number of corrections supported by evidence from 88.9% to 64.7% and the number of satisfactory corrections from 68.9% to 48.9% showing the challenges of incorporating (possibly noisy) retrieved evidence when generating the corrections.",
"to train the corrector to the masker used at test time.",
"We observed that training the corrector with random masks yielded both a higher rate of satisfactory corrections and corrections supported by evidence when using either the black-box or heuristic masker at test time.",
"We further evaluate other maskers with automated metrics in Section 9.2.",
"Using a heuristic masker at test time, which removed tokens from the claim not present in the evidence, generated more claims meeting the supported and corrected requirements than masks generated by querying a fact verification model (both black-box and white-box).",
"An analysis of the masker's influence on the corrections is provided in Section 9.1.",
"The two baseline systems, Dual Encoder M+C, based on Shah et al. (2020), and a pre-trained BERT language model, generated corrections that were intelligible or supported by evidence at a lower rate than the aforementioned models, further discussed in Sections 9.3 and 9.4.",
"We report the correlation between automated scoring metrics and our manual evaluation in Table",
"3. The KEEP component of SARI, which measures the F1 of n-grams from the claim retained in the output, had the highest correlation with all three requirements.",
"Overly aggressive maskers which remove too much content from the claim can result in unintelligible outputs, or corrections unrelated to the claim.",
"ROUGE2, which measures the recall of bigrams in the correction w.r.t. the reference, exhibited reasonable correlation to the manual evaluation against the supported and corrected requirements, however does not correlate as well with intelligibility.",
"The ADD and DELETE components of SARI provide further information but do not correlate as strongly with the human judgements.",
"Having only one reference correction reduces the utility of precision-oriented metrics, like BLEU, as valid corrections can differ from the reference.",
"When training the corrector with the same masker that is used at test time, both the heuristic and black-box maskers yielded comparable scores under human evaluation.",
"Inspection of SARI breakdown in Table 4 indicates that more tokens were kept when using the heuristic masker (Keep=.651) whereas the black box model was more aggressive in masking, resulting in less information from the claim being retained (Keep=.594).",
"This correlated well with human judgements as more information retained gives a richer context for generating the correction and prevents erasure of claims already (partially) supported by the evidence.",
"Both the black-box (LIME) and white-box (the masker from Shah et al. (2020)) methods require querying a veracity classifier to generate the masks.",
"Using retrieved evidence for the veracity classifier, which was used to generate the masks in conjunction with these two methods, had a negative impact on most components of the SARI score.",
"For the black-box masker, using retrieved evidence reduced the number of masked tokens from an average of 4 .",
"7 per claim to 3 .",
"9 .",
"Whereas the number of masked tokens by the white-box masker remained unchanged at 4 .",
"7 (approximately 50% of number of tokens in the claim).",
"Most notably, the white-box method of mask generation (row 4 in Table 4) did not to generate masks for 41% of instances when using retrieved evidence, whereas all instances had at least one mask when using gold evidence an artefact of the noise introduced by retrieval.",
"Generating large quantities of masked training data through querying a model, such as with the black-box model explanation techniques, can be computationally expensive.",
"In contrast, random masks can be generated without querying a model.",
"Using a corrector trained on random masks resulted in higher quality outputs at test time when paired the black-box and heuristic maskers.",
"Training with random masks promotes good exploration of the task.",
"In contrast, while the black-box and heuristic approaches worked well during testing, correctors trained on these maskers generated worse outputs due to the limited exploration of the task space.",
"Additionally, generating training data using the black-and white-box methods requires making predictions using the model's training data which may result in different outcomes to making predictions on unseen test data.",
"Previous work uses a dual encoder pointer network (Shah et al., 2020) to make corrections, reported in Table 6.",
"The corrector tended to copy portions of claim rather than correct it, resulting in a SARI KEEP score of .452 which is lower than the T5 model using the same white-box masker (Table 4).",
"Human evaluation considered these corrections mostly unintelligible, even when using gold evidence (Table 2).",
"This was especially the case for rarer entities.",
"Hyper-parameter tuning of the cor-rector's coverage ratio, as suggested by the authors, did not yield improvements.",
"With the exception of the heuristic masker, using a pre-trained language model, without fine-tuning, to correct claims resulted in low SARI scores (Ta-ble 7).",
"Without conditioning on the evidence, the correction is not related to the claim or supported by evidence to verify the claim, which is indicated by the low SARI Add scores which consider the precision of the added tokens.",
"As these maskers deleted most tokens, retaining only stop-words, decoding most likely tokens without a prompt or context tokens resulted in unintelligible outputs.",
"For the heuristic masker, more content words were retained yielding more intelligible outputs.",
"However, these were not always supported by evidence, indicated in the human evaluation in Table",
"2. Masker SARI Score Keep Delete Add Final Masked LM .360 .472 .019 .289 Heuristic (IR) .",
"In this section we discuss the following issues which were present in all master-corrector systems:",
"Over-erasure In some instances, the masker removed most or all of the non-stopword tokens from the claim.",
"This resulted in the original meaning of the claim being erased.",
"Without this information the corrector could not reconstruct the claim, resulting in corrections that were unrelated to the input claim.",
"This issue was most prevalent for the black-box masker, where 15% of instances had more than 5 consecutive tokens masked and 32% of instances had 4 consecutive tokens masked.",
"In contrast, the heuristic masker, which identifies the tokens not present in the retrieved evidence had 5 consecutive tokens masked for 3% of instances and 4 consecutive tokens masked for 9% of instances.",
"While, in some cases, appropriate corrections could be made despite the aggressive masking (e.g. the claim Exit the King is by man[sic]. was fully masked, but corrected to include the author's name), others were re-written focusing on a different fact, e.g. a claim about the length of reign of Maria Theresa was rewritten to be about her date of birth.",
"Incorrect masking When the erroneous tokens in a claim were not masked, the corrector would generate outputs not supported by evidence.",
"For example the following claim, which has an incorrect year, was masked but retaining the error: Ghost, the film was released in 1994 as [MASK] , [MASK] [MASK] [MASK] [MASK] [MASK] in 1994.",
"Even with suitable retrieved evidence, indicating the release year is 1990, no appropriate correction could be made.",
"Inadequate evidence retrieval Where the evidence retrieved was related, but not specifically supporting or refuting the claim, the generated corrections were vague: the claim Poldark aired on HBO was corrected to Poldark premiered on TV as the evidence lacked the name of the correct TV station.",
"Similarly, where incorrect masks were made, additional retrieval retrieval may be required to prevent the corrector from hallucinating information to cover the knowledge missing from the evidence.",
"For example, the name of the TV show was masked in the claim Two and a half men starred Jamie Fox[sic], but as no mention of Jamie Fox was present in the evidence, the model hallucinated a different TV show name.",
"Going beyond simply identifying errors, factual error correction presents a number of challenges for information retrieval, fact verification and abstractive summarization communities alike.",
"In this paper, we demonstrated that the task can be performed with distant supervision in the form of claims labeled by evidence supporting or refuting them.",
"However, there are a number of outstanding challenges that must be addressed.",
"The data we used from the FEVER task was re-purposed to evaluate whether systems can undo mutations introduced by human annotators and may not be representative of the range of factual errors that would be present in real-world documents.",
"While some automated metrics correlated well with human judgements, future work should consider how automated scoring can be better used to discriminate the adequacy of the generated corrections going beyond similarity to the reference sentence.",
"From a modelling perspective, the masks strongly influenced the corrector and further work is required to generate masks that result in better corrections.",
"We observed where masks mismatched the evidence, the correction was vague, hallucinated or did not correct the factual errors in the claim.",
"This could be addressed through joint training of both components to enable them to avoid error propagation from masking to correction.",
"The authors wish to thank: Tal Schuster for his helpful comments and feedback; Nicola De Cao for providing the GENRE predictions for FEVER; Amrith Krishna, Guy Aglionby, Rami Aly and Zhi-jiang Guo for manual evaluation of the model predictions.",
"This research was supported by donation of compute resources from Google Cloud.",
"James Thorne is supported by an Amazon Alexa Graduate Research Fellowship.",
"Andreas Vlachos is supported by the ERC grant AVeriTeC (GA 865958).",
"Our experiments were performed on publicly available data about common facts from Wikipedia.",
"These data are released under a creative-commons license.",
"The expert raters from our lab who manually reviewed the generated instances were volunteers and were compensated through quid-pro-quo help on their own projects.",
"The intended use of this project is to help explain reasoning using evidence, going beyond single-label classification.",
"This adds an additional safeguard, making the decision process more transparent as poor predictions by our model expose limitations that would be hidden by classification.",
"Our data is synthetic in nature and is biased towards synthetic facts from popular entities.",
"Application to political or scientific domains would require additional work.",
"Misinformation about populations that are under-represented in our data may not be accurately identified or corrected without further mitigation.",
"One positive finding in our paper was that some of biases perpetuated in the hallucinations of language models were mitigated when conditioning the generation on retrieved evidence.",
"Model fine-tuning took approximately 2 hours per experiment on a single P100 GPU.",
"Generating LIME explanations of the training dataset took approximately one day motivating our experiments that used models trained on random or heuristic maskers which required fewer resources by several orders of magnitude."
] |
[
"method",
"abstain",
"objective",
"result",
"result",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"result",
"result",
"objective",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"other",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain"
] |
[
"While idiosyncrasies of the Chinese classifier system have been a richly studied topic among linguists (Adams and Conklin, 1973; Erbaugh, 1986; Lakoff, 1986), not much work has been done to quantify them with statistical methods.",
"In this paper, we introduce an information-theoretic approach to measuring idiosyncrasy; we examine how much the uncertainty in Mandarin Chinese classifiers can be reduced by knowing semantic information about the nouns that the classifiers modify.",
"Using the empirical distribution of classifiers from the parsed Chinese Gigaword corpus (Graff et al., 2005), we compute the mutual information (in bits) between the distribution over classifiers and distributions over other linguistic quantities.",
"We investigate whether semantic classes of nouns and adjectives differ in how much they reduce uncertainty in classifier choice, and find that it is not fully idiosyncratic; while there are no obvious trends for the majority of semantic classes, shape nouns reduce uncertainty in classifier choice the most.",
"Many of the world's languages make use of numeral classifiers (Aikhenvald, 2000).",
"While theoretical debate still rages on the function of numeral classifiers (Krifka, 1995; Ahrens and Huang, 1996; Cheng et al., 1998; Chierchia, 1998; Li, 2000; Nis-bett, 2004; Bale and Coon, 2014), it is generally accepted that they need to be present for nouns to be modified by numerals, quantifiers, demonstratives, or other qualifiers (Li and Thompson, 1981, 104).",
"In Mandarin Chinese, for instance, the phrase one person translates as ( y g ` e r en ); the classifier ( g ` e ) has no clear translation in English, yet, nevertheless, it is necessary to place it between the numeral ( y ) and the word for person ( ren ).",
"There are hundreds of numeral classifiers in the Mandarin lexicon (Po-Ching and Rimmington, Classifier Pnyn Semantic Class g`e objects, general-purpose ji`an matters tou domesticated animals zh general animals zhang flat objects tiao long, narrow objects xi`ang items, projects d`ao orders, roads, projections p horses, cloth d`un meals Table 1: Examples of Mandarin classifiers.",
"2015, Table 1 gives some canonical examples), and classifier choices are often argued to be based on inherent, possibly universal, semantic properties associated with the noun, such as shape (Kuo and Sera, 2009; Zhang and Jiang, 2016).",
"Indeed, in a summary article, Tai (1994) writes: Chinese classifier systems are cognitively based, rather than arbitrary systems of classification.",
"If classifier choice were solely based on conceptual features of a given noun, then we might expect it to be nearly determinatelike gender-marking on nouns in German, Slavic, or Romance languages (Erbaugh, 1986, 400)and perhaps even fixed for all of a given noun's synonyms.",
"However, selecting which classifers go with nouns in practice is an idiosyncratic process, often with several grammatical options (see Table 2 for two examples).",
"Moreover, knowing what a noun means doesn't always mean you can guess the classifier.",
"For example, most nouns referring to animals, such as ( luu , donkey ) or ( yang , goat ), use the classifier ( zh ).",
"However, horses Classifier p ( C | N = ) p ( C | N = ) ( w`ei ) 0.4838 0.0058 ( mng ) 0.3586 0.0088 ( g`e ) 0.0205 0.0486 ( p ) 0.0128 0.0060 ( xi`ang ) 0.0063 0.4077 ( q ) 0.0002 0.2570 Everything else 0.1178 0.2661 Table 2: Empirical distribution of selected classifiers over two nouns: ( ren sh`, people ) and ( gong cheng, project ).",
"cannot use ( zh ), despite being semantically similar to goats and donkeys, and instead must appear with ( p ).",
"Conversely, knowing which particular subset of noun meaning is reflected in the classifer also doesn't mean that you can use that classifier with any noun that seems to have the right semantics.",
"For example, classifier ( tiao ) can be used with nouns referring to long and narrow objects, like rivers, snakes, fish, pants, and certain breeds of dogsbut never cats, regardless of how long and narrow they might be!",
"In general, classifiers carve up the semantic space in a very idiosyncratic manner that is neither fully arbitrary, nor fully predictable from semantic features.",
"Given this, we can ask: precisely how idiosyncratic is the Mandarin Chinese classifier system?",
"For a given noun, how predictable is the set of classifiers that can be grammatically employed?",
"For instance, had we not known that the Mandarin word for horse ( ma ) predominantly takes the classifier ( p ), how likely would we have been to guess it over the much more common animal classifier ( zh )?",
"Is it more important to know that a noun is ( ma ) or simply that the noun is an animal noun?",
"We address these questions by computing how much the uncertainty in the distribution over classifiers can be reduced by knowing information about nouns and noun semantics.",
"We quantify this notion of classifier idiosyncrasy by calculating the mutual information between classifiers and nouns, and also between classifiers and several sets that are relevant to noun meaning (i.e., categories of noun senses, sets of noun synonyms, adjectives, and categories of adjective senses).",
"Our results yield concrete, quantitative measures of idiosyncrasy in bits, that can supplement existing hand-annotated, intuition-based approaches that organize Mandarin classifiers into an ontology.",
"Why investigate the idiosyncrasy of the Mandarin Chinese classifier system?",
"How idiosyncratic or predictable natural language is has captivated researchers since Shannon (1951) originally proposed the question in the context of printed English text.",
"Indeed, looking at predictability directly relates to the complexity of languagea fundamental question in linguistics (Newmeyer and Preston, 2014; Dingemanse et al., 2015)which has also been claimed to have consequences learnabil-ity and processing.",
"For example, how hard it is for a learner to master irregularity, say, in the English past tense (Rumelhart and McClelland, 1987; Pinker and Prince, 1994; Pinker and Ullman, 2002; Kirov and Cotterell, 2018) might be affected by predictability, and highly predictable noun-adjacent words, such as gender affixes in German and prenominal adjectives in English, are also shown to confer online processing advantages (Dye et al., 2016, 2017, 2018).",
"Within the Chinese classifier system itself, the very common, general-purpose classifier ( g`e ) is acquired by children earlier than rarer, more semantically rich ones (Hu, 1993).",
"General classifiers are also found to occur more often in corpora with nouns that are less predictable in context (i.e., nouns with high surprisal ; Hale 2001) (Zhan and Levy, 2018), providing initial evidence that predictability likely plays a role in classifier-noun pairing more generally.",
"Furthermore, providing classifiers improves participants' recall of nouns in laboratory experiments (Zhang and Schmitt, 1998; Gao and Malt, 2009) (but see Huang and Chen 2014)but, it isn't known whether classifiers do so by modulating noun predictability.",
"We take an information-theoretic approach to statistically quantify the idiosyncrasy of the Mandarin Chinese classifier system, and measure the uncertainty (entropy) reductionor mutual information (MI) (Cover and Thomas, 2012)between classifiers and other linguistic quantities, like nouns or adjectives.",
"Intuitively, MI lets us directly measure classifier idiosyncrasy, because it tells us how much information (in bits) about a classifier we can get by observing another linguistic quantity.",
"If classifiers were completely independent from other quantities, knowing them would give us no information about classifier choice.",
"Notation.",
"Let C be a classifier-valued random variable with range C , the set of Mandarin Chinese classifiers.",
"Let X be a second random variable, which models a second linguistic quantity, with range X .",
"Mutual information (MI) is defined as I ( C ; X ) H ( C ) H ( C | X ) (1a) = (cid:88) c C ,x X p ( c, x ) log p ( c, x ) p ( c ) p ( x ) (1b) Let N and A denote the sets of nouns and adjectives, respectively, with N and A be nounand adjective-valued random variables, respectively.",
"Let N i and A i denote the sets of nouns and adjectives in i th SemCor supersense category for nouns (Tsvetkov et al., 2015) and adjectives (Tsvetkov et al., 2014), respectively, with their random variables being N i and A i , respectively.",
"Let S be the set of all English WordNet (Miller, 1998) senses of nouns, with S be the WordNet sense-valued random variable.",
"Given the formula above and any choice for X { N, A, N i , A i , S } , we can calculate the mutual information between classifiers and other relevant linguistic quantities.",
"Mutual information between classifiers and nouns ( I ( C ; N )) shows how much uncertainty (i.e., entropy) in classifiers can be reduced once we know the noun, and vice versa.",
"Because only a few classifiers are suitable to modify a given noun (again, see Table 2) and the entropy of classifiers for a given noun is predicted to be close to zero, MI between classifiers and nouns is expected to be high.",
"The distribution over adjectives that modify a noun in a large corpus give us a language-internal peek into a word's lexical semantics.",
"Moreover, adjectives have been found to increase the predictability of nouns (Dye et al., 2018), so we ask whether they might affect classifier predictability too.",
"We compute MI between classifiers and adjectives ( I ( C ; A ) ) that modify the same nouns to investigate their relationship.",
"If both adjectives and classifiers track comparable portions of the noun's semantics, we expect I ( C ; A ) to be significantly greater than zero, which implies mutual dependence between classifier C and adjective A .",
"To uncover which nouns are able to reduce the uncertainty in classifiers more, we divide them",
"into 26 SemCor supersense categories (Tsvetkov et al., 2015), and then compute I ( C ; N i ) ( i { 1 , 2 , ..., 26 } ) for each supersense category.",
"The supersense categories (e.g., animal, plant, person, artifact , etc.) provide a semantic classification system for English nouns.",
"Since there are no known supersense categories for Mandarin, we need to translate Chinese nouns into English to perform our analysis.",
"We use SemCor supersense categories instead of WordNet hypernyms because different basic levels for each noun make it difficult to determine the correct category for each noun.",
"We translated and divided the adjectives into 12 supersense categories (Tsvetkov et al., 2014), and compute mutual information I ( C ; A i ) ( i { 1 , 2 , ..., 12 } ) for each category separately to determine which categories have more mutual dependence on classifiers.",
"Adjective supersenses are defined as categories describing certain properties of nouns.",
"For example, adjectives in MIND category describe intelligence and awareness, while those in the PERCEPTION category focus on, e.g., color, brightness, and taste.",
"Examining the distribution over adjectives is a language-specific measure of noun meaning, albeit an imperfect one, because only certain adjectives modify any given noun.",
"We also compute the mutual information I ( C ; S ) between classifiers and nouns' WordNet (Miller, 1998) synonym sets ( synsets ), assuming that each synset is independent.",
"For nouns with multiple synsets, we assume that all synsets are equally probable for simplicity.",
"If classifiers are fully semantically determined, then knowing a noun's synsets should enable one to know the appropriate classifier(s), resulting in high MI.",
"If classifiers are largely idiosyncratic, then noun synsets should have lower MI with classifiers.",
"We do not use WordNet to attempt to capture word polysemy here.",
"Data Provenance.",
"We apply an existing neural Mandarin word segmenter (Cai et al., 2017) to the Chinese Gigaword corpus (Graff et al., 2005), and then feed the segmented corpus to a neural dependency parser, using Google's pretrained Parsey Uni-H ( C ) H ( C | N ) I ( C ; N ) H ( C | S ) I ( C ; S ) H ( C | A ) I ( C ; A ) 5.61 0.66 4.95 4.14 1.47 3.53 2.08 Table 3: Mutual information between classifiers and nouns I ( C ; N ) , noun senses I ( C ; S ) , and adjectives I ( C ; A ) , is compared to their entropies.",
"versal model on Mandarin.",
"1 The model is trained on Universal Dependencies datasets v1.3.",
"2 We extract classifier-noun pairs and adjective-classifier-noun triples from sentences, where the adjective and the classifier modify the same nounthis is easily determined from the parse.",
"We also record the tuple counts, and use them to compute an empirical distribution over classifiers that modify nouns, and noun-adjective pairs, respectively.",
"Data Preprocessing.",
"Since no annotated supersense list exists for Mandarin, we first use CC-CEDICT 3 as a Mandarin Chinese-to-English dictionary to translate nouns and adjectives into English.",
"Acknowledging that translating might introduce noise, we subsequently categorize our words into different senses using the SemCor supersense data for nouns (Miller et al., 1993; Tsvetkov et al., 2015), and adjectives (Tsvetkov et al., 2014).",
"After that, we calculate the mutual information under each noun, and adjective supersense.",
"Modeling Assumptions.",
"As this contribution is the first to investigate classifier predictability, we make several simplifying assumptions.",
"Extracting distributions over classifiers from a large corpus, as we do, ignores sentential context, which means we ignore the fact that some nouns (i.e., relational nouns, like mama , Mom ) are more likely to be found in verb frames or other constructions where classifiers are not needed.",
"We also ignore singular-plural, which might affect classifier choice, and the mass-count distinction (Cheng et al., 1998; Bale and Coon, 2014), to the extent that it is not encoded in noun superset categories (e.g., substance includes mass nouns).",
"We also assume that every classifier-noun or classifier-adjective pairing we extract is equally acceptable to native speakers.",
"However, it's possible that native speakers differ in either their knowledge of classifier-noun distributions or confidence 1 https://github.com/tensorflow/models/ blob/master/research/syntaxnet/g3doc/universal.md 2 https://universaldependencies.org/ 3 www.mdbg.net/chinese/dictionary in particular combinations.",
"Whether and how such human knowledge interacts with our calculations would be an interesting future avenue.",
"Table 3 shows MI between classifiers and other linguistic quantities.",
"As we can see, I ( C ; N ) > I ( C ; A ) > I ( C ; S ) .",
"As expected, knowing the noun greatly reduces classifier uncertainty; the noun and classifier have high MI ( 4 . 95 bits).",
"Classifier MI with noun synsets ( 1 . 47 bits) is not comparable to with nouns ( 4 . 95 bits), suggesting that knowing a synset does not greatly reduce classifier uncertainty, leaving > 3 / 5 of the entropy unaccounted for.",
"We also see that adjectives ( 2 . 08 bits) reduce the uncertainty in classifiers more than noun synsets ( 1 . 47 bits), but less than nouns ( 4 . 95 bits).",
"Noun supersense results are in Figure",
"1. Natural categories are helpful, but are far from completely predictive of the classifier distribution: knowing that a noun is a plant helps, but cannot account for about 1 / 3 of the original entropy for the distribution over classifiers, and knowing that a noun is a location leaves > 1 / 2 unexplained.",
"The three supersenses with highest I ( C ; N ) are body , artifact , and shape .",
"Of particular interest is the shape category.",
"Knowing that a noun refers to a shape (e.g., ; ji ao d ` u , angle ), makes the choice of classifier relatively predictable.",
"This result sheds new light on psychological findings that Mandarin speakers are more likely to classify words as similar based on shape than English speakers (Kuo and Sera, 2009), by uncovering a possible role for information structure in shape-based choice.",
"It also accords with Chien et al. (2003) and Li et al. (2010, 216) that show that children as young as three know classifiers often delineate categories of objects with similar shapes.",
"Adjective supersense results are in Figure",
"2. Interestingly, the top three senses that have the highest mutual information between their adjectives and classifiers MIND , BODY (constitution, appearance) and PERCEPTION are all involved with people's subjective views.",
"With respect to Kuo and Sera (2009), adjectives from the SPATIAL sense pick out shape nouns in our results.",
"Although it does not make it into the top three, MI for the SPATIAL sense is still significant.",
"While classifier choice is known to be idiosyncratic, no extant study has precisely quantified this.",
"To do so, we measure the mutual information between classifiers and other linguistic quantities, and find that classifiers are highly mutually dependent on nouns, but are less mutually dependent on adjectives and noun synsets.",
"Furthermore, knowing which noun or adjective supersense a word comes from helps, often significantly, but still leaves much of the original entropy in the classifier distribution unexplained, providing quantitative support for the notion that classifier choice is largely idiosyncratic.",
"Although the amount of mutual dependence is highly variable across the semantic Figure 2: Mutual information between classifiers and adjectives (dark blue), and classifier entropy (light & dark blue) plotted with H ( C | A ) = H ( C ) I ( C ; A ) (light blue) decreasing from left.",
"classes we investigate, we find that knowing a noun refers to a shape reduces uncertainty in classifier choice more than knowing it falls into any other semantic class, arguing for a role for information structure in Mandarin speakers' reliance on shape (Kuo and Sera, 2009).",
"This result might have implications for second language pedagogy, adducing additional, tentative evidence in favor of collocational approaches to teaching classifiers (Zhang and Lu, 2013) that encourages memorizing classifiers and nouns together.",
"Investigating classifiers might also provide cognitive scientific insights into conceptual categorization (Lakoff, 1986)often considered crucial for language use (Ungerer, 1996; Taylor, 2002; Croft and Cruse, 2004).",
"Studies like this one opens up avenues for comparisons with other phenomena long argued to be idiosyncratic, such as grammatical gender, or declension class.",
"This research benefited from generous financial support from a Bloomberg Data Science Fellowship to Hongyuan Mei and a Facebook Fellowship to Ryan Cotterell.",
"We thank Meilin Zhan, Roger Levy, Geraldine Walther, Arya McCarthy, Ekaterina Vylomova, Sebastian Mielke, Katharina Kann, Jason Eisner, Jacob Eisenstein, and anonymous reviewers (NAACL 2019) for their comments."
] |
[
"abstain",
"method",
"method",
"objective",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"We study the interpretability issue of task-oriented dialogue systems in this paper.",
"Previously, most neural-based task-oriented dialogue systems employ an implicit reasoning strategy that makes the model predictions uninterpretable to humans.",
"To obtain a transparent reasoning process, we introduce neuro-symbolic to perform explicit reasoning that justifies model decisions by reasoning chains.",
"Since deriving reasoning chains requires multihop reasoning for task-oriented dialogues, existing neuro-symbolic approaches would induce error propagation due to the one-phase design.",
"To overcome this, we propose a two-phase approach that consists of a hypothesis generator and a reasoner.",
"We first obtain multiple hypotheses, i.e., potential operations to perform the desired task, through the hypothesis generator.",
"Each hypothesis is then verified by the reasoner, and the valid one is selected to conduct the final prediction.",
"The whole system is trained by exploiting raw textual dialogues without using any reasoning chain annotations.",
"Experimental studies on two public benchmark datasets demonstrate that the proposed approach not only achieves better results, but also introduces an interpretable decision process.",
"Code and data: https://github.",
"com/shiquanyang/NS-Dial .",
"Neural task-oriented dialogue systems have enjoyed a rapid progress recently (Peng et al., 2020; Hosseini-Asl et al., 2020; Wu et al., 2020), achieving strong empirical results on various benchmark datasets such as SMD (Eric et al., 2017) and MultiWOZ (Budzianowski et al., 2018).",
"However, most existing approaches suffer from the lack of explainability due to the black-box nature of neural networks (Doshi-Velez and Kim, 2017; Lipton, 2018; Bommasani et al., 2021), which may hurt the trustworthiness between the users and the system.",
"For External Knowledge Base (KB) Cityroom Price_range Moderate Chadstone Located_in Leichhardt Cityroom Next_to Palm_Lawn Gonville_Hotel Price_range Expensive Palm_Lawn Located_in Chadstone Gonville_Hotel Located_in Moorabbin User : Can you recommend me a hotel located in Leichhardt ?",
"instance, in Figure 1, a user is asking for a hotel recommendation at a given location.",
"The system performs reasoning on a knowledge base (KB) and incorporates the correct entity in the response.",
"However, when the system fails to provide the correct entities, it would be difficult for humans to trace back the issues and debug the errors due to its intrinsic implicit reasoning nature.",
"As a result, such system cannot be sufficiently trusted to be deployed in real-world products.",
"To achieve trustworthy dialogue reasoning, we aim to develop an interpretable KB reasoning as it's crucial for not only providing useful information (e.g., locations in Figure 1) to users, but also essential for communicating options and selecting target entities.",
"Without interpretability, it's difficult for users to readily trust the reasoning process and the returned entities.",
"To tackle this challenge, we present a novel N euroS ymbolic Dial ogue framework ( NS-Dial ) which combines representation capacities of neural networks and explicit reasoning nature of symbolic approaches (e.g., rule-based expert systems).",
"Existing neuro-symbolic approaches (Vedantam et al., 4918 2019; Chen et al., 2020) mostly employ a one-phase procedure where a tree-structured program composed of pre-defined human interpretable neural modules (e.g., attention and classification modules in Neural Module Networks (Andreas et al., 2016)) is generated to execute to obtain the final predictions.",
"However, since the KB reasoning task involves a reasoning process spanning over multiple triplets in a diverse and large-scale KB, only generating and following a single program (i.e., a reasoning chain formed by KB triplets) is prone to error propagation where a mistake in one step could lead to a failure of the subsequent reasoning process and may result in sub-optimal performances.",
"To address this, we propose a two-phase procedure to alleviate the effects of error propagation by first generating and then verifying multiple hypotheses.",
"Here, a hypothesis is in the form of a triplet containing an entity mentioned in dialogue context and an entity within KB, and their corresponding relation.",
"The valid (i.e., correct) hypothesis is the one that contains the entity mentioned in the ground-truth response.",
"Once we obtain multiple hypothesis candidates during the generation phase, we employ a reasoning engine for verifying those hypotheses.",
"For instance in Figure 1, given the user query Can you recommend me a hotel located in Leichhardt? , in order to find the valid hypothesis, the hypothesis generator obtains multiple candidates e.g., [Cityroom, Located_in, Leichhardt] and [Gonville_Hotel, Located_in, Leichhardt] .",
"The reasoning engine will then construct proof trees to verify them, e.g., for the first hypothesis [Cityroom, Located_in, Leichhardt] , it can be verified with the following reasoning chain in the KB: [Cityroom, Next_to, Palm_Lawn] [Palm_Lawn, Located_in, Chadstone] [Chadstone, Located_in, Leichhardt] .",
"The whole framework is trained end-to-end using raw dialogues and thus does not require additional intermediate labels for either the hypothesis generation or verification modules.",
"To summarize, our contributions are as follows: We introduce a novel neuro-symbolic framework for interpretable KB reasoning in task-oriented dialogue systems.",
"We propose a two-phase generating-and-verifying approach which generates multiple hypotheses and verifies them via reasoning chains to mitigate the error-propagation issue.",
"two benchmark datasets to verify the effectiveness of our proposed model.",
"By analyzing the generated hypotheses and the verifications, we demonstrate our model's interpretability.",
"Task-Oriented Dialogue Traditionally, task-oriented dialogue systems are built via pipeline-based approaches where task-specific modules are designed separately and connected to generate system responses (Chen et al., 2016; Zhong et al., 2018; Wu et al., 2019a; Chen et al., 2019a; Huang et al., 2020).",
"In another spectrum, many works have started to shift towards end-to-end approaches to reduce human efforts (Bordes et al., 2017; Lei et al., 2018; Madotto et al., 2018; Moon et al., 2019; Jung et al., 2020).",
"Lei et al. (2018) propose a two-stage sequence-to-sequence model to incorporate dialogue state tracking and response generation jointly in a single sequence-to-sequence architecture.",
"Zhang et al. (2020) propose a domain-aware multi-decoder network (DAMD) to combine belief state tracking, action prediction and response generation in a single neural architecture.",
"Most recently, the success of large-scale pre-trained language models (e.g., BERT, GPT-2) (Devlin et al., 2018; Radford et al., 2019) has spurred a lot of re-cent dialogue studies starting to explore large-scale pre-trained language model for dialogues (Wolf et al., 2019; Zhang et al., 2019).",
"In task-oriented dialogue, Budzianowski and Vulic (2019) use GPT-2 to fine-tune on MultiWOZ dataset for dialogue response generation.",
"Peng et al. (2020) and Hosseini-Asl et al. (2020) employed a single unified GPT-2 model jointly trained for belief state prediction, system action and response generation in a multi-task fashion.",
"However, most existing approaches cannot explain why the model makes a specific decision in a human understandable way.",
"We aim to address this limitation and introduce interpretability for dialogue reasoning in this study.",
"Neuro-Symbolic Reasoning Neuro-Symbolic reasoning has attracted a lot of research attentions recently due to its advantage of exploiting the representational power of neural networks and the compositionality of symbolic reasoning for more robust and interpretable models (Andreas et al., 2016; Hu et al., 2017; Hudson and Manning, 2018; Vedantam et al., 2019; Chen et al., 2019b; Vedantam et al., 2019; van Krieken et al., 2022).",
"The main difference between neuro-symbolic vs. pure 4919 neural networks lies in how the former combines basic rules or modules to model complex functions.",
"Rocktschel and Riedel (2017) propose a neuro-symbolic model that can jointly learn sub-symbolic representations and interpretable rules from data via standard back-propagation.",
"In visual QA, Andreas et al. (2016) propose neural module networks to compose a chain of differentiable modules wherein each module implements an operator from a latent program.",
"Yi et al. (2018) propose to discover symbolic program trace from the input question and then execute the program on the structured representation of the image for visual question answering.",
"However, these approaches cannot be easily adapted to task-oriented dialogues due to the error propagation issue caused by multihop reasoning on large-scale KBs.",
"Thus, we aim to bridge this gap by developing a neuro-symbolic approach for improving task-oriented dialogues.",
"In this work, we focus on the problem of task-oriented dialogue response generation with KBs.",
"Formally, given the dialogue history X and knowledge base B , our goal is to generate the system responses Y word-by-word.",
"The probability of the generated responses can be written as: p ( Y | X, B ) = n (cid:89) t =1 p ( y t | X, B, y 1 , y 2 , ..., y t 1 ) (1) where y t is the t-th token in the response Y .",
"The overall architecture is shown in Figure 2. We start by introducing the standard modules in our system and then explain the two novel modules afterward.",
"We employ pre-trained language model BERT (De-vlin et al., 2019) as the backbone to obtain the distributed representations for each token in the dialogue history.",
"Specifically, we add a [ CLS ] token at the start of the dialogue history to represent the overall semantics of the dialogue.",
"The hidden states H enc = ( h CLS , h 1 , ..., h M ) for all the input tokens X = ( [ CLS ] , x 1 , ..., x M ) are computed using: H enc = BERT enc ( emb ( X )) (2) where M is the number of tokens in the dialogue history, emb is the embedding layer of BERT.",
"To generate the system response, we first utilize a linear layer to project H enc to H enc = ( h CLS , h 1 , ..., h M ) that are in the same space of the decoder.",
"We initialize the decoder with h CLS .",
"During decoding timestep t , the model utilizes the hidden state h dec,t to attend H enc to obtain an attentive representation h dec,t via standard attention mechanism.",
"We then concatenate h dec,t and h dec,t to form a context vector C and project it into the vocabulary space V : C = [ h dec,t , h dec,t ] (3) P vocab,t = Softmax ( U 1 C ) (4) where U 1 is a learnable linear layer, P vocab,t is the vocabulary distribution for generating the token y t .",
"Next, we aim to estimate the KB distribution P kb,t , i.e., the probability distribution of entities in the KB, in an interpretable way and fuse P vocab,t and P kb,t for generating the final output tokens.",
"We follow See et al. (2017) and employ a soft-switch mechanism to fuse P vocab,t and P kb,t to generate output token y t .",
"Specifically, the generation probability p gen [0,1] is computed from the attentive representation h dec,t and the hidden state h dec,t : p gen = ( U 2 ([ h dec,t , h dec,t ])) (5) where is sigmoid function, U 2 is a linear layer.",
"The output token y t is generated by greedy sampling from the probability distribution P ( w ) : P ( w ) = p gen P vocab,t + (1 p gen ) P kb,t (6) We next describe how to obtain the KB distribution P kb,t in details using the two novel modules we proposed, i.e., hypothesis generator and hierarchical reasoning engine.",
"To compute the KB distribution P kb,t , we present two novel modules: hypothesis generator (HG) and hierarchical reasoning engine (HRE).",
"We take the context vector C (Equation 3) as the input of HG module and generate K hypotheses H , each of which are then fed into the HRE module to generate the logical reasoning chains and their belief scores.",
"The estimated belief scores are then served as P kb,t , giving us a distribution over the entities in the KB.",
"Next, we describe how each component works in detail and explain how they interact with each other for generating P kb,t .",
"Let a hypothesis be a 3-tuple of the form [ H, R, T ] , where H and T are the head and tail entities, and R is the relation between entities.",
"In this paper, we are interested in three types of hypotheses including the H-Hypothesis, T-Hypothesis, and R-Hypothesis.",
"The H-Hypothesis is the structure where the tail entity T and relation R are inferred from the context and the head entity H is unknown (which needs to be answered using the KB), and it takes the form [ , R, T ] .",
"In a similar vein, the T-Hypothesis and R-Hypothesis have unknown tail entity T and relation R , respectively.",
"The goal of the Hypothesis Generator module is to generate hypotheses in this triple format which will later be verified by the Hierarchical Reasoning Engine.",
"Intuitively, a hypothesis can be determined by its content and structure.",
"The structure indicates the template form of the hypothesis while the content fills up the template.",
"For instance, the H-Hypothesis has its template form of [ , R, T ] and the content that needs to be realised includes candidate entities (i.e., ), and query states (i.e., the tail T and relation entities R ).",
"To this end, we employ a divide-and-conquer strategy to jointly learn three sub-components: structure prediction, query states prediction, and candidates prediction.",
"Next, we describe each sub-component in details.",
"Structure Prediction (SP) The goal of the structure prediction module is to determine the structure of the hypothesis (i.e., H/T/R-Hypothesis) based on the context.",
"For example in Figure 1, one might expect an H-Hypothesis at timestep 0 .",
"Specifically, SP uses a shared-private architecture to predict the hypothesis type.",
"It first takes the context vector C (Equation 3) as input and utilizes a shared transformation layer between all the three sub-components to learn task-agnostic feature h share : h share = W 2 ( LeakyReLU ( W 1 C )) (7) where W 1 and W 2 are learnable parameters (shared by the structure prediction, query states prediction and candidate prediction components) and LeakyReLU is the activation function.",
"The shared layer can be parameterised with complicated neural architectures.",
"However, to keep our model simple, we use linear layers which we found to perform well in our experiments.",
"SP next uses a private layer on top of the shared layer to learn task-specific features for structure prediction: h spprivate = W 4 ( LeakyReLU ( W 3 h share )) (8) where W 3 and W 4 are learnable parameters.",
"For ease of presentation, we define the private feature transformation function as: F : h share h private (9) where denotes any of the three sub-components.",
"To obtain the predicted hypothesis structure, a straightforward approach is to apply softmax on h spprivate .",
"However, this will break the differentiability of the overall architecture since we perform sampling on the outcome and pass it to the neural networks.",
"To avoid this, we utilize the Gumbel-Softmax trick (Jang et al., 2017) over h spprivate to get the sampled structure type: I sp = Gumbel-Softmax ( h spprivate ) R 3 (10) where I sp is a one-hot vector and the index of one element can be viewed as the predicted structure.",
"In this paper, we define 0 as H-Hypothesis, 1 as T-Hypothesis and 2 as R-Hypothesis.",
"Query States Prediction (QSP) Query states are the tokens in hypothesis that need to be inferred from the dialogue history.",
"For example, one might want to infer relation R = Located_in and tail 4921 T = Leichhardt based on the history in Figure 1. Therefore, the goal of the query states prediction is to estimate the state information (e.g., T and R in H-Hypothesis) of hypothesis.",
"Specifically, QSP takes the shared feature h share as the input and next applies the private feature transformation function followed by Gumbel-Softmax to obtain the state tokens of hypothesis using: h qsp,kprivate = F qsp,k ( h share ) (11) I kqsp = Gumbel-Softmax ( h qsp,kprivate ) R n (12) where n is the number of tokens (entities and relations) in the KB, k {0,1}, I 0 qsp and I 1 qsp are two one-hot vectors where their corresponding tokens in KB serve as the state tokens of the hypothesis.",
"Candidates Prediction (CP) To generate the final hypotheses, we need multiple candidates to instantiate the structure of the hypothesis except the state tokens, e.g., Cityroom or Gonville_Hotel as candidate head entities H in Figure 1. To this end, we utilize an embedding layer embcp to convert all the tokens in the KB to vector representations.",
"We then compute a probability distribution over all the KB tokens using: P i = Sigmoid ( embcp ( K i ) h share ) (13) where K i is the i -th token in KB, embcp is the embedding layer of CP, P i is the probability of the i -th token to be candidate, denotes inner-product.",
"We use sigmoid instead of softmax as we find that softmax distribution is too sharp making the probability between different tokens are hard to differentiate for sampling multiple reasonable candidates.",
"Hypothesis Synthesizing The final hypotheses H are composed by combining the outputs of the three sub-components as follows:",
"(i) We generate the hypothesis template according to the predicted structure type.",
"For example, if SP predicts a structure type 0 which denotes H-Hypothesis, the model will form a template of [ , R, T ] ;",
"(ii) We next instantiate the state tokens in the hypothesis sequentially by using the outputs of QSP module.",
"For example, if the output tokens of QSP are Located_in ( k =0) and Leichhardt ( k =1), the hypothesis will become [ , Located_in , Leichhardt ] ;",
"(iii) Finally, we instantiate the candidate (i.e., ) with the top-K ( K = 5 in our best-performing version) entities selected from P. If the top-2 highest probability tokens are Cityroom and Gonville_Hotel , the model will instantiate two hypotheses [Cityroom, Located_in, Leichhardt] , [Gonville_Hotel, Located_in, Leichhardt] .",
"With the hypotheses generated by HG module, we next aim to verify them via logical reasoning chains.",
"Inspired by Neural Theorem Provers (Rocktschel and Riedel, 2017), we develop chain-like logical reasoning with following format: , ( H, R, T ) ( H, R n , Z n ) ( Z 1 , R 1 , T ) (14) where is a weight indicating the belief of the model on the target hypothesis [ H, R, T ] , and the right part of the arrow is the reasoning chain used to prove that hypothesis, and R i and Z i are relations and entities from the KB.",
"The goal is to find the proof chain and the confidence for a given hypothesis.",
"To this end, we introduce a neural-network based hierarchical reasoning engine (HRE) that learns to conduct chain-like logical reasoning.",
"At a high level, HRE recursively generates multiple levels of sub-hypotheses using neural networks that form a tree structure as shown in Figure 2. Next, we describe how this module works in details.",
"The module takes the output hypotheses from the HG module as input.",
"Each hypothesis serves as one target hypothesis.",
"To generate the reasoning chain in Equation 14, the module first finds sub-hypotheses of the same format as the target in the hypothesis space.",
"The sub-hypotheses can be viewed as the intermediate reasoning results to prove the target.",
"One straightforward approach is to use neural networks to predict all the tokens in the sub-hypotheses (2 heads, 2 tails and 2 relations).",
"However, this can lead to extremely large search space of triples and is inefficient.",
"Intuitively, subhypotheses inherit from the target hypothesis and sub-hypotheses themselves are connected by bridge entities.",
"For example, [Uber,office_in,USA] can be verified by two sub-hypotheses [Uber,office_in,Seattle] and [Seattle,a_city_of,USA] , Uber and USA are inherited from the target and Seattle is the bridge entity between sub-hypotheses.",
"Motivated by this, we propose to reduce the triple search complexity by constraining the sub-hypotheses.",
"Specifically, given target [ H, R, T ] , we generate sub-hypotheses of the format [ H, R 1 , Z ] , [ Z, R 2 , T ] , where Z is the bridge entity, R 1 and R 2 are relations to be predicted.",
"Therefore, the goal of the neural networks has been reduced to predict three tokens (2 relations and 1 bridge entity).",
"Formally, HRE predicts the vector representation of bridge entity as follows: 4922 h H , h R , h T = embcp ( H ) , embcp ( R ) , embcp ( T ) (15) h Z = W 6 ( LeakyReLU ( W 5 [ h H , h R , h T ])) (16) where [ h H , h R , h T ] are the concatenation of the representations of tokens in target hypothesis, h Z is the vector representation of bridge entity Z .",
"The prediction of h R 1 and h R 2 uses the same architecture in Equation 16 and the difference is that they use different linear layers for the feature transformation.",
"Note that h Z denotes a KB token in the embedding space.",
"We can decode the token by finding the nearest KB token to h Z in vector space.",
"More details on the token decoding can be found in Appendix A. Upon obtaining h Z , h R 1 , h R 2 , the module generates the two sub-hypotheses in vector representations.",
"Next, the module iteratively takes each of the generated sub-hypothesis as input and extend the proof process by generating next-level sub-hypotheses in a depth-first manner until the maximum depth D has been reached.",
"Belief Score To model confidence in different reasoning chains, we further measure the semantic similarities between each triple of the leaf node and triples in the KB, and compute the belief score m of the m-th hypothesis H m : m = min i U max j V e d j ( Leaf i ,KB j ) (17) where Leaf i is the representation (concatenation of H, R, T ) of the i-th leaf node in the proof tree (DFS manner), KB j is the representation of the j-th triple in KB, U =[0,..., u -1], V =[0,..., v -1] where u and v are the number of leaf nodes and KB triples correspondingly, d is the distance metric.",
"In general, any distance function can be applied and we adopt Euclidean distance in our implementation since we found that it worked well in our experiments.",
"All the triples in the leaf nodes form the reasoning chain for the input hypothesis as in Equation 14.",
"The hypotheses H coupled with the belief form our KB distribution P kb,t .",
"More details can be found in Appendix B. Intuitively, the belief score can be viewed as the likelihood of the hypothesis contains the correct entity.",
"If the hypothesis is valid (i.e., contains the correct answer entity), it should have a high likelihood and thus encourage to generate more proper reasoning chains based on the triples stored in the KB.",
"Training We apply two loss functions to train the whole architecture end-to-end.",
"The first loss function L gen is for the final output.",
"We use a cross-entropy loss over the ground-truth token and the Dataset Domains Train Dev Test SMD Navigate,Weather,Schedule 2425 302 304 MultiWOZ 2.1 Restaurant,Attraction,Hotel 1839 117 141 Table 1: Statistics of SMD and MultiWOZ 2.1.",
"generated token from the final distribution P ( w ) .",
"The second loss L cp is for the candidates prediction (CP) module in the hypotheses generator.",
"We apply binary cross-entropy loss over the output distribution for each KB token (Equation 13) and their corresponding labels.",
"The labels for each KB token are computed as follows: Label i = (cid:26) 1 , K i = y t 0 , K i = y t (18) where K i is the i -th token in the KB and y t is the ground-truth output at timestep t.",
"The final loss L is calculated by: L = g L gen + c L cp (19) where g and c are hyper-parameters and we set them to 1 in our experiments.",
"To evaluate the effectiveness and demonstrate the interpretability of our proposed approach, we conduct experiments on two public benchmark datasets for task-oriented dialogue in this paper, SMD (Eric et al., 2017) and MultiWOZ 2.1 (Budzianowski et al., 2018).",
"We use the partitions created by Eric et al. (2017); Madotto et al. (2018) and Qin et al. (2020) for SMD and MultiWOZ, respectively.",
"Statistics of the datasets are presented in Table 1. In the Appendix E, we present several additional results on a large-scale synthetic dataset to demonstrate our model's multi-hop reasoning capability under complex KB reasoning scenarios.",
"We compare our model with the following state-of-the-art baselines on KB reasoning in task-oriented dialogues: (1) Mem2Seq (Madotto et al., 2018): employs memory networks to store the KB and combine pointer mechanism to either generate tokens from vocabulary or copy from memory; (2) GLMP (Wu et al., 2019b): uses a global-to-local pointer mechanism to query the KB during decoding; (3) DF-Net (Qin et al., 2020): employs",
"shared-private architecture to capture both domain-specific and domain-general knowledge to improve the model transferability; (4) GraphDialog (Yang et al., 2020): incorporates graph structural information obtained from sentence dependency parsing results for improving KB reasoning accuracy and response generation quality.",
"Detailed experimental settings are included in Appendix C. 5.3 Main Results Following prior work (Eric et al., 2017; Madotto et al., 2018; Wu et al., 2019b), we adopt the BLEU and Entity F1 metrics to evaluate the performance of our framework.",
"The results on the two datasets are shown in Table 2. As we can see, our framework consistently outperforms all the previous state-of-the-art baselines on all datasets across both metrics.",
"Specifically, on MultiWOZ dataset, our model achieves more than 2% absolute improvement in Entity F1 and 1.2% improvement in BLEU over baselines.",
"The improvement in Entity F1 indicates that our model enhances KB reasoning, while the increase in BLEU suggests that the quality of the generated responses has been improved.",
"The same trend has also been observed on SMD dataset.",
"This indicates the effectiveness of our proposed framework for task-oriented dialogue generation.",
"To demonstrate our framworks's interpretability, we investigate the inner workings of our framework.",
"As shown in Figure 3, given the dialogue history Can you recommend me a restaurant near Palm_Beach? , the generated response is There is a Golden_House. .",
"During the 3rd timestep, our model has successfully predicted an appropriate H-Hypothesis with Located_in and Palm_Beach as its state tokens.",
"Our model further instantiates five concrete hypotheses and computes their belief scores leveraging the reasoning engine, respectively.",
"As we can see from the table, our model successfully generates five reasonable hypotheses and scores them correctly (with highest score for the oracle KB entity Golden_House ).",
"The proof process for the highest score hypothesis is shown in Figure 3. The verification procedure generated by the HRE module has a depth of 3 and the reasoning chaining used to verify the target hypothesis is: [Golden_House, Next_to, Preston_Market] [Pre-ston_Market, Located_in, Williamstown] [Williamstown, Located_in, Herb_Garden] [Herb_Garden, Located_in, Palm_Beach] .",
"This indicates that our framework has successfully utilized the KB information to support the reasoning process explicitly to reach a correct conclusion.",
"More examples and error analyses can be found in the Appendix (Appendix E.4 and F).",
"We ablate each component in our framework to study their effectiveness on both datasets.",
"The results are shown in Table 3. Specifically, 1) w/o HRE denotes that we simply use the probability in candidates prediction (CP) module (Equation 13) as the KB distribution without using the scores from the reasoning engine.",
"2) w/o BERT denotes that we use standard GRU as encoder instead of BERT.",
"3) w/o Soft-switch denotes that we simply sum the KB distribution and vocabulary distribution without using a soft gate.",
"As we can see from the table, all the individual components have notably contributed to the overall performance of 4924 Dialogue history : Can you recommend me a restaurant near Palm_Beach?",
"Leichhardt Located_in Leichhardt #3, Belief Score: 1.00 Sub-hypothesis KB: Cityroom Price_range Moderate Cityroom Stars 3 Cityroom Next_to Palm_Lawn Palm_Lawn Located_in Chadstone Chadstone Located_in Leichhardt Gonville_Hotel Price_range Expensive Gonville_Hotel Stars 4 Gonville_Hotel In_district Moorabbin",
"...... Cityroom Next_to Palm_Lawn Palm_Lawn Located_in Chadstone Chadstone Located_in Leichhardt Cityroom Located_in Leichhardt Reasoning Chain: Figure 1: Proof tree generated by the hierarchical reasoning module for the highest score hypothesis [Gold_House, Located_in, Palm_Beach] in Table 1. Our model performs 4-hop reasoning to arrive at the correct conclusion.",
"All the leaf nodes predicted by HRE have a belief score of 1.0 as they are exactly supported by the external KB.",
"In the original dataset released by prior works, the entity overlap ratio between the train and test split is 78% and 15.3% for MultiWOZ 2.1 and SMD, respectively.",
"To simulate unseen scenario, we construct a new dataset split that reduces the entity overlap ratio to 30% for MultiWOZ 2.1 and 2% for SMD between the train and test split, which is a more challenging setting for all the models.",
"More details of the construction process can be found in Appendix D. We re-run all the baselines with their released codes and our model on the new data split and report the results in Table 4. As we can see, the performance drops significantly for all systems on both datasets.",
"However, our model degrades less compared to other systems, showing that it has better generalisation capability under unseen scenarios.",
"This also verifies that neuro-symbolic approach has the advantage of better generalisation ability which has also been confirmed by many other studies (An-dreas et al., 2016; Rocktschel and Riedel, 2017; Minervini et al., 2020).",
"our framework.",
"Specifically, when removing HRE module, the performance has decreased substantially (more than 5% absolute drop), which confirms that the effectiveness of the proposed hierarchical reasoner module.",
"We further investigate the generalization ability of our model under unseen settings.",
"Following prior work (Qin et al., 2020), we also conduct human evaluations for our framework and baselines from three aspects: Correctness , Fluency , and Humanlikeness .",
"Details about the scoring criterions can be found in Appendix H. We randomly select 300 different dialogue samples from the test set and ask human annotators to judge the quality of the responses and score them according to the three metrics ranging from 1 to 5. We train the annotators by showing them examples to help them 4925 Model Correct Fluent Humanlike GLMP 4.01 3.78 3.25 GraphDialog 4.15 4.19 3.40 DF-Net 4.16 4.25 3.54 Ours (Full model) 4.41 4.28 3.59 Human 4.83 4.65 4.57 Agreement 75% 69% 71% Table 5: Human evaluation results.",
"understand the criteria and employ Fleiss' kappa (Fleiss, 1971) to measure the agreement across different annotators.",
"The results are shown in Table 5. As we can see, our model outperforms all baselines across all the three metrics, consistent with our previous observations using automatic evaluations.",
"In this paper, we propose an explicit and interpretable Neuro-Symbolic KB reasoning framework for task-oriented dialogue generation.",
"The hypothesis generator employs a divide-and-conquer strategy to learn to generate hypotheses, and the reasoner employs a recursive strategy to learn to generate verification for the hypotheses.",
"We evaluate our proposed framework on two public benchmark datasets including SMD and MultiWOZ 2.1.",
"Extensive experimental results demonstrate the effectiveness of our proposed framework, as well being more interpretable.",
"For the human evaluation in this paper, we recruit several annotators on Amazon Mechanical Turk from English-speaking countries.",
"We pay the annotators USD$0.15 for each annotation task.",
"Each task can be finished on average in 1 minute, which amounts to $9.0 per hour that is above the US fed-eral minimum wage ($7.25).",
"To ensure the quality of the human evaluation results, we perform quality control in a few ways.",
"First, the annotators will be shown our scoring standards (Appendix H) before their tasks, and are asked to follow them.",
"If the task is not done properly, either as determined by expert judgements (we recruit 3 native English speakers to validate the results of the Turkers' annotations) or there are obvious patterns such as constantly giving the same score for all tasks, we remove their annotations.",
"We also compute agreement score to check for the consistency among the annotators."
] |
[
"method",
"abstain",
"result",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"objective",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method"
] |
[
"Open Domain dialog system evaluation is one of the most important challenges in dialog research.",
"Existing automatic evaluation metrics, such as BLEU are mostly reference-based.",
"They calculate the difference between the generated response and a limited number of available references.",
"Likert-score based self-reported user rating is widely adopted by social conversational systems, such as Amazon Alexa Prize chatbots.",
"However, self-reported user rating suffers from bias and variance among different users.",
"To alleviate this problem, we formulate dialog evaluation as a comparison task.",
"We also propose an automatic evaluation model CMADE (Compar-ison Model for Automatic Dialog Evaluation) that automatically cleans self-reported user ratings as it trains on them.",
"Specifically, we first use a self-supervised method to learn better dialog feature representation, and then use KNN and Shapley to remove confusing samples.",
"Our experiments show that CMADE achieves 89.2% accuracy in the dialog comparison task.",
"Our implementation is available at https://github.com/Weixin-Liang/ dialog_evaluation_CMADE .",
"Open-domain dialog system evaluation is one of the most difficult challenges in the dialog community.",
"Open-domain chatbots have a user-centric goal: to provide human with enjoyable user experience.",
"However, user experience is difficult to quantify due to bias and variance among different users.",
"Previous research has optimized on automatic dialog evaluation metrics such as BLUE (Pa-pineni et al., 2002), which measures the difference between the generated responses and the reference responses.",
"Due to the contrast between the one-to-many nature of open-domain conversations and the limited number of available references, such metrics correlate poorly with human judgments (Liu et al., 2016; Lowe et al., 2017; Novikova et al., 2017).",
"Designing a fully automatic dialog evaluation metric is still an open research problem.",
"Currently, both academia and industry (Ram et al., 2018a; Li et al., 2019b; Liang et al., 2019) rely on human ratings to evaluate open-domain dialogs.",
"Following the ubiquitous application of Likert scores in survey research like online reviews (Godes and Silva, 2012) and consumer satisfaction (Peterson and Wilson, 1992), a common practice of human evaluation on dialogs is to ask either a third-person rater or the chatbot user to report a Likert score.",
"However, concerns have been raised about the validity of Likert score-based ratings.",
"Ku-likov et al. (Kulikov et al., 2018) observe high bias and variance of Likert scores.",
"Such issue is more severe in real-world commercial dialog systems like Alexa social chatbot (Ram et al., 2018a; Venkatesh et al., 2018), because the real-world users have neither monetary incentive nor necessary annotation training to calibrate their ratings.",
"To explore the validity of Likert score based dialog evaluation, we first perform a large-scale data analysis of 3,608 collected real-world human-machine dialogs along with their self-reported Likert scale ratings from Amazon Alexa Prize Challenge (Ram et al., 2018a; Yu et al., 2019; Chen et al., 2018).",
"One noticeable property of the ratings is its J-shape skew distribution: nearly half of the dialogs are rated with the highest Likert score.",
"The prevalence of such extreme distribution of ratings has long been observed by the business research community in variable aspects of real-life (Schoenmuller et al., 2018; Godes and Silva, 2012; Hu et al., 2017; Zervas et al., 2015).",
"Although we could tell which dialog system is better by running statistical test on a large number of noisy ratings, it is difficult to locate dialogs with bad performance reliably to improve dialog system quality.",
"In this paper, we take on the challenge of calibrating a large number of noisy self-reported user ratings to build better dialog evaluation models.",
"We formulate the task as to first denoise the self-reported user ratings and then train a model on the cleaned ratings.",
"We design CMADE (Compar-ison Model for Automatic Dialog Evaluation), a progressive three-stage denoising pipeline.",
"We first perform a self-supervised learning to obtain good dialog representations.",
"We then fine-tune CMADE on smoothed self-reported user ratings to improve the dialog representation while preventing the network from overfitting on noisy ratings.",
"Finally, we apply data Shapley to remove noisy training data, and fine-tune the model on the cleaned training set.",
"Our experiments show that CMADE is able to successfully identify noisy training data and achieves 89.2% in accuracy and 0.787 in Kappa on a test set with unseen expert-rated dialog pairs.",
"Open-domain dialog system evaluation is a long-lasting challenge.",
"It has been shown that previous automatic dialog evaluation metrics correlate poorly with human judgments (Liu et al., 2016; Lowe et al., 2017; Novikova et al., 2017).",
"A wellknown reason is that these automatic dialog evaluation metrics rely on modeling the distance between the generated response and a limited number of references available.",
"The fundamental gap between the open-ended nature of the conversations and the limited references (Gupta et al., 2019) is not addressed in methods that are lexical-level based (Pa-pineni et al., 2002; Lin, 2004; Banerjee and Lavie, 2005), embedding based (Rus and Lintean, 2012; Forgues et al., 2014), or learning based (Tao et al., 2018; Lowe et al., 2017).",
"Given the aforementioned limitations, Likert-score based rating is the de-facto standard for current dialog research and social conversational systems such as in Amazon Alexa Prize Challenge (Yu et al., 2019; Chen et al., 2018).",
"Various forms of evaluation settings have been explored to better measure human judgments.",
"Single-turn pairwise comparison (Vinyals and Le, 2015; Li et al., 2016) is primarily used for comparing two dialog systems.",
"Each system predicts a single utterance given the static gold context utterance from human-human logs.",
"Although such A/B test setting is robust to annotator score bias, it cannot capture the multiturn nature of dialogs.",
"A more complete multiturn evaluation is typically measured with a Likert scale for the full dialog history, where either a third-person rater or the chatbot user (Perez-Rosas et al., 2019) reports a Likert score on user experience (Venkatesh et al., 2018), engagement (Bohus and Horvitz, 2009) or appropriateness (Lowe et al., 2017).",
"However, as observed in (Kulikov et al., 2018; Ram et al., 2018a; Venkatesh et al., 2018) Likert scores suffer from bias and variance among different users.",
"Different from previous empirical observations, we conduct a large-scale quantitative and qualitative data analysis of Likert score based ratings.",
"To address the issue of Likert scores, the Alexa team proposed a rule-based ensemble of turn-granularity expert ratings (Yi et al., 2019), and automatic metrics like topical diversity (Guo et al., 2018) and conversational breadth.",
"ACUTE-EVAL (Li et al., 2019a) makes a small-scale attempt to use multi-turn pair-wise comparison to rank different chatbots.",
"Given the ubiquity and simplicity of Likert scores based evaluation, instead of proposing an alternative measure, we take on the challenge of denoising Likert scores with minimal expert annotations introduced (one order of magnitude smaller).",
"Different from (Li et al., 2019a), our proposed expert annotation scheme is for comparing the dialogs within the same chatbot.",
"The data used in this study was collected during the 2018 Amazon Alexa Prize Competition (Ram et al., 2018b).",
"Our data contain long and engaging spoken conversations between thousands of real-world Amazon Alexa customers and Gunrock, the 2018 Alexa Prize winning social bot (Yu et al., 2019).",
"The chatbot has 11 topic dialog modules including movies, books, and animals.",
"One notable characteristic of the chatbot is its versatile and complex dialog flows which interleaves facts, opinions and questions to make the conversation flexible and interesting (Chen et al., 2018).",
"At the end of each dialog, a self-reported Likert scale rating is elicited by the question on a scale of one to five, how likely would you talk to this social bot again?",
"We first filter out dialogs that have inappropriate content using keyword matching.",
"We then select 3,608 ten-turn dialogs on movies, because movie dialogs are more coherent and diverse compared to other topics according to both real users and Amazon selected experts.",
"We observe that dialogs R e a l x AF a ke x BB B Figure 1: Schematic of the CMADE workflow.",
"with more than eight turns are more meaningful and semantically versatile, while dialogs more than 10 turns exceed the max length limit of the BERT model (512 tokens).",
"So we select dialogs that have ten turns.",
"Our approach could support longer conversations by adopting a memory footprint efficient algorithm for self-attention to support sequences with thousands of tokens (Huang et al., 2019).",
"We leave this to future work.",
"We aim to evaluate user experience for each dialog from the same chatbot of the same length.",
"This is significantly more challenging than identifying which chatbot provides a better user experience on average since our problem setup requires us to capture more subtle difference in user experience.",
"J-Shape Skewness We perform a detailed analysis of the self-reported Likert scale ratings.",
"As shown in Table 1, abnormally, nearly half of the dialogs are rated as five, which is the highest score.",
"A similar skewed distribution is also observed in previous years' Alexa competition (Fang et al., 2018).",
"In fact, the business research community has long observed the prevalence of the extreme distribution of reviews in which the reviews are heavily skewed to the positive end of the rating scale (known as J-shape) in online reviews (e.g., Amazon, Airbnb, Yelp) (Godes and Silva, 2012; Hu et al., 2017; Zervas et al., 2015), word of mouth (East et al., 2007) and consumer satisfaction (Peterson and Wilson, 1992; Danaher and Haddrell, 1996).",
"Comparison to expert ratings We randomly selected 50 dialogs rated score-5 and showed these to an expert, and our expert rated 27 of them with score-4 or less.",
"The Alexa team (Venkatesh et al., 2018) has also reported that the inter-user agreement is quite low for their internal rating analysis.",
"Such phenomena indicate that the self-reported Likert scale ratings are extremely noisy.",
"Using such ratings cannot localize individual bad interactions.",
"In addition, Likert score based evaluation also suffers from insensitivity issues.",
"As observed by the Alexa team (Venkatesh et al., 2018) in multiple internal user studies, even though users evaluated multiple dialogs with the same score, they had a clear rank order among the dialogs.",
"The skewness, noisiness and insensitivity of the self-reported Likert scale rating make it a suboptimal dialog evaluation metric.",
"In practice, we find that directly training a classifier (even for pre-trained BERT-based model) on the noisy self-reported Likert scale ratings suffers from under-fitting.",
"One of the Alexa Price Challenge team, Alana (Papaioannou et al., 2017) train a binary-classifier between successful dialogs (human rating 4 or 5) and unsuccessful dialogs (rating 1 or 2) with heavy hand-engineered features.",
"They reach 69.40% accuracy on this binary classification problem, which is far from usable in real-world settings.",
"Selecting the better dialog from two options is easier for a human evaluator than giving an absolute number like the Likert score, which requires the evaluator to maintain a consistent standard.",
"Peo-ple's perception is inherently relative, and pair-wise comparison is local and does not require the user to have global consistency.",
"There are many other examples where humans find it easier to perform pairwise comparisons rather than providing direct labels (Simpson and Gurevych, 2018; Mailthody et al., 2019; Liang et al., 2018), including content search (Furnkranz and Hullermeier, 2010), image retrieval (Wah et al., 2014; Feng et al., 2019), and age estimation (Zhang et al., 2017).",
"We randomly sample 400 dialog pairs for experts to annotate.",
"We ask the question, If you were the user, in which scenario would you be more likely to come back and talk to the system again? We guide the experts to focus on the user experience rather than calibrating the performance of any specific module of the dialog system.",
"Two researchers with conversational training experience annotated the data.",
"The leading expert has been working in an Alexa competition team for more than one year with an emphasis on the user ratings.",
"For each dialog pair ( A, B ) , they label A is better than B ' or B is better than A ' or cannot tell'.",
"They reached a high inter-annotator agreement score (Cohen, 1968) with kappa = 0 .",
"83 .",
"To make sure that the dev & test is accurate, we throw away all cannot tell dialog pairs.",
"We then study the correlation between Likert score based evaluation and pairwise comparison based evaluation.",
"To further analyze the self-reported Likert scale ratings, we also compare the annotated labels of the 403 dialog pairs with the self-reported Likert scale ratings of these dialogs.",
"For each pair of dialogs, we compare the pairwise comparison label and the delta between the self-reported Likert scale ratings of the two dialogs.",
"Ideally, the dialog with a higher self-reported Likert scale rating should be the one that is annotated as having a better user experience in the pairwise comparison.",
"We count the number and fraction of disagreement between the two types of ratings.",
"Overall, roughly 1/3 of the dialog pairs disagree.",
"As shown in Table 2, as the gap between the self-reported Likert scale ratings becomes larger, the disagreement between expert and self-reported ratings goes down.",
"This suggests that if the difference between the two dialogs' Likert score is huge, they are more likely to be consistent with the comparison ratings.",
"Suppose the training set D train consists of data points D train = { ( x i , y i ) } N train 1 where x i is a dialog and y i is the noisy self-reported user ratings.",
"We define a strict partial order relationship (cid:46) where x i (cid:46) x j means that dialog x i provides a better user experience than dialog x j .",
"Note that y i > y j does not always imply x i (cid:46) x j since self-reported user ratings are noisy ( 3.3, 3.4).",
"The test set D test consists of N test dialog pairs along with their binary pair-wise comparison labels D test = { ( x testi , x testj , z testi,j ) } i,j I test , where z testi,j is annotated by experts and indicates whether dialog A provides a better user experience than dialog B, i.e., z testi,j = 1 ( x i (cid:46) x j ) .",
"The development set D dev has a similar structure.",
"Following the structure of the expert annotated pairs, we formulate our model M ( , f ) as a pairwise dialog predictor with a similar architecture as RankNet (Burges et al., 2005).",
"For a dialog pair ( x i , x j ) , the model predicts an un-normalized score o i , o j R for each dialog: o i = f ( ( x i )) and o i = f ( ( x j )) where is a dialog encoder that maps each dialog to a feature space and f is a linear transformation that converts each dialog feature into a real number o .",
"We define a binary relationship (cid:46) where x i (cid:46)x j means that the model predicts that dialog x i provides a better user experience than dialog x j .",
"We denote model's prediction of z i,j as z i,j where z i,j = 1 ( x i (cid:46)x j ) .",
"We model the predicted posterior P ( z i,j = 1) = P ( x i (cid:46)x j ) as: P ( z i,j = 1) = P ( x i (cid:46)x j ) = 1 1 + e ( o i o j ) 5 Method Our goal is to reduce the noise of the self-reported user ratings ( 3).",
"Directly training a classification model using the noisy ratings leads to severe un-derfitting.",
"To this end, we propose a three-stage training pipeline to denoise self-reported ratings to train an automatic dialog comparison model.",
"Figure 1 describes the overall pipeline: In Stage 1, we learn dialog feature representation with a self-supervised dialog flow anomaly detection task.",
"In Stage 2, we perform label smoothing to adjust the noisy self-reported ratings in the training set and fine-tune the dialog comparison model on the smoothed ratings.",
"In Stage 3, we perform data Shapley (Ghor-bani and Zou, 2019; Jia et al., 2019a) on the self-reported user ratings to identify and remove noisy data points.",
"We then fine-tune the dialog comparison model on the cleaned training set.",
"Having a good dialog representation is the first step towards denoising the data.",
"Our primary goal in this stage is to train a dialog encoder to learn good dialog feature representations for the following stages.",
"Here could be any sequence encoder that could encode a dialog and we use BERT (De-vlin et al., 2019) in this paper.",
"and train the model to differentiate the fake dialog and the real one.",
"Dialog flow is a user-centric measure of whether a conversation is going smoothly (Eskenazi et al., 2019).",
"To perturb the dialog flow for each dialog x i , we randomly replace a user utterance in x i with a random user utterance from the training corpus D train , yielding a perturbed dialog x i,fake .",
"With high probability, the system utterance immediately following the replaced user utterance becomes inappropriate.",
"Therefore, we incorporate { ( x i , x i,fake , z = 1) } into the training pairs.",
"Similarly, we also randomly replace a system utterance and yield another perturbed dialog.",
"We generate two perturbed dialogs for each dialog in the training set and thus 2 N train real-fake dialog pairs in total.",
"An example is shown in Table",
"3. We note that appropriateness is one of the most widely applied metrics of human evaluation on dialogs (Lowe et al., 2017).",
"By learning to differentiate the perturbed dialog and the original one, we expect CMADE to learn a good dialog encoder which maps dialogs with similar dialog flow close to each other in the feature space.",
"Stage 1 only performs unsupervised learning and does not incorporate any supervision from human ratings.",
"To obtain better dialog feature representations for Stage 3, Stage 2 fine-tunes with supervision from the noisy self-reported user ratings.",
"We adopt a simple yet effective label smoothing, inspired by (Szegedy et al., 2016; Nie et al., 2019), using the representation learned in Stage",
"1. A key assumption in Stage 2 is that dialogs with similar dialog flow provide a similar user experience.",
"For each dialog x i , we find its K nearest neighbors in the feature space defined by .",
"We use the average self-reported ratings of the K nearest neighbors as a smoothed rating y si for x i .",
"To construct training dialog pairs, we randomly sample dialog pairs x i and x j and derive a pair-wise comparison label z si,j by comparing the smoothed rating y si and y sj : z si,j = 1 ( y si > y sj ) .",
"We discard the pairs with equal y si and y sj .",
"To improve the dialog feature representation, we fine-tune the model M ( , f ) on sampled dialog pairs along with the derived labels from comparing the smoothed scores { x i , x j , z si,j } .",
"We note that z si,j depends solely on the noisy self-reported ratings in the training set and does not depend on the expert annotations.",
"Theoretically, we could iterate between label smoothing and model fine-tuning since the fine-tuned model provides better dialog feature representation.",
"In practice, we find that one iteration is enough to reach good prediction performance.",
"Label smoothing has led to state-of-the-art models in image classification (Szegedy et al., 2016), language translation (Vaswani et al., 2017) and speech recognition (Chorowski and Jaitly, 2017).",
"Prior attempts in label smoothing (Szegedy et al., 2016; Vaswani et al., 2017; Chorowski and Jaitly, 2017; Muller et al., 2019) focus on categorical labels to prevent the network from becoming over-confident while we apply label smoothing on ordinal labels (i.e., Likert scores) to prevent the network from overfitting on noisy ordinal labels.",
"In Stage 2, noisy ratings still have effect in the smoothed ratings for other data points.",
"In Stage 3, we aim to identify and remove dialogs with noisy self-reported user ratings y i with data Shapley value technique (Ghorbani and Zou, 2019; Jia et al., 2019a,b).",
"Shapley value comes originally from cooperative game theory (Dubey, 1975).",
"In a cooperative game, there are n players D = { 1 , ..., n } and a utility function v : 2 [ n ] R assigns a reward to each of 2 n subsets of players: v ( S ) is the reward if the players in subset S D cooperate.",
"Shapley value defines a unique scheme to distribute the total gains generated by the coalition of all players v ( D ) with a set of appealing mathematical properties.",
"Shapley value has been applied to problems in various domains, ranging from economics (Gul, 1989) to machine learning (Cohen et al., 2005; Yona et al., 2019).",
"In our setting, given D train = { ( x i , y i ) } N train 1 , we view them as N train players.",
"We could also view the utility function v ( S ) as the performance on the development set.",
"The Shapley value for player i is defined as the average marginal contribution of { ( x i , y i ) } to all possible subsets that are formed by other users (Jia et al., 2019a): s i = 1 N (cid:88) S D train \\{ x i } 1 (cid:0) N 1 | S | (cid:1) [ v ( S { x i } ) v ( S )] As suggested by the definition of data Shapley, computing data Shapley value requires an exponentially large number of computations to enumerate O (2 N train ) possible subsets and train the model M on each subset, which is intractable.",
"Inspired by (Jia et al., 2019a), CMADE tackles this issue by reducing the deep model M to a k-nearest neighbors (KNN) model and then apply the closed-form solution of shapley value on KNN.",
"Using the feature extractor trained in Stage 1 and Stage 2, we fix and map all dialogs in the training data { x i } N train 1 to { ( x i ) } N train 1 .",
"We first define the utility function v ( S ) in a special case where the development set only contains one dialog pair ( x devp , x devq , z devp,q ) p,q I dev = { (1 , 2) } .",
"In our setting, the development set contains dialog pairs annotated by experts.",
"Given any nonempty subset S D train , we use the KNN Regressor to rate x devp and x devq .",
"To do this, we compute ( x devp ) and sort { x p } N train 1 based on their euclidean distance in the dialog feature space to x devp , yielding ( x ( p ) 1 , x ( p ) 2 , ..., x ( p ) | S | ) with x ( p ) 1 , ..., x ( p ) K as the top-K most similar dialogs to x devp .",
"Similarly, we get ( x ( q ) 1 , x ( q ) 2 , ..., x ( q ) | S | ) with x ( q ) 1 , ..., x ( q ) K as the top-K most similar dialogs to x devq .",
"Based on the self-reported user ratings in the training data, we use the KNN Regressor to rate x devp and x devq as follows: y devp = 1 K min { K, | S |} (cid:88) k =1 y ( p ) k (1) y devq = 1 K min { K, | S |} (cid:88) k =1 y ( q ) k (2) The model predicts z devp,q = 1 if y devp > y devq and vice versa.",
"To obtain a closed-form solution to calculate Shapley value, instead of defining the utility function as the accuracy of the pair-wise prediction, we define the utility function as follows: v ( S ) = y devp y devq , if z devp,q = 1 , y devq y devq , if z devp,q = 0 (3) Theorem 1 Consider the utility function in Equation (3).",
"Then the Shapley value of each training point s m can be decomposed into two terms s ( p ) m and s ( q ) m which depend on x devp and x devq respectively.",
"s ( p ) m and s ( q ) m can be calculated recursively as follows: s m = s ( p ) m s ( q ) m , if z devp,q = 1 , s ( q ) m s ( p ) m , if z devp,q = 0 s ( p ) ( p ) N = y ( p ) NN s ( q ) ( q ) N = y ( q ) NN s ( p ) m = s ( p ) ( p ) m +1 + y ( p ) m y ( p ) m +1 K min { K, m } m s ( q ) ( q ) m = s ( q ) ( q ) m +1 + y ( q ) m y ( q ) m +1 K min { K, m } m With Theorem 1, the Shapley value calculation could be finished in O ( N log N ) time.",
"The above result for a single point in the development set could be readily extended to the multiple-testpoint case.",
"In our experiment, with such optimization, the Shapley value calculation takes less than 5 seconds to finish.",
"Theorem 1 comes primarily from (Jia et al., 2019a,b) and we extends their results of vanilla KNN regressor (Jia et al., 2019a) to our pairwise testing setting.",
"By applying the Shapley technique to the data, we identify noisy training data points which contribute negatively to the performance and remove them from the training set.",
"Similar to Stage 2, to construct training dialog pairs, we randomly sample dialog pairs x i and x j from the cleaned training set and derive z i,j by comparing the self-reported rating y i and y j .",
"We then further fine tune the model from Stage",
"2. Theoretically, we could iterate between Stage 2 and Stage 3 multiple times while in practice one iteration is enough.",
"We use a similar factorization technique for pairwise ranking in LambdaRank (Burges et al., 2006) to speed up training.",
"For Stage 2 and 3, we have O ( N 2 ) possible dialog pairs, which leads to quadratically increasing training time.",
"Similar to LambdaRank (Burges et al., 2006), it is possible to calculate the exact gradient of O ( N 2 ) possible dialog pairs with O ( N ) forwards and back-propagations.",
"More specifically, we denote the possible input pairs during training at Stage 2 or Stage 3 as: D pairtrain = { ( x i , x j , z i,j ) } i,j I .",
"The total cost L for O ( N 2 ) possible dialog pairs is the sum of O ( N 2 ) cross-entropy costs: L i,j = CrossEntropy ( z i,j , z i,j ) L = (cid:88) ( i,j ) IL i,j Theorem 2 We can compute Lw k in O ( N ) by fac-tor it into a weighted sum of o i w k where the weight i R only depends on { o j } and { z i,j } .",
"Here o i = f ( ( x i )) R and o j = f ( ( x j )) R are the outputs of the two branches of the model.",
"Theorem 2 shows that instead of performing back-propagation for all possible pairs, we could first perform N forward passes to obtain { o j } and then calculate { i } .",
"Calculating { i } from { o j } in Equation 5.4 takes negligible time since this stage does not involve any neural network operation.",
"Finally, we calculate a weighted sum of O ( N ) back-propagation and update the model parameters.",
"Model Setup We fine tune the pre-trained BERT (Devlin et al., 2019) to learn the dialog feature extractor .",
"We partition the 403 expert annotated dialog pairs into a 200-pair development set and a 203-pair test set.",
"We set K = 50 for both the KNN label smoothing in Stage 2 and the KNN Shapley value calculation in Stage",
"3. Model Details The details of extending BERT to encode multi-turn dialogs are as follows.",
"Each dialog is represented as a sequence of tokens in the following input format: Starting with a special starting token [ CLS ] , we concatenate tokenized user and system utterances in chronological order with [ SEP ] as the separators for adjacent utterance.",
"In other words, we represent each dialog as a sequence: [ CLS ] , S 1 , 1 , S 1 , 2 , ... , [ SEP ] , U 1 , 1 , U 1 , 2 , ... , [ SEP ] , S 2 , 1 , S 2 , 2 , ... , [ SEP ] where S i,j and U i,j are the j th token of the system and user utterance in the i th turn.",
"Following BERT, we also add a learned embedding to every token indicating whether it comes from user utterances or system utterances.",
"Model Comparisons and Ablations We compare CMADE to its several ablations (Table 4) and evaluate the performance on the testing set, which is annotated by experts.",
"We also report the kappa No.",
"agreement (Cohen, 1968) (kappa and Standard Error SE ) between the predicted output and the expert annotations.",
"(1) BERT-Classification and (2) BERT-Regression fine tune the pre-trained BERT to perform a 5-class classification and regression respectively directly using the noisy self-reported ratings.",
"To test BERT-Classification on dialog pairs, we apply the DEX trick (Rothe et al., 2015) to get a floating-point number of predicted rating and thus get rid of the cases when the model predicts the dialog pairs as tie.",
"(3) BERT-Pairwise shares the same model architecture with CMADE.",
"It constructs dialog pairs for training by randomly sample dialog pairs x i and x j and derive z i,j by comparing the corresponding self-reported user rating y i and y j .",
"We discard the pairs with equal y i and y j .",
"(4) BERT-Pairwise+Dev augments (3) by adding the 200 expert annotated dialog pairs in the development into the training data.",
"We also compare the variants of CMADE which skips one or two of the three stages.",
"Results Our first takeaway is that vanilla classification or regression formulation might not be the best way to formulate the problem of learning a dialog evaluation model.",
"As shown in Table 4, pairwise architecture (BERT-Pairwise, 0.73) is better than classification (BERT-Classification, 0.53) or regression (BERT-Regression, 0.64) in this problem.",
"Similar to our observation, the research community in computer vision has long observed that both vanilla classification and regression formulation has drawbacks in age estimation (Rothe et al., 2015; Niu et al., 2016; Zhang et al., 2017).",
"Our second takeaway is that denoising algorithm that is more aggressive usually makes stronger assumptions on the quality of feature representa-Figure 2: Removing training data with low Shapley value improves the performance of the KNN regressor.",
"tions.",
"Therefore, it helps to create a denoising pipeline that starts with better feature representation learning and less aggressive denoising algorithm to learn better feature representation before applying the more aggressive denoising algorithms.",
"As shown in Table 4, our three-stage denoising pipeline CMADE (Acc. 0.892) significantly outperforms all baselines by a large margin.",
"Although (8) Stage 1 does not directly provide high accuracy (Acc. 0.620), the feature representation it learned is extremely important.",
"Without Stage 1, both (5) Stage 2 (Acc. 0.755) and (6) Stage 2 + Stage 3 (Acc. 0.763) perform worse.",
"Since the KNN label smoothing is performed on the feature space, we expect the smoothing performs worse without self-supervised dialog feature representation learning in Stage",
"1. However, they still work better than baseline (1) (2) (3) which are models that do not account for the noise in data.",
"This is because we use the pre-trained BERT to initialize our dialog encoder and thus is still able to provide some useful features for Stage",
"2. In addition, we observe that denoising with data Shapley in Stage 3 requires better dialog feature representation.",
"(7) Stage 3 (Acc. 0.714) performs even worse than BERT-Pairwise (0.730) without good representations to perform the Shapley denoising algorithm.",
"Skipping Stage 2 also hurts performance (Acc. 0.788).",
"However, it does not mean that Shapley denoising in Stage 3 is not powerful.",
"We observe a large performance gain in applying stage 3 after stage 1 and stage 2 (Acc. 0.837 v.s. 0.892).",
"Finally, we note that adding the expert annotated development set directly into the training data is much less efficient compared to using the development set for data Shapley to denoise.",
"BERT-Pairwise+Dev only reaches an accuracy of 0.749.",
"Additional analysis We also present an analysis to show how Shapley denoising works as shown in",
"Figure",
"2. We examine the Shapley value for each training datum in Stage",
"3. We first show an example dialog with a negative Shapley value in Table 5.",
"According to the Shapley value, we remove data points one by one starting from the least valuable to the most valuable.",
"Each time, after the point is removed, we create new KNN regressor models K = 1 , 5 , 25 , 50 , 100 on the remaining dialogs and labels and evaluate them on the test set with expert annotations.",
"We extract the features of the remaining dialogs using the dialog encoder tuned by Stage 1 and Stage",
"2. As shown in Figure 2, removing training data with low Shapley values increases the performance to a certain point before convergence for K of all choices.",
"We observe a similar trend when re-training a model on the remaining data.",
"In contrast, removing data randomly decreases the performance on the test set.",
"In addition, larger K has a better performance, which validates the denoising effect of KNN with large K .",
"The ultimate chatbot evaluation metric should be user-centric, as chatbots are there to provide human with an enjoyable experiences.",
"Previously Likert-score based self-reported rating is the de-facto standard for current dialog evaluation .",
"However, our analysis indicates that self-reported dialog ratings are skewed (J-shape), noisy and insensitive due to bias and variance among different users.",
"We propose a three-stage denoising pipeline CMADE to reduce self-reported ratings and, at the same time, build an automatic comparison-based automatic dialog quality predictor.",
"CMADE's results highly correlate with expert judgments on pair-wise dialog comparison ratings (89.2% agreement, 0.787 User: the whole harry potter series Sys: whoops i think i misheard you.",
"can you repeat the movie name please?",
"User: all the harry potter series Sys: ah, i don't know any movies by that name.",
"sorry if i sometimes confuse you saying something else with a movie name ... ...",
"Sys: oh, okay.",
"anyways, i really enjoyed watching black panther and wonder woman recently.",
"what's your favorite superhero movie?",
"User: super man Sys: what was that?",
"can you please say the movie name again?",
"User: stop i gotta go Table 5: An example dialog with noisy self-reported rating (Score-5) identified by data shapley.",
"We would like to sincerely thank ACL 2020 Chairs and Reviewers for their review efforts and helpful feedback.",
"We thank Yu Li for his insightful guidance and support in shaping this project.",
"We thank Boxin Wang for helpful discussions on data Shapley.",
"We would also like to extend our gratitude to Yanbang Wang, Youzhi Tian, Weiyan Shi and Michihiro Yasunaga for their valuable feedback and suggestions."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"objective",
"method",
"objective",
"result",
"method",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"method",
"other",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other"
] |
[
"Recently, parallel text generation has received widespread attention due to its success in generation efficiency.",
"Although many advanced techniques are proposed to improve its generation quality, they still need the help of an autoregressive model for training to overcome the one-to-many multi-modal phenomenon in the dataset, limiting their applications.",
"In this paper, we propose latent -GLAT, which employs the discrete latent variables to capture word categorical information and invoke an advanced curriculum learning technique, alleviating the multi-modality problem.",
"Experiment results show that our method outperforms strong baselines without the help of an autoregressive model, which further broadens the application scenarios of the parallel decoding paradigm.",
"1 Introduction Non-autoregressive Transformer (NAT, Gu et al., 2018) introduce a parallel decoding paradigm with higher decoding efficiency (> 10 ) than autoregressive models (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017).",
"Unlike autoregressive models, NAT models impose conditional independence assumptions in words to support parallel decoding of sentences during inference.",
"It attracts many researchers to explore NAT in machine translation (Gu et al., 2018; Lee et al., 2018; Kaiser et al., 2018) and text-to-speech tasks (Chen et al., 2019; Peng et al., 2020).",
"Amount of researchers devoted themselves to improve the NATs' inferior generation quality.",
"Such as modeling word inter-dependencies by curriculum learning (Guo et al., 2020a; Liu et al., 2020) or iterative refinements mechanism (Ghazvininejad * Shujian Huang is the corresponding author. Work is done while at ByteDance AI Lab. The implementation of latentGLAT will be released at https://github.com/baoy-nlp/Latent-GLAT . et al., 2019; Guo et al., 2020b), introducing latent variables to decompose target sentences and serve as the springboard for decoding (Shu et al., 2019; Ma et al., 2019; Bao et al., 2021), and introduce inductive bias for models' training (Wei et al., 2019; Li et al., 2019).",
"The most successful method is the glancing transformer (GLAT, Qian et al., 2021a), which trains the NAT model by sampling partial target words as inputs to predict the remaining target words, explicitly building dependencies between the observed and unobserved words.",
"Qian et al. (2021b) employ GLAT to achieve impressive results on the translation task of WMT21 1 , even outperforming many strong autoregressive translation systems in BLEU score (Papineni et al., 2002).",
"Although existing NAT models achieve competitive results compared to autoregressive models in translation tasks, it is not negligible that they still need the help of an autoregressive Transformer (AT, Vaswani et al., 2017) as a teacher for training, i.e., sequence-level knowledge distillation (Kim and Rush, 2016).",
"A well-recognized explanation is a multi-modality problem (Zhou et al., 2020; Sun and Yang, 2020): each input may have multiple valid outputs in datasets, which will prevent NAT models from learning to organize consistent outputs.",
"Training with the outputs of an AT can directly bypass the multi-modal phenomenon in the dataset, effectively improving the models' performances.",
"However, training NAT models by knowledge distillation are limited.",
"First, it needs to train an extra AT model, which inevitably enlarges the training cost.",
"Second, it is hard to promise that the teacher (or AT) model can be accurate enough in all text generation settings, which will become the bottleneck for its student NAT model.",
"Therefore, training a model from scratch without the help of an AT model is still an open and interesting problem.",
"the multi-modality problem following a divide-and-conquer spirit, introducing a small set of discrete latent variables to capture the target word categorical information and divide the origin goal into latent variables modeling and sentence reconstruc-tion.",
"First, the categorical information may have fewer multi-modality phenomena than the original words, thus can be learned directly without the help of knowledge distillation.",
"Second, the word categorical information is informativeness to the sentence reconstruction.",
"We can extend glancing training with these discrete latent variables for modeling the sentence, encouraging the model to build dependencies on word categorical information rather than words, which works more robustly.",
"Experiment results on WMT14, Quora, and DailyDialog datasets show that latentGLAT achieves remarkable improvements over several strong baselines, verifying the effectiveness of latentGLAT.",
"More impressively, latentGLAT even outperforms autoregressive models in Quora and DailyDialog datasets, further validating our motivation for removing knowledge distillation.",
"In-depth analyses indicate that the introduced discrete latent variables are helpful to alleviate the multi-modality problem and are necessary for performance improvement.",
"For a sequence-to-sequence task of predicting sequence Y = ( y 1 , y 2 , , y m ) given its input sequence X = ( x 1 , x 2 , , x n ) , the classical autoregressively factorization decomposes the p ( Y | X ) with a series of conditional probability:",
"<t 1 2 t 1",
"Although such factorization achieved great success in previous studies (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017), they predict each word 2 based on the prefix words, which may suffer from the issues of error accumulation and slow decoding during inference.",
"the above problems, Gu et al. (2018) firstly propose non-autoregressive Transformer (NAT), introduc-2",
"introduc-2 We use BPE segmentation in our experiments, and they are strictly tokens.",
"For clarity, we use words and tokens interchangeably in the paper.",
"ing a non-autoregressive factorization as: p NAT ( Y | X ) = m (cid:89) t =1 p ( y t | X ) , (2) where each word y t are modeled independently.",
"During inference, the NAT model can decode the word simultaneously by arg max y t p ( y t | X ) for each y t , remarkably improving the efficiency (15 speedups to an autoregressive Transformer).",
"However, the independence assumption may prevent the NAT model from leveraging the inherent word dependencies to organize consistent outputs.",
"Due to this,the efficiency improvements of NAT are at the cost of its quality, e.g., the performance degradation by more than 10.0 BLEU (Papineni et al., 2002) points in machine translation tasks (Gu et al., 2018).",
"Besides, recent studies (Zhou et al., 2020; Sun and Yang, 2020) point out that the multimodality phenomenon in the dataset aggravates the challenge of NAT models.",
"Glancing Transformer.",
"To mitigate the issue of missing word dependency in NAT models, Qian et al. (2021a) propose Glancing Transformer (GLAT), introducing glancing training (GLT) and sampling partial target tokens for training NAT: LGLAT = log p ( Y obs | Y obs , X ) = (cid:88) y i Y obs log p ( y i | Y obs , X ) , (3) where Y obs is the partial target tokens, and Y obs is its complements set.",
"It progressively decreases the sampling ratio and obtains better performances in machine translation tasks.",
"Nevertheless, we find that GLAT in experiments still has a multi-modality problem 3 : First, its sampling rate cannot be decreased to zero during training, which exists the issue of exposure bias .",
"Second, it still heavily relies on a teacher model for further improvements (Qian et al., 2021a).",
"Latent Transformer.",
"To alleviate the multimodality problem, Kaiser et al. (2018); Shu et al. (2019); Ma et al. (2019); Bao et al. (2021) propose Latent Transformer (LT), introducing latent variables z for NAT predictions as: p LT ( Y | X ) = (cid:90) z p ( z | X ) p ( Y | z , X ) .",
"where p LT ( Y | X ) is always trained by variational inference (Ma et al., 2019) or discretization techniques (Kaiser et al., 2018).",
"Such latent variables are decomposed from the target sentence, which is informative to determine the mode of the sentence and alleviates the multi-modality problems.",
"Although Latent Transformer models improve performance in terms of BLEU score, their used autoregressive predictor (Kaiser et al., 2018; Bao et al., 2021) or deep iterative transformation (Shu et al., 2019; Ma et al., 2019) for predicting latent variables unavoidable sacrifice the overall decoding efficiency.",
"Besides, they do not explicitly build the interdependencies among the outputs.",
"In this section, we present latentGLAT.",
"latent-GLAT follows Latent Transformer models (Kaiser et al., 2018; Bao et al., 2021) but introduces glancing training (Qian et al., 2021a) with the discrete latent variables.",
"Our intuitions are as follows: First, compared to the words, the introduced discrete latent variables may have fewer modes than words and be informative to determine the modes of the sentences.",
"In such a case, we can directly learn the discrete latent variables by the Glancing Transformer (Qian et al., 2021a), keeping competitive inference efficiency.",
"More importantly, we can employ the latent variables to invoke glancing training for modeling the target sentences, which is informative enough to reduce the multi-modality problem of original sentences.",
"Besides, glancing at latent variables also works robustly due we can obtain the latent variables during inference.",
"In this part, we state the structure of latentGLAT, which introduces a small set of discrete latent variables for a NAT model, basically following Kaiser et al. (2018); Roy et al. (2018); Bao et al. (2021).",
"Let K be the size of the discrete latent space and let [ K ] denote the set { 1 , 2 , , K } .",
"For each target sentence Y = ( y 1 , y 2 , , y m ) , we use a same-length latent variable sequence for modeling it as: p ( Y | X ) = (cid:88) z p ( z | X ) m (cid:89) t =1 p ( y t | z , X ) , (5) where z = ( z 1 , z 2 , , z m ) and z i [ K ] , is the model parameters.",
"Discretization.",
"For discretizing target sentences to latent variables, we use vector quantization (Roy et al., 2018), which works by dividing a large set of origin vector representations into small groups.",
"We assign each token y i with a group j [ K ] that has the nearest distance to its representation: z i = arg min j [ K ] || repr( y i ) q j || 2 , (6) where q RK d model is the maintained representations and d model is its dimension.",
"We use the embedding as repr( y i ) , refer to Bao et al. (2021).",
"Finally, the model is trained to minimize LLT = LLP + LWP , (7) where LWP and LLP are the prediction loss for words Y and latent variables z , respectively.",
"The maintained representations q are updated with an exponential moving average over a mini-batch of target tokens { y 1 , , y i , } : c j c j + (1 ) (cid:88) i 1 [ z i = j ] , q j q j + (1 ) (cid:88) i 1 [ z i = j ] repr( y i ) c j (8) where c j is assigned count for group j , and we set decay parameter = 0 .",
"999 in our experiments.",
"Encoder), a latent predictor FLP (NAT Predictor), and a decoder FDEC (Mix. Decoder).",
"We parameterize them with the multi-head attention-based encoder or decoder, similar to Transformer (Vaswani et al., 2017).",
"Their functions can be formalized as: ( e 1 , e 2 , , e n ) FENC ( x 1 , x 2 , , x n ) , ( h 1 , h 2 , , h m ) softcopy( e 1: n ) , p ( z | X ) FLP ( h 1: m , e 1: n ) , p ( Y | z , X ) FDEC ( z 1: m , h 1: m , e 1: n ) , where we use an extra module FLEN to predict the target length m and initialize the decoder inputs H = ( h 1 , h 2 , , h m ) with the softcopy (Wei et al., 2019) mechanism.",
"As shown in Figure 2b, we eventually employ words to invoke glancing training for minimizing LWP , namely we optimize the FDEC by minimizing LGLTWP = log p ( Y obs | z obs , Y obs , X ) , (11) where Y obs and z obs are the sampled target tokens and discrete latent variables.",
"We find Eqn.",
"(10) works robustly in experiments and analyze it in Section ( 4.3).",
"The small number ( K < 128 ) of discrete latent variables can capture high-level categorical information of the target words, supporting better learning design for parallel sequence decoding.",
"Our first insight is that we can learn to non-autoregressively predict the discretized latent variables directly without the help of distillation.",
"Specifically, we parameterize the FLP in a non-autoregressive fashion and use a glancing training technique (GLT, Qian et al., 2021a) for optimizing it, as shown in Figure 2a: LGLTLP = log p ( z obs | z obs , X ) (9) where z obs is uniformly sampled from z , refer to Qian et al. (2021a).",
"We provide more training details of latentGLAT in Appendix B. Our next insight is modeling the sentence based on the sampled latent variables z obs rather than z , namely, glancing at z obs for optimizing FDEC : LWP = log p ( Y | z obs , X ) .",
"Overall Training Loss.",
"Our full-fledged loss includes latent variable prediction, sentence recon-struction, and length prediction losses: L = LGLTWP + LGLTLP + LLEN , (12) where = 0 .",
"length, latent variables, and sentence in turn.",
"For the target length, latentGLAT first predicts the target length m with the length predictor FLEN .",
"To avoid the length prediction errors during inference, latentGLAT expands the length m to a ranges (we use [ m 3 , , m + 2] , total six candidates in our experiments).",
"Then, latentGLAT predicts the latent variables z with arg max z p ( z | X ) and sentence Y with arg max Y p ( Y | z , X ) for each candidate.",
"Similar to Ma et al. (2019), latentGLAT also ranks the candidates by itself ( self-reranking ) and chooses the highest score output with: Y = arg max Y p ( Y | z , X ) | Y | (13) where is the length penalty ratio to avoid the length bias, and | Y | denotes the length of Y .",
"We conduct experiments on several generation tasks, including machine translation, paraphrase generation, and dialog generation.",
"Dataset.",
"We chose the most popular benchmarks for each task: Machine Translation (MT) : We follow previous practices in NAT models and use the WMT14 English (EN) German (DE) corpus (4.5M sentence pairs) and the IWSLT14 German (DE) English (EN) corpus (160K sentence pairs) to validate our proposed model.",
"We obtain the datasets following the instruction open-sourced in fairseq 4 .",
"In detail, we first tokenize the datasets with Moses script.",
"Then, we use 37,000 and 10,000 operations to split the words into byte-pair encodings (BPE, Sennrich et al., 2016) in WMT14 and IWSLT14 datasets, respectively.",
"We also share subword embeddings between the source and target language for each dataset.",
"Paraphrase Generation (PG) : We use the Quora 5 dataset to evaluate the paraphrase generation task.",
"The Quora dataset contains around 135K labeled paraphrases pairs.",
"Following the standard dataset split, we sample 100K sentence pairs from the labeled paraphrases as training data and hold out 30K pairs for testing, the remaining about 5K pairs for validation.",
"Like the MT tasks, we tokenize the corpus with Moses scripts and split the words into BPE units with total 32K operations.",
"Dialog Generation (DG) : We conduct the dialog generation experiments on the DailyDialog dataset (Li et al., 2017).",
"We obtain the processed DailyDialog dataset from Bao et al. (2020) 6 .",
"The training set contains 87,170 sentence pairs (11,118 dialogues).",
"The validation and testing set in the dataset contain 8069 pairs (1000 dialogues) and 7740 pairs (1000 dialogues), respectively.",
"Note that these tasks emphasize different aspects.",
"The task of MT aims to transfer bilingual sentences with semantically invariant conditions.",
"The PG task differs from machine translation and works on 4 https://github.com/pytorch/fairseq 5 https://www.kaggle.com/c/ quora-question-pairs/data 6 https://github.com/gmftbyGMFTBY/ MultiTurnDialogZoo mode transformation in the same language, whose goal is to synthesize a sentence different from the original input but conveys the same meaning.",
"The DG task is most challenging due to the complex generation goal.",
"Implementations.",
"We compare latentGLAT with Transformer (Vaswani et al., 2017), NAT (Gu et al., 2018), and GLAT (Qian et al., 2021a) models.",
"We implement them based on the open-source framework fairseq (Ott et al., 2019).",
"For machine translation tasks, we use the base setting ( d model = 512 , d hidden = 2048 , dropout = 0 . 1 , n head = 8 , and n layer = 6 ) of Transformer (Vaswani et al., 2017) for WMT14 dataset and a smaller setting ( d model = 512 , d hidden = 1024 , dropout = 0 . 3 , n head = 4 , and n layer = 6 ) for IWSLT14 dataset.",
"The number of layers in latentGLAT decoder and latent predictor are both set to 4 in experiments.",
"We use inverse square root learning rate scheduling for WMT14 and a linear annealing learning rate from 3 .",
"0 10 4 to 1 .",
"0 10 5 in 250K steps for IWSLT14.",
"The models are optimized with Adam (Kingma and Ba, 2015) optimizer ( 1 = 0 . 9 , 2 = 0 . 999 ) in 300K steps for WMT14 and 250K steps for IWSLT14.",
"As for the ratio that used in glancing sampling, we linear anneal the ratio from 0 .",
"5 to 0 .",
"3 in whole training steps.",
"The mini-batch in each step consists of 2K tokens for IWSLT14 and 64K tokens for WMT14.",
"Since the scale of the Quora and DailyDialog datasets are close to the IWSLT14, we keep the same setting to the IWSLT14, such as the Adam, learning rate (linear annealing from 3 . 0 10 4 to 1 . 0 10 5 ), and batch size (2K tokens).",
"Evaluation.",
"To validate the effectiveness of our proposed method, we evaluate it in terms of quality and efficiency.",
"We use tokenized and cased BLEU scores (Papineni et al., 2002) 7 to evaluate the generation quality of MT and PG tasks.",
"For dialog generation, we also include BLEU-1 and BLEU-2 scores for analysis.",
"Following the common practices (Gu et al., 2018; Qian et al., 2021a), we measure the decoding latency of each model by decoding sentence by sentence and compute the speedup compared with the autoregressive Transformer (AT) model to reflect its decoding efficiency.",
"We highlight the best NAT result.",
"We can see from Table 1 that our latentGLAT almost outperforms all the NAT baselines (NAT and GLAT) in generation quality on all tasks while keeping a competitive decoding speedup to the autoregressive counterpart.",
"Machine Translation.",
"As seen, without the help of an AT model for training, the vanilla NAT and advanced GLAT model only obtain inferior generation quality.",
"In contrast, latentGLAT achieves competitive generation quality in machine translation tasks, indicating that the introduced latent variables effectively reduce the multimodality issue and support glancing training well.",
"It narrows the performance gap between non-autoregressive decoding and autoregressive decoding from 11.46 (GLAT vs. AT) to 2.34 ( latent-GLAT vs. AT) BLEU points on WMT14 EN DE task while keeping a high-speed decoding efficiency.",
"Paraphrasing.",
"Unlike the translation task, the performance gap between non-autoregressive and autoregressive decoding on the paraphrase generation task is minor (NAT vs. AT, 3 . 32 BLEU points, GLAT vs. AT, 0 . 96 BLEU points ).",
"Nevertheless, introducing discrete latent variables still is helpful to obtain a better performance.",
"latent-GLAT realizes a non-autoregressive model with better performance than the autoregressive model on Quora ( latentGLAT vs. AT, +1 . 14 points).",
"Dialog Generation.",
"We can see a different trend on the DailyDialog dataset an AT model performs poorly than NAT models.",
"Both GLAT and latentGLAT outperform the AT model in BLEU-1, BLEU-2, and BLEU scores, indicating that these models recall more reference tokens and organize the tokens well.",
"We conjecture that the weak and indirect association between the inputs and outputs of the dialogue Models WMT14 IWSLT14 Speedups EN DE DE EN DE EN CMLM 1 10.88 -CMLM 4 22.06 - 9.79 CMLM 10 24.65 - 3.77 LevT 2 . 05 24.43 -2.93 LV-NAR 11.80 -22.30 SynST 20.74 25.50 23.82 4.86 Flowseq 20.85 25.40 1.10 CNAT 21.30 25.73 29.81 10.37 AT 27.17 31.53 34.29 1.00 NAT 10.78 15.19 17.77 15.29 GLAT 16.71 24.78 29.07 15.29 latentGLAT 24.71 29.16 32.31 11.31 Table 2: BLEU scores and speedups of different models trained with raw datasets on machine translation tasks.",
"results in this unusual phenomenon.",
"Specifically, the weak connection may encourage the AT model to predict the tokens by paying more attention to their history outputs, which degenerate to a target-side language model.",
"In contrast, the NAT models do not have this fast track, pushing them to pay more attention to the inputs and recall more target tokens.",
"We further find that there are so-called safe response (Li et al., 2016) in AT's outputs, which verify our conjecture.",
"More Comparisons.",
"we further compare the advanced NAT models that builds upon latent variables or iterative refinement in machine translation tasks: NATs w/ latent variables: LV-NAR (Shu et al., 2019), SynST (Akoury et al., 2019), Flowseq (Ma et al., 2019), and CNAT (Bao et al., 2021).",
"Iterative NATs: CMLM (Ghazvininejad et al., 2019) and LevT (Gu et al., 2019).",
"Table 2 shows that introducing latent variables 8403 Figure 3: BLEU scores and their relative decoding speedups of different models on WMT14 EN DE test set.",
"(LV-NAR, Flowseq, and CNAT) or decoding with multiple iterations (CMLM and LevT) both improve non-autoregressive decoding in translation quality.",
"However, iterative refinements or deep transformations always sacrifice decoding efficiency.",
"In contrast, the proposed latentGLAT outperforms all NAT models with a relatively low cost, keeping a competitive speedup over autoregressive Transformer (AT).",
"Specifically, latentGLAT with one-pass decoding narrows the performance gap to the AT from 5.87 BLEU points to 2.34 BLEU points on the WMT14 EN DE test set.",
"Decoding efficiency.",
"We can see there is a tradeoff between the translation quality and decoding efficiency in Table 2.",
"We thus present the scatter plot of different models in Figure 3, showing the trend of translation quality and decoding efficiency.",
"As seen, latentGLAT is located on the top-right of the baselines.",
"It outperforms the baselines in the BLEU score if decoding speedup is fixed and in decoding speedup if the BLEU score is fixed.",
"We now turn to verify our intuition that latent-GLAT can alleviate the multi-modality problem.",
"latentGLAT largely alleviates the sentence-level multi-modal problem.",
"Previous researches (Gu et al., 2018; Ma et al., 2019; Qian et al., 2021a; Bao et al., 2021) always utilize a Transformer model as a teacher for training NAT models, namely sequence-level knowledge distillation (Kim and Rush, 2016), which can Methods WMT14 IWSLT14 Avg EN DE DE EN DE EN NAT 10.78 15.19 17.77 +6.58 w/ KD 17.69 22.02 23.78 GLAT 16.71 24.78 29.07 +5.19 w/ KD 25.21 29.84 31.07 Flowseq 20.85 25.40 24.75 +2.87 w/ KD 23.72 28.39 27.55 CNAT 21.30 25.73 29.81 +3.08 w/ KD 25.56 29.36 31.15 latentGLAT 24.71 29.16 32.31 +0.95 w/ KD 26.64 29.93 32.47 Table 3: BLEU scores of NAT models trained with (or without) knowledge distillation (KD) on translation tasks.",
"directly reduces the sentence-level multi-modal phenomenon in datasets.",
"Therefore, we use the average gains from the knowledge distillation to reflect the ability of the NAT models to overcome this issue.",
"As seen in Table 3, the pure NAT models heavily rely on knowledge distillation.",
"By introducing the target information with the latent variables (Flowseq and CNAT) or sampled tokens (GLAT), the NAT models improve its' ability to overcome the multi-modality issue.",
"Our proposed latent-GLAT well combines the above two techniques.",
"It obtains only 0.95 BLEU points average gains and validates our motivation.",
"raw sentences.",
"To validate our intuition that the introduced latent variables are easier to predict than tokens, we refer to Zhou et al. (2020) to compute the complexity metrics on each dataset according to alignment relations.",
"Specifically, we use the fast_align 8 toolkit to align source input X and target outputs Y or discretized latent variable se-8 https://github.com/clab/fast_align 8404 L# Introduce z Glancing Training BLEU ( ) with z with Y 1 12.60 2 (cid:88) 13.43 (+0.83) 3 (cid:88) 17.11 (+4.51) 4 (cid:88) (cid:88) 18.88 (+6.20) 5 (cid:88) (cid:88) 22.35 (+9.75) 6 (cid:88) (cid:88) (cid:88) 23.64 (+11.04) Table 5: BLEU scores of different latentGLAT con-figurations on the WMT14 EN DE valid set.",
"quences z .",
"Then, we compute the token-level complexity CTOK ( d ) and the sentence-level complexity CSEN ( d ) according to Zhou et al. (2020).",
"These metrics can trivially understand as the number of valid candidates for each input.",
"As shown in Table 4, the latent variables have the lowest complexity in both token-level complexity and sentence-level complexity.",
"In other words, predicting the latent variable sequences is effortless than predicting others, which is consistent with our intuition.",
"Although we obtain a lower complexity dataset by filtering the datasets with an autoregressive model (AT outputs versus Raw outputs), they may introduce model error and need extra training for AT model.",
"In contrast, the discrete latent variables are simple and informative enough to serve as a springboard for modeling target sentences.",
"performance with a large margin.",
"We can see in Table 5 that introducing latent variables both obtain performance gains to their counterpart (L#2 vs. L#1, +0 . 83 points, and L#4 vs. L#3, +1 . 69 points).",
"As expected, the gains are largely improved while adopting the glancing training with discrete latent variables (L#5 vs. L#1, +9 . 75 points), which already outperforms glancing training with the reference token (L#5 vs. L#4, +3 . 55 points).",
"Finally, we jointly perform glancing training with the reference tokens and discrete latent variables, achieving the best result (L#6 vs. L#1, +11 . 04 points).",
"Effects of K and .",
"As shown in Figure 4 and Table 6, we search the hyper-parameter of latent-GLAT that the number of discrete latent variables and the length penalty ratio according to the validation performance.",
"We notice that using more latent codes causes performance degradation during inference, in which the latent variables may degenerate to tokens and contains more prediction error during inference.",
"The latentGLAT implemented with 64 latent variables and = 1 .",
"1 obtains the best result on WMT14 EN DE valid set.",
"Gu et al. (2018) first propose a non-autoregressive Transformer (NAT) model for neural machine translation (NMT) and begin to explore parallel decoding.",
"It abandons explicitly modeling word interdependencies to decode the tokens in parallel, sig-nificantly improving the inference speed.",
"However, its translation quality is inferior to the Transformer (Vaswani et al., 2017).",
"To alleviate this performance degradation, many researchers work to enhance word dependency modeling, including imitation learning (Wei et al., 2019; Li et al., 2019), curriculum learning (Guo et al., 2020a; Liu et al., 2020), iterative refinements (Lee et al., 2018; Ghazvininejad et al., 2019; Gu et al., 2019; Guo et al., 2020b; Huang et al., 2022), and a simplified autoregressive process (Sun et al., 2019).",
"The most representative method is the glancing transformer model (Qian et al., 2021a), which adaptively and progressively samples partial tokens as inputs and predicts the remaining tokens, effectively establishing the dependencies between the sampled tokens and the remaining tokens.",
"However, these models still rely on a teacher 8405 for training, which cannot directly learn the raw dataset that contains one-to-many multi-modality phenomenon.",
"Introducing latent variables (Bao et al., 2019, 2021) to organize the target sentence is also a helpful route.",
"Among them, our method is close to Kaiser et al. (2018); Shu et al. (2019); Ma et al. (2019); Akoury et al. (2019); Bao et al. (2021).",
"These methods decompose the latent variables (hints) from the target sentence and divide the origin goal into two parts: modeling latent variables and modeling the target sentences based on latent variables.",
"It implicitly overcomes the multimodality phenomenon of target sentences because the latent variables can largely determine the mode of the sentence.",
"However, these methods always model the latent variables with an autoregressive predictor, which naturally sacrifices the decoding efficiency.",
"Unlike them, our approach models the discrete latent variables in a non-autoregressive fashion and extends glancing training with the discrete latent variables.",
"As a result, latentGLAT accomplishes a competitive performance both in decoding efficiency and quality.",
"We propose latentGLAT, which can be directly trained without the help of knowledge distillation.",
"Specifically, we employ discrete latent variables to capture the word categorical information and divide the original goal into the latent variables modeling and word prediction tasks.",
"Then, we learn each task with the glancing training and encourage the model to build dependencies on the latent variables, which have fewer modes than the words and are also informative for modeling the target sentences.",
"Experiments results on machine translation, paraphrase generation, and dialogue generation tasks validate the effectiveness of our latentGLAT.",
"We would like to thank the anonymous reviewers for their insightful comments.",
"Shujian Huang is the corresponding author.",
"This work is supported by National Science Foundation of China (No. U1836221, 6217020152)."
] |
[
"abstain",
"abstain",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"objective",
"abstain",
"other",
"other",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"other",
"objective",
"other",
"objective",
"objective",
"method",
"result",
"other",
"other",
"other"
] |
[
"Studies in Social Sciences have revealed that when people evaluate someone else, their evaluations often reflect their biases.",
"As a result, rater bias may introduce highly subjective factors that make their evaluations inaccurate.",
"This may affect automated essay scoring models in many ways, as these models are typically designed to model (potentially biased) essay raters.",
"While there is sizeable literature on rater effects in general settings, it remains unknown how rater bias affects automated essay scoring.",
"To this end, we present a new annotated corpus containing essays and their respective scores.",
"Different from existing corpora, our corpus also contains comments provided by the raters in order to ground their scores.",
"We present features to quantify rater bias based on their comments, and we found that rater bias plays an important role in automated essay scoring.",
"We investigated the extent to which rater bias affects models based on hand-crafted features.",
"Finally, we propose to rectify the training set by removing essays associated with potentially biased scores while learning the scoring model.",
"Automated Essay Scoring (AES) aims at developing models that can grade essays automatically or with reduced involvement of human raters (Page, 1967).",
"AES systems may rely not only on grammars, but also on more complex features such as semantics, discourse and pragmatics (Davis and Veloso, 2016; Song et al., 2014; Farra et al., 2015; Somasundaran et al., 2014).",
"Thus, a prominent approach to AES is to learn scoring models from previously graded samples, by modeling the scoring process of human raters.",
"When given the same set of essays to evaluate and enough graded samples, AES systems tend to achieve high agreement levels with trained human raters (Taghipour and Ng, 2016).",
"While research in AES has focused on designing scoring models that maximize the agreement with human raters(Chen and He, 2013; Alikan-iotis et al., 2016), there is a lack of discussion on how biased are human ratings.",
"Despite making judgments on a common dimension, raters may be influenced by their attitudes, their cultural background, and their political and economic views (Guerra et al., 2011).",
"Since AES models are designed to learn by analyzing human-graded essays, AES models could inherit rating biases present in the scores from human raters, and this may result in systematic errors.",
"Thus, our objective in this paper is to examine the extent to which rater bias affects the effectiveness of state-of-the-art AES models.",
"A deeper understanding of such factors may help mitigating the effects of rater bias, enabling AES models to achieve greater objectivity.",
"In order to study the effects of rater bias in essay scoring, we created an annotated corpus containing essays written by high school students as part of a standardized Brazilian national exam.",
"Our corpus contains a number of essays, written in Portuguese, along with their respective scores.",
"Further, raters must also provide a comment for each essay in order to ground their scores.",
"As in (Re-casens et al., 2013) we built subjectivity and sentiment lexicons that serve as features to represent the comments, that is, rater comments are represented according to the subjectivity distribution as given by specific subjectivity cues in our lexicons.",
"We present empirical evidence suggesting that the subjectivity distribution within rater comment is a proxy for the score that is given to the essay.",
"More specifically, very low (or very high) scores are associated with essays for which rater comments showed a very particular subjectivity distribution.",
"We also investigated the relationship be-229 tween subjectivity distribution and the misalignment between human raters and AES models.",
"Interestingly, the subjectivity distribution becomes very characteristic as the misalignment increases.",
"Our main contributions are three-fold: We built subjectivity lexicons for the Portuguese language.",
"These lexicons include words and phrases associated with different subjectivity dimensions sentiments, factive verbs, entailments, intensifiers and hedges.",
"We identify biased language within rater comments by calculating the word mover's distance (Kusner et al., 2015) between comments and the lexicons.",
"This approach ben-efits from large unsupervised corpora, that can be used to learn effective word embeddings (Mikolov et al., 2013).",
"By identifying biased language, we observed that biases can work to inflate essay scores or to deflate them.",
"We employ a set of linguistic features in order to learn different AES models, and we evaluate the effects of biased ratings in the efficacy of these models.",
"In summary, biased ratings affect AES models in different ways, but in general the misalignment between human rater and the AES model is more acute when the rater shows biased language in their comments.",
"We propose simple ways of preventing and reducing the negative effects of biased ratings while learning AES models.",
"Results in a controlled experimental setting revealed that detecting and removing biased ratings from the training set lead to significant improvements in automated essay scoring.",
"In the remainder of this paper, Section 2 discusses related work on automated essay scoring.",
"Section 3 describes the features used for learning AES models, as well as the features used for identifying biased language in rater comments.",
"Further, our debiasing approach is also discussed in Section",
"3. Section 4 describes the data, the setup and the results of our empirical evaluation.",
"Finally, Section 5 provides our conclusions.",
"Research in cognitive science, psychology and other social studies offer a great amount of work",
"on (conscious and unconscious) biases and their effects on a variety of human activities (Kaheman and Tversky, 1972; Tversky and Kaheman, 1974).",
"Biases can create situations that lead us to make decisions that project our experiences and values onto others (Baron, 2007; Ariely, 2008).",
"While there is sizeable literature on rater effects in general settings (Myford and Wolfe, 2003), it remains unknown how biased ratings affect automated essay scoring models.",
"Rather, works on automated essay scoring are mainly focused on designing AES models by maximizing the agreement with human raters, despite the assertiveness of the ratings.",
"Typically, AES systems are built on the basis of predefined linguistic features that are then given to a machine learning algorithm (Amorim and Veloso, 2017).",
"Works that fall into this approach include (Srihari et al., 2008, 2007; Cummins et al., 2016; McNamaraa et al., 2015).",
"Further, authors in (Dong and Zhang, 2016) presented an empirical analysis of features typically used for learning AES models.",
"Authors in (Crossley et al., 2015) studied a broader category of features that can also be used to build AES models.",
"There are also more recent approaches for learning AES models that do not assume a set of predefined features.",
"These approaches are based on deep architectures, and include (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Riordan et al., 2017; Dong et al., 2017).",
"Finally, there also models based on domain adaptation (Phandi et al., 2015) and unsupervised learning (Chen et al., 2010).",
"Few works have investigated the subjective na-ture of essay scoring.",
"An interesting exception is (Allen et al., 2015), in which the authors investigated the misalignment between students' and teachers' ratings of essay.",
"Results revealed that students who were less accurate in their self-assessments produced essays that were more causal, contained less meaningful words, and had less argument overlap between sentences.",
"The work in this paper builds upon prior work on building subjectivity lexicons (Klebanov et al., 2012) and subjectivity detection (Recasens et al., 2013), but in our case applied to score agreement.",
"In this respect, our work is more comparable to (Klebanov and Beigman, 2009; Beigman and Klebanov, 2009), where authors discussed and investigated the problem of learning in the presence of biased annotators.",
"Other works that are also 230 close to ours include (Farra et al., 2015; Somasundaran et al., 2016; Song et al., 2014), in which the authors studied the problem of scoring persuasive and argumentative essays.",
"Our aim in this work is to learn AES models that are less prone to the effects of biased ratings, that is, models that are able to perform highly objective and impartial judgements.",
"Thus, we start this section by proposing features that are useful for building AES models.",
"Then, we propose another set of features that are useful for identifying biased ratings based on subjectivity cues.",
"Finally, we propose an approach to remove biased ratings from the training set, thus learning more objective AES models.",
"As most existing AES systems, our models are built on the basis of predefined features (e.g. number of words, average word length, and number of spelling errors) that are given to a machine learning algorithm.",
"The features used to build our AES models are discussed and evaluated in (Amorim and Veloso, 2017).",
"They may fall into two broad categories: Domain features: These are simple linguistic features, including the number of first-person pronouns, demonstrative pronouns and verbs.",
"Features also include the number of pronouns and verbs normalized by the number of tokens in the corresponding sentence.",
"General features: Most of the general features are based on (Attali and Burstein, 2006).",
"However, due to lack of tools for processing the Portuguese language, we implemented the following features, which are sub-divided as follows: Grammar and style: Features include the number of grammar errors and misspellings.",
"These numbers are also normalized by the number of tokens in the corresponding sentence.",
"In order to evaluate style, we designed features based on the style rules suggested in (Martins, 2000).",
"Features include the number of style errors and the number of style of errors per sentence.",
"Organization and development: Features include the number of discourse markers from the Portuguese grammar, and the number of discourse markers per sentence.",
"Discourse markers are linguistic units that establish connections between sentences to build coherent and knit discourse.",
"Lexical complexity: Features include the Portuguese version for the Flesh score (Martins et al., 1996), the average word length (i.e., the number of sylla-bles), the number of tokens in an essay, and the number of different words in an essay.",
"Prompt-specific vocabulary usage: Features include different distances between prompt and essay (i.e., cosine distance).",
"In this case, both the prompt and the essay are treated as frequency vectors of words.",
"We assume a scenario in which essay raters must ground the provided scores with specific comments.",
"We also assume that we can identify biased ratings by detecting comments with biased language.",
"In order to detect biased language, we developed subjectivity lexicons for the Portuguese language.",
"Specifically, a linguist built a list of Portuguese lexicons based on the analysis of expressions that seem to express some subjectivity of the human evaluator.",
"Our subjectivity lexicons are categorized into the following groups: Argumentation: This lexicon includes markers of argumentative discourse.",
"Argumentative markers include lexical expressions and connectives, such as: even ( ate ), by the way ( alias ), as a consequence ( como con-sequencia ), or else ( ou entao ), as if ( como se ), rather than ( em vez de ), some-how ( de certa forma ), despite ( apesar de ), among others.",
"Presupposition: This lexicon includes markers that suggest the rater assumes something is true.",
"Some examples of such markers include: nowadays ( hoje em dia ), to keep on doing ( continuar a ), and factive verbs.",
"and some type of verbs.",
"Sentiment: This lexicon also includes markers that indicate a state of mind or a sentiment of the rater while evaluating the essay.",
"Some examples of such markers include: with re-gret ( infelizmente ), with pleasure ( feliz-mente ), and it is preferable ( preferencial-mente ).",
"Valuation: This lexicon assigns a value to facts.",
"Usually, adjectives are employed as valuation, but as adjectives are context dependent we use only in this class the markers related to intensification, such as: absolutely ( ab-solutamente ), highly ( altamente ), and ap-proximately ( aproximadamente ).",
"Bias is generally defined as a deviation from a norm.",
"If the norm is unknown to us, then bias is hard to identify.",
"Thus, our approach for debiasing the training set starts by finding the norm (in terms of the subjectivity within rater comments) for each score value.",
"Intuitively, the amount of subjectivity within a comment should be similar to the amount of subjectivity within another comment, given that the scores associated with the corresponding essays are close to each other.",
"So, we should not expect to find essays having discrepant scores, but for which the corresponding comments show a similar amount of subjectivity.",
"Our debiasing approach is divided into three steps:",
"1. Rater comments are represented according to the amount of subjectivity cues.",
"In order to represent a comment, we calculate the distance between it and each of the five subjectivity lexicons.",
"More specifically, we learn word embeddings (Mikolov et al., 2013) for the Portuguese language, and then we employed the Word Mover's Distance function (Kusner et al., 2015) between a comment and the five subjectivity lexicons.",
"As a result, each comment is finally represented by a five-dimensional subjectivity vector, where each dimension corresponds to the amount of a specific type of subjectivity.",
"This results in a subjectivity space, where comments are placed according to their amount of subjectivity.",
"2. We group subjectivity vectors according to the score misalignment associated with the corresponding essay.",
"Then, we calculate centroids for each group in order to find the prototypical subjectivity vector for each group (or misalignment level).",
"3. The distance to the prototypical subjectivity vector is used as a measure of deviation from the norm.",
"Specifically, we sort essays according to the distance between the subjectivity vector and the corresponding centroid.",
"Then, we define a number of essays to be removed from the training set.",
"The relative number of essays to be removed from the training set is controlled by hyper-parameter .",
"In this section, we present the data we used to learn and evaluate different AES models.",
"Then, we discuss our evaluation procedure and report the results obtained with our debiasing approach.",
"In particular, our experiments aim to answer the following research questions: RQ1: How scores are distributed across the essays?",
"Our corpus is composed of essays ( n = 1 , 840 ) that were written by high-school students as part of a standardized Brazilian national exam.",
"Each essay is evaluated according to the following five objective aspects: Formal language: Mastering of the formal Portuguese language.",
"of essay prompt and application of concepts from different knowledge fields, to develop the theme in an argumentative dissertation format.",
"The final score is given as the sum of the scores associated with each aspect.",
"Raters are supposed to perform impartial and objective evaluations, and they must enter specific comments in order to ground their scores.",
"Also, each essay was assessed by one rater.",
"Bias-free ratings: We also separate a number of essays ( n = 50 ) which received similar scores by three expert raters who were directly instructed to perform impartial, objective, and unbiased evaluations.",
"These raters are PhD-level in Linguistics with unlimited time to provide their ratings, and they do not participate on the creation of the training set.",
"We assume the ratings given to these essays were not contaminated by biased judgements, and we will use these essays for evaluating the efficacy of AES models learned after the training set is debiased.",
"We implemented the different AES models using scikit-learn (Pedregosa et al., 2011).",
"Specifically, we learn AES models using Support Vector Regression (SVR), Random Forests (RF), Logistic Regression (LR), Gradient Boosting (GB), and Multi-Layer Perceptron (MLP).",
"All models are based on the same set of features, previously 0 50 100 150 200 250 300 350 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 # e ss a ys Misalignment SVRRFLRGBMLP Figure 2: Distribution of misalignment for the different AES models.",
"described in Section 3.1, and all models are trained in regression mode.",
"The measure used to evaluate the effectiveness of the different models is the quadratic weighted kappa ( ) which measures the inter-agreement between human raters and AES models (Cohen, 1960).",
"We conducted five-fold cross validation, where the dataset is arranged into five folds with approximately the same number of examples.",
"At each run, four folds are used as training set, and the remaining fold is used as test set.",
"We also kept a separate validation set.",
"The training set is used to learn the models, the validation set is used to tune hyper-parameters and the test set is used to estimate numbers for the different the models.",
"Unless otherwise stated, the results reported are the average of the five runs, and are used to assess the overall effectiveness of each model.",
"To ensure the relevance of the results, we assess the statistical significance of our measurements by comparing each pair of models using a Welch's t-test with p value 0.01.",
"Next we report results obtained from the execution of the experiments, and discuss these results in the light of our research questions.",
"Score distribution: The first experiment is concerned with RQ1.",
"Figure 1 shows how scores are distributed over the essays in our corpus.",
"Although the distribution differs for each AES model, scores are centered around 4, and few essays received extreme scores.",
"The LR model seems to have a preference for lower scores.",
"The scores provided by the GB and MLP models are better distributed.",
"Figure 2 shows how aligned with human raters are the different AES models.",
"For most of the essays, AES models are well aligned with human 233 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0 1 2 3 4 5 6 7 8 9 10 D i s t r i bu t i on Score ArgumentationPresupposition Modalization ValuationSentiment Figure 3: Subjectivity distribution for human raters.",
"raters, showing misalignments that vary from 2 to +2 .",
"For some essays, the LR model tends to give scores that are much smaller than the score given by the human rater.",
"The GB and MLP models perform very similary, but the MLP model shows a slightly better alignment.",
"Subjectivity vectors and biased ratings: The second experiment is concerned with RQ2.",
"Figure 3 shows the average subjectivity vector grouped according to the score given to the corresponding essay (i.e., the centroid or prototypical vector of a score).",
"More specifically, we first grouped subjectivity vectors according to the score associated with the corresponding essay, and then we calculated the average subjectivity vector for each group.",
"As shown in Figure 3, the argumentation dimension increases with the score, while modalization tends to decrease.",
"Presupposition, valuation and sentiment dimensions show a very similar trend with varying score values.",
"Figure 4 shows t-SNE representations (van ter Maaten and Hinton, 2008) for the average subjectivity vectors (centroids for each group of score).",
"Three larger clusters emerged: subjectivity vectors associated with score 0, subjectivity vectors associated with scores between 1 and 6, and subjectivity vectors associated with scores between 6 and 10.",
"Subjectivity vectors and misalignment: The third experiment is concerned with RQ3.",
"Figure 5 shows the average subjectivity vector considering different levels of misalignment.",
"More specifically, we grouped essays according to the misalignment between the score provided by the AES model and the human rater.",
"Then, we calculated the average subjectivity vector for each group.",
"As we can see, subjectivity affects AES 0 1 2 3 45 6 7 8 910 Figure 4: t-SNE representation for subjectivity vectors.",
"models in different ways.",
"In general, however, subjectivity vectors within groups of essays associated with extreme misalignments are very different from subjectivity vectors associated with mild misalignments.",
"Figure 6 shows t-SNE representations for subjectivity vectors grouped by misalignment levels.",
"Each cluster contains 80% of the vectors associated with one of the misalignment levels inside the cluster.",
"That is, 20% of the essays will be removed from the training set (i.e., = 0 . 2 ).",
"Debiasing the training set: The last experiment is concerned with RQ4.",
"As described in Section 3.3, our debiasing approach works by removing from the training set a number of essays (con-trolled by ) that are more likely to be associated with biased ratings.",
"Table 1 shows numbers for different values.",
"Clearly, the inter-agreement decreases as we remove essays with potentially biased ratings from the training set.",
"This happens because the test set remains with essays that are potentially associated with biased ratings.",
"In this case, removing biased ratings from the training set is always detrimental to the efficacy of AES models.",
"In order to properly evaluate our debiasing approach, we employ the 50 separate essays with bias-free ratings as our test set.",
"In this case, biased ratings are removed from the training set, and the test set is composed by unbiased ratings.",
"Table 2 shows numbers for different values.",
"As expected, the inter-agreement increases significantly with , until a point in which keeping removing essays from the training set becomes detrimental.",
"This happens either because we start to remove unbiased ratings, or the training set becomes too small.",
"In all cases, the MLP model showed to be 234 0.35 0.4 0.45 0.5 0.55 0.6 0.65 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 D i s t r i bu t i on Misalignment ArgumentationPresupposition Modalization ValuationSentiment 0.35 0.4 0.45 0.5 0.55 0.6 0.65 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 D i s t r i bu t i on Misalignment ArgumentationPresupposition Modalization ValuationSentiment 0.35 0.4 0.45 0.5 0.55 0.6 0.65 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 D i s t r i bu t i on Misalignment ArgumentationPresupposition Modalization ValuationSentiment 0.35 0.4 0.45 0.5 0.55 0.6 0.65 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 D i s t r i bu t i on Misalignment ArgumentationPresupposition Modalization ValuationSentiment 0.35 0.4 0.45 0.5 0.55 0.6 0.65 -6 -5 -4 -3 -2 -1 0 1 2 3 4 5 6 D i s t r i bu t i on Misalignment ArgumentationPresupposition Modalization ValuationSentiment Figure 5: Subjectivity distribution.",
"statistically superior than the other models.",
"In this paper, we investigated the problem of automated essay scoring in the presence of biased ratings.",
"Most of the existing work on automated essay scoring is devoted to maximize the agreement with the human rater.",
"This is fairly dangerous, since human ratings may be biased.",
"Overall, discussion about the quality of the ratings in automated essay scoring is lacking, and this was a central interest in this paper.",
"Specifically, we create a subjectivity space from which potentially biased scores/ratings can be identified.",
"We showed that removing biased scores from the training set results in improved AES models.",
"Finally, the essay data as well as the subjectivity lexicons that we will release as part of this research could prove useful in other bias related tasks.",
"This work was partially funded by projects InWeb (grant MCT/CNPq 573871/2008-6) and MASWeb (grant FAPEMIG/PRONEX APQ-01400-14), and by the authors individual grants from CNPq and FAPEMIG.",
"AV thanks the support received from Kunumi."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"objective",
"abstain",
"objective",
"objective",
"method",
"objective",
"result",
"method",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"result",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"method",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"other",
"other"
] |
[
"Jonathan May Information Sciences Institute University of Southern California [email protected]",
"Kevin Knight Information Sciences Institute University of Southern California [email protected]",
"Abstract",
"We investigate the computational complexity of various problems for simple recurrent neural networks (RNNs) as formal models for recognizing weighted languages.",
"We focus on the single-layer, ReLU-activation, rational-weight RNNs with softmax, which are commonly used in natural language processing applications.",
"We show that most problems for such RNNs are undecidable, including consistency, equivalence, minimization, and the determination of the highest-weighted string.",
"However, for consistent RNNs the last problem becomes decidable, although the solution length can surpass all computable bounds.",
"If additionally the string is limited to polynomial length, the problem becomes NP-complete.",
"In summary, this shows that approximations and heuristic algorithms are necessary in practical applications of those RNNs.",
"Recurrent neural networks (RNNs) are an attractive apparatus for probabilistic language modeling (Mikolov and Zweig, 2012).",
"Recent experiments show that RNNs significantly outperform other methods in assigning high probability to held-out English text (Jozefowicz et al., 2016).",
"Roughly speaking, an RNN works as follows.",
"At each time step, it consumes one input token, updates its hidden state vector, and predicts the next token by generating a probability distribution over all permissible tokens.",
"The probability of an input string is simply obtained as the product of the predictions of the tokens constituting the string followed by a terminating token.",
"In this manner, each RNN defines a weighted language ; i.e. a total function from strings to weights.",
"Siegelmann and Sontag (1995) showed that single-layer rational-weight RNNs with saturated linear activation can compute any computable function.",
"To this end, a specific architecture with 886 hidden units can simulate any Turing machine in real-time (i.e., each Turing machine step is simulated in a single time step).",
"However, their RNN encodes the whole input in its internal state, performs the actual computation of the Turing machine when reading the terminating token, and then encodes the output (provided an output is produced) in a particular hidden unit.",
"In this way, their RNN allows thinking time (equivalent to the computation time of the Turing machine) after the input has been encoded.",
"We consider a different variant of RNNs that is commonly used in natural language processing applications.",
"It uses ReLU activations, consumes an input token at each time step, and produces softmax predictions for the next token.",
"It thus immediately halts after reading the last input token and the weight assigned to the input is simply the product of the input token predictions in each step.",
"Other formal models that are currently used to implement probabilistic language models such as finite-state automata and context-free grammars are by now well-understood.",
"A fair share of their utility directly derives from their nice algorithmic properties.",
"For example, the weighted languages computed by weighted finite-state automata are closed under intersection (pointwise product) and union (pointwise sum), and the corresponding unweighted languages are closed under intersection, union, difference, and complementation (Droste et al., 2013).",
"Moreover, toolkits like OpenFST (Allauzen et al., 2007) and Carmel 1 implement efficient algorithms on automata like minimization, intersection, finding the highest-weighted path and the highest-weighted string.",
"based machine translation system should extract the highest-weighted output string (i.e., the most likely translation) generated by an RNN, (Sutskever et al., 2014; Bahdanau et al., 2014).",
"Currently this task is solved by approximation techniques like heuristic greedy and beam searches.",
"To facilitate the deployment of large RNNs onto limited memory devices (like mobile phones) minimization techniques would be bene-ficial.",
"Again currently only heuristic approaches like knowledge distillation (Kim and Rush, 2016) are available.",
"Meanwhile, it is unclear whether we can determine if the computed weighted language is consistent; i.e., if it is a probability distribution on the set of all strings.",
"Without a determination of the overall probability mass assigned to all finite strings, a fair comparison of language models with regard to perplexity is simply impossible.",
"The goal of this paper is to study the above problems for the mentioned ReLU-variant of RNNs.",
"More specifically, we ask and answer the following questions: Consistency: Do RNNs compute consistent weighted languages?",
"Is the consistency of the computed weighted language decidable?",
"Highest-weighted string: Can we (efficiently) determine the highest-weighted string in a computed weighted language?",
"Equivalence: Can we decide whether two given RNNs compute the same weighted language?",
"Minimization: Can we minimize the number of neurons for a given RNN?",
"Before we introduce our RNN model formally, we recall some basic notions and notation.",
"An alphabet is a finite set of symbols, and we write | | for the number of symbols in .",
"A string s over the alphabet is a finite sequence of zero or more symbols drawn from , and we write for the set of all strings over , of which (cid:15) is the empty string.",
"The length of the string s is denoted | s | and coincides with the number of symbols constituting the string.",
"As usual, we write AB for the set of functions { f | f : B A } .",
"A weighted language L is a total function L : R from strings to real-valued weights.",
"For example, L ( a n ) = e n for all n 0 is such a weighted language.",
"We restrict the weights in our RNNs to the rational numbers Q .",
"In addition, we reserve the use of a special symbol $ to mark the start and end of an input string.",
"To this end, we assume that $ / for all considered alphabets, and we let $ = { $ } .",
"Definition 1. A single-layer RNNR is a 7 -tuple h , N, h 1 , W, W 0 , E, E 0 i , in which is an input alphabet , N is a finite set of neurons , h 1 QN is an initial activation vector , W QN N is a transition matrix , W 0 = ( W 0 a ) a $ is a $ -indexed family of bias vectors W 0 a QN , E Q $ N is a prediction matrix , and E 0 Q $ is a prediction bias vector .",
"Next, let us define how such an RNN works.",
"We first prepare our input encoding and the effect of our activation function.",
"For an input string s = s 1 s 2 s n with s 1 , . . . , s n , we encode this input as $ s $ and thus assume that s 0 = $ and s n +1 = $ .",
"Our RNNs use ReLUs (Rectified Linear Units), so for every v QN we let h v i (the ReLU activation) be the vector h v i QN such that h v i ( n ) = max (cid:0) 0 , v ( n ) (cid:1) for every n N .",
"In other words, the ReLUs act like identities on nonnegative inputs, but clip negative inputs to 0 .",
"We use softmax-predictions, so for every vector p Q $ and a $ we let softmax h p i ( a ) = e p ( a ) P a 0 $ e p ( a 0 ) .",
"RNNs act in discrete time steps reading a single letter at each step.",
"We now define the semantics of our RNNs.",
"Definition 2. Let R = h , N, h 1 , W, W 0 , E, E 0 i be an RNN, s an input string of length n and 0 t n a time step.",
"We define the hidden state vector h s,t QN given by h s,t = h W h s,t 1 + W 0 s t i , where h s, 1 = h 1 and we use standard matrix product and point-wise vector addition, the next-token prediction vector E s,t Q $ E s,t = E h s,t + E 0 the next-token distribution E 0 s,t R $ E 0 s,t = softmax h E s,t i .",
"Finally, the RNNR computes the weighted language R : R , which is given for every input s = s 1 s n as above by",
"In other words, each component h s,t ( n ) of the hidden state vector is the ReLU activation applied to a linear combination of all the components of the previous hidden state vector h s,t 1 together with a summand W 0 s t that depends on the t -th input letter s t .",
"Thus, we often specify h s,t ( n ) as linear combination instead of specifying the matrix W and the vectors W 0 a .",
"The semantics is then obtained by predicting the letters s 1 , . . . , s n of the input s and the final terminator $ and multiplying the probabilities of the individual predictions.",
"Let us illustrate these notions on an example.",
"We consider the RNN h , N, h 1 , W, W 0 , E, E 0 i with Q and = { a } and N = { 1 , 2 } , h 1 = ( 1 , 0) T and W = (cid:18) 1 0 1 0 (cid:19) and W 0 $ = W 0 a = (cid:18) 1 0 (cid:19) E ($ , ) = ( M + 1 , ( M + 1)) and E ( a, ) = (1 , 1) and E 0 ($) = M and E 0 ( a ) = 0 .",
"In this case, we obtain the linear combinations h s,t = * h s,t 1 (1) + 1 h s,t 1 (1) + computing the next hidden state components.",
"Given the initial activation, we thus obtain h s,t = h t, t 1 i .",
"Using this information, we obtain E s,t ($) = ( M + 1) ( t h t 1 i ) M E s,t ( a ) = t h t 1 i .",
"Consequently, we assign weight e M 1+ e M to input , weight 1 1+ e M e 1 e 1 + e 1 to a , and, more generally, weight 1 1+ e M 12 n to a n .",
"Clearly the weight assigned by an RNN is always in the interval (0 , 1) , which enables a probabilistic view.",
"Similar to weighted finite-state automata or weighted context-free grammars, each RNN is a compact, finite representation of a weighted language.",
"The softmax-operation enforces that the probability 0 is impossible as assigned weight, so each input string is principally possible.",
"In practical language modeling, smoothing methods are used to change distributions such that impossibility (probability 0 ) is removed.",
"Our RNNs avoid impossibility outright, so this can be considered a feature instead of a disadvantage.",
"The hidden state h s,t of an RNN can be used as scratch space for computation.",
"For example, with a single neuron n we can count symbols in s via: h s,t ( n ) = h h s,t 1 ( n ) + 1 i .",
"Here the letter-dependent summand W 0 a is universally 1 .",
"Similarly, for an alphabet = { a 1 , . . . , a m } we can use the method of Siegelmann and Sontag (1995) to encode the complete input string s in base m + 1 using: h s,t ( n ) = h ( m + 1) h s,t 1 ( n ) + c ( s t ) i , where c : $ { 0 , . . . , m } is a bijection.",
"In principle, we can thus store the entire input string (of unbounded length) in the hidden state value h s,t ( n ) , but our RNN model outputs weights at each step and terminates immediately once the final delimiter $ is read.",
"It must assign a probability to a string incrementally using the chain rule decomposition p ( s 1 s n ) = p ( s 1 ) . . . p ( s n | s 1 s n 1 ) .",
"Let us illustrate our notion of RNNs on some additional examples.",
"They all use the alphabet = { a } and are illustrated and formally specified in Figure 1. The first column shows an RNN R 1 that assigns R 1 ( a n ) = 2 ( n +1) .",
"The next-token prediction matrix ensures equal values for a and $ at every time step.",
"The second column shows the RNNR 2 , which we already discussed.",
"In the beginning, it heavily biases the next symbol prediction towards a , but counters it starting at t = 1 .",
"The third RNN R 3 uses another counting mechanism with h s,t = h t 100 , t 101 , t i .",
"The first two components are ReLU-thresholded to zero until t > 101 , at which point they overwhelm the bias towards a turning all future predictions to $ .",
"We first investigate the consistency problem for an RNN R , which asks whether the recognized weighted language R is indeed a probability distribution.",
"Consequently, an RNN R is consistent 2263 R 1 ( a n ) = 2 ( n +1) R 2 ( ) 0 R 3 ( a 100 ) 1 R 2 ( a n ) 2 n ( n 1) R 3 ( a n ) 0 ( n 6 = 100) N { 1 } { 1 , 2 } { 1 , 2 , 3 } h 1 (cid:0) 0 (cid:1) (cid:18) 1 0 (cid:19) 0 0 0 W (cid:0) 0 (cid:1) (cid:18) 1 0 1 0 (cid:19) 0 0 1 0 0 1 0 0 1 W 0 $ W 0 a (cid:0) 0 (cid:1) (cid:0) 0 (cid:1) (cid:18) 1 0 (cid:19) (cid:18) 1 0 (cid:19) 99 100 1 99 100 1 E $ E a (cid:0) 0 (cid:1) (cid:0) 0 (cid:1) (cid:18) M + 1 ( M + 1) (cid:19) (cid:18) 1 1 (cid:19) M M 0 M M 0 E 0 $ E 0 a 0 0 M 0 M 0 Figure 1: Sample RNNs over single-letter alphabets, and the weighted languages they recognize.",
"if P s R ( s ) = 1 .",
"We first show that there is an inconsistent RNN, which together with our examples shows that consistency is a nontrivial property of RNNs.",
"2 We immediately use a slightly more complex example, which we will later reuse.",
"Example 3. Let us consider an arbitrary RNNR = h , N, h 1 , W, W 0 , E, E 0 i with the single-letter alphabet = { a } , the neurons { 1 , 2 , 3 , n, n 0 } N , initial activation h 1 ( i ) = 0 for all i { 1 , 2 , 3 , n, n 0 } , and the following linear combinations: h s,t (1) = h h s,t 1 (1) + h s,t 1 ( n ) h s,t 1 ( n 0 ) i 2 For comparison, all probabilistic finite-state automata are consistent, provided no transitions exit final states.",
"Not all probabilistic context-free grammars are consistent; necessary and sufficient conditions for consistency are given by Booth and Thompson (1973).",
"However, probabilistic context-free grammars obtained by training on a finite corpus using popular methods (such as expectation-maximization) are guaranteed to be consistent (Nederhof and Satta, 2006).",
"h s,t (2) = h h s,t 1 (2) + 1 i h s,t (3) = h h s,t 1 (3) + 3 h s,t 1 (1) i E s,t ($) = h s,t (3) h s,t (2) E s,t ( a ) = h s,t (2) Now we distinguish two cases: Case 1: If h s,t ( n ) h s,t ( n 0 ) = 0 for all t N , then h s,t (1) = 0 and h s,t (2) = t + 1 and h s,t (3) = 0 .",
"Hence we have E s,t ($) = ( t + 1) and E s,t ( a ) = t + 1 .",
"In this case the termination probability E 0 s,t ($) = e ( t +1) e ( t +1) + e t +1 = 1 1 + e 2( t +1) (i.e., the likelihood of predicting $ ) shrinks rapidly towards 0 , so the RNN assigns less than 15% of the probability mass to the terminating sequences (i.e., the finite strings), so the RNN is inconsistent (see Lemma 15 in the appendix).",
"Case 2: Suppose that there exists a time 2264 point T N such that for all t N h s,t ( n ) h s,t ( n 0 ) = ( 1 if t = T 0 otherwise.",
"Then h s,t (1) = 0 for all t T and h s,t (1) = 1 otherwise.",
"In addition, we have h s,t (2) = t + 1 and h s,t (3) = h 3( t T 1) i .",
"Hence we have E s,t ($) = h 3( t T 1) i ( t + 1) = ( ( t + 1) if t T 2 t 3 T 4 otherwise E s,t ( a ) = t + 1 , which shows that the probability E 0 s,t ($) = 1 1+ e 2( t +1) if t T e t 3 T 5 1+ e t 3 T 5 otherwise of predicting $ increases over time and eventually (for t (cid:29) 3 T ) far outweighs the probability of predicting a .",
"Consequently, in this case the RNN is consistent (see Lemma 16 in the appendix).",
"We have seen in the previous example that consistency is not trivial for RNNs, which takes us to the consistency problem for RNNs: Consistency: Given an RNN R , return yes if R is consistent and no otherwise.",
"We recall the following theorem, which, combined with our example, will prove that consistency is unfortunately undecidable for RNNs.",
"with saturated linear activation, input alphabet = { a } , and 1 designated neuron n N such that for all s and 0 t | s | h s,t ( n ) = 0 if M does not halt on , and if M does halt on empty input after T steps, then h s,t ( n ) = 1 if t = T",
"In other words, such RNNs with saturated linear activation can semi-decide halting of an arbitrary Turing machine in the sense that a particular neuron achieves value 1 at some point during the evolution if and only if the Turing machine halts on empty input.",
"An RNN with saturated linear activation is an RNN following our definition with the only difference that instead of our ReLU-activation the following saturated linear activation 0 : QN QN is used.",
"For every vector v QN and n N , let 0 h v i ( n ) = 0 if v ( n ) < 0 v ( n ) if 0 v ( n ) 1 1 if v ( n ) > 1 .",
"Since 0 h v i = h v i h v ~ 1 i for all v QN , and the right-hand side is a linear transformation, we can easily simulate saturated linear activation in our RNNs.",
"To this end, each neuron n N of the original RNN R = h , N, h 1 , U, U 0 , E, E 0 i is replaced by two neurons n 1 and n 2 in the new RNN R 0 = h , N 0 , h 0 1 , V, V 0 , F, F 0 i such that h s,t ( n ) = h 0 s,t ( n 1 ) h 0 s,t ( n 2 ) for all s and 0 t | s | , where the evaluation of h 0 s,t is performed in the RNN R 0 .",
"More precisely, we use the transition matrix V and bias function V 0 , which is given by V ( n 1 , n 0 1 ) = V ( n 2 , n 0 1 ) = U ( n, n 0 ) V ( n 1 , n 0 2 ) = V ( n 2 , n 0 2 ) = U ( n, n 0 ) V 0 a ( n 1 ) = U 0 a ( n ) V 0 a ( n 2 ) = U 0 a ( n ) 1 h 0 1 ( n 1 ) = h 1 ( n ) h 0 1 ( n 2 ) = 0 for all n, n 0 N and a { $ } , where n 1 and n 2 are the two neurons corresponding to n and n 0 1 and n 0 2 are the two neurons corresponding to n 0 (see Lemma 17 in the appendix).",
"Corollary 5.",
"Let M be an arbitrary deterministic Turing machine.",
"There exists an RNN R = h , N, h 1 , W, W 0 , E, E 0 i with input alphabet = { a } and 2 designated neurons n 1 , n 2 N such that for all s and 0 t | s | h s,t ( n 1 ) h s,t ( n 2 ) = 0 if M does not halt on , and if M does halt on empty input after T steps, then h s,t ( n 1 ) h s,t ( n 2 ) = ( 1 if t = T 0 otherwise.",
"We can now use this corollary together with the RNN R of Example 3 to show that the consistency problem is undecidable.",
"To this end, we simulate a given Turing machine M and identify the two designated neurons of Corollary 5 as n and n 0 in Example 3. It follows that M halts if and only if R is consistent.",
"Hence we reduced the undecidable halting problem to the consistency problem, which shows the undecidability of the consistency problem.",
"Theorem 6. The consistency problem for RNNs is undecidable.",
"As mentioned in Footnote 2, probabilistic context-free grammars obtained after training on a finite corpus using the most popular methods are guaranteed to be consistent.",
"At least for 2-layer RNNs this does not hold.",
"Theorem 7. A two-layer RNN trained to a local optimum using Back-propagation-through-time (BPTT) on a finite corpus is not necessarily consistent.",
"Proof.",
"The first layer of the RNN R with a single alphabet symbol a uses one neuron n 0 and has the following behavior: h 1 ( n 0 ) = 0 h s,t ( n 0 ) = h h s,t 1 ( n 0 ) + 1 i The second layer uses neuron n and takes h s,t ( n 0 ) as input at time t : h s,t ( n ) = h h s,t ( n 0 ) 2 i E s,t ( a ) = h s,t ( n ) E s,t ($) = 0 E 0 s,t ( a ) = ( 12 if t 1 e ( t 1) 1+ e ( t 1) otherwise.",
"Let the training data be { a } .",
"Then the objective we wish to maximize is simply R ( a ) .",
"The derivative of this objective with respect to each parameter is 0 , so applying gradient descent updates does not change any of the parameters and we have converged to an inconsistent RNN.",
"model or the most likely translation for a decoder RNN in machine translation.",
"For deterministic probabilistic finite-state automata or context-free grammars only one path or derivation exists for any given string, so the identification of the highest-weighted string is the same task as the identification of the most probable path or derivation.",
"However, for nondeterministic devices, the highest-weighted string is often harder to identify, since the weight of a string is the sum of the probabilities of all possible paths or derivations for that string.",
"A comparison of the difficulty of identifying the most probable derivation and the highest-weighted string for various models is summarized in Table 1, in which we marked our results in bold face.",
"We present various results concerning the difficulty of identifying the highest-weighted string in a weighted language computed by an RNN.",
"We also summarize some available algorithms.",
"We start with the formal presentation of the three studied problems.",
"1. Best string: Given an RNNR and c (0 , 1) , does there exist s with R ( s ) > c ?",
"2. Consistent best string: Given a consistent RNN R and c (0 , 1) , does there exist s with R ( s ) > c ?",
"3. Consistent best string of polynomial length: Given a consistent RNN R , polynomial P with P ( x ) x for x N + , and c (0 , 1) , does there exist s with | s | P ( | R | ) and R ( s ) > c ?",
"As usual the corresponding optimization problems are not significantly simpler than these decision problems.",
"Unfortunately, the general problem is also undecidable, which can easily be shown using our example.",
"Theorem 8.",
"The best string problem for RNNs is undecidable.",
"Proof.",
"Let M be an arbitrary Turing machine and again consider the RNN R of Example 3 with the neurons n and n 0 identified with the designated neurons of Corollary 5.",
"We note that R ( ) = 1 1+ e 2 < 0 .",
"12 in both cases.",
"If M does not halt, then R ( a n ) 1 1+ e 2( n +1) 1 1+ e 2 < 0 .",
"12 for all n N .",
"On the other hand, if M halts after T steps, then R ( a 3 T 5 ) = (cid:16) TY t =0 e 2( t +1) 1 + e 2( t +1) (cid:17) (cid:16) 3 T 6 Y t = T +1 1 1 + e t 3 T 5 (cid:17) 1 2 2 ( 1 , e 2 ) (cid:16) 3 T 6 Y t = T +1 e 3 T +5 t e 3 T +5 t +1 (cid:17) 1 2 2 ( 1 , e 2 ) ( 1 , e 1 ) 0 .",
"using Lemma 14 in the appendix.",
"Consequently, a string with weight above 0 .",
"12 exists if and only if M halts, so the best string problem is also undecidable.",
"If we restrict the RNNs to be consistent, then we can easily decide the best string problem by simple enumeration.",
"Theorem 9. The consistent best string problem for RNNs is decidable.",
"Proof.",
"Let R be the RNN over alphabet and c (0 , 1) be the bound.",
"Since is countable, we can enumerate it via f : N .",
"In the algorithm we compute S n = P ni =0 R ( f ( i )) for increasing values of n .",
"If we encounter a weight R ( f ( n )) > c , then we stop with answer yes.",
"Otherwise we continue until S n > 1 c , at which point we stop with answer no.",
"Since R is consistent, lim i S i = 1 , so this algorithm is guaranteed to terminate and it obviously decides the problem.",
"Next, we investigate the length | w max R | of the shortest string w max R of maximal weight in the weighted language R generated by a consistent RNN R in terms of its (binary storage) size | R | .",
"As already mentioned by Siegelmann and Sontag (1995) and evidenced here, only small precision rational numbers are needed in our constructions, so we assume that | R | c | N | 2 for a (reasonably small) constant c , where N is the set of neurons of R .",
"We show that no computable bound on the length of the best string can exist, so its length can surpass all reasonable bounds.",
"Theorem 10. Let f : N + N be the function with f ( n ) = max consistent RNN R | R | n | w max R | for all n N + .",
"Proof.",
"In the previous section (before Theorem 6) we presented an RNN RM that simulates an arbitrary (single-track) Turing machine M with n states.",
"By Siegelmann and Sontag (1995) we have | RM | c (4 n + 16) .",
"Moreover, we observed that this RNN RM is consistent if and only if the Turing machine M halts on empty input.",
"In the proof of Theorem 8 we have additionally seen that the length | w max R | of its best string exceeds the number TM of steps required to halt.",
"For every n N , let BB ( n ) be the n -th Busy Beaver number (Rado, 1962), which is BB ( n ) = max normalized n -state Turing machine M with 2 tape symbols that halts on empty input TM It is well-known that BB : N + N cannot be bounded by any computable function.",
"However, BB ( n ) max normalized n -state Turing machine M with and 2 tape symbols that halts on empty input | w max RM | max consistent RNN R | R | c (4 n +16) | w max R | = f (4 nc + 16 c ) , so f clearly cannot be computable and no computable function g can provide bounds for f .",
"Finally, we investigate the difficulty of the best string problem for consistent RNN restricted to solutions of polynomial length.",
"Theorem 11. Identifying the best string of polynomial length in a consistent RNN is NP-complete.",
"be a formula in conjunctive normal form, where ij { x 1 , . . . , x m , x 1 , . . . , x m } .",
"3-SAT asks whether there is a setting of x i s that makes F true.",
"We initialize h 1 ( n ) = 0 , n N = { x 1 , . . . , x m , c 1 , . . . , c k , c 0 1 , . . . , c 0 k , F, n 1 , n 2 , n 3 , ?",
"} .",
"Let s { 0 , 1 } be the input string.",
"Denote the value of F when x j = s j for all j [ m ] as F ( s ) .",
"Let t N with t | s | .",
"Set h s,t ( x m ) = h I ( s t ) i , where I (0) = I ($) = 0 and I (1) = 1 .",
"This stores the current input symbol in neuron x m , so h s,t ( x m ) = I ( s t ) .",
"In addition, we let h s,t ( x j ) = h h s,t 1 ( x j +1 ) i for all j [ m 1] .",
"Consequently, for all j [ m ] h s,t ( x j ) = ( I ( s t ( m j ) ) if m j t 0 otherwise.",
"Next, we evaluate the clauses.",
"For each i [ k ] , we use two neurons c i and c 0 i such that h s,t ( c i ) = h f s,t ( i 1 ) + f s,t ( i 2 ) + f s,t ( i 3 ) i h s,t ( c 0 i ) = h f s,t ( i 1 ) + f s,t ( i 2 ) + f s,t ( i 3 ) 1 i , where f s,t ( x m ) = I ( s t ) , f s,t ( x m ) = 1 I ( s t ) , and j [ m 1] , f s,t ( x j ) = h s,t 1 ( x j +1 ) , f s,t ( x j ) = 1 h s,t 1 ( x j +1 ) .",
"Note that h s,t ( c i ) h s,t ( c 0 i ) contains the evaluation of the clause i 1 i 2 i 3 .",
"Let h s,t ( F ) = D k X i =1 (cid:16) h s,t 1 ( c i ) h s,t 1 ( c 0 i ) (cid:17) k +1 E , so h s,t ( F ) = F ( s ) contains the evaluation of the formula F using the values in neurons x 1 , . . . , x m .",
"We use three counters n 1 , n 2 , n 3 to ensure that the only relevant inputs are of length m + 2 : h s,t ( n 1 ) = h h s,t 1 ( n 3 ) ( m + 2) i h s,t ( n 2 ) = h h s,t 1 ( n 3 ) ( m + 1) i h s,t ( n 3 ) = h h s,t 1 ( n 3 ) + 1 i , which yields h s,t ( n 3 ) = t + 1 , h s,t ( n 2 ) = h t ( m + 1) i , and h s,t ( n 1 ) = h t ( m + 2) i .",
"Our goal neuron is ?",
", which we set to h s,t ( ? ) = h h s,t 1 ( F ) h s,t 1 ( n 1 )+ h s,t 1 ( n 2 ) 1 i so that h s,t ( ? ) = ( h s,t 1 ( F ) if t = m + 2 0 otherwise, so h s,t ( ? ) = 1 if and only if t = m + 2 and F ( s ) = 1 .",
"Let m 0 = m + 4 .",
"The output is set as follows: E s,t (0) = E s,t (1) = m 0 (cid:0) 1 2 h s,t ( ? ) (cid:1) E s,t ($) = m 0 (cid:0) 1 2 h s,t ( ? ) (cid:1) , This yields E s,t (0) = E s,t (1) = E s,t ($) = m 0 if t = m +2 and F ( s ) = 1 , and m 0 otherwise.",
"For a { 0 , 1 } , E 0 s,t ( a )= e m 0 2 e m 0 + e m 0 if t = m +2 and F ( s )=1 e m 0 2 e m 0 + e m 0 otherwise E 0 s,t ($)= e m 0 2 e m 0 + e m 0 if t = m +2 and F ( s )=1 e m 0 2 e m 0 + e m 0 otherwise.",
"Finally, we set the threshold = 3 m 0 .",
"When | s | 6 = m + 2 , s m +3 6 = $ , so the weight of s contains the factor e m 0 2 e m 0 + e m 0 = 1 2+ e 2 m 0 and thus is upper-bounded by 1 2+ e 2 m 0 < .",
"Hence no input of length different from m + 2 achieves a weight that exceeds .",
"A string s of length m + 2 achieves the weight w s given by w s = e m 0 2 e m 0 + e m 0 Q m +2 i =1 e m 0 2 e m 0 + e m 0 if F ( s )=1 e m 0 2 e m 0 + e m 0 Q m +2 i =1 e m 0 2 e m 0 + e m 0 otherwise.",
"When F ( s ) = 0 , w s < e m 0 2 e m 0 + e m 0 < , so if F is unsatisfiable, no input string achieves a weight above the threshold .",
"When F ( s ) = 1 , w s = e m 0 2 e m 0 + e m 0 (cid:16) e m 0 2 e m 0 + e m 0 (cid:17) m +2 > .",
"An input string with weight above exists if and only if F is satisfiable.",
"Obviously, the reduction can be computed in polynomial time since all constants can be computed in logarithmic space.",
"The constructed RNN is consistent, since the output prediction is constant after m + 3 steps.",
"We prove that equivalence of two RNNs is undecidable.",
"For comparison, equivalence of two deterministic WFSAs can be tested in time O ( | | ( | QA | + | QB | ) 3 ) , where | QA | , | QB | are the number of states of the two WFSAs and | | is the size of the alphabet (Cortes et al., 2007); equivalence of nondeterministic WFSAs are undecidable (Griffiths, 1968).",
"The decidability of language equivalence for deterministic probabilistic push-downtown automata (PPDA) is still open (Forejt et al., 2014), although equivalence for deterministic unweighted push-downtown automata (PDA) is decidable (Senizergues, 1997).",
"The equivalence problem is formulated as follows:",
"Theorem 12. The equivalence problem for RNNs is undecidable.",
"Proof.",
"We prove by contradiction.",
"Suppose Turing machine M decides the equivalence problem.",
"Given any deterministic Turing Machine M 0 , construct the RNN R that simulates M 0 on input (cid:15) as described in Corollary 5.",
"Let E s,t ( a ) = 0 and E s,t ($) = h s,t ( n 1 ) h s,t ( n 2 ) .",
"If M 0 does not halt on (cid:15) , for all t N , E 0 s,t ( a ) = E 0 s,t ($) = 1 / 2 ; if M 0 halts after T steps, E 0 s,T ( a ) = 1 / ( e + 1) , E s,T ($) = e/ ( e + 1) .",
"Let R 0 be the trivial RNN that computes { a n : P ( a n ) = 2 ( n +1) , n 0 } .",
"We run M on input h R, R 0 i .",
"If M returns no, M 0 halts on x , else it does not halt.",
"Therefore the Halting Problem would be decidable if equivalence is decidable.",
"Therefore equivalence is undecidable.",
"We look next at minimization of RNNs.",
"For comparison, state-minimization of a deterministic PFSA is O ( | E | log | Q | ) where | E | is the number of transitions and | Q | is the number of states (Aho et al., 1974).",
"Minimization of a non-deterministic PFSA is PSPACE-complete (Jiang and Raviku-mar, 1993).",
"We focus on minimizing the number of hidden neurons ( | N | ) in RNNs: Minimization: Given RNNR and non-negative integer n , return yes if RNN R 0 with number of hidden units | N 0 | n such that R ( s ) = R 0 ( s ) for all s , and no otherwise.",
"Theorem 13.",
"RNN minimization is undecidable.",
"Proof.",
"We reduce from the Halting Problem.",
"Suppose Turing Machine M decides the minimization problem.",
"For any Turing Machine M 0 , construct the same RNN R as in Theorem 12. We run M on input h R, 0 i .",
"Note that an RNN with no hidden unit can only output constant E 0 s,t for all t .",
"Therefore the number of hidden units in R can be minimized to 0 if and only if it always outputs E 0 s,t ( a ) = E 0 s,t ($) = 1 / 2 .",
"If M returns yes, M 0 does not halt on (cid:15) , else it halts.",
"We proved the following hardness results regarding RNN as a recognizer of weighted languages:",
"1. Consistency:",
"(a) Inconsistent RNNs exist.",
"(b) Consistency of RNNs is undecidable.",
"2. Highest-weighted string:",
"(a) Finding the highest-weighted string for an arbitrary RNN is undecidable.",
"(b) Finding the highest-weighted string for a consistent RNN is decidable, but the solution length can surpass all computable bounds.",
"(c) Restricting to solutions of polynomial length, finding the highest-weighted string is NP-complete.",
"3. Testing equivalence of RNNs and minimizing the number of neurons in an RNN are both undecidable.",
"Although our undecidability results are upshots of the Turing-completeness of RNN (Siegelmann and Sontag, 1995), our NP-completeness result is original, and surprising, since the analogous hardness results in PFSA relies on the fact that there are multiple derivations for a single string (Casacu-berta and de la Higuera, 2000).",
"The fact that these results hold for the relatively simple RNNs we used in this paper suggests that the case would be the same for more complicated models used in NLP, such as long short term memory networks (LSTMs; Hochreiter and Schmidhuber 1997).",
"Our results show the non-existence of (effi-cient) algorithms for interesting problems that researchers using RNN in natural language processing tasks may have hoped to find.",
"On the other hand, the non-existence of such efficient or exact algorithms gives evidence for the necessity of approximation, greedy or heuristic algorithms to solve those problems in practice.",
"In particular, since finding the highest-weighted string in RNN is the same as finding the most-likely translation in a sequence-to-sequence RNN decoder, our NP-completeness result provides some justification for employing greedy and beam search algorithms in practice.",
"This work was supported by DARPA (W911NF-15-1-0543 and HR0011-15-C-0115).",
"Andreas Maletti was financially supported by DFG Graduiertenkolleg 1763 (QuantLA)."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"objective",
"method",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"method",
"objective",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"other",
"other",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"other",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"other",
"other"
] |
[
"Sequence-to-sequence neural networks have recently achieved great success in abstractive summarization, especially through fine-tuning large pre-trained language models on the downstream dataset.",
"These models are typically decoded with beam search to generate a unique summary.",
"However, the search space is very large, and with the exposure bias, such decoding is not optimal.",
"In this paper, we show that it is possible to directly train a second-stage model performing re-ranking on a set of summary candidates.",
"Our mixture-of-experts SummaReranker learns to select a better candidate and consistently improves the performance of the base model.",
"With a base PEGASUS, we push ROUGE scores by 5.44% on CNN-DailyMail (47.16 ROUGE-1), 1.31% on XSum (48.12 ROUGE-1) and 9.34% on Reddit TIFU (29.83 ROUGE-1), reaching a new state-of-the-art.",
"Our code and checkpoints will be available at https://github.com/ntunlp/ SummaReranker .",
"In recent years, sequence-to-sequence neural models have enabled great progress in abstractive summarization (See et al., 2017; Lin et al., 2021).",
"In the news domain, they have surpassed the strong LEAD-3 extractive baseline.",
"With the rise of transfer learning since BERT (Devlin et al., 2019), leading approaches typically fine-tune a base pre-trained model that either follows a general text generation training objective like T5 (Raffel et al., 2019), BART (Lewis et al., 2020), ERNIE (Zhang et al., 2019b) and ProphetNet (Qi et al., 2021), or an objective specifically tailored for summarization like in PEGASUS (Zhang et al., 2020).",
"Most of these sequence-to-sequence models are history-based, where an output sequence is represented as a sequence of decisions and the probabil-* Equal contribution.",
"ity of the sequence is computed as a product of decision probabilities.",
"This is also known as the autoregressive factorization.",
"To transform the sequence of probabilities into summaries, beam search is commonly used.",
"While auto-regressive decoding with beam search is simple and has many advantages, it can be difficult to encode global constraints such as grammaticality, coherence and factual consistency within this framework, properties that are believed to be useful in discriminating among candidate outputs.",
"If the model starts decoding in a bad direction, mistakes might propagate, carry over the mistake of previous tokens to the generation of new ones, and the model has no way to know that it should adjust the decoding.",
"Furthermore, these models are typically trained with teacher forcing (Williams and Zipser, 1989), which leads to an inherent discrepancy between training time and inference time known as the exposure bias problem (Bengio et al., 2015; Sun and Li, 2021).",
"Decoding methods such as beam search maintain a list of topk best candidates, and output a single best one.",
"In the case of beam search, candidates are sorted by decreasing log-probability, and the last ( k 1) hypotheses are discarded.",
"However, these ( k 1) other hypotheses often contain considerably better sequences in terms of different evaluation measures.",
"This observation holds over other decoding methods: diverse beam search (Vi-4504 jayakumar et al., 2016), top-k sampling (Fan et al., 2018) and top-p sampling (Holtzman et al., 2019).",
"In Table 1, we illustrate this phenomenon with the oracle scores (maximum scores over the pool of candidates) for four popular decoding methods and five metrics on the CNN-DailyMail (Hermann et al., 2015) dataset with a PEGASUS model.",
"The oracle ROUGE-1 scores are up to 10 points higher (+22.8%) than the top beam baseline.",
"Moreover, oracle gains significantly increase when mixing several generation methods together, reaching an improvement of more than 13 ROUGE-1 points (+30.5%).",
"Such a gap is larger than the progress made by research in the whole field of neural abstractive summarization in the last five years (Nal-lapati et al., 2016; Dou et al., 2021).",
"This suggests that current abstractive models are not exploited to their full capacity, calling for better methods to identify the best summary candidate.",
"Given this assessment, we investigate whether it is possible to train a second-stage summarization model which learns to select the best summary among a set of candidates obtained from a base model and with a decoding process, which itself can potentially involve a set of decoding methods (e.g., beam search variants).",
"This way, the model would recover the gap that separates it with the oracle.",
"This raises the question of what makes a summary candidate the optimal one?",
"Admittedly, summarization has been an underconstrained task and its evaluation is complex and remains an active research area (Kryscinski et al., 2019; Fabbri et al., 2021; Koto et al., 2021).",
"To build a flexible approach, we use a multi-task learning framework based on a mixture-of-experts architecture in order to optimize jointly over several measures.",
"To design a robust re-ranker, we systematically explore the dimensions of summary re-ranking: base model, decoding process, and evaluation measure.",
"Our system, named SummaReranker , is flexible and multi-task: it can be trained with any set of evaluation metrics.",
"It is considerably less com-putationnally expensive to train than the single-stage summarization models that it is plugged on.",
"We apply our system across three different datasets {CNN-DailyMail, XSum, Reddit TIFU} and two base models {PEGASUS, BART}.",
"Optimizing ROUGE metrics leads to relative performance improvements from 1.31% to 9.34% depending on the dataset.",
"It outperforms recently proposed second-stage summarization approaches RefSum (Liu et al., 2021) and SimCLS (Liu and Liu, 2021) and sets a new state-of-the-art on CNN-DailyMail and XSum (Narayan et al., 2018).",
"We present extensive quantitative results coupled with a qualitative human evaluation.",
"Re-ranking has been adopted in several branches of NLP for long.",
"In syntactic parsing, Collins and Koo (2005) were the first to employ a re-ranker on the outputs of a base parser, followed by Charniak and Johnson (2005), who used a Maximum Entropy re-ranker.",
"Passage re-ranking is used as the first stage of question-answering systems, to retrieve relevant passages where the answer might lay (Kratzwald and Feuerriegel, 2018; Nogueira and Cho, 2019).",
"Some recent question-answering models also propose to perform answer re-ranking, to refine the answer selection (Kratzwald et al., 2019; Iyer et al., 2021).",
"Re-ranking has also been used in neural machine translation.",
"Checkpoint reranking (Pandramish and Sharma, 2020) generates several translation candidates with multiple model checkpoints, based on the observation (similar to the one we made in 1) that the oracle across checkpoints is of higher quality than just the last checkpoint.",
"Bhattacharyya et al. (2021) use an energy-based model on top of BERT to select translation candidates with higher BLEU score.",
"In abstractive summarization, second-stage approaches such as re-ranking remain underexplored.",
"Recently, RefSum (Liu et al., 2021) defined a second-stage summarization framework which helps address the problem of the train-test distribution mismatch in second-stage models.",
"With a base GSum model (Dou et al., 2021), the authors reach a 46.18 state-of-the-art ROUGE-1 on CNN-DailyMail.",
"In SimCLS (Liu and Liu, 2021), the authors train a second-stage model with contrastive learning, using a ranking loss to select the best summary candidate from a pool of 16 diverse beam search candidates, reaching 46.67 ROUGE-1 on CNN-DailyMail.",
"Our approach differs from RefSum and SimCLS in terms of model architecture and loss function, as well as summary candidate generation process.",
"In contrast with RefSum, we use a single base model, but mix several decoding methods, as our goal is single-model improvement.",
"Unlike SimCLS, we do not use a ranking loss, but directly model the probability that a summary candidate is the best one.",
"To the best of our knowl-4505 edge, we are the first ones to propose a multi-task re-ranking system for abstractive summarization.",
"This enables practitioners to leverage the recent rich literature in automatic abstractive summarization evaluation (Lin, 2004; Zhang et al., 2019a; Zhao et al., 2019a; Yuan et al., 2021).",
"Our approach follows the paradigm of second-stage models.",
"Specifically, given a source document S , a base model B , and a set of decoding methods D , we get a pool of m summary candidates C = { C 1 , . . . , C m } .",
"Given an evaluation metric in a set of metrics M , we get associated scores for each candidates S = { ( C 1 ) , . . . , ( C m ) } .",
"Our goal is to train a model f parameterized by to explicitly identify the best summary candidate C according to the metric, which is given by: C = arg max C i C { ( C 1 ) , . . . , ( C m ) } (1) We frame this problem as a binary classification.",
"C is the positive candidate, while other candidates are treated as negative.",
"For a metric , the re-ranker f is trained with a binary cross-entropy loss: L = y i log p ( C i ) (1 y i ) log(1 p ( C i )) (2) where y i = 1 if C i = C , otherwise y i = 0 .",
"Binary classification has been successfully employed for re-ranking in prior work (Nallapati, 2004; Nogueira and Cho, 2019).",
"While multi-way classification could be an alternative, we noticed that for each generation method, a significant fraction of candidates share the same score for one or several metrics, while it is rare that all candidates share the same score (Appendix C-D).",
"Thus, there is not enough signal to distinguish m candidates into m different classes, but enough for two classes.",
"To optimize for N different metrics M = { 1 , . . . , N } simultaneously, we use a separate prediction head (tower) for each and we minimize the average over metric losses defined as: L = 1 N (cid:88) ML (3) 3.2 Model Architecture We first need to get a good representation of the summary candidate.",
"To use contextual information, we concatenate the source with the candidate, Figure 1: SummaReranker model architecture , optimizing N metrics.",
"separating the two with a special token: [ CLS ] Source [ SEP ] Candidate , and feed it to a pre-trained language model.",
"In all experiments, we use RoBERTa-large (Liu et al., 2019) as encoder.",
"Concatenating the source with the candidate enables RoBERTa to perform cross-attention between the two, which finds parts of the source relevant to the summary candidate.",
"We take the [ CLS ] representation from RoBERTa's last layer, and feed it to a multi-layer perceptron (MLP).",
"Once we have a joint representation of the source with the candidate (noted x ), we perform multi-task learning in order to optimize for the desired metrics.",
"Since metrics are different, yet may be strongly correlated (e.g., ROUGE variants), we adopt a mixture-of-experts (MoE) architecture.",
"In particular, we follow the sparse MoE approach (Shazeer et al., 2017), which introduces experts dropout.",
"To adapt it to multi-task training, we use the multi-gate approach proposed in Zhao et al. (2019b).",
"Given E experts E 1 , . . . , EE and N prediction towers T 1 , . . . , TN , the prediction for an input summary representation x for a metric indexed by k { 1 , . . . , N } is: f k ( x ) = T k ( E (cid:88) i =1 softmax ( W k x ) ( i ) E i ( x )) (4) where W k is the weight matrix associated with gate 4506 k .",
"The corresponding prediction probability is: p = sigmoid ( f k ( x )) (5) Experts are shared across all tasks, and through the softmax gates the model learns how much weight to assign to each expert for each task.",
"Our SummaReranker model architecture is shown in Fig.",
"1. In practice, the shared bottom MLP consists in two fully-connected layers with ReLU activation (Glorot et al., 2011).",
"Each expert E i is also a two-layer MLP with ReLU, and each prediction tower T k is a single-layer MLP.",
"We set the number E of experts to be equal to twice the number of tasks ( N ), and the experts dropout to 50%, so that the effective number of experts being used during training matches N .",
"Our model has 370.09 million trainable parameters, representing a slight 4.14% increase due to the mixture-of-experts compared to the off-the-shelf RoBERTa-large.",
"Second-stage learning approaches may suffer from an inherent distribution bias.",
"Indeed, the base model has a different output distribution on the training set than on the validation and test sets.",
"Thus, it is ineffective to train a second-stage model on the training set outputs of the base model.",
"To resolve this distribution shift, we shuffle the training set and randomly split it into equal parts, then fine-tune a pre-trained model on each half.",
"Then, to build a training set for the re-ranker, we infer with each model on the half that it was not trained on.",
"At testing time, we face two options: Base setup : in this setup, we infer on the test set with one of the two base models trained on half the training set , then apply the re-ranker.",
"Since the base models are trained on less data, their performance on the test set worsens.",
"However, we will show that SummaReranker brings improvements which more than compensate this performance drop.",
"Transfer setup : this setup consists in applying SummaReranker on top of a base model trained on the whole training set .",
"Note that SummaReranker is still trained in the same fashion as before.",
"There could be a distribution mismatch in this setting too, since SummaReranker needs to rank summary candidates of a potentially higher quality (generated by a model trained on the full data) than the summaries that it was trained on R-1 R-2 R-L BS BaS R-1 1.000 0.884 0.977 0.858 0.662 R-2 0.884 1.000 0.910 0.833 0.665 R-L 0.977 0.910 1.000 0.855 0.669 BS 0.858 0.833 0.855 1.000 0.682 BaS 0.662 0.665 0.669 0.682 1.000 Table 2: Pearson correlation coefficient between the five evaluation metrics {R-1, R-2, R-L, BS, BaS} for a base PEGASUS with beam search on CNN/DM .",
"(generated by a model trained on half the data).",
"Nevertheless, SummaReranker still transfers well and considerably improves the performance of the base model in this transfer setup.",
"If D is made of multiple decoding methods { 1 , ..., j } , each producing several candidates, the overall candidate set may be large, slowing down inference.",
"Thus, to explore lower-resource inference setups, we separate the sets of decoding methods D train and D test used for training and inference, respectively, and enforce that D test D train .",
"Throughout our experiments, we vary all the three dimensions of our re-ranking framework: the base model B , the set of decoding methods D and the set of scoring metrics M .",
"As base models, we use PEGASUS (Zhang et al., 2020) and BART (Lewis et al., 2020), each one in their large version, as they are leading summarization models with publicly available checkpoints.",
"We obtain pre-trained and fine-tuned checkpoints from the HuggingFace transformers library (Wolf et al., 2020).",
"For decoding methods ( D ), we experiment with beam search (referred to as 1 ), diverse beam search (2), topk sampling (3) and topp sampling (4).",
"For each decoding method, we set the number of candidates to 15 , as it is close to the maximum which could fit in a standard 11GB RAM GPU when doing generation with PEGASUS-large.",
"As set of metrics, we first use ROUGE (Lin and Hovy, 2003), in its commonly used three flavours of ROUGE-1 (noted R-1 ), ROUGE-2 (noted R-2 ) 4507 Dataset Domain # Data points # Words Train Val Test Doc.",
"and ROUGE-L (noted R-L ) for summarization evaluation.",
"We also leverage recently introduced model based evaluation methods BERTScore (noted BS ) (Zhang et al., 2019a) and BARTScore (noted BaS ) (Yuan et al., 2021), which both rely on contextual word embeddings from pre-trained language models.",
"Thus, our total set of metrics is M = {R-1, R-2, R-L, BS, BaS}.",
"As seen in Table 2, R-1 and R-L are strongly correlated (Pearson correlation score of 0.977).",
"BARTScore is the least correlated to other metrics, suggesting that it captures aspects complementary to the other four.",
"We train SummaReranker on the following datasets, covering multiple domains: CNN-DailyMail (Hermann et al., 2015) contains 93k and 220k articles from the CNN and DailyMail newspapers, respectively.",
"We use the non anonymized version from (See et al., 2017).",
"XSum (Narayan et al., 2018) contains 227k articles from the BBC for years 2010 2017.",
"While also in the news domain, XSum is by design significantly more abstractive than CNN/DM and is made of single-sentence summaries.",
"Reddit TIFU (Kim et al., 2019) contains 120k posts from the popular online Reddit forum.",
"As in other summarization works (Zhang et al., 2020), we use the TIFU-long subset, containing 37k posts.",
"As there is no official split, we build a random 80:10:10 split for training:validation:test.",
"We refer to Table 3 for statistics on each dataset.",
"To help the model better discriminate between candidates, we found that sampling was useful.",
"Specifically, during training, we rank candidates by decreasing sum of normalized scores for the evaluation metrics and keep the top m top and bottom m bottom candidates.",
"Thus, training time varies in O ( m top + m bottom ) , while inference is in O ( m ) as we need to score each candidate.",
"In practice, we found that taking m top = 1 and m bottom = 1 performed well, on top of decreasing the training time.",
"This means that at training time, the model only sees two candidates per data point.",
"We scale the pool of candidates that these two are sampled from to four decoding methods, totalling 60 summary candidates per source document.",
"We train SummaReranker for five epochs.",
"We use the Adafactor optimizer (Shazeer and Stern, 2018), with maximum learning rate 1e-5, warming up the learning rate linearly over the first 5% training steps.",
"Training on CNN/DM takes four days on a single RTX 2080 Ti GPU.",
"For inference, we need to output a single candidate.",
"After getting predicted probabilities across each metric M , we output the candidate maximizing the sum of predicted probabilities.",
"Note that relaxing inference to allow for a different best candidate for each metric would improve performance, but is not practical.",
"We perform inference with the model checkpoint maximizing the sum of the scores for the metrics on the validation set.",
"First, we investigate how our model performs in the base setup described in 3.",
"We apply SummaReranker on top of PEGASUS and BART models fine-tuned on each half.",
"For each model, we decode using beam search (1) and diverse beam search (2).",
"The latter performs better for PEGASUS, while the former is better for BART.",
"We then apply SummaReranker optimized jointly for R-1, R-2, and R-L on 4508 Decoding methods Evaluation metrics Model Modelstage D train D test m OptimizedMetrics( M ) R-1 R-2 R-L BS BaS Gain(%) PEGASUS (Zhang et al., 2020) 1 {1} {1} 8 _ 44.16 21.56 41.30 _ _ _ PEGASUS our setup 1 {1} {1} 15 _ 44.23 21.48 41.21 87.39 -2.78 _ PEGASUS our setup 1 {2} {2} 15 _ 44.56 20.90 41.58 87.36 -2.81 _ BART (Lewis et al., 2020) 1 {1} {1} 5 _ 44.16 21.28 40.90 _ _ _ BART our setup 1 {1} {1} 15 _ 43.28 20.44 40.06 87.78 -2.48 _ BART our setup 1 {2} {2} 15 _ 44.48 21.21 41.60 88.11 -2.33 _ BART + R3F (Aghajanyan et al., 2020) 1 {1} {1} 5 _ 44.38 21.53 41.17 _ _ _ GSum (Dou et al., 2021) 1 {1} {1} 4 _ 45.94 22.32 42.48 _ _ _ GSum + RefSum (Liu et al., 2021) 2 {1} {1} 4 _ 46.18 22.36 42.91 _ _ _ BART + SimCLS (Liu and Liu, 2021) 2 {2} {2} 16 _ 46.67 22.15 43.54 66.14 _ _ PEGASUS + SR 2 {1} {1} 15 {R-1, R-2, R-L} 45.56 22.23 42.46 87.60 -2.74 3.18 PEGASUS + SR 2 {2} {2} 15 {R-1, R-2, R-L} 46.86 22.01 43.59 87.66 -2.73 5.10 PEGASUS + SR 2 {1, 2} {1} 15 {R-1, R-2, R-L} 46.13 22.61 42.94 87.67 -2.72 4.59 PEGASUS + SR 2 {1, 2} {2} 15 {R-1, R-2, R-L} 46.83 21.88 43.55 87.63 -2.74 4.84 BART + SR 2 {1} {1} 15 {R-1, R-2, R-L} 44.60 21.38 41.36 88.03 -2.40 3.63 BART + SR 2 {2} {2} 15 {R-1, R-2, R-L} 46.47 22.17 43.45 88.43 -2.19 4.48 BART + SR 2 {1, 2} {1} 15 {R-1, R-2, R-L} 45.08 21.79 41.85 88.13 -2.37 5.08 BART + SR 2 {1, 2} {2} 15 {R-1, R-2, R-L} 46.50 22.15 43.50 88.45 -2.18 4.51 PEGASUS + SR ( new SOTA ) 2 {1, 2} {1, 2} 30 {R-1, R-2, R-L} 47.16 22.55 43.87 87.74 -2.71 5.44 PEGASUS + SR 2 {1, 2} {1, 2} 30 {BS, BaS} 45.00 20.90 41.93 87.56 -2.55 4.23 PEGASUS + SR 2 {1, 2} {1, 2} 30 {R-1, R-2, R-L, BS, BaS} 46.59 22.41 43.45 87.77 -2.58 4.39 BART + SR 2 {1, 2} {1, 2} 30 {R-1, R-2, R-L} 46.62 22.39 43.59 88.47 -2.18 5.05 BART + SR 2 {1, 2} {1, 2} 30 {BS, BaS} 44.90 20.85 42.03 88.28 -2.05 6.11 BART + SR 2 {1, 2} {1, 2} 30 {R-1, R-2, R-L, BS, BaS} 45.96 21.79 43.01 88.44 -2.09 4.03 PEGASUS + SR 2 {1, 2, 3, 4} {1, 2, 3, 4} 60 {R-1, R-2, R-L} 47.04 22.32 43.72 87.69 -2.74 _ Table 5: Transfer setup results on CNN/DM .",
"SummaReranker improves a base PEGASUS by 4.57% to 7.21% with 15 candidates, and 8.70% to 9.36% with 30 candidates.",
"With BART, SummaReranker improves by 3.94% to 11.65% with 15 candidates, and 7.98% with 30 candidates.",
"When using several decoding methods, we compare the re-ranker performance with the best baseline among decoding methods.",
"Notably, with SummaReranker, PEGASUS and BART models trained on 50% of the training set now surpass their counterparts trained on the whole training set, achieving 46.19 R-1 with PEGASUS and 45.96 R-1 with BART.",
"This is better than GSum (Dou et al., 2021), the best reported summarization model on CNN/DM.",
"Next, we look at how SummaReranker performs in the transfer setup.",
"That means, we apply it on top of PEGASUS and BART models fine-tuned on the entire dataset, using public checkpoints.",
"We also include R3F (Aghajanyan et al., 2020) and GSum (Dou et al., 2021) in our single-stage model comparison.",
"In terms of second-stage approaches, we compare SummaReranker with RefSum (Liu et al., 2021) and SimCLS (Liu and Liu, 2021).",
"Note that SummaReranker is trained as usual, on the outputs of two base models each trained on 50%.",
"We first optimize for ROUGE metric {R-1, R-2, R-L} with multi-task training on CNN/DM (Ta-ble 5).",
"With two decoding methods, PEGASUS + SummaReranker sets a new state of the art on CNN/DM with 47.16 R-1, 22.55 R-2 and 43.87 RL, corresponding to gains of 2.60/1.65/2.29 R-1/2/L or +5.44% from our diverse beam search baseline.",
"As expected, the relative gains in transfer setup are lower than in base setup.",
"Next, we optimize model-based metrics, and note the difficulty in improving BERTScore, compared to BARTScore.",
"Optimizing jointly ROUGE and model-based metrics improves all metrics, but does not match the results when training only ROUGE.",
"Interestingly, performance gains saturate when adding two extra decoding methods (topk and topp sampling), despite gains in the oracle scores observed in Table",
"1. To assert statistical significance of performance gains, we perform a t-test between SummaReranker scores and scores from the base model with each of the decoding methods being used, and mark with results where the p -value is smaller than 0.05 for all these decoding methods.",
"softmax weights from the gates) for the model optimized on all five metrics in Fig.",
"2. Notably, some experts specialize in certain metrics (for instance, expert 0 on R-2 and expert 4 on R-L).",
"Then, we apply SummaReranker on XSum and Reddit TIFU, as shown in Table 6.",
"We train SummaReranker using the three ROUGE metrics {R-1, R-2, R-L} as objective, and decoding methods {beam search, diverse beam search} to generate the candidates.",
"On XSum, SummaReranker improves a base PEGASUS with beam search candidates by 1.31%, setting a new state-of-the-art of 48.12/24.95/40.00 R-1/2/L.",
"On Reddit TIFU, we improve a base PEGASUS with beam search and diverse beam search (30 candidates) by 9.34%, reaching 29.83/9.50/23.47 R-1/2/L, and a base BART with beam search by 4.22%, reaching 28.99/9.82/22.96 R-1/2/L.",
"Across datasets, training on a combination of beam search and diverse beam search candidates is consistently effective.",
"Beyond summary properties, we investigate the performance of re-ranking itself with rank-based evaluation measures.",
"A perfect re-ranker should always single out the best summary from the rest, yielding oracle results.",
"To evaluate how SummaReranker ranks the best summary, we compute the best summary candidate recall at different thresholds.",
"Since several candidates might get the same metric scores (Appendix C), the best candidate recall at threshold k for the random uniform ranking baseline is not the standard R @ k = km anymore Figure 2: Expert utilization for a base PEGASUS with SummaReranker optimized with {R-1, R-2, R-L, BS, BaS} on CNN/DM , with 10 experts.",
"where m best is the number of best candidates.",
"Following Fig. 3, a PEGASUS with diverse beam search ranking of summary candidates (dashed lines) is not significantly better than the corresponding random baseline from eq.",
"(6) (dot-ted lines) on CNN/DM and Reddit TIFU.",
"However, it improves on it on XSum, confirming the observation made in Table 6 that it is harder to train a re-ranker on this dataset.",
"On all three datasets, SummaReranker (solid lines) significantly pushes the recall at all thresholds.",
"We note +14.90 abso-lute recall@5 improvement on CNN/DM (50.84 versus 35.94, indicated by the black arrow), +9.54 on XSum and +5.23 on Reddit TIFU.",
"Lastly, we demonstrate that re-ranking improvements in quantitative metrics also translate to qualitatively better summaries.",
"Fig. 4 shows an example of summary selected by SummaReranker, alongside its source document, ground-truth (reference) summary and output from the base model.",
"SummaReranker is able to include a whole sentence which was missed by the base summary.",
"We refer to Appendix K for full re-ranking demonstrations on each of the three datasets.",
"We also conduct a human evaluation.",
"We asked three different humans to evaluate 50 randomly sampled test summaries for each dataset.",
"Human raters were graduate students with professional English proficiency (TOEFL scores above 100 out of 120).",
"Humans were shown the source document alongside the top beam search summary from Figure 5: Human evaluation results on all three datasets.",
"PEGASUS, and the corresponding summary candidate selected by SummaReranker.",
"They were asked to choose which one they believe is more faithful.",
"They could choose a tie, because in some cases the base summary and the re-ranked one are very similar, or even identical (Appendix I).",
"In Fig. 5, we see that on average, humans are more likely to pick the SummaReranker candidate.",
"Abstractiveness Given that we are not modifying the base model nor its training procedure, we analyze whether our re-ranking system favors more abstractive candidates.",
"In Fig. 6, we display the percentage of novel n -grams for n in {1,2,3,4}, for a base PEGASUS with beam search (blue) and diverse beam search (purple) decoding, and when adding SummaReranker in both cases (green and red, respectively).",
"As first raised in (See et al., 2017), summary candidates are much less abstractive than ground truth summaries on CNN/DM.",
"Yet, our re-ranker selects more abstractive candidates 4511 Figure 6: Novel n -grams with PEGASUS, across all datasets and with beam search and diverse beam search.",
"according to all n -grams metrics, even more so with diverse beam search, which is already more abstractive than beam search.",
"This observation also holds on Reddit TIFU and XSum (other than 1-grams).",
"XSum summary candidates are already almost as abstractive as the ground truth and it is harder to obtain significant abstractiveness gains through our re-ranking.",
"Speed/Performance trade-off On top of base model training and candidate generation, SummaReranker inference cost is linear in the number of candidates.",
"A single candidate takes on average 38ms to be scored.",
"As seen in Table 5 and Table 6, the performance gains from mixing several decoding methods to generate summary candidates are not scaling consistently (all four decoding methods are not better than just beam search and diverse beam search).",
"To provide more insights on the speed/performance trade-off, we show in Appendix J SummaReranker performance when randomly sub-sampling k { 1 , . . . , 15 } candidates.",
"On CNN/DM, re-ranking as few as two candidates is sufficient to improve on the baseline PEGASUS.",
"On XSum, it needs three to eight, and on Reddit TIFU three to four.",
"As a rule of thumb, it is better to score all candidates when possible, but six to eight candidates provide a good trade-off between speed and performance across datasets.",
"Further Work To encode the source jointly with the summary candidate, we need to truncate the source to a fixed number of tokens.",
"Thus, we are limited by the maximum context window of the language model encoder (512 in the case of RoBERTa-large).",
"Applying SummaReranker to long-document summarization, such as scien-tific articles summarization (Cohan et al., 2018) would need better long-range modeling.",
"In 3, we weighted metric-dependent losses uniformly.",
"We leave to further work the exploration of more complex weight balancing or multi-task learning objectives (Lin et al., 2019).",
"We introduced SummaReranker, the first multi-task re-ranking framework for abstractive summarization.",
"Encoding the source with the candidate, our model predicts whether the summary candidate maximizes each of the metrics optimized for.",
"SummaReranker works well across diverse datasets, models, decoding methods and summarization evaluation metrics.",
"Summaries selected by SummaReranker improve the ROUGE state-of-the-art on CNN/DM and XSum.",
"In addition, we also show that they are more abstractive and more likely to be preferred by human evaluators over base model outputs.",
"This research was supported by the SINGA scholarship and partially supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.",
"We would like to thank anonymous reviewers for their insightful feedback on how to improve the paper."
] |
[
"abstain",
"abstain",
"abstain",
"result",
"result",
"objective",
"other",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"other",
"objective",
"objective",
"method",
"objective",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"result",
"other",
"other"
] |
[
"Millions of conversations are generated every day on social media platforms.",
"With limited attention, it is challenging for users to select which discussions they would like to participate in.",
"Here we propose a new method for microblog conversation recommendation.",
"While much prior work has focused on post-level recommendation, we exploit both the conversational context, and user content and behavior preferences.",
"We propose a statistical model that jointly captures: (1) topics for representing user interests and conversation content, and (2) discourse modes for describing user replying behavior and conversation dynamics.",
"Experimental results on two Twitter datasets demonstrate that our system outperforms methods that only model content without considering discourse.",
"Online platforms have revolutionized the way individuals collect and share information (O'Connor et al., 2010; Lee and Ma, 2012; Bakshy et al., 2015), but the vast bulk of online content is irrelevant or unpalatable to any given individual.",
"A user interested in political discussion, for instance, might prefer content concerning a specific candidate or issue, and only then if discussed in a positive light without controversy (Adamic and Glance, 2005; Bakshy et al., 2015).",
"How do individuals facing such large quantities of superfluous material select which conversations to engage in, and how might we better algorithmically recommend conversations suited to individual users?",
"We approach this problem from a microblog conversation recommendation framework.",
"Where prior work has focused on the content of individual posts for recommendation (Chen Conversation 1 ... [ U 1 ] : The sheer cognitive dissonance required for a liberal to say Clinton is as bad as Trump is just staggering. [ U 2 ] : Hillarists, Troll; they insult Liberals trying to distract from Hillary's Conseratism. [ U 3 ] : I still prefer Hillarist b/c it describes their Cultish and ideological aspects. ... Conversation 2 ... [ U 4 ] : I do not like trump at all, but Comey left her in place knowing Bernie is much stronger. [ U 1 ] : If you're going to actively start rooting against the Democrats, get off my mentions. I have enough GOP doing that. [ U 5 ] : Your tweets are an example of why open primaries are stupid. You're not a Dem, you're just for one guy. [ U 1 ] : No offense, but you've been wrong about pretty much everything so far. Why would I trust your prognostication now? ... Figure 1: Two snippets of conversations on Twitter. [ U i ] : The message is posted by user U i . is the dividing line between training history and test part. U 1 did not reengage in Conversation 1 but reengaged in Conversation 2. et al., 2012; Yan et al., 2012; Vosecky et al., 2014; He and Tan, 2015), we examine the entire history and context of a conversation, including both topical content and discourse modes such as agreement, question-asking, argument and other dialogue acts (Ritter et al., 2010).",
"1 And where Backstrom et al. (2013) leveraged conversation reply structure (such as previous user engagement), their model is unable to predict first entry into new conversations, while ours is able to predict both new 1 In this paper, discourse mode refers to a certain type of dialogue act, e.g., agreement or argument.",
"The discourse structure of a conversation means some combination (or a probability distribution) of discourse modes.",
"and repeated entry into conversations based on a combination of topical and discourse features.",
"To illustrate the interplay between topics and discourse, Figure 1 displays two snippets of conversations on Twitter collected during the 2016 United States presidential election.",
"User U 1 participates in both conversations.",
"The first conversation is centered around Clinton, and U 1 , who is more typically involved with conversations about candidate Sanders, does not return.",
"In the second conversation, however, U 1 is involved in a heated back-and-forth debate, and thus is drawn back to a conversation that they may otherwise have abandoned but for their enjoyment of adversarial discourse.",
"Effective conversation prediction and recommendation requires an understanding of both user interests and discourse behaviors, such as agreement, disagreement, inquiry, backchanneling, and emotional reactions.",
"However, acquiring manual labels for both is a time-consuming process and hard to scale for new datasets.",
"We instead propose a unified statistical learning framework for conversation recommendation, which jointly learns (1) hidden factors that reflect user interests based on conversation history, and (2) topics and discourse modes in ongoing conversations, as discovered by a novel probabilistic latent variable model.",
"Our model is built on the success of collaborative filtering (CF) in recommendation systems, where latent dimensions of product ratings or movie reviews are extracted to better capture user preferences (Linden et al., 2003; Salakhutdinov and Mnih, 2008; Wang and Blei, 2011; McAuley and Leskovec, 2013).",
"To the best of our knowledge, we are the first to model both topics and discourse modes as part of a CF framework and apply it to microblog conversation recommendation.",
"2 Experimental results on two Twitter conversation datasets show that our proposed model yields significantly better performance than state-of-the-art post-level recommendation systems.",
"For example, by leveraging both topical content and discourse structure, our model achieves a mean average precision (MAP) of 0.76 on conversations about the U.S. presidential election, compared with 0.70 by McAuley and Leskovec (2013), which only considers topics.",
"We further con-2 To ensure the general applicability of our approach to domains lacking such information, we do not utilize external features such as network structure, but it may certainly be added in future, more narrowly targeted applications.",
"ducted detailed analysis on the latent topics and discourse modes and find that our model can discover reasonable topic and discourse representations, which play an important role in characterizing reply behaviors.",
"Finally, we also provide a pilot study on recommendation for first time replies, which shows that our model outperforms comparable recommendation systems.",
"The rest of this paper is structured as follows.",
"The related work is discussed in Section 2. We then present our microblog conversation recommendation model in Section 3. The experimental setup and results are described in Sections 4 and 5. Finally, we conclude in Section 6. 2 Related Work Social media has attracted increasing attention in digital communication research (Agichtein et al., 2008; Kwak et al., 2010; Wu et al., 2011).",
"The problem studied here is closely related to work on recommendation and response prediction in mi-croblogs (Artzi et al., 2012; Hong et al., 2013), where the goal is to predict whether a user will share or reply to a given post.",
"Existing methods focus on measuring features that reflect personalized user interests, including topics (Hong et al., 2013) and network structures (Pan et al., 2013; He and Tan, 2015).",
"These features have been investigated under a learning to rank framework (Duan et al., 2010; Artzi et al., 2012), graph ranking models (Yan et al., 2012; Feng and Wang, 2013; Alawad et al., 2016), and neural network-based representation learning methods (Yu et al., 2016).",
"Distinguishing from prior work that focuses on post-level recommendation, we tackle the challenges of predicting user reply behaviors at the conversation-level.",
"In addition, our model not only captures latent factors such as the topical interests of users, but also leverages the automatically learned discourse structure.",
"Much of the previous work on discourse structure and dialogue acts has relied on labeled data (Jurafsky et al., 1997; Stolcke et al., 2000), while unsupervised approaches have not been applied to the problem of conversation recommendation (Woszczyna and Waibel, 1994; Crook et al., 2009; Ritter et al., 2010; Joty et al., 2011).",
"Our work is also in line with conversation modeling for social media discussions (Ritter et al., 2010; Budak and Agrawal, 2013; Louis and Cohen, 2015; Cheng et al., 2017).",
"Topic modeling 376 has been employed to identify conversation content on Twitter (Ritter et al., 2010).",
"In this work, we propose a probabilistic model to capture both topics and discourse modes as latent variables.",
"A further line of work studies the reposting and reply structure of conversations (Gomez et al., 2011; La-niado et al., 2011; Backstrom et al., 2013; Budak and Agrawal, 2013).",
"But none of this work distinguishes the rich discourse functions of replies, which is modeled and exploited in our work.",
"Our proposed microblog conversation recommendation framework is based on collaborative filtering and a novel probabilistic graphical model.",
"Concretely, our objective function takes the form: min L + NLL ( C | ) (1) This function encodes two types of information.",
"First, L models user reply preference in a similar fashion to collaborative filtering (CF) (Hu et al., 2008; Pan et al., 2008).",
"It captures topics of interests and discourse structures users are commonly involved (e.g., argumentation), and takes the form of mean square error (MSE) based on user reply history.",
"This part is detailed in Section 3.1.",
"The second term, NLL ( C | ) , denotes the negative log-likelihood of a set of conversations C , with containing all parameters.",
"A probabilistic model is described in Section 3.2 that shows how the topical content and discourse structures of conversations are captured by these latent variables.",
"The hyperparameter controls the trade-off between the two effects.",
"2 regularization is also added for parameters to avoid model overfitting.",
"For the rest of this section, we first present the construction of L and NLL ( C | ) in Sections 3.1 and 3.2.",
"We then discuss how these two components can be mutually informed by each other in Section 3.3.",
"Finally, the generative process and parameter learning are described in Section 3.4.",
"L Our user reply preference modeling is built on the success of collaborative filtering (CF) for product ratings.",
"However, classic CF problems, such as product recommendation, generally rely on explicit user feedback.",
"Unlike user ratings on products, our input lacks explicit feedback from users about negative preferences and non-response.",
"Therefore, we follow one-class Collaborative Filtering (Hu et al., 2008; Pan et al., 2008), which weights positive instances higher during training and is thus suited to our data.",
"Formally, for user u and conversation c , we measure reply preference based on the MSE between predicted preference score p u,c and reply history r u,c .",
"r u,c equals 1 if u is in the conversation history; otherwise, it is 0 .",
"The first term of objective (Eq.",
"1) takes the following form: L = |U| X u =1 |C| X c =1 f u,c ( p u,c r u,c ) 2 (2) where U consists of users { u } and C is a set of conversations { c } in a dataset.",
"f u,c is the corresponding weight for a conversation c and a target user u .",
"Intuitively, it has a large value if positive feedback (user replied) is observed.",
"Therefore, we adapt the formulation from Pan et al. (2008): f u,c = (cid:26) s if r u,c = 1 (i.e., user replied) 1 if r u,c = 0 (3) where s > 1 , an integer hyperparameter to be tuned.",
"Inspired by prior models (Koren et al., 2009; McAuley and Leskovec, 2013), we propose the following latent factor model to describe p u,c : p u,c = Uu Cc + (1 ) Uu Cc + b u + b c + a (4) Uu and Cc are K -dimensional latent vectors that encode topic-specific information (where K is the number of latent topics) for users and conversations.",
"Specifically, Uu reflects the topical interests of u , with higher value Uu,k indicating greater interest by u in topic k .",
"Cc captures the extents that topics are discussed in conversation c .",
"Similarly, D -dimensional vectors Uu and Cc capture discourse structures in shaping reply behaviors (where D is the number of discourse clus-ters).",
"U u reflects the discourse behaviors u prefers, such as u 1 often enjoys arguments as in the second conversation of Figure 1, while Cc captures the discourse modes used throughout conversation c .",
"By multiplying user and conversation factors, we can measure the corresponding similarity.",
"The predicted score p u,c thereby reflects the tendency for a user u to be involved in conversation c .",
"As pointed out by McAuley and Leskovec (2013), these latent vectors often encode hidden factors that are hard to interpret under a CF framework.",
"Therefore, in Section 3.2, we present a novel probabilistic model which can extract interpretable topics and discourse modes as word 377 distributions.",
"We then describe how they can be aligned with the latent vectors of C and U .",
"Parameter a is an offset parameter, b u and b c are user and conversation biases, and [0 , 1] serves as the weight for trading offs of topic and discourse factors in reply preference modeling.",
"C | Here we present a novel probabilistic model that learns coherent word distributions for latent topics and discourse modes of conversations.",
"Formally, we assume that each conversation c C contains M c messages, and each message m has N c,m words.",
"We distinguish three latent components discourse , topic , and background underlying conversations, each with their own type of word distribution.",
"At the corpus level, there are K topics represented by word distribution Tk ( k = 1 , 2 , ..., K ), while Dd ( d = 1 , 2 , ..., D ) represents the D discourse modes embedded in corpus.",
"In addition, we add a background word distribution B to capture general information (e.g., common words), which do not indicate either discourse or topic information.",
"Dd , Tk , and B are all multinomial word distributions over vocabulary size V .",
"Below describes more details.",
"Message-level Modeling.",
"Our model assigns two types of message-level multinomial variables to each message: z c,m reflects its latent topic and d c,m represents its discourse mode .",
"Topic assignments.",
"Due to the short nature of microblog posts, we assume each message m in conversation c contains only one topic, indexed as z c,m .",
"This strategy has been proven useful to alleviate data sparsity for topic inference (Quan et al., 2015).",
"We further assume messages in the same conversation would focus on similar topics.",
"We thus draw topic z c,m c , where c denotes the fractions of topics discussed in conversation c .",
"Discourse assignments.",
"To capture discourse behaviors of u , distribution u is used to represent the discourse modes in messages posted by u .",
"The discourse mode d c,m for message m is then generated from u c,m , where u c,m is the author of m in c .",
"Word-level Modeling.",
"We aim to separate discourse , topic , and background information for conversations.",
"Therefore, for each word w c,m,n of message m , a ternary switcher x c,m,n { DISC , TOPIC , BACK } controls word w c,m,n to fall into one of the three types: discourse , topic , and background .",
"Discourse words (DISC) are indicative of the discourse modes of messages.",
"When x c,m,n = DISC (i.e., w c,m,n is assigned as a discourse word), word w c,m,n is generated from the discourse word distribution Dd c,m where d c,m is discourse assignment to message m .",
"Topic words (TOPIC) describe the topical focus of a conversation.",
"When x c,m,n = TOPIC, w c,m,n is assigned as a topic word and generated from Tz c,m word distribution given topic of m .",
"Background words (BACK) capture the general information that is not related to discourse or topic.",
"When word w c,m,n is assigned as a background word ( x c,m,n = BACK), it is drawn from background distribution B .",
"Switching among Topic, Discourse, and Background.",
"We further assume the word type switcher x c,m,n is sampled from a multinomial distribution which depends on the current discourse mode d c,m .",
"The intuition is that messages of different discourse modes may show different distributions of the three word types.",
"For instance, a statement message may contain more content words than a rhetorical question.",
"Specifically, x c,m,n Multi ( d c,m ) , where d is a 3 -dimension stochastic vector that expresses the appearing probabilities of three kinds of words (DISC, TOPIC, BACK), when the discourse assignment is d .",
"Stop words and punctuations are forced to be labeled as discourse or background.",
"By explicitly distinguishing different types of words with switcher x c,m,n , we can thus separate word distributions that reflect discourse, topic, and background information.",
"Likelihood.",
"Based on the message-level and the word-level generation process, the probability of observing words in the given corpus is: Pr ( C | , , , , z , d , x ) = CY c =1 M c Y m =1 c,z c,m u c,m ,d c,m Y x c,m,n = BACK d c,m , BACK Bw c,m,n Y x c,m,n = DISC d c,m , DISC Dd c,m ,w c,m,n Y x c,m,n = TOPIC d c,m , TOPIC T z c,m ,w c,m,n (5) And we use negative log likelihood to model corpus likelihood effect in Eq.",
"1, i.e., NLL ( C | ) = 378 log( P r ( C | ) , where parameters set = { , , , , z , d , x } .",
"As mentioned above, the hidden factors discovered in Section 3.1 lack interpretability, which can be boosted by the learned latent topics and discourse modes in Section 3.2.",
"However, it is nontrivial to link the topic-related parameters of Cc to the conversation topic distributions of c , since the former takes real values from to + while the latter is a stochastic vector.",
"Therefore, we follow the strategy from McAuley and Leskovec (2013) to apply a softmax function over Cc : c,k = exp ( T Cc,k ) P Kk 0 =1 exp ( T Cc,k 0 ) (6) We further assume that the discourse mode preference by users, Uu , can also be informed by the discourse mode distribution captured by u , i.e., a user who enjoys arguments may be willing to participate another.",
"So similarly, we define: u,d = exp ( D Uu,d ) P Dd 0 =1 exp ( D Uu,d 0 ) (7) where T and D are learnable parameters that control the peakiness of the transformation.",
"For example, a larger T indicates a more focused conversation, while a smaller T means users discuss diverse topics.",
"Finally, softmax transformation is also applied to Tk , Dd , B , and d , as done in McAuley and Leskovec (2013), with additional parameters Tk , Dd , B , and d (as shown in Figure 2).",
"This is to ensure that the distributions and d are stochastic vectors.",
"In doing so, these distributions can be learned via optimizing and d , which take any value and thus ensure that the cost function in Eq.",
"1 is optimized without considering any parameter constraints.",
"Our word generation process is displayed in Figure 2 and described as follows:",
"For message m = 1 to M c : Compute discourse distribution u c,m by Eq.",
"7 Draw topic assignment z c,m Multi ( c ) Draw discourse mode d c,m Multi ( u c,m ) For word index n = 1 to N c,m : Draw word type x c,m,n Multi ( d ) # % % %,* %,* %,*,.",
"if x c,m,n == BACK : Draw word w c,m,n Multi ( B ) if x c,m,n == DISC : Draw word w c,m,n Multi ( Dd c,m ) if x c,m,n == TOPIC : Draw word w c,m,n Multi ( Tz c,m ) Parameter Learning.",
"For learning, we randomly initialize all learnable parameters and then alternate between the following two steps: Step 1. Fix topic and discourse assignments z and d , and word type switcher x , then optimize the remaining parameters in Eq.",
"1 by L-BFGS (No-cedal, 1980): Update a, b, , , , , = argmin L + NLL ( C | ) (8) Step 2. Sample topic and discourse assignments z and d at the message level and word type switcher x at the word level, using the distributions, computed according to parameters optimized in step 1: Sample z c,m , d c,m , x c,m,n with probabilities p ( z c,m = k ) = c,k p ( d c,m = d ) = u c,m ,d p ( x c,m,n = BACK ) = Bw c,m,n d c,m ,BACK p ( x c,m,n = DISC ) = Dd c,m ,w c,m,n d c,m ,DISC p ( x c,m,n = TOPIC ) = Tz c,m ,w c,m,n d c,m ,TOPIC (9) Step 2 is analogous to Gibbs Sampling (Grif-fiths, 2002) in probabilistic graphical models, such as LDA (Blei et al., 2003).",
"However, distinguishing from previous models, the multinomial distributions in our models are not drawn from a Dirichlet prior.",
"Instead, they are computed based on the parameters learned in Step 1. Our learning process stops when the change of parameters is small (i.e., below a pre-specified 379 Dataset # of # of # of Avg msg Avg conv user conv msg per user per user US Election 4,300 2,013 22,092 5.14 1.23 TREC 10,122 7,500 38,999 3.85 1.71 Table 1: Statistics of two datasets.",
"Datasets.",
"We collected two microblog conversation datasets from Twitter for experiments 3 : one contains discussions about the U.S. presidential election (henceforth US Election ), the other gath-ers conversations of diverse topics based on the tweets released by TREC 2011 microblog track (henceforth TREC ) 4 .",
"US Election was collected from January to June of 2016 using Twitter's Streaming API 5 with a small set of political keywords.",
"6 To recover conversations, Tweet Search API 7 was used to retrieve messages with the in-reply-to relations to collect tweets in a recursive way until full conversations were recovered.",
"Statistics of the datasets are shown in Table 1. Figure 3 displays the number of conversations individual users participated in.",
"As can be seen, most users are involved in only a few conversations.",
"Simply leveraging personal chat history will not produce good performance for conversation 3 The datasets are available at http://www.ccs.",
"neu.edu/home/luwang/ 4 http://trec.nist.gov/data/tweets/ 5 https://developer.twitter.com/ en/docs/tweets/filter-realtime/api-reference/post-statuses-filter.html 6 Keyword list: trump, hillary, clinton, president, politics, and election. 7 https://developer.twitter.com/en/ docs/tweets/search/api-reference/get-saved_searches-show-id recommendation.",
"In our experiments, we predict whether a user will engage in a conversation given the previous messages in that conversation and past conversations the user is involved.",
"For model training and testing, we divide conversations into three ordered segments, corresponding to training, development, and test sets at 75% , 12 .",
"5% , and 12 .",
"5% .",
"8 Preprocessing and Hyperparameter Tuning.",
"For preprocessing, links, mentions (i.e., @user-name), and hashtags in tweets were replaced with generic tags of URL, MENTION, and HASHTAG.",
"We then utilized the Twitter NLP tool 9 (Gimpel et al., 2011; Owoputi et al., 2013) for tokenization and non-alphabetic token removal.",
"We removed stop words and punctuations for all comparisons to ensure comparable performance.",
"We maintain a vocabulary with the 5,000 most frequent words.",
"Our model parameters are tuned on the development set based on grid search, i.e. the parameters that give the lowest value for our objective are selected.",
"Specifically, the number of discourse modes ( D ) and topics ( K ) are tuned to be 10.",
"The trade-off parameter between user preference and corpus negative log-likelihood takes value of 0 .",
"1 , and , the parameter for balancing topic and discourse, is set to 0 .",
"5 .",
"Finally, the confidence parameter s takes a value of 200 to give higher weight for positive instances, i.e., a user replied to a conversation.",
"Evaluation Metrics.",
"Following prior work on social media post recommendation (Chen et al., 2012; Yan et al., 2012), we treat our task on conversation recommendation as a ranking problem.",
"Therefore, popular information retrieval evaluation metrics, including precision at K (P@K), mean average precision (MAP) (Manning et al., 2008), and normalized Discounted Cumulative Gain at K (nDCG@K) (Jarvelin and Kekalainen, 2002) are reported.",
"The metrics are computed per user in the dataset and then averaged over all users.",
"The values range from 0 .",
"0 to 1 .",
"0 , with higher values indicating better performance.",
"we first consider three baselines:",
"1) ranking 8 At least one turn per conversation is retained for training.",
"It is possible that one user only replies in either development set or test set, but it is rather infrequent.",
"9 http://www.cs.cmu.edu/ark/TweetNLP/ 380 Models US Election TREC MAP P@1 nDCG@5 MAP P@1 nDCG@5 Baselines RANDOM 0.018 0.004 0.009 0.006 0.001 0.002 LENGTH 0.025 0.002 0.003 0.013 0.002 0.004 POPULARITY 0.050 0.010 0.025 0.023 0.005 0.010 Comparisons OCCF 0.637 0.589 0.649 0.410 0.385 0.425 RSVM 0.687 0.680 0.690 0.554 0.575 0.559 CTR 0.673 0.649 0.678 0.475 0.431 0.495 ADAPTEDHFT 0.698 0.652 0.706 0.487 0.447 0.504 Our model 0.762 0.750 0.757 0.591 0.591 0.600 Table 2: Conversation recommendation results on US Election and TREC.",
"conversations randomly ( RANDOM );",
"2) longer conversations (i.e., more words) ranked higher ( LENGTH );",
"3) conversations with more distinct users ranked higher ( POPULARITY ).",
"We further compare results with three established recommendation models: OCCF: one-class Collaborative Filtering (Pan et al., 2008), which only considers users' reply history without modeling content in conversations.",
"RSVM: ranking SVM (Joachims, 2002), which ranks conversations for each user with the content and Twitter features as in Duan et al. (2010).",
"CTR: messages in one conversation are aggregated into one post and a state-of-the art Collaborative Filtering-based post recommendation model is applied (Chen et al., 2012).",
"Finally, we also adapt the hidden factors as topics (HFT) model proposed in McAuley and Leskovec (2013) (henceforth ADAPTEDHFT).",
"Because the original model leverages the ratings for all product reviews and does not handle implicit user feedback well, we replace their user preference objective function with ours (Eq. 2).",
"In this section, we first discuss our main evaluation in Section 5.1.",
"A case study and corresponding discussion are provided in Section 5.2 to provide further insights, which is followed by an analysis of the topics and discourse modes discovered by our model (Section 5.3).",
"We also examine our performance on first time replies (Section 5.4).",
"Experimental results are displayed in Table 2, where our model yields statistically significantly better results than baselines and comparisons",
"(paired t -tests, p < 0 . 01 ).",
"For P@K, we only report P@1, because a significant amount of users participate only in 1 or 2 conversations.",
"For nDCG@K, different K values are experimented, which results in similar trend, so only nDCG@5 is reported.",
"We find that the baselines that rank conversations with simple features (e.g., length or popularity) perform poorly.",
"This implies that generic algorithms that do not consider conversation content or user preference cannot produce reasonable recommendations.",
"Although some non-baseline systems capture content in one way or another, only ADAPTEDHFT and our model exploit latent topic models to better represent content in tweets, and outperform other methods.",
"Compared to ADAPTEDHFT, which only considers latent topics under a collaborative filtering framework, our model extracts both topics and discourse modes as latent variables, and shows superior performance on both datasets.",
"Our discourse variables go beyond topical content to capture social behaviors that affect user engagement, such as 381 0 0.10.20.30.40.50.60.70.80.9 1 2 3 >3 MAPOCCF RSVM CTR adapted HFT Our Model Figure 5: MAP scores of models for users involved in varying number of conversations on TREC dataset.",
"Training with Varying Conversation History.",
"To test the model performance based different levels of user engagement history, we further experiment with varying the length of conversations for training.",
"Specifically, in addition to using 75% of conversation history, we also extract the first 25% and 50% of history as training.",
"The rest of a conversation is separated equally for development and test.",
"Figure 4 shows the MAP scores for US Election and TREC datasets.",
"The increasing MAP for all methods as the training history increases indicates that generally, conversation history is essential for recommendation.",
"Our model performs consistently better over different lengths of conversation histories.",
"Results for Varying Degree of Data Sparsity.",
"From Table 1 and Figure 3, we observe that most users in our datasets are involved in only a few conversations.",
"In order to study the effects of data sparsity on recommendation models, we examine in Figure 5 the MAP scores for users engaged in a varying number of conversations, as measured on the TREC dataset.",
"The results on the US Election dataset have similar distributions.",
"As we see, the prediction results become worse for users involved in fewer conversations.",
"This indicates that data sparsity serves as a challenge for all recommendation models.",
"We also observe that our model performs consistently better than other models over different degrees of sparsity.",
"This implies that effectively capturing discourse structure in conversation context is useful to mitigating the effects of Models Conv 1 ( c 1 ) Conv 2 ( c 2 ) OCCF 0.941 0.922 ADAPTEDHFT 0.923 0.954 Our model 0.924 0.961 Table 3: Predicted recommendation scores by different models of U 1 for conversations c 1 and c 2 in Figure 1. U 1 later replies to c 2 but not c 1 , where our model predicts scores of 0 .",
"Latent Dim.",
"User U 1 Conv 1 ( c 1 ) Conv 2 ( c 2 ) Topic 1 (Sanders) 0.92 ( Uu 1 , 1 ) 0.10 ( Cc 1 , 1 ) 0.63 ( Cc 2 , 1 ) Topic 2 (Clinton) 0.14 ( Uu 1 , 2 ) 0.84 ( Cc 1 , 2 ) 0.12 ( Cc 2 , 2 ) Disc 1 (argument) 0.46 ( Uu 1 , 1 ) 0.28 ( Cc 1 , 1 ) 0.38 ( Cc 2 , 1 ) Disc 2 (statement) -0.24 ( Uu 1 , 2 ) 0.98 ( Cc 1 , 2 ) -0.09 ( Cc 2 , 2 ) Table 4: Sample latent dimensions of topics ( Uu 1 for user, and Cc for conversations) and discourse modes ( Uu 1 for user, and Cc for conversations).",
"User U 1 shows interest in topic 1 (about Sanders), which is also a dominating topic in conversation c 2 , but is not interested in topic 2 (about Clinton).",
"U 1 shows a preference for discourse mode 1 (argument) over mode 2 (state-ment).",
"Here we present a case study based on the sample conversations in Figure 1. Recall that user U 1 is interested in conversations about Sanders, and also prefers more argumentative discourse, and thus returns in conversation c 2 but not c 1 .",
"Table 3 shows the predicted scores for the two conversations from OCCF, ADAPTEDHFT, and our model (as in Eq. 2).",
"Both ADAPTEDHFT and our model more accurately recommend c 2 over c 1 , with our model producing a slightly higher recommendation score for c 2 .",
"Table 4 shows the latent dimension values for the learned topics and discourse modes for this user and these two conversations.",
"Based on human inspection, topic 1 appears to contain words about Sanders, which is the main topic in conversation c 2 .",
"Topic 2 is about Clinton, which is a dominating topic in conversation c 1 .",
"Our model also picks up user interest in topic 1 (Sanders), and thus assigns Uu 1 , 1 a high value.",
"For discourse modes, our model also generates a high score for argument discourse (labeled via human inspection) for both the user and c 2 .",
"Ablation Study.",
"We have shown that joint modeling of topical content and discourse modes produces the superior performance for our model.",
"Here we provide an ablation study to examine the relative contributions of those two aspects by setting the trade-off parameter to 1 .",
"0 (topic only) or 0 .",
"0 (discourse only).",
"Table 5 shows that topics or discourse individually improve slightly upon the comparison ADAPTEDHFT, but only jointly do they improve significantly upon it.",
"Topic Coherence.",
"To examine the quality of topics found by our model, we use the CV topic coherence score measured via the open-source toolkit Palmetto 10 , which has been shown to produce evaluation performance comparable to human judgment (Roder et al., 2015).",
"Our model achieves topic coherence scores of 0 .",
"343 and 0 .",
"376 on TREC and US Election datasets, compared to 0 .",
"338 and 0 .",
"371 for the topics from ADAPTEDHFT.",
"Sample Discourse Modes.",
"While our topic word distributions are relatively unsurprising, of greater interest are the discourse mode word distributions.",
"Table 6 shows a sample of discourse modes as labeled by human.",
"Although this is merely a qualitative human judgment at this point, there does appear to be a notable overlap in discourse modes between the two datasets even though they were learned separately.",
"10 https://github.com/AKSW/Palmetto/ 5.4 First Time Reply Results From a recommendation perspective, users may be interested in joining new conversations.",
"We thus compare each recommendation system for first time replies.",
"For each user, we only evaluate for conversations where they are newcomers.",
"Table 7 shows that, unsurprisingly, all systems perform poorly on this task, though our model performs slightly better.",
"This suggests that other features, e.g., network structures or other discussion thread features, could usefully be included in future studies that target new conversations.",
"This paper has presented a framework for microblog conversation recommendation via jointly modeling topics and discourse modes.",
"Experimental results show that our method can outperform competitive approaches that omit user discourse behaviors.",
"Qualitative analysis shows that our joint model yields meaningful topics and discourse representations.",
"This work is partly supported by Innovation and Technology Fund (ITF) Project No. 6904333, General Research Fund (GRF) Project No. 14232816 (12183516), and National Science Foundation Grant IIS-1566382.",
"We thank Shum-ing Shi, Yan Song, and the three anonymous reviewers for the insightful suggestions on various aspects of this work."
] |
[
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"objective",
"result",
"method",
"result",
"objective",
"method",
"result",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"other",
"method",
"other",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"other",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"result",
"abstain",
"method",
"result",
"result",
"other",
"other"
] |
[
"In this paper, we formulate the personalized news headline generation problem whose goal is to output a user-specific title based on both a user's reading interests and a candidate news body to be exposed to her.",
"To build up a benchmark for this problem, we publicize a large-scale dataset named PENS (PErsonal-ized News headlineS).",
"The training set is collected from user impressions logs of Microsoft News, and the test set is manually created by hundreds of native speakers to enable a fair testbed for evaluating models in an offline mode.",
"We propose a generic framework as a preparatory solution to our problem.",
"At its heart, user preference is learned by leveraging the user behavioral data, and three kinds of user preference injections are proposed to personalize a text generator and establish personalized headlines.",
"We investigate our dataset by implementing several state-of-the-art user modeling methods in our framework to demonstrate a benchmark score for the proposed dataset.",
"The dataset is available at https: //msnews.github.io/pens.html .",
"News headline generation (Dorr et al., 2003; Lopy-rev, 2015; Alfonseca et al., 2013; Tan et al., 2017; See et al., 2017; Zhang et al., 2018; Xu et al., 2019; Murao et al., 2019; Gavrilov et al., 2019; Gu et al., 2020; Song et al., 2020), conventionally considered as a paradigm of challenging text summarization task, has been extensively explored for decades.",
"Their intuitive intention is to empower the model to output a condensed generalization, e.g., one sentence, of a news article.",
"The recent year escalation of online content vendors such as Google News, TopBuzz, and This work was done when Xiang was visiting MSRA supported by the MSRA Young Visiting Researcher Program.",
"Corresponding author.",
"etc (LaRocque, 2003) propels a new research direction that how to decorate the headline as an irresistible invitation to users for reading through the article (Xu et al., 2019) since more readings may acquaint more revenue of these platforms.",
"To this end, specified stylized headline generation techniques were proposed, such as question headline (Zhang et al., 2018), sensational headline (Xu et al., 2019) generation, and so on (Shu et al., 2018; Gu et al., 2020).",
"However, the over-decorate headlines might bring negative effects as click-baits begin to become notorious in ubiquitous online services 1 .",
"Hence, the question is now changing to how to construct a title that catches on reader curiosity without entering into click-bait territory.",
"Inspired by the tremendous success of personalized news recommendation (An et al., 2019; Wang et al., 2018; Li et al., 2010; Zheng et al., 2018) where the ultimate goal is to learn users' reading interests and deliver the right news to them, a plausible solution to this question could be producing headlines satisfying the personalized interests of readers.",
"It thus motivates the study of the personalized news headline generation whose goal is to output a user-specific title based on both a user's reading interests and a candidate news body to be exposed to her.",
"Analogous to personalized news recommendations, user preference can be learned by leveraging the behavioral data of readers on content vendors, and the representation could personalize text generators and establish distinct headlines, even with the same news body, for different readers.",
"However, it might be difficult to evaluate the approaches of personalized headline generation due to the lack of large-scale available datasets.",
"First, there are few available benchmarks that simultaneously contain user behavior and news content to train models.",
"For example, most available news rec-1 https://www.vizion.com/blog/ do-clickbait-titles-still-work/ ommendation datasets may predominately contain user-side interaction data, e.g., exposure impressions and click behaviors, but the textual features usually have already been overly pre-processed (Li et al., 2010; Zheng et al., 2018).",
"As a result, advanced NLP techniques that extract useful features from textual data are limited.",
"News headline generation datasets, on the other hand, usually consist of news bodies as well as their headlines, which all come from the news-side (Tan et al., 2017; Zhang et al., 2018) rather than the user-side.",
"Though the MIND dataset (Wu et al., 2020), which was presented by Microsoft, simultaneously contains the user-side behavioral data and the news-side original textual data, it was constructed for personalized news recommendations rather than our problem.",
"The more challenging issue for evaluating personalized headline generation approaches is the severe cost during the test phase.",
"It could be intractable and infeasible to do an A/B test for every model in online environments.",
"An efficient and fair testbed to evaluate the models in an offline mode is in urgent demand to make the effectiveness and reproducibility of proposed models comparable.",
"To this end, we publicize a dataset named PENS (PErsonalized News headlineS) in this paper as a benchmark to testify the performance of personalized news headline generation approaches.",
"The training set of PENS is collected from the user impression logs of Microsoft News 2 , in which 500 , 000 impressions over 445 , 765 users on more than one hundred thousand English news articles are provided.",
"In addition, we collected 103 English native speakers' click behaviors as well as their more than 20 , 000 manually-crafted personalized headlines of news articles on the same news corpus for testing.",
"These manually-written headlines are regarded as the gold standard of the user-preferred titles.",
"Then, proposed methods can take prevailing matching metrics, e.g., ROUGE, BLEU and etc., to verify the performance.",
"Moreover, we propose a generic framework to inject personalized interests into a proposed neural headline generator to enable a beacon for this area, considering there are few existing works that can generate personalized news headlines.",
"In more detail, we devise three kinds of incorporation methods to inject user interest representation into a proposed neural headline generator with a transformer-based encoder and a pointer network-based (See et al., 2 https://microsoftnews.msn.com NewsEncoder NewsEncoder NewsEncoder NewsEncoder User Encoder Candidate News Clicked News Dot Click Probability Click Predictor r u r v Figure 1: Personalized news recommendation framework. 2017) decoder.",
"We implement six state-of-the-arts personalized news recommendation approaches to model user preferences and provide a horizontal standard for the PENS dataset.",
"The experimental results show effective personalization modeling and comprehensive injection of user interests can underpin an improvement in the quality of personalized news headline generation.",
"We expect PENS can serve as a benchmark for personalized headline generation and bolster the research in this area.",
"In this section, we formulate the problem of personalized news headline generation and differentiate it from personalized news recommendations.",
"The problem of personalized news headline generation is formulated as follows.",
"Given a user u on an online content vendor, we denote his past click history as [ c u 1 , c u 2 , . . . , c uN ] where each c represents the headline of user u 's clicked news and each headline is composed of a sequence of words c = [ w c 1 , . . . , w c T ] with the maximum length of T .",
"Then, given the news body of a piece of news v = [ w v 1 , . . . , w v n ] to be exposed to user u , our problem is to generate a personalized news headline H uv = [ y uv 1 , . . . , y uv T ] based on the clicked news [ c u 1 , c u 2 , . . . , c uN ] and v .",
"Here we differentiate our problem from personalized news recommendation whose general framework is shown as Fig. 1.",
"Recall that the aim of personalized news recommendation is computing and matching between the candidate news and the user's interests.",
"Hence, <6 6 7 8 9 10 11 12 13 14 15>15 title length 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14",
"learning accurate news and user representations is critical for this problem.",
"Under the neural framework, the news representation is usually modeled by a news encoder that encodes news title, news body or other attributes via various neural structures (Okura et al., 2017; Wang et al., 2018; Wu et al., 2019a; An et al., 2019; Wu et al., 2019a).",
"The user representation is generated by engraving the high-level aspects over their clicked news sequences using sequential (Okura et al., 2017; An et al., 2019) or attentive modules (Wu et al., 2019b,a), in which every news is encoded by the news encoder in advance.",
"Finally, the two representations are matched by the click predictor, and the whole model is trained by the supervision of click signals.",
"Different from personalized news recommendations, our personalized news headline generation could be regarded as an NLP task than a user modeling and matching problem.",
"Although it similarly needs to model preferences for the individual users as what personalized news recommendations do, the output of our problem is a natural language sequence that the target user might be interested in, i.e., user-preferred news title, rather than a click probability score.",
"In this section, we detail our PENS dataset.",
"The dataset was randomly sampled impression logs of Microsoft News from June 14 to July 12, 2019.",
"Both user behaviors and news contents are involved, and each user was de-linked from the production system when securely hashed into an anonymous ID to reserve the data privacy issues.",
"The PENS dataset contains 113 , 762 pieces of news articles whose topics are distributed into 15 categories.",
"The topical distribution is demonstrated in Fig. 2",
"(c).",
"Each news article in the PENS dataset includes a news ID, a title, a body and a category label.",
"The average length of news title and news body is 10 .",
"5 and 549 .",
"0 , individually.",
"Moreover, we extract entities from each news title and body and link them to the entities in WikiData 3 .",
"It could be taken as an auxiliary source to facilitate knowledge-aware personalization modeling and headline generation.",
"The key statistical information of the PENS dataset is exhibited in Fig. 2",
"(a)(e).",
"The training set of PENS consists of impression logs.",
"An impression log records the news articles displayed to a user as well as the click behaviors on these news articles when he/she visits the news website homepage at a specific time.",
"We follow the MIND dataset (Wu et al., 2020) that we add the news click histories of every individual user to his/her impression log to offer labeled samples for learning user preferences.",
"Hence, the format of each labeled sample in our training set is [ uID , tmp , clkNews , uclkNews , clkedHis ] , where uID indicates the anonymous ID of a user, tmp denotes the timestamp of this impression record.",
"clkNews and uclkNews are the clicked news and un-clicked news in this impression, respectively.",
"clkedHis represents the news articles previously clicked by this user.",
"All the samples in clkNews , uclkNews and clkedHis are news IDs, and they all sort by the user's click time.",
"The histogram of the number of news in the clicked history per user is shown in Fig. 2",
"(f).",
"To provide an offline testbed, we invited 103 English native speakers (all are college students) man-3",
"man-3 https://www.wikidata.org/wiki/Wikidata:MainPage",
"ually create a test set by two stages.",
"At the first stage, each person browses 1 , 000 news headlines and marks at least 50 pieces he/she is interested in.",
"These exhibited news headlines were randomly selected from our news corpus and were arranged by their first exposure time.",
"At the second stage, everyone is asked to write down their preferred headlines for another 200 news articles from our corpus, without exhibiting them the original news titles.",
"Note that these news articles are excluded from the first stage, and only news bodies were exhibited to these annotators in this stage.",
"These news articles are evenly sampled, and we redundantly assign them to make sure each news is exhibited to four people on average.",
"The quality of these manually-written headlines was checked by professional editors from the perspective of the factual aspect of the media frame (Wagner and Gruszczynski, 2016).",
"Low-quality headlines, e.g. containing wrong factual information, inconsistent with the news body, too-short or overlong, etc., are removed.",
"The rest are regarded as the personalized reading focuses of these annotators on the articles and are taken as gold-standard headlines in our dataset.",
"The statistics of the training and test sets of the PENS are shown in Table 1. 4 Our Framework In this section, we illustrate our generic framework for resolving personalized news headline generation, and its key issue is how to inject the user preference into a news headline generator.",
"We devise a headline generator with a transformer encoder and a pointer network decoder as our base model and propose three kinds of manners of injecting the user interests to generate personalized headlines.",
"The user interests can be derived following the approaches in news recommendations community, and we omit its details due to the space limitation.",
"The architecture of our proposed framework is shown as Figure 3.",
"The pin-point of our proposed headline generator is a variant of transformer encoder and pointer network decoder.",
"During the encoding, given the news body of a candidate news v = [ w v 1 , . . . , w v n ] , its word embeddings [ e v 1 , . . . , e v n ] R d w are first fed to a two-layer positional encoder.",
"The first layer aims to enhance the word structure within the whole news body sequence following Vaswani et al. (2017), and we add the positional encoding to each embedding vector with, PE ( pos, 2 i ) = sin( pos/ 10000 2 i/d w ) (1) PE ( pos, 2 i +1) = cos( pos/ 10000 2 i/d w ) (2) where pos is the word position and i is the dimension.",
"We also apply a sentence-layer positional encoding to discover structural relations from higher level.",
"Suppose the W pos RL d s represents the position embedding matrix of sentence level where L is the sentence length and d s is the embedding size, the l -th row of W pos represents the positional embedding of all the words in the l -th sentence.",
"Thus, each word embedding e (cid:48) pos with positional information can be represented as: e (cid:48) pos = ( e pos + PE pos ) W pos [ l ] .",
"where means concatenation.",
"Furthermore, multihead self-attention mechanism (Vaswani et al., 2017) is adopted to capture the word and sentence interactions by, h i = softmax ( E (cid:48) W Qi ( E (cid:48) W Ki ) (cid:62) d k ) E (cid:48) W Vi (4) where d k = d s + d w k and i = 1 , . . . , k given k heads.",
"W Qi , W Ki , W Vi R ( d s + d w ) d k .",
"E (cid:48) represents the word sequence embeddings in candidate news v .",
"Thus, the encoder hidden states h = h 1 h 2 , . . . , h k can be derived.",
"During the process of decoding, the decoded hidden state s t at time step t can be derived after given the input x t , and an attention distribution a t over the encoder hidden states h is calculated as, a t = F ( h, s t ) (5) F ( h, s t ) = softmax ( V (cid:62) att tanh ( W h h + W s s t + b att )) (6) where F represents a function template parameterized by to combine the linear transformation of the encoder and the decoder states, i.e., h and s t .",
"Next, the context vector c t , which can be seen as a fixed-size representation read from the news body at time step t , is computed by a weighted sum of the encoder hidden states over the attention distribution.",
"Then the vocabulary distribution is produced by, P vocab ( w t ) = tanh ( V p [ s t ; c t ] + b v ) , (7) where V p and b v are learnable parameters while P vocab ( w t ) represents the probability distribution over all the words in the vocabulary to predict the word at time step t .",
"Inspired by pointer-generator network (See et al., 2017), which exhibits desirable performance on either dealing with out-of-vocabulary (OOV) words or improving the reproducing factual details with copy mechanism, we adopt a pointer p tgen at decoding step t as a soft switch to choose between generating a word from the vocabulary with a probability of P vocab ( w t ) or copying a word from the news body sampling from the attention distribution a t .",
"Thus, the probability distribution over the extended vocabulary is computed by, P ( w t ) = p tgen P vocab ( w t ) + (1 p tgen ) (cid:88) j : w j = w t a t,j (8) where P vocab ( w t ) is zero when w t is out of vocabulary while (cid:80) j : w j = w t a t,j = 0 when the w t is not in the news body.",
"p tgen is calculated based on the context vector c t , decoder state s t and the decoder input x t : p tgen = T ( c t , s t , x t ) , (9) where T is a function template as Eq.",
"(6).",
"So far, the imperative issue is to personalize the headline generator by injecting the user's preference.",
"Recall that we can obtain user embedding indicating user's reading interests based on his/her historical clicked news sequences, and we denote such representation as u .",
"As the user embedding u is usually not aligned with the word embeddings, it remains challenges to incorporate the user interests to influence the headline generation with personalized information.",
"In our framework, based on our headline generator, we propose three different manners to inject user interests, considering different intuitions, and they are exhibited in Fig. 3.",
"First, the most simple and intuitive choice is to utilize the user embedding u to initialize the decoder hidden state of the headline generator.",
"Second, under the empirical assumption that users may attend on different paragraphs and words in news articles corresponding to their individual preference, we inject u to affect the attention distribution a t in order to personalize the attentive values on the different words in the news body.",
"That is, we modify Eq.",
"(5) and derive a t = F ( h, s t , u ) .",
"Lastly, we incorporate the personalized information to perturb the choice between generating a word from vocabulary or copying a word from the news body, and derive p tgen = T ( c t , s t , x t , u ) .",
"Compared with Eq.",
"(9), u is taken as an auxiliary parameter, where T is also a function template as Eq.",
"(6).",
"In this subsection, we present the training process of our framework.",
"The headline generation can be considered as a sequential decision-making process, hence we optimize a parametrized policy for the generator by maximizing the expected reward of generated headline Y 1: T : EY 1: T G [ R ( Y 1: T )] .",
"For the generator, policy gradient methods are applied to maximize the objective function in Eq.",
"(10), whose gradient can be derived as, J ( ) (cid:39) E y t G ( y t | Y 1: t 1 ) [ log G ( y t | Y 1: t 1 ) R ( Y 1: t 1 , y t )] (11) where the reward R is estimated by the degree of personalization, fluency and factualness as we aim to generate a user-specific and coherent headline to cover the main theme of news articles and arouse personalized reading curiosity.",
"The implemented rewards in our framework contain: (1) The personalization of the generated headline is measured by the dot product between the user embedding and the generated headline representation.",
"Such a score might imply a matching degree of personalization.",
"(2) The fluency of a generated headline is assessed by a language model.",
"We adopt a two-layer LSTM pre-trained by maximizing the likelihood of news body and consider the probability estimation of a generated headline as the fluency reward.",
"(3) We measure the degree of factual consistency and the coverage by calculating the mean of ROUGE (Lin, 2004)-1, -2 and -L F-scores between each sentence in the news body and the generated headline, and then take the average of the top 3 scores as the reward.",
"We average all three rewards as the fi-nal signal.",
"As all the above reward functions only produce an end reward after the whole headline is generated, we apply a Monte Carlo Tree search to estimate the intermediate rewards.",
"In this section, we investigate our proposed PENS dataset and conduct several comparisons to give benchmark scores of personalized headline generation on this dataset.",
"In the following part, we will introduce the compared methods first, and then detail the experimental setup, and finally present the results and analysis.",
"We mainly compare two groups of approaches.",
"The first group consists of various user modeling methods, which are all SOTA neural-based news recommendation methods: (1) EBNR (Okura et al., 2017) learns user representations by aggregating their browsed news with GRU.",
"(2) DKN (Wang et al., 2018) is a deep knowledge-aware network for news recommendation.",
"(3) NPA (Wu et al., 2019b) proposes personalized attention module in both news and user encoder.",
"(4) NRMS (Wu et al., 2019c) conducts neural news recommendation with multi-head self-attention.",
"(5) LSTUR (An et al., 2019) models longand shor-term user representations based on user ID embedding and sequential encoding, individually.",
"(6) NAML (Wu et al., 2019a) proposes multi-view learning in user representation.",
"To the best of our knowledge, there are no exclusive methods for personalized news headline generation.",
"Hence we take several headline generation methods for comparison.",
"(1) Pointer-Gen (See et al., 2017) proposes an explicit probabilistic switch to choose between copying from source text and generating word from vocabulary.",
"(2) PG+RL-ROUGE (Xu et al., 2019) extends Pointer-Gen with as a reinforcement learning framework which generates sensational headlines by considering ROUGE-L score as rewards.",
"We perform the following preprocessings.",
"For each impression, we empirically keep at most 50 clicked news to learn user preferences, and set the length of news headline and news body to 30 and 500 , respectively.",
"Word embeddings are 300 -dimension and initialized by the Glove (Pennington et al., 2014) while the size of position embeddings at sentence level is 100 .",
"The multi-head attention networks have 8 heads.",
"First of all, we conduct news recommendation tasks to pretrain a user encoder with a learning rate of 10 4 on the first three weeks, i.e., from June 14 to July 4, 2019, on the training set, and test on the rest.",
"Notice that the parameters of the user encoder are not updated thereafter.",
"Meanwhile, the headline generator is also pretrained with a learning rate of 0 .",
"001 by maximizing the likelihood of original headlines based on a random but fixed user embedding which can be considered as a global user without personalized information.",
"Next, we train each individual model for 2 epochs following Eq.",
"10, and Adam (Kingma and Ba, 2014) is used for model optimization where we sample 16 sequences for Monte Carlo search.",
"For news recommendation evaluation, we report the average results in terms of AUC, MRR, nDCG@5 and nDCG@10.",
"For personalized headline generation, we evaluate the generation quality using F1 ROUGE (Lin, 2004) 4 including unigram 4 We compute all ROUGE scores with parameters -a -c 95 -m -n 4 -w 1.2.",
"and bigram overlap (ROUGE-1 and ROUGE-2) to assess informativeness, and the longest common subsequence (ROUGE-L) to measure fluency.",
"Here we adopt ROUGE because we care more about evaluating the recall of the generated results.",
"All the reported values are the averaged results of 10 independently repeated runs.",
"Since we include six kinds of user modeling methods from personalized news recommendations and propose three ways of injecting user interests in our framework, we can derive 18 variants of approaches that can generate personalized news headlines.",
"Meanwhile, there are two headline generation baselines, hence we totally have 20 methods for evaluation.",
"The overall performance is illustrated in Table 2, and we have the following observations.",
"First, we can see that every personalized news headline generation method can outperform non-personalized methods like PG.",
"It might be that our proposed framework can generate personalized news headlines by incorporating user interests.",
"Such personalized headlines are more similar to the manually-written ones, which are taken as gold-standard in our evaluation.",
"Second, we find https://pypi.python.org/pypi/pyrouge/0.1.3 that user modeling makes a difference in generating personalized headlines.",
"For instance, NAML achieves the best performance in news recommendation by learning news and user representations from multiple views, i.e., obtaining 66 .",
"18 , 25 .",
"51 , 27 .",
"56 and 35 .",
"17 on AUC, MRR NDCG@5 and NDCG@10.",
"Then injecting the user preferences learned by NAML to the proposed headline generator also gets the highest ROUGE scores with either way of the incorporation.",
"We conjecture it is because better user modeling methods can learn more rich personalized information from click behaviors, and well-learned user embeddings could strive to generate better-personalized headlines.",
"Third, it is reported that the second way of injecting user interests gets the best performance on most of the user modeling methods, e.g., EBNR, DKN and NAML.",
"It is probably because the differentiation of the attention distribution is intensified after the user embedding perturbation, which then impacts the word generation in the decoding process.",
"However, it still remains a large room for explorations on better injecting user representations into the generation process since the second way seems to be defective at some time.",
"To further comprehend our task and the proposed framework, we demonstrate interesting cases from two representative methods, namely one non-personalized method Pointer-Gen (PG) and one personalized method NAML+HG which utilizes the second user interests injection (c.f. Fig. 3).",
"We also exhibit the manually-written headlines by the users and the original news headline as references.",
"From the results shown in Table 3, we can observe that generated headline by non-personalized method might omit some detailed but important information.",
"We believe the reason is that PG is trained via supervised learning to maximize the log-likelihood of ground-truth news headlines.",
"While our framework is trained via RL technique where coverage score is considered as an indicator to encourage the generation to be more complete.",
"In addition, the exhibited cases show that our framework can produce user-specific news headlines in accordance with their individual interests reflected by historical click behaviors.",
"Meanwhile, some key phrases in the personalized-written titles successfully appeared in the machine-generated headlines.",
"Headline generation has been considered as specialized text summarization (Luo et al., 2019; Jia et al., 2020), from which both extractive (Dorr et al., 2003; Alfonseca et al., 2013) and abstractive summarization (Sun et al., 2015; Takase et al., 2016; Tan et al., 2017; Gavrilov et al., 2019; See et al., 2017) approaches prevailed for decades.",
"Extractive methods select a subset of actual sentences in original article, which may derive incoherent summary (Alfonseca et al., 2013).",
"While abstractive models, basically falling in an encoder-decoder (Shen et al., 2017a; Murao et al., 2019) framework, can generate more condensed output based on the latent representation of news content.",
"However, the nature of text summarization methods without considering interactions between news and users renders them ineffective in our personalized headline generation.",
"Recently, stylized headlines generation were proposed to output eye-catching headlines by implicit style transfer (Shen et al., 2017b; Fu et al., 2018; Prabhumoye et al., 2018) or style-oriented supervisions (Shu et al., 2018; Zhang et al., 2018; Xu et al., 2019).",
"However, either training a unified text style transfer model or constructing a personalized text style transfer model for every user is infeasible due to the complex personalized style-related patterns and the limited personalized-oriented examples.",
"Meanwhile, these methods might suffer from the risk of entering into click-bait territory.",
"Personalized News Recommendation is also related to our problem.",
"Among them, content-based recommendations (Okura et al., 2017; Liu et al., 2010; Li et al., 2011; Lian et al., 2018; Wang et al., 2018; Wu et al., 2019a,b) perform user and news matching on a learned hidden space, and user representation is learned based on historical clicked news contents.",
"It inspires us to personalize headline generator by incorporating user embeddings.",
"Deep models (Lian et al., 2018; Wang et al., 2018; Wu et al., 2019b,a), recently, demonstrated significant improvements because of their capabilities in representation learning on both user-side and news-side data.",
"Different from the efforts on personalized news recommendation, our work focuses on generating fascinating headlines for different users, which is orthogonal to existing work.",
"In this paper, we formulated the problem of personalized news headline generation.",
"To provide an offline testbed for this problem, we constructed a dataset named PENS from Microsoft News.",
"The news corpus of this dataset contains more than 100 thousand news articles over 15 topic categories.",
"The training set constitutes of 500 , 000 impressions of 445 , 765 users to learn user interests and construct personalized news headline generator by distant supervisions.",
"The test set was constructed by 103 annotators with their clicked behaviors and manually-written personalized news headlines.",
"We propose a generic framework that injects user interests into an encoder-decoder headline generator in three different manners to resolve our problem.",
"We compared both SOTA user modeling and headline generating approaches to present benchmark scores on the proposed dataset.",
"For future work, we first believe designing more complex and refined approaches to generated more diversified personalized news headlines will be interesting.",
"More importantly, how to improve personalization while keeping factualness will be another interesting work, and it will propel the methods deployable in practical scenarios.",
"Third, news headline personalization might burgeon the news content personalization, which is a more challenging but interesting open problem.",
"The research work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002104, the National Natural Science Foundation of China under Grant No. 92046003, 61976204, U1811461.",
"Xiang Ao is also supported by the Project of Youth Innovation Promotion Association CAS and Beijing Nova Program Z201100006820062."
] |
[
"objective",
"method",
"abstain",
"objective",
"abstain",
"objective",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"other",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"other",
"other",
"other",
"method",
"other",
"other",
"other",
"abstain",
"other",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"objective",
"abstain",
"abstain",
"other",
"other"
] |
[
"We propose an alternate approach to quantifying how well language models learn natural language: we ask how well they match the statistical tendencies of natural language.",
"To answer this question, we analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained.",
"We provide a frameworkpaired with significance testsfor evaluating the fit of language models to these trends.",
"We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions (when present).",
"Further, the fit to different distributions is highly-dependent on both model architecture and generation strategy.",
"As concrete examples, text generated under the nucleus sampling scheme adheres more closely to the type token relationship of natural language than text produced using standard ancestral sampling; text from LSTMs reflects the natural language distributions over length, stopwords, and symbols surprisingly well.",
"Neural language models 1 have become shockingly good at modeling natural language data in recent years (Merity et al., 2017; Conneau and Lample, 2019; Radford et al., 2019).",
"Thus, to test just how well neural language models capture language NLP researchers have started to look beyond standard evaluation metrics such as perplexity, endeavoring to understand which underlying attributes of human language these models are learning.",
"To this end, a nascent literature has emerged that focuses on probing language models (Belinkov 1 In this work, we do not use the term language model to refer to cloze language models such as BERT (Devlin et al., 2019), which do not give us a distribution over strings.",
"Figure 1 : Average number of unique words vs. document length, i.e., typetoken, in text sampled from language models.",
"Values from models' test set are plotted for reference.",
"and Glass, 2019), i.e., determining whether models encode linguistic phenomena.",
"For the most part, these works have been limited to analyses of sentence-level phenomenon, such as subjectverb agreement (Gulordava et al., 2018) and garden path effects (van Schijndel and Linzen, 2018) among a myriad of other properties (Blevins et al., 2018; Chowdhury and Zamparelli, 2018, inter alia ).",
"In this work, we attempt to understand which macro-level phenomena of human language today's language models reflect.",
"That is, we pose the question: Do neural language models exhibit the statistical tendencies of human language?",
"Phenomena that can be measured at this level provide an alternate view of a model's comprehension; for example, rather than exploring whether morphological agreement is captured, we look at whether our models learn the trends across a corpus as a whole, e.g., the token rankfrequency (Zipf's) relationship.",
"In comparison to standard probing techniques, this framework does not require we know a priori how linguistic phenomena should manifest themselves.",
"That is, when there is no law stating the theoretical tendencies of an attribute of natural language or we have reason to believe our language domain does not follow such a law, we can use the statistical tendencies present in empirical data as our baseline.",
"This characteristic both allows us to assess a model's fit to highly corpus-dependent distributionslike the length distributionand mitigates the biases introduced by our own preconceptions regarding properties of natural language.",
"2 More concretely, our paper describes an experimental design and accompanying hypothesis tests to determine precisely whether text generated from language models follows the same empirical trends as human language.",
"Our experiments reveal that adherence to natural language tendencies varies widely with both model architecture and generation strategy, e.g., Fig. 1 shows varying degrees of adherence to the empirical typetoken relationship, an artifact that perplexity alone could not reveal.",
"Our findings suggest this framework is a valuable tool for gaining a deeper understanding of where today's language models are succeeding and failing at capturing human language.",
"Language models are probability distributions over natural language sentences.",
"We define the support of a language model p with parameters as Y := { BOS \u0000 v \u0000 EOS | v 2 V } (1) where V is the model's vocabulary and tokens EOS and BOS demarcate the beginning and end of a string, respectively, and V is the Kleene closure of V .",
"In this paper, we term vocabularies consisting of words closed and those consisting of BPE tokens (Sennrich et al., 2016) open .",
"In the case when p is locally normalized, which is the predominant case for language models, p is defined as the product of probability distributions: p ( y ) = | y | Y t =1 p ( y t | y <t ) (2) where each p ( | y <t ) is a distribution with support over V := V [ { EOS } and y < 1 = y 0 := BOS . To estimate model parameters , one typically optimizes the log-likelihood function over a corpus C train : L ( | C train ) = X y 2 C train log p ( y ) (3) where we call each string y a document . To determine the goodness of fit of a model to the 2 Such biases are naturally introduced by many probing techniques that e.g., draw conclusions from carefully constructed challenge tasks. empirical distribution (defined by C train ), it is standard practice to measure perplexity on a held-out dataset, which is simply a monotonic function of average (per token) log-likelihood under that model. While low perplexity on an evaluation set undoubtedly reflects some level of fit to natural language, it does not give us a fine-grained view of which linguistic attributes a model has learned. 3 Statistical Tendencies of Language Human languages are thought to exhibit statistical tendencies, several of which are explicitly quanti-fied by laws (Altmann and Gerlach, 2016). In this section, we review a subset of these distributions both with and without well-established forms over which we subsequently perform analyses. 3.1 Classical Laws RankFrequency. Zipf's law (1949), otherwise known as the rankfrequency law, states that the frequency of a word in a corpus decays exponentially in the frequency rank of that word, i.e., the frequency ! ( ) of the k th most frequent word w k follows the power-law distribution: ! ( w k ) / k \u0000 s . When fit to natural language text, the free parameter s is typically close to 1 . Zipf's law also has a probabilistic interpretation: the marginal probability that a random word in our corpus takes on the value of the k th most frequent can be expressed as p zipf ( W = w k ) = 1 ( s ) k \u0000 s (4) where ( s ) = 1 / P 1 k =1 k \u0000 s is the normalizing constant of our probability mass function (pmf). The adherence of language to Zipf's law has been widely studied and is considered one of the canonical laws of quantitative linguistics (Baroni, 2009; Li et al., 2010; Moreno-Sanchez et al., 2016). Estimating s from an observed set of rank frequency pairs can be done using standard estimation techniques. Here we use the maximum-likelihood estimate 3 (MLE), employing numerical optimization to solve for s since the MLE of the discrete power law lacks a closed form solution. TypeToken. Heaps' law (Herdan, 1960), also known as the typetoken relationship, states that 3 Derivation in App. A. We may also estimate s using, e.g., least squares over the original or loglog transform of our distribution. However, it has been empirically observed that least-squares estimates under this paradigm are not reliable (Clauset et al., 2009) and further, directly incorporate assumptions that contradict power law behavior (Schluter, 2020). the number of additional unique tokens (i.e., number of types) in a document diminishes as its length increases. Formally, we can express the expected number of types u ( ) as a function of the length l ( ) of the string y via the relationship u ( y ) / l ( y ) \u0000 where \u0000 < 1 is a free parameter. Types may be, e.g., unigrams or bigrams. The above formulation of Heaps' law lacks an obvious probabilistic interpretation. 
However, if we frame Heaps' law as modeling the expected value of the number of types for any given length document, then we can model the relation as a Poisson process, where the marginal distribution over document length follows Heaps' proposed power law. Specifically, we model the number of types for a document of a given length as a non-homogeneous Poisson process (NHPP; Ross, 1996) where our rate parameter \u0000 ( l ( y )) is Heaps' power law relation. The probability that there are k types in a document of length t is then p heaps ( u ( y t ) = k ) = \u0000 ( t ) k k ! exp( \u0000 \u0000 ( t )) (5) for \u0000 ( l ( y )) = l ( y ) \u0000 . Similarly to Eq. (4), we can fit parameters , \u0000 using MLE (see App. A). 3.2 Other Tendencies Natural language has other quantifiable distributions, e.g., over document length or unigrams. While there may not exist well-established laws for the behavior of these (often highly corpus-dependent) distributions, we can observe their empirical distributions w.r.t. a corpus. We review a few here and leave the exploration of others to future work. Length. Using notation from earlier, we estimate the pmf of the distribution over the length of documents in a corpus C as p l ( l ( y ) = k ) / X y 2 C { l ( y ) = k } (6) We can additionally compute statistics of this distribution, such as sample mean: l ( C ) = 1 / |C| P y 2 C l ( y ) . Unigram. Notably, the rankfrequency law of 3.1 leaves the categorical distribution over words unspecified, i.e., it defines the frequency for the k th ranked word without specifying the word itself. In order to make explicit comparisons, we define the unigram distribution w.r.t. corpus C as p uni ( w ) / X w 0 2 C { w 0 = w } (7) Stopwords and Symbols. Certain percentages of words in a string consist of either symbols, i.e., numbers and punctuation, or stopwords, i.e., common words such as that or so that primarily serve a syntactic function. We can model this percentage as a (continuous) random variable S and estimate its probability density function (pdf) as p stop ( s < S s + \u0000 ) (8) / X y 2 C n #stop( y ) l ( y ) 2 ( s, s + \u0000 ] o The pdf for symbols is defined similarly.",
"As with our length distribution, we can compute the means stop , sym of these distributions.",
"In this work, we aim to quantify the degree to which the linguistic distributions of text generated from language models matchor differ fromthose of natural language.",
"To this end, we propose the use of several probability metrics (Mostafaei and Kord-nourie, 2011; Rachev et al., 2013) as our notion of statistical distance.",
"4 For each of these metrics, we present nonparametric statistical significance tests, i.e., tests that may be used when the underlying distribution of observed data is not known.",
"Perhaps the simplest method for measuring the distance between two random variables is through differences in expectations, e.g., means or variances.",
"(Semi-)distances of this nature are formally called primary metrics .",
"To estimate this distance, we can use observations from random samples S 1 and S 2 , e.g., 1 \u0000 2 \u0000 ( S 1 , S 2 ) = ( S 1 ) \u0000 ( S 2 ) .",
"Observing a value of \u0000 ( S 1 , S 2 ) 6 = 0 on its own is not enough to confirm a difference between 1 and 2 ; we need to assess whether the observed distance is significantly above or below 0 .",
"Formally, our null and alternative hypotheses are: H 0 : \u0000 ( S 1 , S 2 ) = 0 (9) H a : \u0000 ( S 1 , S 2 ) 6 = 0 4 Some of these metrics are formally pseudo-distances, as they are not necessarily symmetric.",
"In our setting, we typically do not know the theoretical distributions of the random variables generating S 1 and S 2 , nor of an arbitrary test statistic \u0000 .",
"Consequently, we use resampling techniques to construct the sampling distribution of \u0000 ( S 1 , S 2 ) .",
"Permutation Tests.",
"In a nutshell, a permutation test provides a simple method for constructing the sampling distribution of a test statistic \u0000 through empirical observations.",
"The method uses the value of \u0000 over all possible rearrangements of the observed data points to represent the distribution of the test statistic under the null hypothesis.",
"Using this distribution, we can determine the probability of observing a value of the test statistic (or a more extreme value), which if low, may give us reason to reject a specific null hypothesis.",
"In this work, we only consider statistics \u0000 ( , ) over two samples.",
"We provide pseudocode for this case in App.",
"B. 5 4.2 Simple Metrics Primary metrics provide only a weak measure of the sameness of random variables as they are completely dependent on a single statistic of a distribution.",
"On the other hand, we know a random variable can be completely described by its distribution function.",
"As such, we turn to simple metrics of distance between random variables.",
"Given cumulative density functions (cdfs) P 1 and P 2 over one-dimensional random variables, the KolmogorovSmirnov (KS) metric is D ( P 1 , P 2 ) = sup y | P 1 ( y ) \u0000 P 2 ( y ) | (10) where D 2 [0 , 1] and D ( , ) = 0 indicates the distributions are identical.",
"However, not all random variables can be described in terms of a cdf.",
"For categorical distributions where the support of our random variable is not ordinal, the natural counterpart to the KS metric is the Chi-square distance.",
"This metric has a number of drawbacks (discussed in App. C)primarily that its value can be hard to interpret and so we instead turn to the total variation distance ( TVD )a widely used metric of distance between probability distributions.",
"Given two pmfs p 1 and p 2 , we define TVD as TVD ( p 1 , p 2 ) = sup y | p 1 ( y ) \u0000 p 2 ( y ) | (11) 5 When the number of possible permutations of the data is computationally prohibitive, we may instead use a MC sampling approach, where we sample from the set of possible permutations (Good, 2000).",
"where similarly to the KS metric, TVD is bounded above by 1 and a value of 0 indicates identical distributions.",
"In our setting, we consider two use cases for the KS metric and TVD : as distance metrics between an empirical and theoretical distribution (one-sample) and between two empirical distributions (two-sample).",
"The corresponding hypotheses that we can test with these metrics are: One-Sample Case: (12) H 0 : Sample S is drawn from p H a : Sample S is not drawn from p Two-Sample Case: (13) H 0 : Samples S 1 and S 2 are drawn from same p H a : Samples S 1 and S 2 are not drawn from same p where in the two-sample case, the exact form of p does not need to be known.",
"These hypotheses require the following tests.",
"The KolmogorovSmirov Test.",
"The KS test (Smirnov, 1948) is a nonparametric goodness-of-fit test originally designed to assess the fit of a continuous cdf to empirically-observed data; the two-sample version tests whether two samples come from the same distribution.",
"The method has since been extended to discrete distributions and is regarded as one of the most widely applicable nonparametric goodness-of-fit tests for comparing two distributions (Horn, 1977; Moreno-Sanchez et al., 2016).",
"The test uses the KS metric D as its test statistic; under our null hypothesis, D converges to 0 almost surely in the limit as our number of samples n !",
"1 by the GlivenkoCantelli theorem.",
"6 We may reject the null hypothesis if our test statistic is greater than the critical value, which is computed based off of our sample size and a desired significance level.",
"7 A Test for TVD .",
"Unlike the KS metric, we do not have a (theoretical) limiting distribution for TVD between samples from the same distribution that holds for all density functions (Devroye and Gyorfi, 1990).",
"However, we can construct this distribution using resampling techniques.",
"Formally, when S 1 and S 2 are drawn from the same distribution p where p need not be knownthen the test statistic TVD ( p S 1 , p S 2 ) follows the sampling distribution Z p , i.e., TVD ( p S 1 , p S 2 ) Z p .",
"The distribution of Z p can 6 Also known as the fundamental theorem of statistics. 7 Under the null hypothesis, our text statistic D follows a Kolmogorov distribution.",
"Figure 2 : Vocabulary sizes of test set and model-generated samples.",
"Training set (not shown) has vocabulary size of 53 .",
"2e5 .",
"Only Transformer (AS) and trigram models have a closed vocabulary; the higher red line is the size of the former.",
"We use the above framework to assess the degree to which language models learn various distributions of natural language, i.e., we report metrics outlined in 4 measured over the distributions and quantities defined in 3.",
"We compare samples generated from language models to a reserved test set taken from the same corpus as the model's training data.",
"Each set contains 1 million samples.",
"8 We tokenize all samples using the Moses decoder toolkit (Koehn et al., 2007).",
"All text is lower-cased and only complete unigrams are considered, i.e., when BPE is used, only the detokenized unigram is considered.",
"Length of a string is computed as the number of tokens separated by whitespace.",
"Note that when reporting the KS metric ( D ), we always report the metric between",
"(a) an empirical cdf computed over the respective model-generated samples and",
"(b) a reference cdf, where D p indicates direct comparison with empirical cdf of the test set.",
"D p and D p indicate comparison with cdfs of a parametric distribution, whose parameters are estimated on the model and test set, respectively.",
"Natural Language Corpus.",
"We use English Wikipedia Dumps, 9 preprocessing data following the steps used for XLM (Conneau and Lample, 2019) albeit with a 44 .",
"7e6 train 1e4 valid 1e6 test split.",
"The test set is used in all statistical tests, however, we estimate standard deviations for statistics in Tab.",
"4 (in the Appendix) using samples from 8 Due to our large sample sizes, we should anticipate that our results will almost always be significant, even when effect sizes are trivially small.",
"As such, we will almost assuredly reject our null hypotheses that model-generated samples come from the same distribution as natural language ones.",
"While in this light, the presentation of hypothesis tests in 4 may seem pointless, we provide them for cases where generating many samples for each model setting is computationally prohibitive.",
"9 dumps.wikimedia.org/ the training set; see this table for e.g., parameter estimates over test set.",
"Simulating Corpora from Language Models.",
"Given the distribution p , we may exactly compute statistics and distributions for language models over the entire set Y , weighting examples by the probability assigned to each string; however, doing so is infeasible due to the size of the output space and non-Markovian structure of most neural models.",
"Rather, we turn to sampling to create a representative set S = h y (1) , . . . , y ( N ) i from p .",
"We explore three sampling schemes: ancestral random sampling ( Random ), nucleus sampling ( Nucleus ), and beam sampling ( Beam ).",
"10 In ancestral random sampling, y ( i ) are constructed iteratively according to the distribution y ( i ) t p ( | y ( i ) <t ) (14) where y 0 = BOS .",
"Under the local normalization scheme of Eq.",
"(2), sampling according to Eq.",
"(14) is equivalent to sampling y ( i ) directly from p .",
"In nucleus sampling, our distribution is truncated to the most probable items covering portion n 2 (0 , 1] of the probability mass.",
"Formally, we now sample y ( i ) t ( p ( | y ( i ) <t ) /Z if y ( i ) t 2 V n ( p ( | y ( i ) <t )) 0 otherwise (15) where V n ( p ) V is the smallest subset such that P y 2 V n ( p ) p ( y ) \u0000 n and Z := P y 2 V n ( p ) p ( y ) .",
"Beam sampling uses Eq.",
"(14) as the sampling distribution, but extends a beam of k sequences at each sampling iteration.",
"I.e., k extensions are sampled from p ( | y ( i ) <t ) and the k most probable of the k 2 sampled items remain on the beam; note that unlike standard beam search, this is a stochastic procedure.",
"11 We use a beam size of 5 in all experiments.",
"10 The latter two sampling designs do not result in samples drawn according to our original p .",
"As such, the schemes lead Figure 3 : Distinct vs. unique token distributions (unigram and bigram) for test set and text generated from models.",
"Table 1 : Neural language model statistics.",
"Models.",
"We perform our tests on neural models with three different architectures: a transformer (Vaswani et al., 2017; Baevski and Auli, 2019) (only decoder portion), LSTM (Hochreiter and Schmidhuber, 1997), and Convolutional Neural Network (Dauphin et al., 2017).",
"All models are implemented and trained using fairseq .",
"12 We train models on corpora processed both with and without BPE.",
"We include details for each model in Tab.",
"1. We additionally estimate a trigram model on the training data; formally, we build a model where the probability of observing token x 2 V at position i of the text is estimated as p ( x | x i \u0000 2 , x i \u0000 1 ) (16) = c ( h x i \u0000 2 , x i \u0000 1 , x i ) P x 0 2 V c ( h x i \u0000 2 , x i \u0000 1 , x 0 i ) where c ( ) denotes the function counting occurrences of a sequence in some implicit C .",
"Note that we do not employ smoothing techniques in this model, thus, perplexity over a held-out dataset may diverge and so is not reported in Tab.",
"1. Vocabulary statistics for each sample are shown in Fig.",
"2. We provide samples of model-generated text in App.",
"E. to two new distributions, p ( n ) and p ( b ) , respectively.",
"To understand the rankfrequency relationship implicitly learned by language modelsand how it relates to the rankfrequency distribution present in natural languagewe compute the three KS metrics previously described: D p , D p , and D p .",
"Specifically, for the first two values, we use the cdf of a Zipfian distribution parameterized by s as our referencewhere s is estimated using model generated samples or the test set, respectively.",
"13 These metrics give us a sense of how well the rankfrequency distribution under our language models match a Zipfian distribution.",
"Since the power-law behavior of the token rankfrequency distribution is known to fall off at higher ranks (Piantadosi, 2014; Moreno-Sanchez et al., 2016), we consider solely the first 10,000 ranks in each sample, including when computing D p .",
"We report these values in Tab.",
"2. Values of estimates of s and plots of rankfrequency are shown in App.",
"D. Our results indicate that our models' empirical rankfrequency distributions do not adhere very closely to a standard Zipfian distribution (as shown by D p and D p \u0000 0 ), despite appearing to at a superficial level (see App. D).",
"However, the same is true for our test ( D p = 0 . 148 ), which suggests that our models fit a Zipfian distribution perhaps no more poorly than natural language does.",
"Rather, the model produces qualitatively worst text (see App. E)a trigram model under the beam sampling generation strategyfollows a power law trend the most closely of any of our samples.",
"On the other hand, the small values of D p suggest our 13 s is known to vary with the corpus size |C| (Powers, 1998), however |C| is the same for all sets, so this should not affect our analysis.",
"Table 2 : KS metrics (lower implies closer fit) between models' empirical cdf and reference cdfs for the rankfrequency relationship.",
"D p and D p indicate statistical distance from a Zipfian distribution, where parameter s is estimated using the model and test sets, respectively.",
"D p indicates direct comparison with empirical cdf of test set.",
"p -values (estimated using Monte Carlo simulations (Wood and Altavela, 1978)) for all KS metrics are 0 .",
"001 .",
"For the unigram distribution, we report TVD between empirical cdfs of model and test set.",
"All p -values are < 0 .",
"001 (see App. D).",
"Figure 4 : KS metrics (lower implies closer fit) with reference distributions for the typetoken relationship as a function of document length.",
"D p and D p statistical distance from NHPP distribution for params fit to model text and test sets, respectively; D p is computed directly against the empirical cdf of test set.",
"Shading indicates significance of the statistic.",
"models learn the empirical rankfrequency trends of human text quite well, something that would not be evident by simply looking at adherence to a Zipfian distribution.",
"The combination of these results suggest the limitation of using adherence to Zipf's law as a gauge for a model's consistency with natural language.",
"Fig. 3 shows the typetoken trend for all corpora and generation schemes.",
"While most models appear not to follow the same trend as the natural language distribution (as depicted by our test set), we observe that transformers under the nucleus sampling generation scheme match it most closely.",
"Indeed, both models based on the transformer architecture exhibit remarkably similar trends in these experiments, despite having different vocabulary sizes and hyperparameters: both in their generally close fit to the natural language typetoken distribution and in their visible fall-off for longer length sequences.",
"The latter observation reveals a deficiency that is seemingly specific to the transformer architectureone that may be linked to observations in natural language generation tasks.",
"More specifically, we take this as quantitative evidence for recent qualitative observations that when left to generate lots of text, neural language models based on the transformer architecture tend to babble repetitively (Holtzman et al., 2020; Cohen and Beck, 2019; Eikema and Aziz, 2020).",
"To provide a more mathematically rigorous analysis, we compute KS metrics, 14 again presenting three values: D p , D p , and D p .",
"In Fig. 4, we can see that model-generated text follows a NHPP parameterized by Heaps' law moderately well ( D p ); there are larger divergences at the tails of document length.",
"However, most do not follow an NHPP with the same parameters as our test set ( D p ).",
"Further, in contrast to rankfrequency, the typetoken distribution is more disparate from the empirical natural language distribution than our parameterized ones, as shown by high values of D p .",
"While both transformers exhibit the closest fit for all document lengths, which is in-line with our observations in Fig. 3, statistical distance from the natural language distribution for all models and in all settings increases with document length.",
"14 3.1 provides motivation for comparing distributions at individual time steps rather than collectively over time; analyzing Eq.",
"(5) for all document lengths simultaneously would not give us a sense of how the power-law fit changes as a function of document length.",
"Because we do not have a well-established law dictating the form of the natural language unigram distribution, we compare only empirical pmfs from model-generated samples and the test set directly.",
"Further, as the distribution over unigrams is categorical, we employ TVD following 4.2.",
"Our results in Tab.",
"2 indicate that language models generally capture the unigram distribution quite well.",
"The transformer (AS), which has a closed vocabulary, consistently performs poorly in comparison to other models.",
"While we might speculate this outcome is a result of disparate tails between empirical cdfsi.e., the part of the distribution over infrequent words, which may have been omitted from the closed vocabulary but could still be generated using BPEthe TVD metric in this setting should generally be robust to tail probabilities.",
"15 This suggests that BPE (or similar) vocabulary schemes may lead to models that can better fit this natural language distribution.",
"Similarly to the unigram distribution, for length, stopwords and symbols, we compare solely empirical cdfs.",
"We use the set of English stopwords defined by NLTK (Bird et al., 2009).",
"We define the set of symbols as tokens consisting solely of punctuation and numerical values.",
"Our results in Tab.",
"3 demonstrate that our language modelsat least when using random and nucleus sampling mimic these natural language distributions quite well.",
"Notably, text generated from an LSTM using random sampling follows all three distributions the closest of any model, suggesting LSTMs may have an inductive bias that is helpful for capturing these distributions.",
"On the other hand, using beam sampling leads to strong divergence from natural language distributions across the board.",
"Results for differences in distribution means in the permutation testing framework can be found in App.",
"D. With respect to the length distribution, these results are perhaps surprising: the local-normalization scheme used by the majority of language generation models (and by those in these experiments) has been claimed to result in models that favor shorter than typical sequences (Sountsov and Sarawagi, 2016; Murray and Chiang, 2018).",
"The results in Tab.",
"3 and Fig. 5 suggest otherwise.",
"15 We observe this empirically; calculating TVD between distributions truncated to the (union of the) first 1000 ranked unigrams lead to almost the exact same result.",
"Figure 5 : Boxplots showing the distribution of sample length per model and generation scheme.",
"Distribution of test set is repeated in each group for reference.",
"Specifically, we see that our models fit the natural language length distribution of our corpus quite closely, in terms of both overall distributions and means (see App. D).",
"Rather, it appears that the generation strategy may be the cause of prior observations.",
"This finding raises further questions: since models capture the length distribution well, is a language model more likely to produce degenerate text (e.g., repetitions) than the EOS token if only long documents are used in training?",
"We posit that corpus preprocessing should perhaps be more carefully considered in light of these results.",
"Across results, we observe that text generated using the nucleus sampling decoding scheme often aligns with natural language more closely than text produced using other generation strategies.",
"This suggests that nucleus sampling performs a helpful alteration to a standard distribution learned via MLE, which may in turn provide motivation for recent efforts to employ truncated or sparse probability distributions directly at training time, e.g., truncated loss (Kang and Hashimoto, 2020) or entmax loss (Peters et al., 2019).",
"We additionally observe large discrepancies in both 5.1 and 5.2 between the results when using empirical natural language cdfs vs. parametric ones.",
"We take this as a warning that assumptions about the forms of linguistic distributionssuch as the ones employed by challenge tasks in probingcan have significant effects on results.",
"In the last few years, a number of works have extended language model analysis beyond simple",
"Table 3 : KS metrics ( D p ) between empirical length, stopword, and symbol distributions of test set and model generated text.",
"p -values (estimated using Monte Carlo simulations (Wood and Altavela, 1978)) for all KS metrics are 0 .",
"001 .",
"evaluation metricslike perplexityin order to understand what attributes of human language these models are learning.",
"Some use task-based approaches, i.e., they design a set of tasks that require a specific subset of linguistic knowledge then evaluate model performance on these tasks (Linzen et al., 2016; Gulordava et al., 2018; Jiang et al., 2020, inter alia ).",
"Others use model-based approaches, where a separate model is trained to perform some auxiliary task on representations learned by the model under test (Blevins et al., 2018; Giulianelli et al., 2018; Sorodoc et al., 2020, inter alia ).",
"We direct readers to Belinkov and Glass (2019) for a full survey of probing methods.",
"These approaches have drawbacks; for example, introducing a secondary model to determine what the original model has learned presents confounding factors (Hewitt and Liang, 2019).",
"The designing of auxiliary tasks for assessing linguistic knowledge requires large manual effort and lends itself to implicit bias about how linguistic phenomena should manifest.",
"In contrast, our work allows us to take a hands-off approach to analyzing language models.",
"We see the benefit of this in 5, where our results without an assumed model of statistical tendencies give us a much different sense of which empirical properties of human-generated text our models have learned.",
"Our work is closest to that of Takahashi and Tanaka-Ishii (2017, 2019) who use model generated text to visually analyze whether language models reflect well-established statistical tendencies.",
"In contrast, our work provides a quantitative framework, along with appropriate significance tests, 16 for evaluating distribution fits.",
"We additionally assess the fit of language models to our test set directly, rather than solely to established laws.",
"Further, our analysis includes different generation strategies, multiple neural architectures, and a wider variety of empirical language distributions.",
"In this work, we present a framework for determining the linguistic properties learned by language models through analysis of statistical trends in generated text.",
"We find that neural language models accurately capture only a subset of natural language distributions and that this subset is highly dependent on both model architecture and generation strategy; no one configuration stands out as capturing all linguistic distributions.",
"Ultimately, we see this analysis framework as a means for a more fine-grained evaluation of language models than perplexity alone can provide.",
"Uncovering which linguistic properties language models have learned and which they have notshould help us to understand both the inductive biases of various models and via which avenues they can still be improved.",
"There are a number of important axes of variation that this work does not explore: perhaps most importantly, our results are limited to a single corpora in the English language.",
"A cross-linguistic analysis may reveal whether different model architectures exhibit inductive biases compatible with different languages; observing how these metrics change as a function of corpus size would have implications about the effects of data availability.",
"An exploration of the correlation of these metrics with other quantifications of model performance, such as perplexity or a model's ability to capture sentence level phenomenon, may help us understand how comprehensive other evaluation metrics are.",
"We leave these analyses as future work.",
"We thank Adhi Kuncoro for helpful discussion and feedback in the middle stages of our work and Tiago Pimentel, Jason Wei, and our anonymous reviewers for insightful feedback on the manuscript.",
"We additionally thank B. Bou for his concern."
] |
[
"objective",
"method",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"method",
"method",
"method",
"result",
"abstain",
"abstain",
"method",
"abstain",
"other",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"objective",
"abstain",
"method",
"objective",
"method",
"abstain",
"method",
"result",
"result",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"other",
"other"
] |
[
"The IMPRESSIONS section of a radiology report about an imaging study is a summary of the radiologist's reasoning and conclusions, and it also aids the referring physician in con-firming or excluding certain diagnoses.",
"A cascade of tasks are required to automatically generate an abstractive summary of the typical information-rich radiology report.",
"These tasks include acquisition of salient content from the report and generation of a concise, easily consumable IMPRESSIONS section.",
"Prior research on radiology report summarization has focused on single-step end-to-end models which subsume the task of salient content acquisition.",
"To fully explore the cascade structure and explainability of radiology report summarization, we introduce two innovations.",
"First, we design a two-step approach: extractive summarization followed by abstractive summarization.",
"Second, we additionally break down the extractive part into two independent tasks: extraction of salient (1) sentences and (2) keywords.",
"Experiments on English radiology reports from two clinical sites show our novel approach leads to a more precise summary compared to single-step and to two-step-with-single-extractive-process baselines with an overall improvement in F1 score of 3-4%.",
"A diagnostic radiology report about an examination includes FINDINGS in which the radiologist describes normal and abnormal imaging results of their analysis (Dunnick and Langlotz, 2008).",
"It also includes IMPRESSIONS or a summary that communicates conclusions about the findings and suggestions for the referring physician; a sample report is shown in Table 1.",
"FINDINGS are often lengthy and information-rich.",
"According to a survey of referring physicians, IMPRESSIONS may be the only part of the report that is read (Wallis and McCoubrie, 2011).",
"Overall, referring physicians seem to appreciate the explainability (or self-explanitoriness) of FINDINGS there is no evidence of midline shift or mass effect .",
"IMPRESSIONS as it helps them evaluate differential diagnoses while avoiding additional conversations with the radiologist or the need for repeat procedures.",
"A well known end-to-end method for text summarization is two-step : extractive summarization followed by abstractive summarization.",
"For instance, Chen and Bansal (2018) initially train extractive and abstractive systems separately and then use the extractive system as an agent in a single-agent reinforcement learning (RL) setup with the abstractive system as part of the environment.",
"Their extractive system extracts salient sentences and the abstractive system paraphrases these sentences to produce a summary.",
"This summary is in turn used to compute the reward for RL training.",
"However, this single-agent setup often fails to extract some 1542 salient sentences or it extracts irrelevant ones, leading to the generation of incomplete/incorrect IMPRESSIONS .",
"We hypothesize that granular categories of core concepts (e.g., abnormalities, procedures) can be leveraged for generating more comprehensive summaries.",
"Thus, a separate RL agent is dedicated to the task of extracting salient keywords (core concepts) in the two-step system.",
"The novelty in this approach is that the new, second agent can now collaborate with the first one and the two can influence each other in their extraction decisions.",
"Multiagent reinforcement learning (MARL) requires that an agent coordinate with the other agents to achieve the desired goal.",
"MARL often has centralized training and decentralized execution (Foerster et al., 2016; Kraemer and Banerjee, 2016).",
"There are several protocols for MARL training, such as sharing parameters between agents and explicit (Foerster et al., 2016, 2018; Sukhbaatar et al., 2016; Mordatch and Abbeel, 2018) or implicit (Tian et al., 2020) communication between agents by using an actor-critic policy gradient with a centralized critic for all agents (Foerster et al., 2018).",
"The aim of these protocols is to correctly assign credits so that an agent can deduce its contribution to the team's success.",
"To train our cooperative agents that extract salient sentences and keywords, we propose a novel Differentiable Multiagent Actor-Critic (DiMAC) RL learning method.",
"We learn independent agents in an actor-critic setup and use a communication channel to allow agents to coordinate by passing real-valued messages.",
"As gradients are pushed through the communication channel, DiMAC is end-to-end trainable across agents.",
"The novelties in the paper are threefold: a summarization system that leverages core concepts via keywords, refines them and makes them the basis for more fine-grained explainability a multi-agent RL (MARL) based extractive component for a two-step summarization framework, a Differentiable Multi-agent Actor-Critic (Di-MAC) with independent actors leveraging a communication channel for cooperation The remaining paper is structured as follows.",
"two-step framework.",
"In Section 3, we introduce the DiMAC training algorithm.",
"In Section 4, we describe training data and experiments.",
"In Section 5, we discuss the results.",
"In Section 6, we discuss related work.",
"In Section 7, we present our conclusions.",
"Problem statement.",
"We design a two-step summarization framework that takes the FINDINGS ( F ) section of a radiology report (consisting of a sequence of sentences) and a set of keywords ( K ) as input and produces an IMPRESSIONS ( I ) section (consisting of a sequence of sentences).",
"In the first step of the framework, the two extractors independently select words and sentences from FINDINGSF but also coordinate such that the selection of salient words is followed by the selection of the sentence comprising these words.",
"In the next step, a seq2seq abstractor paraphrases the selected sentences to generate IMPRESSIONSI .",
"Figure 1 illustrates the proposed framework.",
"We refer to Table 2 for basic notations used in this paper.",
"We often combine notations to indicate a framework component concisely.",
"Two-step summarization framework.",
"The proposed system includes encoder networks to encode words and sentences into vector representations.",
"It also includes two pointer extractor networks (Vinyals et al., 2015) to determine salient words and sentences by selecting their indices.",
"Both extractor networks run for the same number of steps; however, at each step, the output index of one extractor network is chosen while the other is set as empty ( ).",
"When the input is , an extractor pauses its activity and guides the other extractor in an optimal direction.",
"encoder, E w 2 w , is run on m word embeddings of FINDINGS sentences to obtain word representations, { h E w 2 w 1 , , h E w 2 w m } .",
"A convolutional network ( Conv ) is run on concatenated word ( v w ) and position ( v p ) embeddings in a sentence to obtain an intermediate sentence representation ( h s ).",
"Then, a bi-directional LSTM sentence encoder, E s 2 s , leverages the intermediate representations to obtain the final sentence representations, { h E s 2 s 1 , , h E s 2 s n } .",
"Extractors.",
"Two LSTM based pointer extractors, i.e., word, D w 2 w , and sentence, D s 2 s , select a source word and sentence index at each step of decoding respectively.",
"At any step j of decoding, each extractor independently uses its hidden state h D w 2 w j and h D s 2 s j to compute an attention score over its source item w i and s k as: wi,j = softmax ( v T ( WD h D w 2 w j + WE h E w 2 w i )) sk,j = softmax ( v T ( WD h D s 2 s j + WE h E s 2 s k )) where WD , WE , v , WD , WE and v are trainable parameters, T and are transpose and tanh functions respectively, and softmax normalizes the scores.",
"Word and sentence context vectors are computed using attention scores and encoder representations as c wj = (cid:80) mi =1 wi,j h E w 2 w i and c sj = (cid:80) nk =1 sk,j h E s 2 s k respectively.",
"word or sentence extractor output is set to is based on a switch probability q j = ( switch ( h D w 2 w j , c wj , h D s 2 s j , c sj )) , where switch is a feed-forward network (omitted in Figure 1).",
"The switch value of 0 or 1 indicates whether to set the output of sentence or word extractor to .",
"Based on its current cell state h D s 2 s j , D s 2 s computes the next cell state, both the context vectors c wj and c sj and the selected source item encoder representation, h E s 2 s .",
"Sharing context vectors between extractors is similar to the cross attention mechanism as described by Jadhav and Rajan (2018).",
"In case D s 2 s is at pause (i.e., q j =0), the E s 2 s end representation is taken as the selected item representation.",
"D w 2 w follows the same approach to compute its next state.",
"As we lack gold standard FINDINGS keywords and sentence-wise one-to-one match between IMPRESSIONS and FINDINGS to train networks to perform selection, we heuristically obtain such labels.",
"See Section 4.2 for details.",
"We perform a maximum-likelihood (ML) end-to-end training of the encoder-extractor networks to minimize the following loss; (cid:80) tj =1 (1 y qj )( y wj log wj ) y qj ( y sj log sk ) y qj log q j , where t is the step when D s 2 s selects a dummy END , which indicates end of the extraction, and y qj , y sj and y wj are heuristi-1544 cally obtained switch, word and sentence selection labels at step j respectively.",
"Abstractor.",
"The abstractor condenses each selected sentence to a concise summary.",
"We employ a pointer generator network (See et al., 2017) for this purpose.",
"It uses a copy mechanism to solve the out-of-vocabulary (OOV) problem and a coverage mechanism to solve the repetition problem.",
"See (See et al., 2017) for details.",
"We independently train the abstractor using heuristically obtained one-to-one matches between FINDINGS and IMPRESSIONS sentences.",
"As extractor and abstractor are separately trained in a two-step framework, Chen and Bansal (2018) proposed using RL training with the extractor assuming the agent role and the abstractor as part of the environment to address the separation.",
"Furthermore, as RL loss is computed out of final summary and ground-truth IMPRESSIONS , RL training addresses the error due to heuristic labels in the pretrained networks.",
"Unlike Chen and Bansal (2018), our setup involves multiple extractors, so we use MARL for the coordination.",
"In other words, the word and sentence extractors D w 2 w and D s 2 s operate as RL agents a w and a s (Figure 1, right sidie).",
"In (Foerster et al., 2018), an actor-critic MARL has a centralized critic and parameter-sharing actors.",
"In contrast, our extractors have different characteristics, e.g., amount of selection (salient words greater than sentences) and size of source representations; therefore, we exclude parameter sharing between actors.",
"Additionally, to not have actors influence each other's policies, we have a critic that estimates the value function by not conditioning on the actions of other agents, thereby ensuring actor independence.",
"Furthermore, we introduce a communicator ( m ) that coordinates actors through message passing.",
"The dedicated channel m addresses the issue of the environment appearing nonstationary due to independent agents; see (Foerster et al., 2016; Sukhbaatar et al., 2016; Mordatch and Abbeel, 2018).",
"The channel allows gradients to flow between actors, transforming the setup into an end-to-end Differentiable Multi-agent Actor Critic (DiMAC).",
"The actors and the communicator are initialized with the maximum-likelihood (ML) trained extractors and switch network, respectively.",
"{ s 1 , , s n } , respectively.",
"At any decoding step j , actors choose actions (i.e., source selection) u a w j and u a s j by using policy networks a w and a s and hidden states h a w j and h a s j .",
"Due to the communication between actors in DiMAC training, we intuitively expect some correlation in the actions.",
"Reward.",
"For any decoding step j , if the communicator indicates sentence selection ( m = 0 ), a sentence reward r a s j is computed using R 1 (ROUGE unigram recall) between the abstract summary s a s j of selected sentence s a s j (out of action u a s j ) from the abstractor and a ground-truth IMPRESSIONS sentence.",
"We sequentially match summary and IMPRESSIONS sentences such that a s learns to select relevant sentences sequentially.",
"Similarly, word reward r a w j for selected word w a w j out of action u a w j is 1 if the word is in the subset of keywords in FINDINGS , KF , else it is 0.",
"Again, we match selected and FINDINGS keywords sequentially.",
"When an agent selects extra items, the reward for those selections is 0, and thus, the agent learns to select only relevant sentences and keywords.",
"In addition, joint actions of actors eventually generate a global reward in a multi-agent cooperative setting as: r g = R 1 ( { s a s 1 , , , , s a s t } , I )+ R 1 ( { w a w 1 , , , , w a w t } , KF ) , where t is the step when a s selects END and is a hyperparameter to adjust the global word reward contribution.",
"As KF keywords are not gold-standard, we set = 0 .",
"1 ; this means that generated summary sentences drive most of the global learning.",
"r g is included as the reward at the last step t for both actors.",
"Action value functions Q a w j and Q a s j for actions u a w j and u a s j are estimated as E u a wj : t ,h a wj : t [ G a w j | h a w j , u a w j ] and E u a sj : t ,h a sj : t [ G a s j | h a s j , u a s j ] , respectively, where G a w j and G a s j are discounted rewards computed as (cid:80) t j l =0 l r a w j + l and (cid:80) t j l =0 l r a s j + l and = 0 .",
"99 is a hyperparameter.",
"Critic.",
"Like the actors, the critic c is an LSTM based network.",
"It runs for the same number of steps as the actors and estimates gradients to train them.",
"As the critic is used only in training, at each step j , the critic conditions on the actors' ground-truth selection indices, y sj and y wj , as the actions and uses these indices to obtain word and sentence encoder representations.",
"In addition to source representations, it uses its state, h c j , and attends to all encoder states, { h E w 2 w 1 , } and { h E s 2 s 1 , } ) to estimate a value function V j .",
"V j is then used to compute advantage functions A a w j 1545 Algorithm 1 Differentiable Multi-Agent Actor Critic 1: procedure TRAIN-DIMAC 2: Initialize parameters of actors ( a w and a s ), critic ( c ) & communicator ( m ) as a s := D s 2 s , a w := D w 2 w , c := D w 2 w & m := switch 3: for each training episode i do 4: step j 1 5: while action u a s j (cid:54) = END do 6: compute actors & critic states 7: sample actions u a s j & u a w j 8: compute rewards r a w j & r a s j for u a s j & u a w j 9: compute message m j & value function V j 10: j j + 1 11: compute global reward r g 12: for j = t to 1 do 13: compute discounted reward G a s j and G a w j 14: estimate action-value functions Q a w j & Q a s j 15: compute advantages A a s j & A a w j 16: accumulate critic gradient c 17: accumulate actor gradients a s & a w 18: update critic c i +1 = c i c 19: update actors as a s i +1 = a s i + a s & a w i +1 = a w i + a w 20: return a s , a w & m and A a s j for actors as Q a w j V j and Q a s j V j .",
"At any step, one of the two ground-truth actions y sj / y wj is empty.",
"Therefore, the computed value and action-value functions V j and Q j at that step intuitively become agent-specific, resulting in independent agent learning.",
"Finally, agent specific advantage functions are used to compute actor gradients as a w log a w j A a w j and a s log a s j A a s j .",
"Importantly, value, action-value and advantage can be calculated in a single forward pass of the actor and critic for each agent.",
"See appendix for details and proofs.",
"Communication.",
"The communicator m (Fig-ure 1, red circles) passes messages between the actors.",
"Actor previous hidden states and contexts, h a s j , h a w j , c s j and c w j , are fed to m and a sigmoidal m j is obtained.",
"Value m j is fed to a s while 1 m j is fed to a w .",
"The gradient of m j flows between actors during backpropagation and provides rich training signal that minimizes the learning effort.",
"See Algorithm 1 for DiMAC training algorithm details.",
"We preprocessed and filtered radiology reports from two medical centers in the USA (Courtesy of Princeton Radiology, Princeton and University",
"of Colorado Health).",
"1 The resulting dataset comprises 37,408 radiology reports, which we randomly split into training (31,808), validation (3,740) and test sets (1,860).",
"Table 3 gives dataset statistics.",
"Training labels.",
"Given an IMPRESSIONS sentence, we find a unique FINDINGS sentence with the highest sentence similarity score.",
"We follow Chen and Bansal (2018) and Liu and Lapata (2019) and use ROUGE-L as the sentence similarity scorer.",
"Furthermore, they use a greedy matching algorithm that takes similarity scores of all IMPRESSIONS and FINDINGS sentence combinations and yields a sequence of unique FINDINGS indices { y s 1 , } of size equivalent to the length of IMPRESSIONS .",
"There is a 1-to-1 correspondence between FINDINGS sentences at indices and IMPRESSIONS sentences.",
"We refer to the papers for more details.",
"These 1-to-1 correspondence are used for abstractor pretraining.",
"We use AutoPhrase (Shang et al., 2018) to extract keywords from training reports automatically.",
"We select only high-quality keywords, K , and avoid too frequent ones as these can bias the system to only perform keyword selection.",
"We implement an empirical threshold determined by hyperparameter search experiments.",
"2 We then find a subset of keywords, KF , in FINDINGSF and compile their indices { y w 1 , } .",
"As the two extractors run for the same number of steps, we interleave the above sentence and word indices { y s , } and { y w , } into one sequence.",
"In more detail, given a sentence index, all keywords indices within that sentence are placed in the sequence, followed by its index.",
"A binary switch variable y q (with values 0 and 1) distinguishes the 1 Sentences split using Stanford CoreNLP (Manning et al., 2014).",
"The following reports are excluded:",
"(a) no FINDINGS and/or IMPRESSIONS ;",
"(b) FINDINGS has fewer than 3 words;",
"(c) FINDINGS has fewer words or fewer sentences than IMPRESSIONS .",
"We replace special tokens like numbers, dates and abbreviations and used scispacy lemmatization.",
"2 AutoPhrase ranks keywords using a quality score based on frequency.",
"index type in the sequence, i.e., index refers to sentence vs. keyword.",
"Both extractors require, during a decoding step j , training labels y sj and y wj ; we set the value of non-available type as indicated by y qj to .",
"For example, when y qj is 0, y wj is .",
"Overall, an element in the final sequence is a tuple of y q , y s and y w and provides training labels for the switch, word and sentence extractor networks.",
"See Appendix A for details on the interleaving of indices.",
"Hyperparameters.",
"Included in Appendix C. Evaluation measure.",
"We follow standard practice and evaluate the quality of generated IMPRESSIONS by comparing against ground-truth IMPRESSIONS using ROUGE (Lin, 2004).",
"In this section we describe the baselines we compare our model against: a wide variety of extractive and abstractive systems.",
"Extractive systems LexRank (Erkan and Radev, 2011) is a graph-based method for computing relative importance in extractive summarization.",
"PTGEN (See et al., 2017) introduces an encoder-decoder model that can copy words from the source text via pointing, while retaining the ability to produce novel words through the generator.",
"PTGEN+Coverage (See et al., 2017) introduces a coverage mechanism to the original PTGEN model to avoid repetition.",
"Zhang et al. (2018) provides an automatic generation system for radiology IMPRESSIONS using neural seq2seq learning.",
"The model encodes background information of the radiology study and uses this information to guide the decoding process.",
"Self supervised learning has recently gained popularity as parameters of large models can be trained with little to no labeled data.",
"Pre-trained language models in which a transformer encoder is trained to reconstruct the original text from masked text, e.g., BERT (Devlin et al., 2018), have become an important component in recent summarization models (Liu and Lapata, 2019; Zhang et al., 2020; Za-heer et al., 2020).",
"We also present results from experiments using these summarization models .",
"Additionally, we experimented with a pre-trained seq2seq model which is learned using different self supervised techniques to reconstruct the original text, e.g., BART (Lewis et al., 2019).",
"BertSumExtAbs (Liu and Lapata, 2019) is an encoder-decoder summarization framework that adopts BERT as its encoder.",
"BERT is replaced by ClinicalBERT (Alsentzer et al., 2019) in all our experiments as it is adapted for the medical domain.",
"At the first stage, a model with the BERT encoder accomplishes an extraction task.",
"Then, the trained BERT encoder and a 6-layered transformer (Vaswani et al., 2017) are combined to form an abstractive system.",
"As the encoder in the abstractive system is pre-trained multiple times in comparison to the decoder, two separate Adam optimizers (each with different warm-up steps and learning rates) are used during training.",
"As the training is performed in two stages, BertSumExtAbs serves as the two-stage abstractive summarization system baseline for our experiments.",
"3 We also include results from BERTSUMAbs , a single-stage version in which encoder and decoder are trained only on the abstractive task.",
"BART (Lewis et al., 2019) is a state of the art transformer-based seq2seq model similar to BERTSUMAbs.",
"However, unlike BERTSUMAbs's fine-tuning of the encoder and denovo training of the decoder, for BART, both encoder and decoder are only fine-tuned.",
"Sentence Rewrite (Chen and Bansal, 2018) is a two-step summarization model that initially extracts and then rewrites the sentences.",
"This model serves as a two-step single agent baseline system for our experiments.",
"In this section, we compare results from our model and various baselines using both automatic and human evaluation.",
"Automatic Evaluation.",
"Table 4 shows report summarization results of various models trained and tested on the same data.",
"Our DiMAC model surpasses extractive-only and abstractive-only baselines, including LexRank and PTGEN+Coverage.",
"It also outperforms the two-step single agent baseline model (Sentence Rewrite (Chen and Bansal, 2018)) and the two-stage BERTSUMExtAbs (Liu and Lapata, 2019).",
"Besides the pre-trained encoder 3 We require hyperparameters somewhat different from the standard setup due to the small radiology report data size.",
"Hyperparameter tuning yielded the following values.",
"Batch size and initial learning rate of BERTSumExt are set to 16 and 5e-4, batch size in BERTSumExtAbs is 8 and initial learning rates of BERT and transformer decoder in BERTSumExtAbs are 0.0005 and 0.005.",
"of BertSumExtAbs, which is an advantage compared to other baselines, a denovo training of a large size decoder with a relatively small number of radiology reports may have led to overfitting.",
"This might explain the scores compared to the two-step systems.",
"Furthermore, a highly sophisticated semi-supervised training of the encoder and decoder of BART-base resulted in lower performance compared to our model, despite the relatively larger size (100x) of BART.",
"We hypothesize that pre-training mostly on a different domain text (e.g., Wikipedia, Books Corpus and News) and fine-tuning on small data could have adversely affected BART's performance in our setting.",
"The domain difference may also contribute to the relatively lower performance of BART-base versus BERTSUMExtAbs, thereby signifying the importance of pre-training with relevant domain text.",
"Moreover, DiMAC offers approximately 18 to 28% performance gains over (Zhang et al., 2018), a single-step single-agent summarization system designed specifically for the radiology domain.",
"In our opinion, the performance improvements observed with DiMAC are likely driven by the extract-then-abstract mechanism combined with auxiliary (and salient) information from keywords, which mimics the actual reasoning process of radiologists.",
"It is important to note that our model supports user-level validation by linking the predicted IMPRESSIONS sentences to sentences in FINDINGS , making the results explainable to radiologists and referring physicians.",
"Human Evaluation.",
"To assess the overall quality and factual correctness (Zhang et al., 2019) of the IMPRESSIONS generated by DiMAC, we obtained evaluations from two board-certified radiol-Gwet Win Tie Lose AC1 DiMAC vs. Base model Overall quality 25.00 59.37 15.63 .305 Factual correctness 12.50 84.37 03.13 .711 DiMAC vs. Ground Truth Overall quality 25.00 46.87 28.13 .082 Factual correctness 21.87 53.13 25.00 -.080",
"ogists.",
"We randomly selected 16 radiology reports from the test set.",
"For each radiology report, we presented to the evaluators its FINDINGS and three (blinded) versions of the summary, i.e., IMPRESSIONS : (1) the ground truth, (2) Sentence Rewrite (Chen and Bansal, 2018) and (3) DiMAC.",
"As Sentence Rewrite has a similar two-step approach, i.e., extract-then-abstract, we evaluate the qualitative performance of DiMAC with Sentence Rewrite as the base model (instead of BERTSUMExtAbs as it is a two-stage single-step system and also had lower Rouge scores compared to Sentence Rewrite).",
"We shuffled the three summaries such that the order cannot be guessed.",
"Each radiologist rated the summaries on two measures in relation to the FINDINGS : (1) overall quality and (2) factual correctness and completeness.",
"For example, the phrase pleu-ral effusions is a fact (or imaging finding); but the phrase small bilateral pleural effusions is a more precise description and should therefore have a better overall quality score.",
"For each measure, 1548 we asked the radiologists to score the summary as 1, 2 or 3 for bad, borderline or good.",
"Then we combined the assigned scores under two comparisons: (1) our model versus the base model and (2) our model versus ground truth.",
"We have 32 evaluations in total: 2 radiologists 16 reports.",
"We compared the scores provided by the radiologists to determine if they were the same (tie), higher (win) or lower (lose) for our model vs. ground truth and our model vs. base model.",
"Table 5 shows that DiMAC has clearly better factual correctness than the base model: 12.5% of cases are better, 3.13% are worse; gwet AC1 (Gwet, 2008) inter-rater agreement for this result is strong.",
"DiMAC exceeds the base model in 25% (vs. 15.6% lose) of evaluations for overall quality with moderate inter-rater agreement.",
"DiMAC is only slightly worse than ground truth in overall quality (win: 25%, lose: 28.13%) and factual correctness (win: 21.87%, lose: 25%) although inter-rater agreement is low in this case.",
"Table 6 shows a radiology report from our dataset ( FINDINGS and IMPRESSIONS ) and IMPRESSIONS generated by DiMAC and the base model.",
"Due to the hierarchical connections between words and sentences, there is significant overlap between the extracted sentences and words.",
"This phenomenon eventually contributes to the RL sentence extraction reward and helps to extract sentences with more keywords.",
"The keywords include disease or clinical diagnoses (e.g., nodule, lymphadenopathy, effusion), anatomical concepts (e.g., hepatic) and qualifiers (e.g., recent, multiple, bilateral).",
"The baseline model (Chen and Bansal, 2018) erroneously states right greater than left pleural effusions, i.e., it hallucinates.",
"In the sentence There is no axillary or hilar lymphadenopathy, the sentence reward is low and eventually it is not extracted despite having the keyword lymphadenopa-thy.",
"Abstractive Summarization.",
"An abstractive summary is a text consisting of novel phrases describing the content of the original text.",
"Abstractive summarization involves a cascade of topic fusion and text generation (Hovy et al., 1999).",
"Each task in this cascade typically requires expert-derived annotations, which is labor-intensive and time-FINDINGS from the report from a medical site There are multiple bilateral lung nodules , most consistent with metastatic disease .",
"consuming.",
"Thus, many recent abstractive summarization approaches focus on supervised/semi-supervised single-step end-to-end trainable models that implicitly address the sub-tasks of content acquisition and paraphrasing.",
"As part of two-stage but single step abstractive summarization, a pretrained encoder first learns the extraction task independently.",
"Then the pretrained encoder is embedded into an encoder-decoder abstractive summarization model to assist in better referencing the source content, e.g., Liu and Lapata (2019); Hsu et al. (2018).",
"On the other hand, in two-step abstractive summarization, extractive summarization is followed by abstractive summarization and is trained end-to-end, e.g., Chen and Bansal (2018).",
"Contrary to the two-stage singlestep approach, both extractive and abstractive summarization are pretrained (and function) separately in a two-step approach; however, an RL-based end-1549 to-end training enables alignment between them to generate better summaries.",
"DiMAC is a two-step abstractive system.",
"Multi-agent Reinforcement Learning (MARL).",
"In a single-agent actor-critic (Sutton et al., 1999; Konda and Tsitsiklis, 2000) policy gradient method, an agent policy is optimized by following a gradient computed using a value function estimated by a critic.",
"The simplest MARL setup applies policy gradients independently (each agent with its own actor and critic) and thereby restricts each agent to learn only from its own action history (Tan, 1993).",
"From each agent's point of view in this setting, the environment is not stationary and therefore, the RL stationary environment assumption is violated.",
"MARL with communication or collaboration protocols.",
"Foerster et al. (2018) proposed counterfactual policy gradients, which is an actor-critic policy gradient that leverages a centralized counterfactual critic that estimates value function for each actor by using actions performed by the other agents.",
"However, unlike our setting, actors in (Fo-erster et al., 2018) are similar and share parameters.",
"Additionally, the parameter sharing scheme has the limitation that the agents lack tighter coordination.",
"Foerster et al. (2016), Sukhbaatar et al. (2016) and Mordatch and Abbeel (2018) proposed to tightly coordinate independent agents rather than use a dedicated channel.",
"As incorporating an explicit communication channel mimics human (bidirec-tional) interactions, we design a similar Differentiable Multi-agent Actor-Critic (DiMAC) RL for our setup.",
"In DiMAC, each agent selects one of its actions and communicates with the others at every point in time.",
"Thus, the resulting joint action (influ-enced by the agents' communication) would aim to reach the desired (optimal) goal.",
"In the future, we will experiment with more variations of MARL (such as counter-factual critic) and transformer-based networks.",
"In this work, we introduce a novel extractive approach into a two-step RL-based summarization task (extractive-then-abstractive).",
"This approach is a MARL (rather than the traditional single-agent RL) which includes a new agent that extracts salient keywords from the source text and collaborates with an agent that extracts salient sentences.",
"We also present a Differentiable Multi-agent Actor-Critic (DiMAC) learning method, a novel yet simple MARL training for independent agents communicating via a dedicated channel.",
"We apply the proposed two-step summarization model with DiMAC MARL training to English radiology reports.",
"Results from our experiments indicate, based on automatic and human expert evaluations, that the DiMAC summarization model can outperform existing baseline models for text summarization.",
"Our summarization model generates the IMPRESSIONS to reflect human-level inference and actionable information (e.g., salient sentences and keywords) towards supporting improved workflow efficiency and better-informed clinical diagnosis based on medical imaging findings.",
"We thank Dr. Asik Ali Mohamed Ali and Dr. Abishek Balachandran for qualifying radiology reports and anonymized summaries for human evaluation.",
"We also thank Jashwanth N B and Siemens Healthineers supercomputing team for training infrastructure.",
"Furthermore, we thank the anonymous reviewers for their valuable feedback.",
"Disclaimer .",
"The concepts and information presented in this paper are based on research results that are not commercially available.",
"Future commercial availability cannot be guaranteed."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"other",
"method",
"other",
"other",
"method",
"objective",
"abstain",
"objective",
"objective",
"result",
"method",
"other",
"other",
"other",
"other",
"other",
"other"
] |
[
"Arguing without committing a fallacy is one of the main requirements of an ideal debate.",
"But even when debating rules are strictly enforced and fallacious arguments punished, arguers often lapse into attacking the opponent by an ad hominem argument.",
"As existing research lacks solid empirical investigation of the typology of ad hominem arguments as well as their potential causes, this paper fills this gap by (1) performing several large-scale annotation studies, (2) experimenting with various neural architectures and validating our working hypotheses, such as controversy or reasonableness, and (3) providing linguistic insights into triggers of ad hominem using explainable neural network architectures.",
"Human reasoning is lazy and biased but it perfectly serves its purpose in the argumentative context (Mercier and Sperber, 2017).",
"When challenged by genuine back-and-forth argumentation, humans do better in both generating and evaluating arguments (Mercier and Sperber, 2011).",
"The dialogical perspective on argumentation has been reflected in argumentation theory prominently by the pragma-dialectic model of argumentation (van Eemeren and Grootendorst, 1992).",
"Not only sketches this theory an ideal normative model of argumentation but also distinguishes the wrong argumentative moves, fallacies (van Eemeren and Grootendorst, 1987).",
"Among the plethora of prototypical fallacies, notwithstanding the controversy of most taxonomies (Boudry et al., 2015), ad hominem argument is perhaps the most famous one.",
"Arguing against the person is considered faulty, yet is prevalent in online and offline discourse.",
"1 1 According to Godwin's law' known from the internet pop-culture ( https://en.wikipedia.org/wiki/ Although the ad hominem fallacy has been known since Aristotle, surprisingly there are very few empirical works investigating its properties.",
"While Sahlane (2012) analyzed ad hominem and other fallacies in several hundred newspaper editorials, others usually only rely on few examples, as observed by de Wijze (2002).",
"As Macagno (2013) concludes, ad hominem arguments should be considered as multifaceted and complex strategies, involving not a simple argument, but several combined tactics.",
"However, such research, to the best of our knowledge, does not exist.",
"Very little is known not only about the feasibility of ad hominem theories in practical applications (the NLP perspective) but also about the dynamics and triggers of ad hominem (the theoretical counter-part).",
"This paper investigates the research gap at three levels of increasing discourse complexity: ad hominem in isolation, direct ad hominem without dialogical exchange, and ad hominem in large inter-personal discourse context.",
"We asked the following research questions.",
"First, what qualitative and quantative properties do ad hominem arguments have in Web debates and how does that reflect the common theoretical view (RQ1)?",
"Second, how much of the debate context do we need for recognizing ad hominem by humans and machine learning systems (RQ2)?",
"And finally, what are the actual triggers of ad hominem arguments and can we predict whether the discussion is go-ing to end up with one (RQ3)?",
"We tackle these questions by leveraging Web-based argumentation data ( Change my View on Reddit), performing several large-scale annotation studies, and creating a new dataset.",
"We experiment with various neural architectures and ex-Godwin's_law ), if a discussion goes on long enough, sooner or later someone will compare someone or something to Adolf Hitler.",
"trapolate the trained models to validate our working hypotheses.",
"Furthermore, we propose a list of potential linguistic and rhetorical triggers of ad hominem based on interpreting parameters of trained neural models.",
"2 This article thus presents the first NLP work on multi-faceted ad hominem fallacies in genuine dialogical argumentation.",
"We also release the data and the source code to the research community.",
"3 2 Theoretical background and related work The prevalent view on argumentation emphasizes its pragmatic goals, such as persuasion and group-based deliberation (van Eemeren et al., 2014), although numerous works have dealt with argument as product, that is, treating a single argument and its properties in isolation (Toulmin, 1958; Habernal and Gurevych, 2017).",
"Yet the social role of argumentation and its alleged responsibility for the very skill of human reasoning explained from the evolutionary perspective (Mercier and Sperber, 2017) provide convincing reasons to treat argumentation as an inherently dialogical tool.",
"The observation that some arguments are in fact deceptions in disguise' was made already by Aristotle (Aristotle and Kennedy (transla-tor), 1991), for which the term fallacy has been adopted.",
"Leaving the controversial typology of fallacies aside (Hamblin, 1970; van Eemeren and Grootendorst, 1987; Boudry et al., 2015), the ad hominem argument is addressed in most theories.",
"Ad hominem argumentation relies on the strategy of attacking the opponent and some feature of the opponent's character instead of the counter-arguments (Tindale, 2007).",
"With few exceptions, the following five sub-types of ad hominem are prevalent in the literature: abusive ad hominem (a pure attack on the character of the opponent), tu quoque ad hominem (essentially analogous to the He did it first defense of a three-year-old in a sandbox), circumstantial ad hominem (the practice what you preach attack and accusation of hypocrisy), bias ad hominem (the attacked opponent has a hidden agenda), and guilt by association (associating the opponent with somebody with a low credibility) (Schiappa and Nordin, 2 An attempt to address the plea for thinking about problems, cognitive science, and the details of human language (Manning, 2015).",
"3 https://github.com/UKPLab/ naacl2018-before-name-calling-habernal-et-al 2013; Macagno, 2013; Walton, 2007; Hansen, 2017; Woods, 2008).",
"We omit examples here as these provided in theoretical works or textbooks are usually artificial, as already criticized by (de Wijze, 2002) or (Boudry et al., 2015).",
"The topic of fallacies, which might be considered as sub-topic of argumentation quality, has recently been investigated also in the NLP field.",
"Existing works are, however, limited to the monological view (Wachsmuth et al., 2017; Habernal and Gurevych, 2016b,a; Stab and Gurevych, 2017) or they focus primarily on learning fallacy recognition by humans (Habernal et al., 2017, 2018a).",
"Another related NLP sub-field includes abusive language and personal attacks in general.",
"Wulczyn et al. (2017) investigated whether or not Wikipedia talk page comments are personal attacks and annotated 38k instances resulting in a highly skewed distribution (only 0.9% were actual attacks).",
"Regarding the participants' perspective, Jain et al. (2014) examined principal roles in 80 discussions from the Wikipedia: Article for Deletion pages (focusing on stubbornness or ignoredness, among others) and found several typical roles, including rebels', voices', or idiots'.",
"In contrast to our data under investigation (Change My View de-bates), Wikipedia talk pages do not adhere to strict argumentation rules with manual moderation and have a different pragmatic purpose.",
"Reddit as a source platform has also been used in other relevant works.",
"Saleem et al. (2016) detected hateful speech on Reddit by exploiting particular sub-communities to automatically obtain training data.",
"Wang et al. (2016) experimented with an unsupervised neural model to cluster social roles on sub-reddits dedicated to computer games.",
"Zhang et al. (2017) proposed a set of nine comment-level dialogue act categories and annotated 9k threads with 100k comments and built a CRF classifier for dialogue act labeling.",
"Unlike these works which were not related to argumentation, Tan et al. (2016) examined persuasion strategies on Change My View using word overlap features.",
"In contrast to our work, they focused solely on the successful strategies with delta-awarded posts.",
"Using the same dataset, Musi (2017) recently studied concession in argumentation.",
"derstand other perspectives on the issue', in other words an online platform for good-faith' argumentation hosted on Reddit.",
"4 A user posts a submission (also called original post(er); OP ) and other participants provide arguments to change the OP's view, forming a typical tree-form Web discussion.",
"A special feature of CMV is that the OP acknowledges convincing arguments by giving a delta point ( ).",
"Unlike the vast majority of internet discussion forums, CMV enforces obeying strict rules (such as no low effort' posts, or accusing of being unwilling to change view) whose violation results into deleting the comment by moderators.",
"These formal requirements of an ideal debate with the notion of violating rules correspond to incorrect moves in critical discussion in the normative pragma-dialectic theory (van Eemeren and Grootendorst, 1987).",
"Thus, violating the rule of not being rude or hostile' is equivalent to committing ad hominem fallacy.",
"For our experiments, we scraped, in cooperation with Reddit, the complete CMV including the content of the deleted comments so we could fully reconstruct the fallacious discussions, relying on the rule violation labels provided by the moderators.",
"The dataset contains 2M posts in 32k submissions, forming 780k unique threads.",
"We will set up the stage for further experiments by providing several quantitative statistics we performed on the dataset.",
"Only 0.2% posts in CMV are ad hominem arguments.",
"This contrasts with a typical online discussion: Coe et al. (2014) found 19.5% of comments under online news articles to be incivil.",
"Most threads contain only a single ad hominem argument (3,396 threads; there are 3,866 ad hominem arguments in total in CMV); only 35 threads contain more than three ad hominem arguments.",
"In 48.6% of threads containing a single ad hominem, the ad hominem argument is the very last comment.",
"This corresponds to the popular belief that if one is out of arguments, they start attacking and the discussion is over.",
"This trend is also shown in Figure 1 which displays the relative position of the first ad hominem argument in a thread.",
"Replying to ad hominem with another ad hominem happens only in 15% of the cases; this speaks for the attempts of CMV participants to keep up with the standards of a rather rational discussion.",
"Regarding ad hominem authors, about 66% of 4 https://www.reddit.com/r/changemyview/ Figure 1: No discussion after ad hominem.' Distribution of the number of comments before the first ad hominem is committed proportional to the thread length.",
"them start attacking out of blue', without any previous interaction in the thread.",
"On the other hand, 11% ad hominem authors write at least one nor-mal' argument in the thread (we found one outlier who committed ad hominem after writing 57 normal arguments in the thread).",
"Only in 20% cases, the ad hominem thread is an interplay between the original poster and another participant.",
"It means that there are usually more people involved in an ad hominem thread.",
"Unfortunately, sometimes the OP herself also commits ad hominem (12%).",
"We also investigated the relation between the presence of ad hominem arguments and the submission topic.",
"While most submissions are accompanied by only one or two ad hominem arguments (75% of submissions), there are also extremes with over 50 ad hominem arguments.",
"Manual analysis revealed that these extremes deal with religion, sexuality/gender, U.S. politics (mostly Trump), racism in the U.S., and veganism.",
"We will elaborate on that later in Section 4.2.",
"The experimental part is divided into three parts according to the increasing level of discourse complexity.",
"We first experiment with ad hominem in isolation in section 4.1, then with direct ad hominem replies to original posts without dialogical exchange in section 4.2, and finally with ad hominem in a larger inter-personal discourse context in section 4.3.",
"The first experimental set-up examines ad hominem arguments in Change my view regardless of its dialogical context.",
"Ad hominem arguments labeled by the CMV moderators come with no warranty.",
"To verify their reliability, we conducted the following annotation studies.",
"First, we needed to estimate parameters of crowdsourcing and its reliability.",
"We sampled 100 random arguments from CMV without context: positive candidates were the reported ad hominem arguments, whereas negative candidates were sampled from comments that either violate other argumentation rules or have a delta label.",
"To ensure the maximal content similarity of these two groups, for each positive instance the semantically closest negative instance was selected.",
"5 We then experimented with different numbers of Amazon Mechanical Turk workers and various thresholds of the MACE gold label estimator (Hovy et al., 2013); comparing two groups of six workers each and 0.9 threshold yielded almost perfect inter-annotator agreement (0.79 Cohen's ).",
"We then used this setting (six workers, 0.9 MACE threshold) to annotate another 452 random arguments sampled in the same way as above.",
"Crowdsourced gold' labels were then compared to the original CMV labels (balanced binary task: positive instances (ad hominem) and negative instances) reaching accuracy of 0.878.",
"This means that the ad hominem labels from CMV moderators are quite reliable.",
"Manual error analysis of disagreements revealed 11 missing ad hominem labels.",
"These were not spotted by the moderators but were annotated as such by crowd workers.",
"We sampled a larger balanced set of positive instances (ad hominem) and negative instances using the same methodology as in section 4.1.1, resulting in 7,242 instances, and casted the task of recognition of ad hominem arguments as a binary supervised task.",
"We trained two neural classifiers, namely a 2-stacked bi-directional LSTM network (Graves and Schmidhuber, 2005), and a convolutional network (Kim, 2014), and evaluated them using 10-fold cross validation.",
"Throughout the paper we use pre-trained word2vec word embed-dings (Mikolov et al., 2013).",
"Detailed hyperpa-5 Similarity was computed using a cosine similarity of average embedding vectors multiplied by the argument length difference to minimize length-related artifacts.",
"The sample was balanced with roughly 50% positive and 50% negative instances.",
"rameters are described in the source codes (link provided in section 1).",
"As results in Table 1 show, the task of recognizing ad hominem arguments is feasible and almost achieves the human upper bound performance.",
"4.1.3 Typology of ad hominem While binary classification of ad hominem as presented above might be sufficient for the purpose of red-flagging arguments, theories provide us with a much finer granularity (recall the typology in section 2).",
"To validate whether this typology is empirically relevant, we executed an annotation experiment to classify ad hominem arguments into the provided five types (plus other' if none ap-plies).",
"We sampled 200 ad hominem arguments from threads in which interlocution happens only between two persons and which end up with ad hominem.",
"The Mechanical Turk workers were shown this last ad hominem argument as well as the preceding one.",
"Each instance was annotated by 16 workers to achieve a stable distribution of labels as suggested by Aroyo and Welty (2015).",
"While 41% arguments were categorized as abusive , other categories ( tu quoque , circumstantial , and guilt by association ) were found to be rather ambiguous with very subtle differences.",
"In particular, we observed a very low percentage agreement on these categories and a label distribution spiked around two or more categories.",
"After a manual inspection we concluded that (1) the theoretical typology does not account for longer ad hominem arguments that mix up different attacks and that (2) there are actual phenomena in ad hominem arguments not covered by theoretical categories.",
"These observations reflect those of Macagno (2013, p. 399) about ad hominem moves as multifaceted strategies.",
"We thus propose a list of phenomena typical to ad hominem arguments in CMV based on our empirical study.",
"For this purpose, we follow up with another annotation experiment on 400 arguments, with seven workers per instance.",
"6 The goal was 6 Here we decided on seven workers per item by relying on other span annotation experiments done in a similar setup (Habernal et al., 2018b).",
"to annotate a text span which made the argument an ad hominem; a single argument could contain several spans.",
"We estimated the gold spans using MACE and performed a manual post-analysis by designing a typology of causes of ad hominem together with their frequency of occurrence.",
"The results and examples are summarized in Table",
"2. 4.1.4 Results and interpretation The data verification annotation study (section 4.1.1) has two direct consequences.",
"First, the high score (0.79) answers RQ2: for recognizing ad hominem argument, no previous context is necessary.",
"Second, we still found 5% overlooked ad hominem arguments in CMV thus a moderation-facilitating tool might come handy; this can be served by the well-performing CNN model (0.810 accuracy; section 4.1.2).",
"The existing theoretical typology of ad hominem arguments, as presented for example in most textbooks, provides only a very simplified view.",
"On the one hand, some of the categories which we found in the empirical labeling study (section 4.1.3) do map to their corresponding counterparts (such as the vulgar insults).",
"On the other hand, some ad hominem insults typical to online argumentation (illiteracy insults, condescension) are not present in studies on ad hominem.",
"Hence, we claim that any potential typology of ad hominem arguments should be multinomial rather than categorical, as we found multiple different spans in a single argument.",
"We already showed that ad hominem arguments are usually preceded by a discussion between the interlocutors.",
"However, 897 submissions (origi-nal posts; OPs) have at least one intermediate ad hominem (in other words, the original post is directly attacked).",
"We were thus interested in what triggers these first-level ad hominem arguments.",
"We hypothesize two causes: (1) the controversy of the OP, similarly to some related works on news comments (Coe et al., 2014) and (2) the reasonableness of the OP (whether the topic is reasonable to argue about).",
"We model both features on a three-point scale, namely controversy : 1 = not really controversial', 2 = somehow controversial', 3 = very controversial' and reasonableness : 1 = quite stupid', 2 = neutral', 3 = quite reason-able'.",
"7 We sampled two groups of OPs: those which had some ad hominem arguments in any of its threads but no delta (ad hominem group) and those without ad hominem but some deltas (Delta group).",
"In total, 1,800 balanced instances were annotated by five workers and the resulting value was averaged for each item.",
"8 Statistical analysis of the annotated 1,800 OPs revealed that ad hominem arguments are associated with more controversial OPs (mean controversy 1.23) while delta-awarded arguments with less controversial OPs (mean controversy 1.06; K-S test; 9 statistics 0.13, P-value: 7 . 97 10 7 ).",
"On the other hand, reasonableness does not seem to play such a role.",
"The difference between ad hominem in reasonable OPs (mean 1.20) and delta in reasonable OPs (mean 1.11) is not that statistically strong; (K-S test statistics: 0.07, P-value: 0.02).",
"We further built a regression model for predicting controversy and reasonableness of the OPs.",
"Along with Bi-LSTM and CNN networks (same models as in 4.1.2) we also developed a neural model that integrates CNN with topic distribution (CNN+LDA).",
"The motivation for a topic-incorporating model was based on our earlier observations presented in section",
"3. In particular, we trained an LDA topic model ( k = 50) (Blei et al., 2003) on the heldout OPs and during train-ing/testing, we merged the estimated topic distribution vector with the output layer after convolution and pooling.",
"We performed 10-fold cross validation on the 1,800 annotated OPs and got reasonable performance for controversy prediction ( 7 Examples of not really controversial: I Don't Think Monty Python is Funny , very controversial: Blacks are generally intellectual inferior to the other major races , quite stupid: Burritos are better than sandwiches , and quite reasonable: Nations whose leadership is based upon religion are fundamentally backwards .",
"8 A pilot crowd sourcing annotation with 5 + 5 workers showed a fair reliability for controversy (Spearman's 0.804) and medium reliability for reasonableness (Spearman's 0.646).",
"9 Kolmogorov-Smirnov (K-S) test is a non-parametric test without any assumptions about the underlying probability distribution.",
"0.569) and medium performance for reasonableness prediction ( 0.385), respectively; both using the CNN+LDA model (see Table 3).",
"We then used the trained model and extrapolated on all held-out OPs (1,267 ad hominem and 10,861 delta OPs, respectively).",
"The analysis again showed that ad hominem arguments tend to be found under more controversial OPs whereas delta arguments in the less controversial ones (K-S test statistics: 0.14, P-value: 1 10 18 ).",
"For reasonableness, the rather low performance of the predictor does not allow us draw any conclusions on the extrapolated data.",
"Controversy of the original post is immediately heating up the debate participants and correlates with a higher number of direct ad hominem responses.",
"This corresponds to observations made in comments in newswire where weightier' top-ics tended to stir incivility (Coe et al., 2014).",
"On the other hand, stupidity' (or reasonableness') does not seem to play any significant role.",
"The CNN+LDA model for predicting controversy ( 0.569) might come handy for signaling potentially heated' discussions.",
"In this section, we focus on the dialogical aspect of CMV debates and dynamics of ad hominem fallacies.",
"Although ad hominem arguments appear in many forms (Section 4.1.3), we treat all ad hominem arguments equal in the following experiments.",
"So far we explored what makes an ad hominem argument and whether debated topic influences the",
"number of intermediate attacks.",
"However, possible causes of the argumentative dynamics that ends up with an ad hominem argument remain an open question, which has been addressed in neither argumentation theory nor in cognitive psychology, to the best of our knowledge.",
"We thus cast an explanation of triggers and dynamics of ad hominem discussions as a supervised machine learning problem and draw theoretical insights by a retrospective interpretation of the learned models.",
"We sample positive instances by taking three contextual arguments preceding the ad hominem argument from threads which are an interplay between two persons.",
"Negative samples are drawn similarly from threads in which the argument is awarded with as shown in Figure",
"2. 10 Each instance consists of the three concatenated arguments delimited by a special OOV token.",
"This resulted in 2,582 balanced training instances.",
"The alleged lack of interpretability of neural networks has motivated several lines of approaches, such as layer-wise relevance propagation (Arras et al., 2017) or representation erasure (Li et al., 2016), both on sentiment analysis.",
"As our task at hand deals with multi-party discourse that presumably involves temporal relations important for the learned representation, we opted for a state-of-the-art self-attentive LSTM model.",
"In particular, we re-implemented the Structured Self-Attentive Embedding Neural Network (SSAE-NN) (Lin et al., 2017) which learns an embedding matrix representation of the input using attention weights.",
"To make the attention even more interpretable, we replaced the final non-linear MLP layers with a single linear classifier (softmax).",
"By summing over one dimension of the attention embedding matrix, each word from the input sequence gets associated 10 To ensure as much content similarity as possible, we used the same similarity sampling as in section 4.1.1.",
"with a single attention weight that gives us insights into the classifier's features' (still indirectly, as the true representation is a matrix; see the original paper).",
"11 The learning objective is to recognize whether the thread ends up in an ad hominem argument or a delta point.",
"We trained the model in 10-fold cross-validation and although our goal is not to achieve the best performance but rather to gain insight, we also tested a CNN model (accu-racy 0.7095) which performed slightly worse than the SSAE-NN model (accuracy 0.7208).",
"During testing the model, we projected attention weights to the original texts as heat maps and manually analyzed 191 true positives (ad hominem threads recognized correctly), as well as 77 false positives (ad hominem threads misclassi-fied as delta) and 84 false negatives (delta as ad hominem), in total about 120k tokens.",
"The full output is available in the supplementary materials, we use IDs as a reference in the following text.",
"In the following analysis, we solely relied on the weights of words or phrases learned by the attention model, see an example in Figure",
"3. Based on our observations, we summarize several linguistic and argumentative phenomena with examples most likely responsible for ad hominem threads in Table",
"4. The identified phenomena have few interesting properties in common.",
"First, they all are topic-independent rhetorical devices (except for the loaded keywords at the bottom).",
"Second, many of them deal with meta-level argumentation, i.e., arguing about argumentation (such as missing support or fallacy accusations).",
"Third, most of them do not contain profanity (in contrast to the actual ad hominem arguments of which a third are vulgar insults; cf. Table 2).",
"And finally, all of them should be easy to avoid.",
"Actual interest mixed with indifference in 11 We also experimented with regularizing the attention matrix as the authors proposed, but it resulted in worse performance.",
"Another problematic phenomena is also expressed disagreement ( 678(-2) overheated rhetoric , 203(-2) But I suppose this argument is ... , 230(-2) But I don't think it's quite ... , 938(-1) I disagree too, however ... ).",
"False negatives were caused basically by presence of many informative' content words ( 980 unemployment, quarterly publication, inflation data , 474 actual publications, this experiment, biological ailments, medical doctorate , 1214 graduate degree, education, health insurance ) and misinterpreted sarcasm ( 285(-1) Also this is a cute analogy ).",
"In this article, we investigated ad hominem argumentation on three levels of discourse complexity.",
"We looked into qualitative and quantative properties of ad hominem arguments, crowdsourced labeled data, experimented with models for prediction (0.810 accuracy; 4.1.2), and proposed an updated typology of ad hominem properties (4.1.3).",
"We then looked into the dynamics of argumentation to examine the relation between the quality of the original post and immediate ad hominem arguments (4.2).",
"Finally, we exploited the learned representation of Self-Attentive Embedding Neural Network to search for features triggering ad hominem in one-to-one discussions.",
"We found several categories of rhetorical devices as well as misleading features (4.3.3).",
"There are several points that deserve further investigation.",
"First, we have ignored meta-information of the debate participants, such as their overall activity (i.e., whether they are spam-mers or trolls).",
"Second, the proposed typology of ad hominem causes has not yet been post-verified empirically.",
"Third, we expect that personality traits of the participants (BIG5) may also play a significant role in the argumentative exchange.",
"We leave these points for future work.",
"We believe that our findings will help gain better understanding of, and hopefully keep restraining from, ad hominem fallacies in good-faith discussions.",
"This work has been supported by the ArguAna Project GU 798/20-1 (DFG), and by the DFG-funded research training group Adaptive Preparation of Information form Heterogeneous Sources (AIPHES, GRK 1994/1)."
] |
[
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"method",
"objective",
"method",
"method",
"objective",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"other"
] |
[
"We present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple sentences.",
"We solicit and verify questions and answers for this challenge through a 4-step crowdsourcing experiment.",
"Our challenge dataset contains 6 k questions for +800 paragraphs across 7 different domains (ele-mentary school science, news, travel guides, fiction stories, etc) bringing in linguistic diversity to the texts and to the questions wordings.",
"On a subset of our dataset, we found human solvers to achieve an F1-score of 86 .",
"4% .",
"We analyze a range of baselines, including a recent state-of-art reading comprehension system, and demonstrate the difficulty of this challenge, despite a high human performance.",
"The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that requires reasoning skills.",
"Machine Comprehension of natural language text is a fundamental challenge in AI and it has received significant attention throughout the history of AI (Greene, 1959; McCarthy, 1976; Reiter, 1976; Winograd, 1980).",
"In particular, in natural language processing (NLP) it has been studied under various settings, such as multiple-choice Question-Answering (QA) (Green Jr. et al., 1961), Reading Comprehension (RC) (Hirschman et al., 1999), Recognizing Textual Entailment (RTE) (Dagan et al., 2013) etc.",
"The area has seen rapidly increasing interest, thanks to the existence of sizable datasets and standard benchmarks.",
"CNN/Daily Mail (Hermann et al., 2015), SQuAD (Rajpurkar et al., 2016) and NewsQA (Trischler et al., 2016) to name a few, are some of the datasets that were released recently with the goal of facilitating research in machine comprehension.",
"Despite all the excitement fueled by that large data sets and the ability to directly train statistical learning models, current QA systems do not have capabilities comparable to elementary school or younger children (Clark and Etzioni, 2016).",
"For many of these datasets, researchers point out that models neither need to comprehend' in order to correctly predict an answer, nor do they learn to reason' in a way that generalizes across datasets.",
"For example, Khashabi et al. (2016) showed that adversarial perturbation in candidate answers results in a significant drop in performance of a few state-of-art science QA systems.",
"Similarly, Jia and Liang (2017) show that adding an adversarially selected sentence to the instances in the SQuAD datasets drastically reduces the performance of many of the existing baselines.",
"Chen et al. (2016) show that in the CNN/Daily Mail datasets, the required reasoning and inference level . . . is quite simple and that a relatively simple algorithm can get almost close to the upper-bound.",
"We believe that one key reason that simple algorithms can deal with the existing large datasets but, nevertheless, fail at generalization, is that the datasets do not actually require a deep understanding.",
"We propose to address this shortcoming by developing a reading comprehension challenge in which answering each of the questions requires reasoning over multiple sentences.",
"There is evidence that answering single-sentence questions', i.e. questions that can be answered from a single sentence of the given paragraph, is easier than answering multi-sentence questions', which require multiple sentences to answer a given question.",
"For example, Richardson et al. (2013) released a reading comprehension dataset that contained both single-sentence and multi-sentence questions; models proposed for this task yielded considerably better performance on the single-sentence questions than on the multi-252 sentence questions (according to Narasimhan and Barzilay (2015) accuracy of about 83% and 60% on these two types of questions, respectively).",
"There could be multiple reasons for this.",
"First, multi-sentence reasoning seems to be inherently a difficult task.",
"Research has shown that while complete-sentence construction emerges as early as first grade for many children, their ability to integrate sentences emerges only in fourth grade (Berninger et al., 2011).",
"Answering multi-sentence questions might be more challenging for an automated system because it involves more than just processing individual sentences but rather combining linguistic, semantic and background knowledge across sentencesa computational challenges in itself.",
"Despite these challenges, multi-sentence questions can be answered by humans and hence present an interesting yet reasonable goal for AI systems (Davis, 2014).",
"In this work, we propose a multi-sentence QA challenge in which questions can be answered only using information from multiple sentences.",
"Specifically, we present MultiRC (Multi-Sentence Reading Comprehension) 1 a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph.",
"Each question is associated with several choices for answer-options, out of which one or more correctly answer the question.",
"Figure 1 shows two examples from our dataset.",
"Each instance consists of a multi-sentence paragraph, a question, and answer-options.",
"All instances were constructed such that it is not possible to answer a question correctly without gathering information from multiple sentences.",
"Due to space constraints, the figure shows only the relevant sentences from the original paragraph.",
"The entire corpus consists of 871 paragraphs and about 6 k multi-sentence questions.",
"The goal of this dataset is to encourage the research community to explore approaches that can do more than sophisticated lexical-level matching.",
"To accomplish this, we designed the dataset with three key challenges in mind.",
"(i) The number of correct answer-options for each question is not pre-specified.",
"This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others.",
"In other words, unlike previous work, the task here is not 1 http://cogcomp.org/multirc/ S3: Hearing noises in the garage, Mary Murdock finds a bleeding man, mangled and impaled on her jeep's bumper.",
"to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.",
"For example, the first question in Figure 1 can be answered by combining information from sentences 3, 5, 10, 13 and 15.",
"It requires not only understanding that the stalker's name is Timothy but also that he is the man who Mary had hit.",
"(ii) The correct answer(s) is not required to be a span in the text.",
"For example, the correct answer, A, of the second question in Figure 1 is not present in the paragraph verbatim.",
"It is instead a combination of two spans from 2 sentences: 12 and 13.",
"Such answer-options force models to process and understand not only the paragraph and the question but also the answer-options.",
"(iii) The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.",
"We also expect this to lead to diversity in the types of questions that can be constructed from the passage.",
"Overall, we introduce a reading comprehension dataset that significantly differs from most other datasets available today in the following ways: 253 6 k high-quality multiple-choice RC questions that are generated (and manually verified via crowdsourcing) to require integrating information from multiple sentences.",
"The questions are not constrained to have a single correct answer, generalizing existing paradigms for representing answer-options.",
"Our dataset is constructed using 7 different sources, allowing more diversity in content, style, and possible question types.",
"We show a significant performance gap between current solvers and human performance, indicating an opportunity for developing sophistical reasoning systems.",
"Automated reasoning is arguably one of the major problems in contemporary AI research.",
"Brachman et al. (2005) suggest challenges for developing AI program that can pass the SAT exams.",
"In similar spirit Clark and Etzioni (2016) advocate elementary-school tests as a new test for AI.",
"Davis (2014) proposes hand-construction of multiple-choice challenge sets that are easy for children but difficult for computers.",
"Despite Davis' claim on simplicity of his target questions, it is not clear how easy it is to generate such questions, as he doesn't provide any reasonably-sized dataset matching his proposal.",
"Weston et al. (2015) present a relatively small dataset of 10 reasoning categories, and propose to build a system that uses a world model and a linguistic model.",
"The fundamental limitation of the dataset is that it is generated according to a restricted set of reasoning categories, which possibly limits the complexity and diversity of questions.",
"Some other recent datasets proposed for machine comprehension also pay attention to type of questions and reasoning required.",
"For example, RACE (Lai et al., 2017) attempts to incorporate different types of reasoning phenomena, and MCTest (Richardson et al., 2013) attempted to contain at least 50% multi-sentence reasoning questions.",
"However, since the crowdsourced workers who created the dataset were only encouraged, and not required, to write such questions, it is not clear how many of these questions actually require multi-sentence reasoning (see Sec. 3.5).",
"Similarly, only about 25% of question in the RACE dataset require multi-sentence reasoning as reported in their paper.",
"Remedia (Hirschman et al., 1999) also contains 5 different types of questions (based on question words) but is a much smaller dataset.",
"Other datasets which do not deliberately attempt to include multi-sentence reasoning, like SQuAD (Rajpurkar et al., 2016) and the CNN/Daily Mail dataset (Hermann et al., 2015), suffer from even lower percentage of such questions (12% and 2% respectively (Lai et al., 2017)).",
"There are several other corpora which do not guarantee specific reasoning types, including MS MARCO (Nguyen et al., 2016), WikiQA (Yang et al., 2015), and TriviaQA (Joshi et al., 2017).",
"The complexity of reasoning required for a reading comprehension dataset would depend on several factors such as the source of questions or paragraphs; the way they are generated; and the order in which they are generated (i.e. questions from paragraphs, or the reverse).",
"Specifically, paragraphs' source could influence the complexity and diversity of the language of the paragraphs and questions, and hence the required level of reasoning capabilities.",
"Unlike most current datasets which rely on only one or two sources for their paragraphs (e.g. CNN/Daily Mail and SQuAD rely only on news and Wikipedia articles respectively) our dataset uses 7 different domains.",
"Another factor that distinguishes our dataset from previously proposed corpora is the way answers are represented.",
"Several datasets represent answers as multiple-choices with a single correct answer.",
"While multiple-choice questions are easy to grade, coming up with non-trivial correct and incorrect answers can be challenging.",
"Also, assuming exactly one correct answer (e.g., as in MCTest and RACE) inadvertently changes the task from choosing the correct answer to choosing the most likely answer.",
"Other datasets (e.g MS-MARCO and SQuAD) represent answers as a contiguous substring within the passage.",
"This assumption of the answer being a span of the paragraph, limits the questions to those whose answer is contained verbatim in the paragraph.",
"Unfortunately, it rules out more complicated questions whose answers are only implied by the text and hence require a deeper understanding.",
"Because of these limitations, we designed our dataset to use multiple-choice representations, but without specifying the number of correct answers for each question.",
"In this section we describe our principles and methodology of dataset collection.",
"This includes automatically collecting paragraphs, composing questions and answer-options through crowdsourcing platform, and manually curating the collected data.",
"We also summarize a pilot study that helped us design this process, and end with a summary of statistics of the collected corpus.",
"Questions and answers in our dataset are designed based on the following key principles:",
"Multi-sentenceness.",
"Questions in our challenge require models to use information from multiple sentences of a paragraph.",
"This is ensured through explicit validation.",
"We exclude any question that can be answered based on a single sentence from a paragraph.",
"Open-endedness.",
"Our dataset is not restricted to questions whose answer can be found verbatim in a paragraph.",
"Instead, we provide a set of handcrafted answer-options for each question.",
"Notably, they can represent information that is not explicitly stated in the text but is only inferable from it (e.g. implied counts, sentiments, and relation-ships).",
"Answers to be judged independently.",
"The total number of answer options per question is variable in our data and we explicitly allow multiple correct and incorrect answer options (e.g. 2 correct and 1 incorrect options).",
"As a consequence, correct answers cannot be guessed solely by a process of elimination or by simply choosing the best candidates out of the given options.",
"Through these principles, we encourage users to explicitly model the semantics of text beyond individual words and sentences, to incorporate extralinguistic reasoning mechanisms, and to handle answer options independently of one another.",
"Variability.",
"We encourage variability on different levels.",
"Our dataset is based on paragraphs from multiple domains, leading to linguistically diverse questions and answers.",
"Also, we do not impose any restrictions on the questions, to encourage different forms of reasoning.",
"The paragraphs used in our dataset are extracted from various sources.",
"Here is the complete list of the text types and sources used in our dataset, and the number of paragraphs extracted from each category (indicated in square brackets on the right):",
"1. News: [121] CNN (Hermann et al., 2015) WSJ (Ide et al., 2008) NYT (Ide et al., 2008)",
"2. Wikipedia articles [92]",
"3. Articles on society, law and justice (Ide and Suderman, 2006) [91]",
"4. Articles on history and anthropology (Ide et al., 2008) [65]",
"5. Elementary school science textbooks 2 [153]",
"6. 9/11 reports (Ide and Suderman, 2006) [72]",
"7. Fiction: [277] Stories from the Gutenberg project Children stories from MCTest (Richard-son et al., 2013) Movie plots from CMU Movie Summary corpus (Bamman et al., 2013) From each of the above-mentioned sources we extracted paragraphs that had enough content.",
"To ensure this we followed a 3 -step process.",
"In the first step we selected top few sentences from paragraphs such that they contained 1 k1 .",
"5 k characters.",
"To ensure coherence, all sentences were contiguous and extracted from the same paragraph.",
"In this process we also discarded paragraphs that seemed to deviate too much from third person narrative style.",
"For example, while processing Gutenberg corpus we considered files that had at least 5 k lines because we found that most of them were short poetic texts.",
"In the second step, we annotated (Khashabi et al., 2018b) the paragraphs and automatically filtered texts using conditions such as the average number of words per sentence; number of named entities; number of discourse connectives in the paragraph.",
"These were designed by the authors of this paper after reviewing a small sample of paragraphs.",
"A complete set of conditions is listed in Table",
"1. Finally in the last step, we manually verified each paragraph and filtered out the ones that had formatting issues or other concerns that seemed to compromise their usability.",
"In this section, we delineate details of the process for collecting questions and answers.",
"Figure 2 gives a high-level idea of the process.",
"The first two steps deal with creating multi-sentence questions, followed by two steps for construction of candidate answers.",
"Interested readers can find more details on set-ups of each step in Appendix I. Step 1: Generating questions.",
"The goal of the first step of our pipeline is to collect multi-sentence questions.",
"We show each paragraph to 5 turkers and ask them to write 3 5 questions such that: (1) the question is answerable from the passage, and (2) only those questions are allowed whose answer cannot be determined from a single sentence.",
"We clarify this point by providing example paragraphs and questions.",
"In order to encourage turkers to write meaningful questions that fit our criteria, we additionally ask them for a correct answer and for the sentence indices required to answer the question.",
"To ensure the grammatical quality of the questions collected in this step, we limit the turkers to the countries with English as their major language.",
"After the acquisition of questions in this step, we filter out questions which required less than 2 or more than 4 sentences to be answered; we also run them through an automatic spell-checker 3 and manually correct questions regarding typos and unusual wordings.",
"Step 2: Verifying multi-sentenceness of questions.",
"In a second step, we verify that each question can only be answered using more than one sentence.",
"For each question collected in the previous step, we create question-sentence pairs by pairing it with each of the sentences necessary for 3 Grammarly: www.grammarly.com answering it as indicated in the previous step.",
"For a given question-sentence pair, we then ask turkers to annotate if they could answer the question from the sentence it is paired with (binary anno-tation).",
"The underlying idea of this step is that a multi-sentence question would not be answerable from a single sentence, hence turkers should not be able to give a correct answer for any of the question-sentence pair.",
"Accordingly, we determine a question as requiring multiple sentences only if the correct answer cannot be guessed from any single question-sentence pair.",
"We collected at least 3 annotations per pair, and to avoid sharing of information across sentences, no two pairs shown to a turker came from the same paragraph.",
"We aggregate the above annotations for each question-answer pair and retain only those questions for which no pair was judged as answerable by a majority of turkers.",
"Step 3: Generating answer-options.",
"In this step, we collect answer-options that will be shown with each question.",
"Specifically, for each verified question from the previous steps, we ask 3 turkers to write as many correct and incorrect answer options as they can think of.",
"In order to not curb creativity, we do not place a restriction on the number of options they have to write.",
"We explicitly ask turkers to design difficult and non-trivial incorrect answer-options (e.g. if the question is about a person, a non-trivial incorrect answer-option would be other people mentioned in the paragraph).",
"After this step, we perform a light clean up of the candidate answers by manually correcting minor errors (such as typos), completing incomplete sentences and rephrasing any ambiguous sentences.",
"We further make sure there is not much repetition in the answer-options, to prevent potential exploitation of correlation between some candidate answers in order to find the correct answer.",
"For example, we drop obviously duplicate answer-options (i.e. identical options after lower-casing, lemmatization, and removing stop-words).",
"step serves as the final quality check for both questions and the answer-options generated in the previous steps.",
"We show each paragraph, its questions, and the corresponding answer-options to 3 turkers, and ask them to indicate if they find any errors (grammatical or otherwise), in the questions and/or answer-options.",
"We then manually review, 256 Step 1: generating multi-sentence questions given paragraphs Step 2: Verifying multi-sentenceness Step 3: Generating candidate answers Step 4: Judging quality of questions & candidates Figure 2: Pipeline of our dataset construction.",
"and correct if needed, all erroneous questions and answer-options.",
"This ensures that we have meaningful questions and answer-options.",
"In this step, we also want to verify that the correct (or incorrect) options obtained from Step 3 were indeed correct (or incorrect).",
"For this, we additionally ask the annotators to select all correct answer-options for the question.",
"If their annotations did not agree with the ones we had after Step 3 (e.g. if they unanimously selected an incorrect' option as the answer), we manually reviewed and corrected (if needed) the annotation.",
"The 4 -step process described above was a result of detailed analysis and substantial refinement after two small pilot studies.",
"In the first pilot study, we ran a set of 10 paragraphs extracted from the CMU Movie Summary Corpus through our pipeline.",
"Our then pipeline looked considerably different from the one described above.",
"We found the steps that required turkers to write questions and answer-options to often have grammatical errors, possibly because a large majority of turkers were non-native speakers of English.",
"This probslem was more prominent in questions than in answer-options.",
"Because of this, we decided to limit the task to native speakers.",
"Also, based on the results of this pilot, we overhauled the instructions of these steps by including examples of grammatically correctbut undesirable (not multi-sentence)questions and answer-options, in addition to several minor changes.",
"Thereafter, we decided to perform a manual validation of the verification steps (current Steps 2 and 4).",
"For this, we (the authors of this paper) performed additional annotations ourselves on the data shown to turkers, and compared our results with those provided by the turkers.",
"We found that in the verification of answer-options, our annotations were in high agreement ( 98% ) with those obtained from mechanical turk.",
"However, that was not the case for the verification of multi-sentence questions.",
"We made several further changes to the first two steps.",
"Among other things, we clarified in the instructions that turkers should not use their background knowledge when writing and verifying questions, and also included negative examples of such questions.",
"Additionally, when turkers judged a question to be answerable using a single sentence, we decided to encourage (but not require) them to guess the answer to the question.",
"This improved our results considerably, possibly because it forced annotators to think more carefully about what the answer might be, and whether they actually knew the answer or they just thought that they knew it (possibly because of background knowledge or because the sentence contained a lot of information relevant to the question).",
"Guessed answers in this step were only used to verify the validity of multi-sentence questions.",
"They were not used in the dataset or subsequent steps.",
"After revision, we ran a second pilot study in which we processed a set of 50 paragraphs through our updated pipeline.",
"This second pilot confirmed that our revisions were helpful, but thanks to its larger size, also allowed us to identify a couple of borderline cases for which additional clarifications were required.",
"Based on the results of the second pilot, we made some additional minor changes and then decided to apply the pipeline for creating the final dataset.",
"While collecting our dataset, we found that, even though Step 1 instructed turkers to write multi-sentence questions, not all generated questions indeed required multi-sentence reasoning.",
"This happened even after clarifications and revisions to the corresponding instructions, and we attribute it to honest mistakes.",
"Therefore, we designed the subsequent verification step (Step 2).",
"There are other datasets which aim to include multi-sentence reasoning questions, especially MCTest.",
"Using our verification step, we systematically verify their multi-sentenceness.",
"For this, we conducted a small pilot study on about 60 multi-sentence questions from MCTest.",
"As for our own verification, we created question-sentence pairs for each question and asked annotators to judge whether they can answer a question from the single sentence shown.",
"Because we did not know 257 which sentences contain information relevant to a question, we created question-sentence pairs using all sentences from a paragraph.",
"After aggregation of turker annotations, we found that about half of the questions annotated as multi-sentence could be answered from a single sentence of the paragraph.",
"This study, though performed on a subset of the data, underscores the necessity of rigorous verification step for multi-sentence reasoning when studying this phenomenon.",
"We now provide a brief summary of MultiRC .",
"Overall, it contains roughly 6 k multi-sentence questions collected for about +800 paragraphs.",
"4 The median number of correct and total answer options for each question is 2 and 5 , respectively.",
"Additional statistics are given in Table",
"2. In Step 1, we also asked annotators to identify sentences required to answer a given question.",
"We found that answering each question required 2 .",
"4 sentences on average.",
"Also, required sentences are often not contiguous, and the average distance between sentences is 2 .",
"4 .",
"Next, we analyze the types of questions in our dataset.",
"Figure 4 shows the count of first word(s) for our questions.",
"We can see that while the popular question words ( What , Who , etc.) are very common, there is a wide variety in the first word(s) indicating a diversity in question types.",
"About 28% of our questions require binary decisions (true/false or yes/no).",
"We randomly selected 60 multi-sentence questions from our corpus and asked two indepen-dent annotators to label them with the type of reasoning phenomenon required to answer them.",
"5 During this process, the annotators were shown a list of common reasoning phenomena (shown below), and they had to identify one or more of the phenomena relevant to a given question.",
"The list of phenomena shown to the annotators included the following categories: mathematical and logical reasoning, spatio-temporal reasoning, list/enumeration, coreference resolution (includ-ing implicit references, abstract pronouns, event coreference, etc.), causal relations, paraphrases and contrasts (including lexical relations such as synonyms, antonyms), commonsense knowledge, 4 We will also release the 3 .",
"7 k questions that did not pass Step",
"2. Though not multi-sentence questions, they could be a valuable resource on their own.",
"and other'.",
"The categories were selected after a manual inspection of a subset of questions by two of the authors.",
"The annotation process revealed that answering questions in our corpus requires a broad variety of reasoning phenomena.",
"The left plot in Figure 3 provides detailed results.",
"The figure shows that a large fraction of questions require coreference resolution, and a more careful inspection revealed that there were different types of coreference phenomena at play here.",
"To investigate these further, we conducted a follow-up experiment in which manually annotated all questions that required coreference resolution into finer categories.",
"Specifically, each question was shown to two annotators who were asked to select one or more of the following categories: entity coreference (between two entities), event coreference (between two events), set inclusion coreference (one item is part of or included in the other) and other'.",
"Figure 3 (right) shows the results of this experiment.",
"We can see that, as expected, entity coreference is the most common type of coreference resolution needed in our corpus.",
"However, a significant number of questions also require other types of coreference resolution.",
"We provide some examples of questions along with the required reasoning phenomena in Appendix II.",
"In this section, we provide a quantitative analysis of several baselines for our challenge.",
"Evaluation Metrics.",
"We define precision and recall for a question q as: Pre ( q ) = | A ( q ) A ( q ) | | A ( q ) | and Rec ( q ) = | A ( q ) A ( q ) | | A ( q ) | , where A ( q ) and A ( q ) are the sets of correct and selected answer-options.",
"We define (macro-average) F1 m as the harmonic mean of average-precision avg q Q ( Pre ( q )) and average-recall avg q Q ( Rec ( q )) with Q as the set of all questions.",
"Since by design, each answer-option can be judged independently, we consider another metric, F1 a , evaluating binary decisions on all the answer-options in the dataset.",
"We define F1 a to be the harmonic mean of Pre ( Q ) and Rec ( Q ) , with Pre ( Q ) = | A ( Q ) A ( Q ) | | A ( Q ) | ; A ( Q ) = S q QA ( q ) ; and similar definitions for A ( Q ) and Rec ( Q ) .",
"Human.",
"Human performance provides us with an estimate of the best achievable results on datasets.",
"Using mechanical turk, we ask 4 people (limited to native speakers) to solve our data.",
"We evaluate score of each label by averaging the decision of the individuals.",
"Random.",
"To get an estimate on the lower-bound we consider a random baseline, where each answer option is selected as correct with a probability of 50% (an unbiased coin toss).",
"The numbers reported for this baseline represent the expected outcome (statistical expectation).",
"IR (information retrieval baseline).",
"This baseline selects answer-options that best match sentences in a text corpus (Clark et al., 2016).",
"Specifically, for each question q and answer option a i , the IR solver sends q + a i as a query to a search engine (we use Lucene) on a corpus, and returns the search engine's score for the top retrieved sentence s , where s must have at least one non-stopword overlap with q , and at least one with a i .",
"We create two versions of this system.",
"In the first variation IR(paragraphs) we create a corpus of sentences extracted from all the paragraphs in the dataset.",
"In the second variation, IR(web) in addition to the knowledge of the paragraphs, we use extensive external knowledge extracted from the web (Wikipedia, science textbooks and study guidelines, and other webpages), with 5 10 10 tokens (280GB of plain text).",
"SurfaceLR (logistic regression baseline).",
"As a simple baseline that makes use of our small training set, we reimplemented and trained a logistic regression model using word-based overlap features.",
"As described in (Merkhofer et al., 2018), this baseline takes into account the lengths of a text, question and each answer candidate, as well as indicator features regarding the (co-)occurrences of any words in them.",
"SemanticILP (semi-structured baseline).",
"This state-of-the-art solver, originally proposed for science questions and biology tests, uses a semi-structured representation to formalize the scoring problem as a subgraph optimization problem over multiple layers of semantic abstrac-259 Dev Test F1 m F1 a F1 m F1 a Random 44.3 43.8 47.1 47.6 IR(paragraphs) 64.3 60.0 54.8 53.9 SurfaceLR 66.1 63.7 66.7 63.5 Human 86.4 83.8 84.3 81.8 Table 3: Performance comparison for different baselines tested on a subset of our dataset (in per-centage).",
"tions (Khashabi et al., 2018a).",
"Since the solver is designed for multiple-choice with single-correct answer, we adapt it to our setting by running it for each answer-option.",
"Specifically for each answer-option, we create a single-candidate question, and retrieve a real-valued score from the solver.",
"BiDAF (neural network baseline).",
"As a neural baseline, we apply this solver by Seo et al. (2017), which was originally proposed for SQuAD but has been shown to generalize well to another domain (Min et al., 2017).",
"Since BiDAF was designed for cloze style questions, we apply it to our multiple-choice setting following the procedure by Kembhavi et al. (2017): Specifically, we score each answer-option by computing the similarity value of it's output span with each of the candidate answers, computed by phrasal similarity tool of Wieting et al. (2015).",
"To get a sense of our dataset's hardness, we evaluate both human performance and multiple computational baselines.",
"Each baseline scores an answer-option with a real-valued score, which we threshold to decide whether an answer option is selected or not, where the threshold is tuned on the development set.",
"Table 3 shows performance results for different baselines.",
"The significantly high human performance shows that humans do not have much difficulties in answering the questions.",
"Similar observations can be made in Figure 5 where we plot avg q Q ( Pre ( q )) vs. avg q Q ( Rec ( q )) , for different threshold values.",
"In this paper we have presented MultiRC , a reading comprehension dataset in which questions require reasoning over multiple sentences to be an-Figure",
"an-Figure 5: PR curve for each of the baselines.",
"There is a considerable gap with the baselines and human.",
"swered.",
"Our dataset contains 6 k questions extracted from about +800 paragraphs.",
"For each question, it contains multiple answer-options out of which one or more can be correct.",
"The paragraphs (and questions) originate from different domains and hence are amenable to a wide variety and complexity of required reasoning phenomena.",
"We found human performance on this corpus to be about 88% while state-of-the-art machine comprehension models do not exceed a F1-score of 60% .",
"We hope that this significant difference in performance will encourage the community to work towards more sophisticated reasoning systems.",
"The authors would like to thank all the contributors to the project.",
"This work was supported by Contract HR0011-15-2-0025 with the US Defense Advanced Research Projects Agency (DARPA).",
"Approved for Public Release, Distribution Unlimited.",
"The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.",
"This work was partly funded by grants from the German Research Foundation (DFG EXC 284 and RO 4848/1-1), by the Allen Institute for Artificial Intelligence (allenai.org); by Google; and by IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR)a research collaboration as part of the IBM AI Horizons Network."
] |
[
"method",
"result",
"abstain",
"result",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"objective",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"other",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"result",
"result",
"other",
"other",
"other",
"other",
"other"
] |
[
"We study the problem of building text classi-fiers with little or no training data, commonly known as zero and few-shot text classification.",
"In recent years, an approach based on neural textual entailment models has been found to give strong results on a diverse range of tasks.",
"In this work, we show that with proper pre-training, Siamese Networks that embed texts and labels offer a competitive alternative.",
"These models allow for a large reduction in inference cost: constant in the number of labels rather than linear.",
"Furthermore, we introduce label tuning, a simple and computationally efficient approach that allows to adapt the models in a few-shot setup by only changing the label embeddings.",
"While giving lower performance than model fine-tuning, this approach has the architectural advantage that a single encoder can be shared by many different tasks.",
"Few-shot learning is the problem of learning classi-fiers with only a few training examples.",
"Zero-shot learning (Larochelle et al., 2008), also known as dataless classification (Chang et al., 2008), is the extreme case, in which no labeled data is used.",
"For text data, this is usually accomplished by representing the labels of the task in a textual form, which can either be the name of the label or a concise textual description.",
"In recent years, there has been a surge in zero-shot and few-shot approaches to text classification.",
"One approach (Yin et al., 2019, 2020; Halder et al., 2020; Wang et al., 2021) makes use of entailment models.",
"Textual entailment (Dagan et al., 2006), also known as natural language inference (NLI) (Bowman et al., 2015), is the problem of predicting whether a textual premise implies a textual hypothesis in a logical sense.",
"For example, Emma loves apples implies that Emma likes apples .",
"representing the label as the hypothesis.",
"A NLI model is applied to each input pair and the entailment probability is used to identify the best matching label.",
"In this paper, we investigate an alternative based on Siamese Networks (SN) (Bromley et al., 1993), also known as dual encoders.",
"These models embed both input and label texts into a common vector space.",
"The similarity of the two items can then be computed using a similarity function such as the dot product.",
"The advantage is that input and label text are encoded independently, which means that the label embeddings can be pre-computed.",
"Therefore, at inference time, only a single call to the model per input is needed.",
"In contrast, the models typically applied in the entailment approach are Cross Attention (CA) models which need to be executed for every combination of text and label.",
"On the other hand, they allow for interaction between the tokens of label and input, so that in theory they should be superior in classification accuracy.",
"However, in this work we show that in practice, the difference in quality is small.",
"Both CA and SNs also support the few-shot learning setup by fine-tuning the models on a small number of labeled examples.",
"This is usually done by updating all parameters of the model, which in turn makes it impossible to share the models between different tasks.",
"In this work, we show that when using a SN, one can decide to only fine-tune the label embeddings.",
"We call this Label Tuning (LT).",
"With LT the encoder can be shared between different tasks, which greatly eases the deployment of this approach in a production setup.",
"LT comes with a certain drop in quality, but this drop can be compensated by using a variant of knowledge distillation (Hinton et al., 2014).",
"Our contributions are as follows: We perform a large study on a diverse set of tasks showing that CA models and SN yield similar performance for both zero-shot and few-shot text classification.",
"In contrast to most prior work, we also show that these results can also be achieved for languages other than English.",
"We compare the hypothesis patterns commonly used in the literature and using the plain label name (identity hypothesis) and find that on average there is no significant difference in performance.",
"Finally, we present LT as an alternative to full fine-tuning that allows using the same model for many tasks and thus greatly increases the scalability of the method.",
"We will release the code 1 and trained models used in our experiments.",
"Figure 1 explains the overall system.",
"We follow Reimers and Gurevych (2019) and apply symmetric Siamese Networks that embed both input texts using a single encoder.",
"The encoder consists of a transformer (Vaswani et al., 2017) that produces contextual token embeddings and a mean pooler that combines the token embeddings into a single text embedding.",
"We use the dot product as the similarity function.",
"We experimented with cosine similarity but did not find it to yield significantly better results.",
"As discussed, we can directly apply this model to zero-shot text classification by embedding the input text and a textual representation of the label.",
"For 1 https://tinyurl.com/label-tuning the label representation we experiment with a plain verbalization of the label, or identity hypothesis, as well as the hypotheses or prompts used in the related work.",
"In the case of few-shot learning, we need to adapt the model based on a small set of examples.",
"In gradient-based few-shot learning we attempt to improve the similarity scores for a small set of labeled examples.",
"Conceptually, we want to increase the similarity between every text and its correct label and decrease the similarity for every other label.",
"As the objective we use the so called batch softmax (Henderson et al., 2017): J = 1 BB (cid:88) i =1 S ( x i , y i ) log B (cid:88) j =1 e S ( x i ,y j ) Where B is the batch size and S ( x, y ) = f ( x ) f ( y ) the similarity between input x and label text y under the current model f .",
"All other elements of the batch are used as in-batch negatives .",
"To this end, we construct the batches so that every batch contains exactly one example of each label.",
"Note that this is similar to a typical softmax classification objective.",
"The only difference is that f ( y i ) is computed during the forward pass and not as a simple parameter look-up.",
"Regular fine-tuning has the drawback of requiring to update the weights of the complete network.",
"This results in slow training and large memory requirements for every new task, which in turn makes it challenging to deploy new models at scale.",
"As an alternative, we introduce label tuning , which does not change the weights of the encoder.",
"The main idea is to first pre-compute label embeddings for each class and later tune them using a small set of labeled examples.",
"Formally, we have a training set containing N pairs of an input text x i and its reference label index z i .",
"We pre-compute a matrix of the embedded input texts and embedded labels, X R N d and Y R K d , respectively.",
"d is the embedding dimension and K the size of the label set.",
"We now define the score for every input and label combination as S = X YT ( S R N K ) and tune it using cross entropy: J (cid:48) = 1 NN (cid:88) i =1 S i,z i log K (cid:88) j =1 e S i,j To avoid overfitting, we add a regularizer that penalizes moving too far from the initial label embeddings Y 0 as (cid:107) Y 0 Y (cid:107) F , where (cid:107) .",
"(cid:107)",
"F is the Frobenius norm.",
"2 Additionally, we also implement a version of dropout by masking some of the entries in the label embedding matrix at each gradient step.",
"To this end, we sample a random vector (cid:126)r of dimension d whose components are 0 with probability dropout and 1 otherwise.",
"We then multiply this vector component-wise with each row in the label embedding matrix Y .",
"The dropout rate and the strength of the regularizer are two hyper-parameters of the method.",
"The other hyperparameters are the learning rate for the stochastic gradient descent as well as the number of steps.",
"Following Logan IV et al. (2021), we tune them using 4-fold cross-validation on the few-shot training set.",
"Note that the only information to be stored for each tuned model are the d -dimensional label embeddings.",
"As mentioned, label tuning produces less accurate models than real fine-tuning.",
"We find that this can be compensated by a form of knowledge distillation (Hinton et al., 2014).",
"We first train a normal 2 https://en.wikipedia.org/wiki/Matrix_ norm#Frobenius_norm fine-tuned model and use that to produce label distributions for a set of unlabeled examples.",
"Later, this silver set is used to train the new label embeddings for the untuned model.",
"This increases the training cost of the approach and adds an additional requirement of unlabeled data but keeps the advantages that at inference time we can share one model across multiple tasks.",
"Pre-trained Language Models (LMs) have been proved to encode knowledge that, with task-specific guidance, can solve natural language understanding tasks (Petroni et al., 2019).",
"Leveraging that, Le Scao and Rush (2021) quantified a reduction in the need of labeled data of hundreds of instances with respect to traditional fine-tuning approaches (Devlin et al., 2019; Liu et al., 2019).",
"This has led to quality improvements in zero and few-shot learning.",
"Semantic Similarity methods Gabrilovich and Markovitch (2007) and Chang et al. (2008) use the explicit meaning of the label names to compute the similarity with the input text.",
"Prototypical Networks (Snell et al., 2017) create class prototypes by averaging embedded support examples and minimizing a distance metric to them for classification of input examples.",
"The class prototypes are similar to our label embeddings but we initialize them from the hypothesis and we only tune the embeddings instead of the entire encoder.",
"Recent advances in pre-trained LMs and their application to semantic textual similarity tasks, such as Sentence-BERT (Reimers and Gurevych, 2019), have shown a new opportunity to increase the quality of these methods and set the stage for this work.",
"Baldini Soares et al. (2019) use Siamese Networks apply to a few-shot relation extraction (RelEx) task.",
"Their architecture and similarity loss is similar to ours, but they update all encoder parameters when performing fine-tuning.",
"Chu et al. (2021) employ a technique called unsupervised label-refinement (LR).",
"They incorporated a modified k-means clustering algorithm for refining the outputs of cross attention and Siamese Networks.",
"We incorporate LR into our experiments and extend the analysis of their work.",
"We evaluate it against more extensive and diverse benchmarks.",
"In addition, we show that pre-training few-shot learners on their proposed textual similarity task NatCat underperforms pre-training on NLI datsets.",
"Prompt-based methods GPT-3 (Brown et al., 2020), a 175 billion parameter LM, has been shown to give good quality on few-shot learning tasks.",
"Pattern-Exploiting Training (PET) (Schick and Schutze, 2021) is a more computational and memory efficient alternative.",
"It is based on ensembles of smaller masked language models (MLMs) and was found to give few-shot results similar to GPT-3.",
"Logan IV et al. (2021) reduced the complexity of finding optimal templates in PET by using null-prompts and achieved competitive performance.",
"They incorporated BitFit (Ben-Zaken et al., 2021) and thus reached comparable accuracy fine-tuning only 0.1% of the parameters of the LMs.",
"Ham-bardzumyan et al. (2021) present a contemporary approach with a similar idea to label tuning .",
"As in our work, they use label embeddings initialized as the verbalization of the label names.",
"These task-specific embeddings, along with additional ones that are inserted into the input sequence, are the only learnable parameters during model training.",
"They optimize a cross entropy loss between the label embeddings and the output head of a MLM.",
"The major difference is that they employ a prompt-based approach while our method relies on embedding models.",
"Entailment methods The entailment approach (Yin et al., 2019; Halder et al., 2020) uses the label description to reformulate text classification as textual entailment.",
"The model predicts the entailment probability of every label description .",
"Wang et al. (2021) report results outperforming LM-BFF (Gao et al., 2021), an approach similar to PET.",
"True Few-Shot Learning Setting Perez et al. (2021) argue that for true few-shot learning , one should not tune parameters on large validation sets or use parameters or prompts that might have been tuned by others.",
"We follow their recommendation and rely on default parameters and some hyperparameters and prompts recommended by Wang et al. (2021), which according to the authors, were not tuned on the few-shot datasets.",
"For label tuning, we follow Logan IV et al. (2021) and tune parameters with cross-validation on the few-shot training set.",
"In this section we introduce the baselines and datasets used throughout experiments.",
"Random The theoretical performance of a random model that uniformly samples labels from the label set.",
"Word embeddings For the English experiments, we use Word2Vec (Mikolov et al., 2013) embeddings 3 .",
"For the multi-lingual experiments, we use FastText (Grave et al., 2018).",
"In all cases we preprocess using the NLTK tokenizer (Bird et al., 2009) and stop-words list and by filtering non-alphabetic tokens.",
"Sentence embeddings are computed by averaging the token embeddings.",
"Char-SVM For the few-shot experiments we implemented a Support Vector Machines (SVM) (Hearst et al., 1998) based on character n-grams.",
"The model was implemented using the text vector-izer of scikit-learn (Pedregosa et al., 2011) and uses bigrams to fivegrams.",
"Cross Attention For our experiments we use pre-trained models from HuggingFace (Wolf et al., 2020).",
"As the cross attention baseline, we trained a version of MPNET (Song et al., 2020) on Multi-Genre (MNLI, Williams et al. (2018)) and Stanford NLI (SNLI, Bowman et al. (2015)) using the parameters and code of Nie et al. (2020).",
"This model has approx.",
"110M parameters.",
"For the multilingual experiments, we trained the cross-lingual language model XLM roberta-base (Liu et al., 2019) on SNLI, MNLI, adversarial NLI (ANLI, Nie et al. (2020)) and cross-lingual NLI (XNLI, Conneau et al. (2018)), using the same code and parameters as above.",
"The model has approx.",
"280M parameters.",
"We give more details on the NLI datasets in Appendix G. Siamese Network We also use models based on MPNET for the experiments with the Siamese Networks.",
"paraphrase-mpnet-base-v2 4 is a sentence transformer model (Reimers and Gurevych, 2019) trained on a variety of paraphrasing datasets as well as SNLI and MNLI using a batch softmax loss (Henderson et al., 2017).",
"nli-mpnet-base-v2 5 is identical to the previous model but trained exclusively on MNLI and SNLI and thus comparable to the cross attention model.",
"For the multilingual experiments, we trained a model using the code 3 https://code.google.com/archive/p/ word2vec 4 https://tinyurl.com/pp-mpnet 5 https://tinyurl.com/nli-mpnet 8535 name task lang.",
"of the sentence transformers with the same batch softmax objective used for fine-tuning the few-shot models and on the same data we used for training the cross attention model.",
"Roberta-NatCat For comparison with the related work, we also trained a model based on RoBERTa (Liu et al., 2019) and fine-tuned on the NatCat dataset as discussed in Chu et al. (2021) using the code 6 and parameters of the authors.",
"We use a number of English text classification datasets used in the zero-shot and the few-shot literature (Yin et al., 2019; Gao et al., 2021; Wang et al., 2021).",
"In addition, we use several German and Spanish datasets for the multilingual experiments.",
"Table 1 provides more details.",
"These datasets are of a number of common text classification tasks such as topic classification, sentiment and emotion detection, and review rating.",
"However, we also included some less well-known tasks such as acceptability, whether an English sentence is deemed acceptable by a native speaker, and subjectivity, whether a statement is subjective or objective.",
"As some datasets do not have a standard split we split them randomly using a 9/1 ratio.",
"We use the same hypotheses for the cross attention model and for the Siamese network.",
"For Yahoo and Unified we use the hypotheses from Yin et al. 6 https://github.com/ZeweiChu/ULR (2019).",
"For SUBJ, COLA, TREC, Yelp, AG News and IMDB we use the same hypotheses as Wang et al. (2021).",
"For the remaining datasets we designed our own hypotheses.",
"These were written in an attempt to mirror what has been done for other datasets and they have not been tuned in any way.",
"Appendix B shows the patterns used.",
"We also explored using an identity hypothesis, that is the raw label names as the label representation and found this to give similar results.",
"Inspired by Wang et al. (2021), we investigate fine-tuning the models with 8, 64 and 512 examples per label.",
"For fine-tuning the cross attention models we follow the literature (Wang et al., 2021) and create examples of every possible combination of input text and label.",
"The example corresponding to the correct label is labeled as entailed while all other examples are labeled as refuted.",
"We then fine-tune the model using stochastic gradient descent and a cross-entropy loss.",
"We use a learning rate of 1e-5, a batch size of 8 and run the training for 10 epochs.",
"As discussed in the methodology Section 2.1, for the Siamese Networks every batch contains exactly one example of every label and therefore the batch size equals the number of labels of the task.",
"We use a learning rate of 2e-5 and of 2e-4 for the BitFit experiments.",
"Appendix D contains additional information on the hyper-parameters used.",
"We use macro F1-score as the evaluation metric.",
"We run all experiments with 5 different training sets and report the mean and standard deviation.",
"For 8536 name n Yahoo AG News Unified COLA SUBJ TREC IMDB SemEval Yelp pol Yelp full Amazon Mean random 0 10.0 25.0 10.0 50.0 50.0 16.7 50.0 33.3 50.0 20.0 20.0 30.5 W2V (IH) 0 44.8 0 .",
"the zero-shot experiments, we estimate the standard deviation using bootstrapping (Koehn, 2004).",
"In all cases, we use Welch's t-test 7 with a p-value of 0.05 to establish significance (following Logan IV et al. (2021)).",
"For the experiments with label refinement (Chu et al., 2021) and distillation, we use up to 10,000 unlabeled examples from the training set.",
"Here we present the results of our experiments.",
"The two main questions we want to answer are whether Siamese Networks (SN) give comparable results as Cross Attention models (CA) and how well Label Tuning (LT) compares to regular fine-tuning.",
"Table 2 shows results comparing SN with CA and various baselines.",
"As discussed above, SN and CA models are based on the MPNET architecture and trained on SNLI and MNLI.",
"For the zero-shot setup ( n =0 ) we see that all models out-perform the random baseline on average.",
"The word embedding baselines and RoBERTa-NatCat perform significantly worse than random on several of the datasets.",
"In contrast the SN and CA models only perform worse than random on COLA.",
"The SN outperforms the CA on average, 7 https://en.wikipedia.org/wiki/Welch% 27s_t-test but the results for the individual datasets are mixed.",
"The SN is significantly better for 4, significantly worse for 4 and on par for the remaining 3 datasets.",
"Regarding the use of a hypothesis pattern from the literature or just an identity hypothesis (IH), we find that, while there are significant differences on individual datasets, the IH setup shows higher but still comparable (within 1 point) average performance.",
"For the few-shots setup ( n = { 8 , 64 , 512 } ), we find that all models out-perform a Char-SVM trained with the same number of instances by a large margin.",
"Comparing SN and CA, we see that CA outperforms the SN on average but with a difference with-in the confidence interval.",
"For n =8 and n =64 , CA significantly outperforms SN on 3 datasets and performs comparably on the remaining 8.",
"For n =512 , we see an even more mixed picture.",
"CA is on par with SN on 6 datasets, outperforms it on 3 and is out-performed on 2.",
"We can conclude that for the English datasets, SN is more accurate for zero-shot while CA is more accurate for few-shot.",
"The average difference is small in both setups and we do not see a significant difference for most datasets.",
"Table 3 shows the multi-lingual experiments.",
"The RoBERTa XLM models were pre-trained on data from more than 100 languages and fine-tuned on an NLI data of 15 languages.",
"The cross-lingual data and the fact that there is only 7500 examples 8537 language German English Spanish name n GNAD Amazon deISEAR sb10k Amazon SemEval Unified Amazon HeadQA SAB s Mean random 0 11.1 20.0 14.3 33.3 20.0 33.3 10.0 20.0 16.7 33.3 21.2 FastText 0 17.3 1 .",
"for the languages other than English, explains why quality is lower than for the English-only experiments.",
"For the zero-shot scenario, all models outperform the random baseline on average, but with a smaller margin than for the English-only models.",
"The FastText baseline performs comparable to CA on average (26.0 vs 27.2), while SN is ahead by a large margin (27.2 vs 32.4).",
"The differences between models with hypotheses and identity hypothesis (IH) are smaller than for the English experiments.",
"Looking at the few-shot scenarios, we see that both models out-perform the Char-SVM by a large margin.",
"In general, the results are closer than for the English experiments, as well as in the number of datasets with significant differences (only 2-4 of datasets).",
"Similarly to English, we can conclude that at multilingual level, SN is more accurate in the zero-shot scenario whereas CA performs better in the few-shot one.",
"However, for few-shot we see only small average differences (less than 1 point except for n =64 ).",
"Table 4 shows a comparison of different fine-tuning approaches on the English datasets.",
"Appendix H contains the multi-lingual results and gives a similar picture.",
"We first compare Label Refinement (LR) as discussed in Chu et al. (2021) (see Section 3).",
"Recall that this approach makes use of unlabeled data.",
"We find that in the zero-shot scenario LR gives an average improvement of more than 2 points and significantly out-performing the baseline (mpnet) for 7 of the 11 datasets.",
"When combining LR with labeled data as discussed in Chu et al. (2021) we find this to only give modest improvements over the zero-shot model (e.g., 54.0 (zero-shot) vs 55.8 ( n =8 )).",
"Note that we apply LR to the untuned model, while Chu et al. (2021) proposed to apply it to a tuned model.",
"However, we find that to only give small improvements over an already tuned model (mpnet (FT) vs. mpnet (FT+LR)).",
"Also, in this work we are interested in approaches that do not change the initial model so that it can be shared between tasks to improve scalability.",
"Label Tuning (LT) improves results as n grows and out-performs LR and the Char-SVM baseline from Table 2.",
"Comparing regular Fine-Tuning (FT) and BitFit, we find them to perform quite similarly both on average and on individual datasets, with only few exceptions, such as the performance difference on TREC for the n =8 setup.",
"In comparison with FT and BitFit, LT is significantly out-performed on most datasets.",
"The average difference in performance is around 5 points, which is comparable to using 8 times less training data.",
"Using the knowledge distillation approach discussed before (LT-DIST), we find that for 8 and 64 examples, most of the difference in performance can be recovered while still keeping the high scalability.",
"For n =8 , we only find a significant differ-8538 name n Yahoo AG News Unified COLA SUBJ TREC IMDB SemEval Yelp pol Yelp full Amazon Mean mpnet 0 55.0 0 .",
"ence to mpnet (FT) for Yelp full.",
"Recall that the distillation is performed on up to 10,000 unlabeled examples from the training set.",
"We analyze the performance of the Cross Attention (CA) and Siamese Network-based (SN) models.",
"Unless otherwise noted, the analysis was run over all datasets and languages.",
"Table 5, gives a comparison of the processing speed of different models.",
"Details on the hardware used is given in Appendix F. As expected, the performance of the cross attention model halves when the label size doubles.",
"The performance of the Siamese network is inde-task emotions reviews sentiment negation no yes no yes no yes SN 23.0 14.3 49.0 44.4 37.3 45.1 CA 22.4 16.8 48.2 47.0 32.2 37.4 Table 7: Average macro F1 score for sets with and without a negation marker present.",
"pendent of the number of labels.",
"This shows that Siamese Networks have a huge advantage at inference time especially for tasks with many labels.",
"Table 6 shows the average F1 scores for different token lengths.",
"To this end the data was grouped in bins of roughly equal size.",
"SN has an advantage for shorter sequences ( 44 tokens), while CA performs better for longer texts ( > 160 tokens).",
"Table 7 shows an analysis based on whether the text does or does not contain negation markers.",
"We used an in-house list of 23 phrases for German and Spanish and 126 for English.",
"For emotion detection and review tasks, both models perform better on the subset without negations.",
"However, while SN outperforms CA on the data without negations, CA performs better on the data with negations.",
"The same trend does not hold for the sentiment datasets.",
"These are based on Twitter and thus contain shorter and simpler sentences.",
"For the sentiment datasets based on Twitter we also found that both models struggle to predict the neutral class.",
"CA classifies 8539 almost everything neutral tweet as positive or negative.",
"SN predicts the neutral class regularly but still with a relative high error rate.",
"Appendix E contains further analysis showing that label set size, language and task do not have a visible effect on the difference in accuracy of the two models.",
"We have shown that Cross Attention (CA) and Siamese Networks (SN) for zero-shot and few-shot text classification give comparable results across a diverse set of tasks and multiple languages.",
"The inference cost of SNs is low as label embeddings can be pre-computed and, in contrast to CA, does not scale with the number of labels.",
"We also showed that tuning only these label embeddings (Label Tuning (LT)) is an interesting alternative to regular Fine-Tuning (FT).",
"LT gets close to FT performance when combined with knowledge distillation and when the number of training samples is low, i.e., for realistic few-shot learning.",
"This is relevant for production scenarios, as it allows to share the same model among tasks.",
"However, it will require 60 times more memory to add a new task: For a 418 MB mpnet-base model, BitFit affects 470 kB of the parameters.",
"LT applied to a task with 10 labels and using a embedding dimension of 768 requires 7.5 kB.",
"The main disadvantage of BitFit, however, is that the weight sharing it requires is much harder to implement, especially in highly optimized environments such as NVIDIA Triton.",
"Therefore we think that LT is an interesting alternative for fast and scalable few-shot learning.",
"We would like to thank Francisco Rangel and the entire Symanto Research Team for early discussions, feedback and suggestions.",
"We would also like to thank the anonymous Reviewers.",
"The authors gratefully acknowledge the support of the Pro 2 Haters Proactive Profiling of Hate Speech Spreaders (CDTi IDI-20210776), XAI-DisInfodemics: eXplainable AI for disinformation and conspiracy detection during infodemics (MICIN PLEC2021-007681), and DETEMP Early Detection of Depression Detection in Social Media (IVACE IMINOD/2021/72) R&D grants."
] |
[
"method",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"other",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"other",
"method",
"abstain",
"other",
"other",
"other",
"other",
"objective",
"method",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"method",
"other",
"other",
"method",
"other",
"other",
"other",
"other",
"method",
"method",
"method",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"other",
"method",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"abstain",
"method",
"method",
"method",
"result",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"other",
"other"
] |
[
"Visual Dialog is a multimodal task of answering a sequence of questions grounded in an image (using the conversation history as context).",
"It entails challenges in vision, language, reasoning, and grounding.",
"However, studying these subtasks in isolation on large, real datasets is infeasible as it requires prohibitively-expensive complete annotation of the state' of all images and dialogs.",
"We develop CLEVR-Dialog, a large diagnostic dataset for studying multi-round reasoning in visual dialog.",
"Specifically, we construct a dialog grammar that is grounded in the scene graphs of the images from the CLEVR dataset.",
"This combination results in a dataset where all aspects of the visual dialog are fully annotated.",
"In total, CLEVR-Dialog contains 5 instances of 10-round dialogs for about 85 k CLEVR images, totaling to 4 .",
"25 M question-answer pairs.",
"We use CLEVR-Dialog to benchmark performance of standard visual dialog models; in particular, on visual coreference resolution (as a function of the coreference distance).",
"This is the first analysis of its kind for visual dialog models that was not possible without this dataset.",
"We hope the findings from CLEVR-Dialog will help inform the development of future models for visual dialog.",
"Our code and dataset are publicly available 1 .",
"The focus of this work is on intelligent systems that can see (perceive their surroundings through vision), talk (hold a visually grounded dialog), and reason (store entities in memory as a dialog progresses, refer back to them as appropriate, count, compare, etc .).",
"Recent works have begun studying such systems under the umbrella of Visual Dialog (Das et al., 2017a; de Vries et al., 2017), where 1 https://github.com/satwikkottur/clevr-dialog an agent must answer a sequence of questions grounded in an image.",
"As seen in Fig. 1, this entails challenges in vision ( e.g ., identifying objects and their attributes in the image), language/reasoning ( e.g ., keeping track of and referencing previous conversation via memory), and grounding ( e.g ., grounding textual entities in the image).",
"In order to train and evaluate agents for Visual Dialog, Das et al. (2017a) collected a large dataset of human-human dialog on real images collected between pairs of workers on Amazon Mechanical Turk (AMT).",
"While such large-scale realistic datasets enable new lines of research, it is difficult to study the different challenges (vision, language, reasoning, grounding) in isolation or to break down the performance of systems over different challenges to identify bottlenecks, because that would require prohibitively-expensive complete annotation of the state' of all images and dialogs (all entities, coreferences, etc .).",
"In this work, we draw inspiration from Johnson et al. (2017), and develop a large diagnostic datasetCLEVR-Dialogfor studying and benchmarking multi-round reasoning in visually-grounded dialog.",
"Each CLEVR image is synthetically rendered by a particular scene graph (Johnson et al., 2017) and thus, is by construction exhaustively annotated.",
"We construct a dialog grammar that is grounded in these scene graphs.",
"Specifically, similar to Das et al. (2017b), we view dialog generation as communication between an Answerer (A-er) who can see' the image and has the complete scene graph (say S a ), and a Questioner (Q-er), who does not see' the image and is trying to reconstruct the scene graph over rounds of dialog (say S tq ).",
"As illustrated in Fig. 1, the dialog begins by A-er providing a grounded caption for the image, which conveys some but not all information about S a .",
"The Q-er builds a partial scene graph S 0 q based on the caption, and follows up by asking questions Figure 1: CLEVR-Dialog: we view dialog generation as communication between an Answerer (A-er) who can see' the image I and has the complete scene graph S a (far right), and a Questioner (Q-er), who does not see' the image.",
"grounded in S 0 q , which the A-er answers, and the dialog progresses.",
"Our dialog grammar defines rules and templates for constructing this grounded dialog.",
"Note that A-er with access to S a (perfect vision) exists only during dialog generation to obtain ground truth answers.",
"While studying visual dialog on CLEVR-Dialog, models are forced to answer questions with just the image and dialog history as additional inputs.",
"In total, CLEVR-Dialog contains 5 instances of 10-round dialogs for each of 70 k (train) and 15 k (val) CLEVR images, totaling to 3 .",
"5 M (train) and 0 .",
"75 M (val) question-answer pairs.",
"We benchmark several visual dialog models on CLEVR-Dialog as strong baselines for future work.",
"The combination of CLEVR images (with full scene graph annotations) and our dialog grammar results in a dataset where all aspects of the visual dialog are fully annotated.",
"We use this to study one particularly difficult challenge in multi-dialog visual reasoning of visual coreference resolution .",
"A coreference arises when two or more phrases ( coreferring phrases ) in the conversation refer to the same entity ( referent ) in the image.",
"For instance, in the question What about that cylinder?' (Q3) from Fig. 1, the referent for the phrase that cylin-der' can be inferred only after resolving the phrase correctly based on the dialog history, as there are multiple cylinders in the image.",
"We use CLEVR-Dialog to diagnose performance of different methods as a function of the history dependency ( e.g ., coreference distancethe number of rounds between successive mentions of the same object) and find that the performance of a state-of-art model (CorefNMN) is at least 30 points inferior for questions involving coreference resolution compared to those which do not (Fig. 5), highlighting the challenging nature of our dataset.",
"This is the first analysis of its kind for visual dialog that was simply not possible without this dataset.",
"We hope the findings from CLEVR-Dialog will help inform the development of future models for visual dialog.",
"Coreference Resolution is a well studied problem in the NLP community (Ng, 2010; Wiseman et al., 2016; Lee et al., 2017; Clark and Manning, 2016a,b).",
"Our work focuses on visual coreference resolution the referent is now a visual entity to be grounded in visual data.",
"Several works have tackled visual coreference resolution in videos (Ra-manathan et al., 2014; Rohrbach et al., 2017) and 3D data (Kong et al., 2014), and have introduced real image datasets for the same (Hodosh et al., 2014).",
"Visual Dialog and Synthetic Datasets.",
"We contrast CLEVR-Dialog against four existing datasets: (1) CLEVR (Johnson et al., 2017) is a diagnostic dataset for visual question answering (VQA) (An-tol et al., 2015) on rendered images that contain objects like cylinders, cubes, etc",
"., against a plain background (Fig. 1).",
"While CLEVR-Dialog uses the same set of images, the key difference is that of focus and emphasis the objective of CLEVR-Figure 2: Example dialogs from MNIST Dialog, CLEVR-Dialog, and VisDial, with coreference chains manually marked for VisDial and automatically extracted for MNIST Dialog and CLEVR-Dialog.",
"VQA questions is to stress-test spatial reasoning in independent single-shot question answering; the objective of CLEVR-Dialog is to stress-test temporal or multi-round reasoning over the dialog history.",
"(2) CLEVR-Ref+ (Liu et al., 2019) is a diagnostic dataset based on CLEVR images for visual reasoning in referring expressions.",
"CLEVR-Dialog goes beyond CLEVR-Ref+, which focuses on grounding objects given a natural language expression, and deals with additional visual and linguistic challenges that require multi-round reasoning in visual dialog.",
"(3) MNIST-Dialog (Seo et al., 2017) is a synthetic dialog dataset on a grid of 4 4 stylized MNIST digits (Fig. 2).",
"While MNIST-Dialog is similar in spirit to CLEVR-Dialog, key difference is complexity the distance between a coreferring phrase and its antecedent is always 1 in MNIST-Dialog; in contrast, CLEVR-Dialog has a distribution ranging from 1 to 10.",
"(4) VisDial (Das et al., 2017a) is a large scale visual dialog dataset collected by pairing two human annotators (a Q-er and an A-er) on AMT, built on COCO (Lin et al., 2014) images.",
"VisDial being a large open-ended real dataset encompasses all the challenges of visual dialog, making it difficult to study and benchmark progress on individual challenges in isolation.",
"Fig. 2 qualitatively compares MNIST-Dialog, CLEVR-Dialog, and VisDial, and shows coreference chains (manually annotated for this VisDial example, and automatically computed for MNIST-Dialog and CLEVR-Dialog).",
"We can see that the chains in MNIST-Dialog are the simplest (distance always 1).",
"While coreferences in VisDial can be on a similar level of difficulty than CLEVR-Name CLEVR MNIST VisDial Dialog (ours) Dialog # Images 85 k 50 k 123 k # Dialogs 425 k 150 k 123 k # Questions 4 .",
"In this section, we describe the existing annotation for CLEVR images, then detail the generation process for CLEVR-Dialog, and present the dataset statistics in comparison to existing datasets.",
"Setup.",
"Every CLEVR image I has a full scene graph annotation, S a .",
"This contains information about all the objects in the scene, including four major attributes { color, shape, material, size } , 2D image and 3D world positions, and relationships { front, back, right, left } between these objects.",
"We only use objects, attributes, and relationships.",
"Dialog Grammar.",
"An important characteristic of visual dialog that makes it suitable for practical applications is that the questioner does not see'",
"the image (because if it did, it would not need to ask questions).",
"To mimic this setup, we condition our question generation at round t only on the partial scene graph S tq that accumulates information received so far from the dialog history (and not on S a ).",
"Specifically, we use a set of caption { T Ci } and question { T Qi } templates, which serve as the structural units of our dialog grammar.",
"The role of the caption is to seed the dialog and initialize S 0 q .",
"Each of the question templates is accompanied by a set of constraints on S tq , which decide if a particular template can be selected at the current round.",
"For instance, a question What shape is the blue object?' can be only be asked (generated) if the dialog so far has already mentioned a blue object', i.e",
"., only if S tq contains a (unique) blue object'.",
"The nature and difficulty of the dataset is highly dependent on these templates, thus making their selection crucial.",
"To this end, we carefully design four categories of caption templates:",
"(a) Obj-unique mentions an object with unique set of attributes in the image,",
"(b) Obj-count specifies the presence of a group of objects with common attributes,",
"(c) Obj-extreme describes an object at one of the positional extremes of the image (right, left, fore, rear, center),",
"(d) Obj-relation talks about the relationship between two objects along with their attributes in a way that allows them to be uniquely identified in the complete scene graph S a .",
"For the questions, we experiment with three different categories:",
"(a) Count questions ask for a count of objects in the image satisfying specific conditions, e.g",
"., How many objects share the same color as this one?' ,",
"(b) Existence questions are yes/no binary questions that verify conditions in the image, e.g",
"., Are there any other cubes?' , and",
"(c) Seek questions query attributes of objects, e.g",
"., What color is that cylinder?' .",
"Note that CLEVR-Dialog represents not just a static dataset but also a recipe for constructing increasingly challenging grounded dialog by expanding this grammar.",
"Refer to the appendix for further details.",
"Dialog Generation.",
"At a high level, dialog generation now simply' involves selecting a sequence of templates such that the accompanying constraints are satisfied by S t q at all t .",
"As a tractable approximation to this exponentially-large constraint satisfaction problem, we use beam search that finds a valid solution and enforces additional conditions to make the dialog interesting (see Fig. 4).",
"At every round of the dialog (after 3 rounds), we ensure that each of the question template typescount, existence, and seekfalls within a range (10% 20% for count/existence each, and 30% 60% for seek).",
"In addition, we identify independent questions that do not need history to answer them, e.g",
"., How many objects are present in the image?' , and limit their number to under 10%.",
"We found this to be effective both in terms of speed and dialog diversity.",
"Fig. 4 illustrates the diverse set of candidate questions generated at each round for a given image.",
"Dataset Statistics.",
"We compare CLEVR-Dialog to MNIST-Dialog and VisDial in Tab.",
"1, but the key measure of coreference distance cannot be reported for VisDial as it is not annotated.",
"Overall, CLEVR-Dialog has 3 the questions and a striking 206 the unique number of questions than MNIST-Dialog , indicating higher linguistic diversity.",
"CLEVR-Dialog questions are longer with a mean length of 10 .",
"6 compared to 8 .",
"9 for MNIST-Dialog.",
"Crucially, supporting our motivation, the mean distance (in terms of rounds) between the coreferring expressions in CLEVR-Dialog is 3 .",
"2 compared to 1 .",
"0 in MNIST-Dialog.",
"Moreover, the distances (see Fig. 3b) in CLEVR-Dialog vary (min of 1, max of 10), while it is constant (at 1) in MNIST-Dialog, making it easy for models to pick Figure 4: Dialog generation in CLEVR-Dialog.",
"up on this bias.",
"The distribution of caption and question templates is given in Fig. 3a.",
"See appendix for further analysis.",
"Baselines.",
"To benchmark performance, we evaluate several models on CLEVR-Dialog.",
"Random picks an answer at random.",
"Random-Q picks an answer at random among valid answers for a given question type ( e.g ., name of a color for color ques-tions).",
"Further, we adapt the discriminative visual dialog models from Das et al. (2017a):",
"(a) L ate F usion ( LF ) that models separately encode each of question (Q), history (H), and image (I); and then fuse them by concatenation.",
"(b) H ierarchical R ecurrent E ncoder ( HRE ) that models dialog via both dialog-level and sentence-level recurrent neural networks.",
"(c) M emory N etwork ( MN ) that stores history as memory units and retrieves them based on the current question.",
"We also consider neural modular architectures:",
"(a) CorefNMN (Kot-tur et al., 2018) that explicitly models coreferences in visual dialog by identifying the reference in the question (textual grounding) and then localizing the referent in the image (visual grounding),",
"(b) NMN (Hu et al., 2017), which is a history-agnostic ablation of CorefNMN.",
"Results.",
"We use multi-class classification accuracy for evaluation since CLEVR-Dialog has one-word answers.",
"Tab.",
"2 shows the performance of different models.",
"The key observations are:",
"(a) Neural models outperform random baselines by a large margin.",
"The best performing model, CorefNMN, outperforms Random-Q by 35%.",
"(b) History-agnostic models (LF-Q, LF-QI, NMN) also suffer in performance, highlighting the importance of history.",
"(c) Finally, we break down the performance of top-3 models on questions which depend on entire history ( All ), require coreference resolution ( Coref ), and are history-independent ( None ), in Fig. 5.",
"We find that CorefNMN is 30% worse on Coref than None questions, signifying the complexity of CLEVR-Dialog as the former are qualitatively harder to answer than the latter.",
"(d) More interestingly, HRE-QIH, though inferior to CorefNMN on Coref , outperforms the latter on All questions ( How many other objects?' ) by around 20%.",
"A possible explanation is that the former, owing to its dialog-level RNN, captures global summaries more efficiently than the latter.",
"This is the first analysis of its kind for visual dialog that was simply not possible without this dataset.",
"Appendix provides a further analysis of model performances.",
"Conclusion.",
"We proposed a large, synthetic dataset called CLEVR-Dialog, to study multi-round reasoning in visual dialog, and in particular the challenge of visual coreference resolution.",
"We benchmarked several qualitatively different models from prior work on this dataset, which act as baselines for future work.",
"Our dataset opens the door to evaluate how well models do on visual coreference resolution, without the need to collect expensive annotations on real datasets."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"result",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"result",
"abstain",
"result",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"method",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"other",
"method",
"other",
"other",
"abstain",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"other",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain"
] |
[
"Variational autoencoders (VAEs) are widely used for latent variable modeling of text.",
"We focus on variations that learn expressive prior distributions over the latent variable.",
"We find that existing training strategies are not effective for learning rich priors, so we add the importance-sampled log marginal likelihood as a second term to the standard VAE objective to help when learning the prior.",
"Doing so improves results for all priors evaluated, including a novel choice for sentence VAEs based on normalizing flows (NF).",
"Priors parameterized with NF are no longer constrained to a specific distribution family, allowing a more flexible way to encode the data distribution.",
"Our model, which we call FlowPrior, shows a substantial improvement in language modeling tasks compared to strong baselines.",
"We demonstrate that FlowPrior learns an expressive prior with analysis and several forms of evaluation involving generation.",
"Variational autoencoders (VAEs; Kingma and Welling, 2014) have been widely applied to many natural language processing tasks (Bowman et al., 2016; Zhang et al., 2016; Shen et al., 2017; Kim et al., 2018; Fang et al., 2019; Chen et al., 2019).",
"VAEs provide statistical transparency in describing observations in a latent space and flexibility when used in applications that require directly manipulating the learned representation (Hu et al., 2017).",
"Recent work (Li et al., 2020) has combined VAEs with BERT/GPT in representation learning and guided generation.",
"However, the representation capacity of VAEs is still limited for modeling sentences due to two main reasons.",
"One is known as the posterior collapse problem, in which the posterior collapses to the prior and the generator learns to ignore the latent variable (Bowman et al., 2016).",
"Many methods have been developed to address it: annealing (Fu et al., 2019), weakening the capacity of the generator (Semeniuta et al., 2017; Yang et al., 2017), manipulating training objectives (Burda et al., 2016; Higgins et al., 2017; Zhao et al., 2017), including the use of free bits (FB) (Kingma et al., 2016; Li et al., 2019), and changing training (He et al., 2019).",
"The other reason is the restrictive assumption of the parametric forms for the prior and approximate posterior.",
"While these forms are computationally efficient, they limit the expressivity of the model.",
"The main existing solutions (Kingma et al., 2016; Tomczak and Welling, 2018; Razavi et al., 2019) focus on enriching the variational posterior, while other work focuses on learning an expressive prior (Tomczak and Welling, 2018; Serban et al., 2017; Chen et al., 2017).",
"In this paper, we follow the latter line of research and draw upon methods in building and learning expressive priors.",
"We first show empirically that the original VAE objective, the evidence lower bound (ELBO), is not effective when learning priors.",
"The issue is not solely due to posterior collapse since it is not resolved by using modifications based on free bits.",
"To address this issue, we propose using a combined objective, adding to the ELBO a second objective (denoted MIS ) which is a different lower bound on the log marginal likelihood obtained using importance sampling (Burda et al., 2016).",
"Using the combination of the ELBO and MIS , we compare multiple choices for the prior, including a mixture of Gaussians, a prior based on a variational mixture of posteriors (VampPrior; Tomczak and Welling, 2018), and a prior based on normalizing flows (NF), specifically real NVP transformations (Dinh et al., 2016).",
"Using a real NVP prior entails creating an invertible mapping from a simple base distribution to the prior distribution of the latent variable in a VAE.",
"This choice allows a flexible prior distribution that is not constrained to a specific parametric family.",
"The hope is that it would be better at modeling the data distribution.",
"We perform an empirical evaluation of priors and objective functions for training VAE sentence models on four standard datasets.",
"We find the best performance overall when using the flow-based prior and the combined objective in the training objective.",
"We refer to this setting as FlowPrior .",
"The generation of prior samples with FlowPrior comports to the training distribution while maintaining a higher diversity than competing models in our quantitative and qualitative evaluation.",
"To summarize, this paper contributes: (1) a strategy for improved training of sentence VAEs based on combining multiple lower bounds on the log marginal likelihood; (2) the first results applying real NVP to model the prior in sentence VAEs; and (3) comprehensive evaluation and analysis with three expressive priors and training objective variations.",
"Variational autoencoders (VAEs; Kingma and Welling, 2014) are a popular framework for learning latent variable models with continuous latent variables.",
"Let x be the observed variable and z the latent variable.",
"The model factorizes the joint distribution over x and z into a prior p ( z ) and a generator p ( x | z ) .",
"Maximizing the log marginal likelihood log p ( x ) is intractable in general, so VAEs introduce an approximate posterior q ( z | x ) parameterized using a neural network (i.e., an in-ference network), and replace the log marginal likelihood with the evidence lower bound (ELBO): log p ( x ) E q ( z | x ) [log p ( x | z )] KL( q ( z | x ) || p ( z )) (1) Maximizing the right-hand side of the equation above can be viewed as a regularized autoencoder in which the first term is the negative reconstruction error and the second is the negative KL divergence between the approximate posterior q ( z | x ) and the latent variable prior p ( z ) .",
"It is common in practice to fix the prior p ( z ) to be a standard Gaussian distribution and only learn and (Bowman et al., 2016; Yang et al., 2017; Shen et al., 2017).",
"While constraining the prior to be a fixed standard Gaussian is common, it is not necessary for tractability.",
"Researchers have found benefit from using richer priors and posteriors (Rezende and Mohamed, 2015; Kingma et al., 2016; Chen et al., 2017; Ziegler and Rush, 2019; Ma et al., 2019).",
"In this paper, we consider investigating alternative priors while still using the standard Gaussian form for the approximate posterior.",
"We now describe the three kinds of priors we will compare in our experiments.",
"The first two are based on Gaussian mixtures (Sec. 3.1) and the third is based on normalizing flows (Sec. 3.2).",
"We take these three prior families into consideration because they represent the three main categories of work in learning priors: simple Gaussian mixtures (usually as baselines), defining the prior as a function of the approximate posterior (Tomczak and Welling, 2018; Chen et al., 2018), and flow-based priors (Chen et al., 2017; Ziegler and Rush, 2019; Ma et al., 2019; Lee et al., 2020).",
"Note that we do not make any changes to the approximate posterior distribution.",
"That is, the approximate posterior follows a Gaussian distribution with a diagonal covariance matrix as in standard VAEs.",
"Our first choice is a uniform mixture of K Gaussians (MoG):",
"where f ( z ; , ) is the density function of a d dimensional Gaussian with mean and covariance matrix .",
"The k and k are learnable parameter vectors with dimensionality d (which is 32 in our experiments).",
"This prior was used as a baseline by Tomczak and Welling (2018).",
"We refer to a VAE that uses this prior as MoG-VAE .",
"Tomczak and Welling (2018) extend MoG-VAE to a Variational Mixture of Posteriors prior (VampPrior).",
"This approach parameterizes the prior using a mixture of Gaussians with components given by a variational posterior conditioned on learnable pseudo-inputs: p ( z ) = 1 KK (cid:88) k =1 q ( z | u k ) (3) where K is the number of pseudo-inputs, each of which is denoted u k .",
"Pelsmaeker and Aziz (2020) applied this idea to text modeling and we follow their strategy for defining pseudo-inputs.",
"That is, each u k consists of a sequence of embeddings that have the same dimensionality as word embeddings.",
"For each component k , the lengths of pseudo-inputs can vary; they are sampled based on the statistics of the lengths in the training set.",
"We refer to a VAE with this prior as Vamp-VAE .",
"Our third choice for a prior distribution is to leverage normalizing flows (NF).",
"A normalizing flow is a sequence of invertible, deterministic transformations.",
"By repeatedly applying the rule for change of variables (see the Appendices for details), the base density is transformed into a more complex one.",
"Networks parameterized using NF can be trained through exact maximum log-likelihood computation.",
"Exact sampling is performed by drawing a sample from the base distribution and performing the chain of transformations.",
"This allows a flexible prior and is expected to have more expressive latent components compared to those based on Gaussian mixtures.",
"Computing the Jacobian of functions with high dimension and the determinants of large matrices (i.e., the two main computations in NF) are very expensive.",
"Our flow-based prior uses real-valued non-volume preserving (real NVP) transformations (Dinh et al., 2016) which are efficient in both training and sampling.",
"The transformations are based on scale and translation operations.",
"1 It is worth noting that these two operations are not used in computing the Jacobian determinant and inverse.",
"So one can design arbitrarily complex operations that allow a flexible transformation without incurring large computational cost.",
"More specifically, we apply real NVP as a prior by creating an invertible mapping between a base distribution p 0 ( z 0 ) (in our case, z 0 N (0 , I) ) and the prior distribution p ( z L ) in the VAE: z L = f L f L 1 ... f 1 ( z 0 ) (4) where z L is the sentence latent variable and f 1 , f 2 , ..., f L are all bijective functions.",
"Using the change-of-variables theorem, given a latent variable z L , we can compute the exact density under the prior with the image z 0 acquired by inverting the transformation: z 0 = f 1 1 ... f 1 L 1 f 1 L ( z L ) (5) log p ( z L ) = log p 0 ( z 0 ) L (cid:88) l =1 log | det( f l ( z l 1 ) z l 1 ) | (6) 1 More details about normalizing flows and real NVP are in the Appendices.",
"We refer to a VAE with a real NVP prior as real NVP-VAE.",
"We find our best setting to consist of a real NVP prior and the combined objective in Section 4.1 and we refer to this setting as FlowPrior .",
"ELBO.",
"Our preliminary experiments found that, when training with the standard ELBO, using more sophisticated priors does not improve perplexity compared to standard Gaussian priors (Table 3).",
"Though these priors could potentially be highly multimodal, the learned prior parameters yield approximately unimodal forms (Figure 1, left).",
"Several approaches have been proposed to mitigate or avoid collapse in the approximate posterior.",
"One method that we include in our experiments is a variation of KL divergence known as free bits (FB) KL (Li et al., 2019; Kingma et al., 2016).",
"Posterior collapse is mitigated, but the VAE models still do not benefit much from expressive priors (Tables 1-2).",
"Pelsmaeker and Aziz (2020) made similar observations with an improved FB objective.",
"We speculate that these undesirable results are due to the lack of learning signal for the prior parameters.",
"Marginal Likelihood via Importance Sampling.",
"In the ELBO, the prior distribution only appears in the KL term.",
"As a consequence, the prior parameters receive a limited amount of learning signal.",
"The posterior network, by contrast, receives gradient updates from both the reconstruction and KL terms.",
"When minimizing the KL term the potentially expressive prior density can collapse to a unimodal form, as this may facilitate minimizing the KL divergence between the approximate posterior and prior.",
"We consider optimizing another objective, a different lower bound on the log marginal likelihood obtained using importance sampling (Burda et al., 2016): log 1 NN (cid:88) i =1 p ( x | z ( i ) ) p ( z ( i ) ) q ( z ( i ) | x ) , s .",
"where x is an input in the training data and N is the number of samples in use.",
"This objective was proposed as the training objective in the importance-weighted autoencoder (IWAE; Burda et al., 2016), and was shown to be a tighter lower bound on the log marginal likelihood than the ELBO.",
"In this paper, we denote this objective by MIS .",
"In addition to providing a tighter lower bound, MIS also increases the flexibility of the approximate posterior, as shown by Cremer et al. (2017).",
"By increasing N , the approximate posterior has an implicit complex distribution that approaches the true posterior, which may also be beneficial in learning an expressive prior.",
"Combination of the Two.",
"However, MIS is not necessarily optimal by itself for training VAEs.",
"Rainforth et al. (2018) prove that using MIS with a large value of N is detrimental in learning the posterior, which is also shown in our empirical evaluation in Table 3. If we only have MIS , the approximate posterior q only appears in the denominator so learning seeks to make samples from the posterior q less likely under q , which could cause q to become a poor proposal distribution.",
"The ELBO, with its reconstruction loss, appears helpful in learning a better posterior.",
"Therefore, we optimize the sum of the ELBO and MIS , which was proposed by Rainforth et al. (2018).",
"Our combined training objective then contains three terms: MIS , reconstruction, and sample-based KL.",
"We draw N samples from q ( z | x ) , and compute the three terms using the same samples: L ( , , ; x ) = log 1 NN (cid:88) i =1 p ( x | z ( i ) ) p ( z ( i ) ) q ( z ( i ) | x ) + 1 NN (cid:88) i =1 log p ( x | z ( i ) ) KL , ( x, { z ( i ) } Ni =1 ) s .",
"t .",
"z ( i ) q ( z | x ) (7) When training with the ELBO alone, one typically uses a single sample from q ( z | x ) .",
"However, since we draw multiple samples anyway in order to compute MIS , we use those same samples for the reconstruction term, which can lead to more robust gradients of that term than the standard approach of using one sample.",
"The reason we use sample-based estimates for the KL divergence is because our choices for the prior preclude the possibility of a closed form for the KL.",
"We consider two different approaches when computing sample-based KLs: standard KL and a modified one inspired by free bits (Li et al., 2019; Pelsmaeker and Aziz, 2020; Kingma et al., 2016), which we refer to as FB KL .",
"in computing the KL divergence with N samples: KL , ( x, { z ( i ) } Ni =1 ) = 1 NN (cid:88) i =1 (log q ( z ( i ) | x ) log p ( z ( i ) )) (8) For the FB KL, we follow prior work (Kingma et al., 2016) that replaces the KL with a hinge loss term in each latent dimension:",
"FB KL , ( x, { z ( i ) } Ni =1 ) = d (cid:88) j =1 max( , KL j, ( x, { z ( i ) } Ni =1 ))",
"where KL j, denotes the KL computed only for dimension j of the latent variable, and is the target rate hyperparameter.",
"We describe our training procedure below for FlowPrior, which combines a real NVP prior with the objective in Eq.",
"7. For simplicity, our description only uses one input x .",
"In practice, we use mini-batches with a stochastic gradient based optimizer.",
"All the parameters ( , , ) are updated simultaneously during training.",
"1. Draw N samples z (1) L , z (2) L , ..., z ( N ) L from the inference network using the reparameterization trick.",
"2. Perform the inverse transformation to get the image of each point under the base distribution: z (1)0 , z (2)0 , ..., z ( N ) 0 .",
"3. Compute the exact log likelihood of the sample prior with change of variable theorem (Eq. 6).",
"4. Compute and backpropagate the loss (Eq. 7).",
"When using the other priors (standard Gaussian, MoG, and VampPrior), we do not need steps 2 and 3 above because those priors can be computed directly without the inverse transformation or change of variable theorem.",
"We consider four widely-used, publicly available English datasets: the Penn Treebank (PTB) (Mar-cus et al., 1993; Bowman et al., 2016), Yahoo (Yang et al., 2017; He et al., 2019), Yelp sentiment (Shen et al., 2017), and SNLI (Bowman et al., 2015).",
"Our baselines include standard VAE with linear KL annealing (Bowman et al., 2016); Cyc-VAE (Fu et al., 2019) in which the KL term is reweighted with a cyclical annealing schedule; Lag-VAE (He et al., 2019) which updates the encoder multiple times before each decoder update; VAE+FB (Kingma et al., 2016; Chen et al., 2017) which replaces the standard KL with FB (i.e., Eq. 9 with N = 1 ); and Pre-VAE+FB (Li et al., 2019) which initializes the VAE with a pretrained autoencoder and replaces standard KL with FB.",
"We evaluated these baselines using their open source implementations.",
"2 In addition, we include two prior-learning baselines: MoG-VAE (Eq. 2) and Vamp-VAE (Eq. 3).",
"We follow Pelsmaeker and Aziz (2020) and set 100 components/pseudo-inputs.",
"Unlike the earlier baselines, for which we used open source codebases, we implemented the MoG-VAE and Vamp-VAE models on top of our standard VAE implementation, which was also used for FlowPrior.",
"Across all the experiments for our implemented baselines (i.e., standard VAE, MoG-VAE, Vamp-VAE) and our proposed model FlowPrior, we follow prior work (Kim et al., 2018; He et al., 2019; Li et al., 2019) and use a single-layer LSTM encoder and decoder with a 32-dimensional latent variable.",
"We use a batch size of 32 and train using SGD.",
"3 5.4 Evaluation Metrics Our evaluation measures language modeling performance, the use of the latent variable, and the quality and diversity of generations from the prior and posterior.",
"The metrics are listed below: PPL: We estimate log marginal likelihood using importance sampling (Burda et al., 2016) and calculate perplexity on the test set.",
"4 KL: We report the KL term in the ELBO on the test set.",
"When training with FB KL, we still report standard KL.",
"For standard VAE, we compute KL with its closed-form expression.",
"Otherwise, we report the KL estimated with samples.",
"2 The links to their implementations are in the Appendix.",
"3 We use the open source implementations for other baselines.",
"All models are trained with the simple linear annealing schedule, with same hyperparameter search space.",
"We run each setting with 5 random seeds and report the medians.",
"4 We use 1000 samples which appears to be more than sufficient for estimation; Ziegler and Rush (2019) found that using more than 50 samples did not even show much difference.",
"MI: We follow Hoffman and Johnson (2016) and report estimated mutual information between the observation and its latent variable.",
"AU: A dimension z in the latent variable is considered active if Cov x ( E z (cid:118) q ( z | x ) [ z ]) > 10 2 .",
"AU is then the number of active latent dimensions (Burda et al., 2016).",
"F-PPL and R-PPL: These metrics measure the correspondence between generated sentences from the model and the training corpus.",
"We evaluate both F-PPL and R-PPL by estimating 5-gram language models using the KenLM toolkit (Heafield, 2011) with its default smoothing method.",
"For F-PPL, we estimate language models from the actual text and compute the perplexity of the generated samples.",
"For R-PPL, we estimate language models from the generated samples and compute the perplexity of the actual text.",
"5 Self-BLEU: The self-BLEU metric is one measure of the diversity of a set of samples (Zhu et al., 2018).",
"It is calculated by averaging the BLEU scores computed between all pairs of samples.",
"We first perform language modeling tasks to characterize models' efficacy at modeling texts in terms of modeling the distribution of language data and making use of the latent variable.",
"We refer to our model as FlowPrior , which uses the training objective in Eq.",
"7 which includes MIS and the standard KL (Eq. 8).",
"We use FlowPrior + FB to refer to our model with the FB KL (Eq. 9).",
"5 Our R-PPL is slightly different from that in Fang et al. (2019).",
"For R-PPL, we always concatenate the training set vocabulary (one word per line) to the set of samples from the models to ensure LMs have seen the entire vocabulary.",
"Comparison to baselines.",
"Table 1 shows results on the PTB dataset for several VAEs from prior work and our implemented models.",
"Since our contributions lie in learning the prior instead of changing the training procedure or manipulating the KL term, we set the baselines as standard VAE, MoG, and VampPrior for the rest of the paper.",
"We report the performance of FlowPrior and those baselines on Yahoo, Yelp, and SNLI in Table 2. From Tables 1 and 2, we observe that FlowPrior consistently outperforms the baselines in test set perplexity, sometimes by large margins.",
"This is not surprising since the MIS term in our training objective directly targets the perplexity metric because the expressions are identical (differing only in the number of samples used).",
"While FB typically improves models on PTB, and helps FlowPrior to reach a higher AU and KL on the other datasets, it does not lead to better test PPL and reconstruction.",
"We report additional results on measuring the impact of FB in the Appendix.",
"Another finding is that simply enriching the parametric family of the prior is not sufficient to improve our evaluation metrics.",
"Tables 1 and 2 show mixed results when moving from the VAE with its standard Gaussian prior to the MoGor Vamp-VAE.",
"Though these priors have the potential to be multimodal, they could still be unimodal after training.",
"For example, the MoG-VAE might learn a mixture in which all Gaussians have the same location and Prior PPL( ) KL AU( ) PTB Standard 101.8 / 101.4 / 98.4 0.0 / 0.0 / 3.2 0 / 0 / 2 MoG 101.9 / 98.2 / 96.7 0.0 / 0.0 / 0.0 0 / 0 / 0 Vamp 101.7 / 98.3 / 96.1 0.0 / 0.0 / 3.1 0 / 0 / 4 Real NVP 102.5 / 98.4 / 94.7 0.0 / 0.0 / 3.3 0 / 0 / 2 Yahoo Standard 65.6 / 65.8 / 63.9 0.0 / 0.0 / 2.7 0 / 0 / 1 MoG 65.6 / 64.6 / 62.7 0.0 / 0.0 / 0.5 0 / 0 / 1 Vamp 78.5 / 74.8 / 62.9 0.0 / 0.0 / 1.5 0 / 0 / 2 Real NVP 65.6 / 65.8 / 62.5 0.0 / 0.0 / 1.4 0 / 0 / 4 Yelp Standard 35.4 / 35.1 / 33.2 0.0 / 0.0 / 2.9 0 / 0 / 2 MoG 36.0 / 35.2 / 34.9 0.0 / 0.0 / 0.0 0 / 0 / 0 Vamp 38.0 / 35.0 / 33.7 0.0 / 0.0 / 4.1 0 / 0 / 1 Real NVP 35.6 / 35.1 / 31.8 0.0 / 0.0 / 4.2 0 / 0 / 2 SNLI Standard 27.4 / 26.0 / 25.3 0.0 / 0.0 / 1.2 0 / 0 / 3 MoG 27.2 / 28.1 / 24.3 0.0 / 0.4 / 4.2 0 / 1 / 5 Vamp 27.6 / 26.0 / 23.7 0.0 / 0.0 / 2.8 0 / 0 / 2 Real NVP 27.7 / 26.1 / 22.4 0.0 / 0.0 / 3.8 0 / 0 / 3 Table 3: Comparing training objectives with several choices for priors.",
"scale.",
"Also, the complexity of the prior learned by the Vamp-VAE is dependent upon the inference network, so if the inference network does not learn anything useful, the learned prior may not be useful either.",
"Impact of selection of objectives.",
"The learned prior baselines (MoG-VAE and Vamp-VAE) fail to learn to use the latent variable, as shown by the small numbers (nearly zero) for the AU and MI metrics in Tables 1-2.",
"Similar observations were made by Pelsmaeker and Aziz (2020).",
"We argue that only improving the prior might not be sufficient, as the ELBO objective is difficult to optimize and little information may be learnable for the prior from the ELBO alone.",
"To measure the utility of the MIS term, we introduce this term to standard-VAE, MoG-VAE, and Vamp-VAE and evaluate the improved models under the same language model metrics.",
"Table 3 compares models trained with MIS , the ELBO, and the combined training objective (Eq. 7).",
"The combined objective is beneficial to all metrics for all priors and datasets.",
"Our results are consistent with the observations of Rainforth et al. (2018) that tighter bounds are preferable for training the gener-Vamp-VAE + MIS Three people are sitting on a bench .",
"ative network, while looser bounds are preferable for training the inference network.",
"Still, FlowPrior (real NVP + MIS ) performs the best in PPL and MI compared to other models, showing the flexibility and the power of the real NVP architecture.",
"For the Standard setting in Table 3, the prior is fixed and not learned while in the other three settings the prior is learned.",
"The combination of ELBO and MIS is helpful across all settings.",
"6 6.2 Interpolations Between Prior Samples One appealing aspect of VAEs for sentence modeling is the potential for learning a smooth, interpretable space for sentences.",
"A qualitative way to explore the latent space is to interpolate between samples from the prior distribution.",
"We randomly sample two latent vectors from the prior and linearly interpolate between them with evenly divided intervals (Bowman et al., 2016).",
"7 We use greedy 6 For the MoG setting, we also performed experiments with setting the number of Gaussian components K = 1 and observed comparable or slightly worse test PPL under all 3 choices of training loss than Standard setting.",
"7 FlowPrior is slightly different.",
"Instead of directly sampling from the latent variable of VAE (in MoG-VAE and VampVAE), FlowPrior samples from the base distribution of real NVP, interpolates in the base distribution, and maps to the",
"decoding in generation.",
"8 Table 4 shows linear interpolation between prior samples in FlowPrior and Vamp-VAE + MIS (i.e., Vamp-VAE with the combined training objective).",
"We observe substantial improvement with FlowPrior, as it can generate sentences with smooth semantic evolution while maintaining plausible generations in terms of fluency.",
"This semantic evolution may reflect the complex structure in the learned prior distribution.",
"Interpolations with MoG-VAE + MIS and Vamp-VAE + MIS have more repetitions and do not transit smoothly from one to the other.",
"(Results with MoG-VAE are in the appendix.) 6.3 Visualization of Learned Priors We randomly select 4 dimensions from the learned priors per model and plot their densities in Fig. 1. In MoG-VAE, each dimension is a Gaussian mixture with 100 components.",
"When only using the ELBO for training (Fig.",
"1(a)), the four visualized components all have similar shapes.",
"After adding MIS (Fig.",
"1(b)), different dimensions have similar locations but different scales.",
"Vamp-VAE permits relatively complex components because the means and variances are acquired from the inference network applied to learned latent with Eq.",
"4. We also experiment with interpolating the two samples after mapping, namely interpolating in the VAE latent space, and find similar results.",
"8 We additionally tried various sampling methods for decoding.",
"This leads to more noise and becomes harder to interpret.",
"Generations can be found in the Appendices.",
"pseudo-inputs.",
"Fig.",
"1(c) shows that Vamp-VAE trained without MIS does not show much difference compared to MoG-VAE.",
"However, when training with MIS (Fig.",
"1(d)), the distributions in several dimensions appear to be multimodal.",
"The real NVP prior learns little information when training without MIS , as all dimensions are akin to standard normal distributions.",
"When training with MIS , different dimensions show distinct placement and shape.",
"The prior in FlowPrior is highly multimodal overall and smooth in each dimension.",
"Sampling from Prior.",
"To measure the expressiveness of the prior and the richness of the learned latent variable, we randomly sample 5000 times from the prior distribution and evaluate their greedy-decoded generations qualitatively and quantitatively.",
"Table 5 shows greedy generations from prior samples.",
"We observe substantial improvements in term of generation diversity when adding MIS in the training objective.",
"Note that these diverse samples are achieved with a purely deterministic decoding.",
"A diverse set of samples implies that (1) richer latent codes and a highly multimodal distribution is learned by the model; (2) and the generator is trained to attend to the latent codes.",
"Sample Mundanity and Coverage.",
"A strongly-performing generative model should be able to generate samples that comport to the training data distribution.",
"We use the forward and reverse PPL to estimate the similarity between the training data and samples.",
"We can consider F-PPL as a generation precision as it reflects the amount of information in the samples that is relevant to the actual text.",
"Analogously, we can consider R-PPL Yelp SNLIF-PPL R-PPL SB F-PPL R-PPL SB VAE 4 30248 96 4 51127 100 VAE+M IS 5 10818 30 4 19047 73 Vamp-VAE 4 32504 100 4 56050 100 Vamp-VAE+M IS 7 5280 10 5 8420 29 FlowPrior 209 1677 3 42 5725 13 Table 6: Forward PPL (F-PPL), Reverse PPL (R-PPL), and Self-BLEU (SB) of greedy-decoded prior samples.",
"as a generation recall' ' as it reflects how much the samples as a whole provide coverage of the actual text.",
"Moreover, both F-PPL and R-PPL reflect whether the decoder is able to attend to the latent variable in generation.",
"Table 6 shows the F-PPL and R-PPL with greedy generation from prior samples.",
"While Fang et al. (2019) treats a lower F-PPL as an indicator of better samples, we argue that it is not necessarily true.",
"A model could achieve a low F-PPL by simply generating identical (or nearly-identical) high-probability sequences, like those observed from the VAE, MoG-VAE, and Vamp-VAE in Table 5. This reflects how an overly-simplified or restrictive assumption in the prior can lead to less diversity in samples.",
"Indeed, we find that models with very low F-PPL values often have very high R-PPL values.",
"A lower R-PPL indicates the distribution of generated samples matches the distribution of the training data.",
"From Table 6 we observe that adding MIS is beneficial as it leads to a lower R-PPL.",
"FlowPrior has the best R-PPL, and shows the capability of capturing characteristics of the target distribution that are not captured by simpler priors.",
"Generation Diversity.",
"To identify which model has richer usage of latent variables, we use self-BLEU to measure the diversity of a set of samples.",
"We observe significant improvements in FlowPrior in Table 6, which implies a diverse latent representation and a better utilization of the latent variable.",
"When considering the parameterized family of VAE models, expressive latent components (i.e., posterior and prior) have been widely studied in computer vision (Dinh et al., 2015, 2016; Kingma and Dhariwal, 2018).",
"However, multimodal priors have been seldom applied to language, with some exceptions (Serban et al., 2017; He et al., 2018; Ziegler and Rush, 2019; Ma et al., 2019; Lee et al., 2020).",
"Chen et al. (2017) use autoregressive flow for the prior and posterior and experiment with images.",
"Ziegler and Rush (2019) propose several autoregressive NF architectures and characterize performance on character-level language modeling.",
"Ma et al. (2019) design priors using the Glow architecture to improve the performance of non-autoregressive neural machine translation.",
"Lee et al. (2020) empirically characterize the performance of NF and simple Gaussian priors in token-level latent variable models, and observe that flexible priors yield higher log-likelihoods but not better BLEU scores on machine translation tasks.",
"Our work differs from that of Ziegler and Rush (2019) and Chen et al. (2017) as we are using a non-autoregressive flow-based architecture for the prior, while they are using autoregressive NF.",
"Also, we focus on models with a single latent variable for an entire sentence, while similar prior work has focused on token-level latent variables (Ziegler and Rush, 2019; Ma et al., 2019; Lee et al., 2020).",
"Several others have employed NF for flexible modeling in NLP.",
"Setiawan et al. (2020) present a variational translation model that uses NF in the approximate posterior while keeping the prior as Gaussian.",
"Wang and Wang (2019) apply NF to a variational Wasserstein autoencoder in order to make the posterior more flexible.",
"Jin et al. (2019) use transformed distributions via NF to model the emission density, which improves parsing performance as compared to Gaussian baselines.",
"We proposed a method, FlowPrior, that uses normalizing flow to define the prior in a sentence VAE",
"and adds the importance-sampled marginal likelihood (MIS ) as a second term to the standard VAE objective.",
"Our empirical results show FlowPrior yields a substantial improvement in language modeling and generation tasks as compared to prior work.",
"Adding MIS improves performance for other models as well, especially in settings when the prior parameters are being learned.",
"We would like to thank Sam Wiseman, Qingming Tang, and Mingda Chen for helpful discussions, and the anonymous reviewers for their comments that improved this paper."
] |
[
"abstain",
"method",
"objective",
"abstain",
"abstain",
"result",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"abstain",
"abstain",
"objective",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"objective",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"method",
"method",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"result",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"other",
"other",
"other",
"other",
"objective",
"abstain",
"result",
"abstain",
"other"
] |
[
"As a crucial step in extractive document summarization, learning cross-sentence relations has been explored by a plethora of approaches.",
"An intuitive way is to put them in the graph-based neural network, which has a more complex structure for capturing inter-sentence relationships.",
"In this paper, we present a heterogeneous graph-based neural network for extractive summarization (HETERSUMGRAPH ), which contains semantic nodes of different granularity levels apart from sentences.",
"These additional nodes act as the intermediary between sentences and enrich the cross-sentence relations.",
"Besides, our graph structure is flexible in natural extension from a single-document setting to multi-document via introducing document nodes.",
"To our knowledge, we are the first one to introduce different types of nodes into graph-based neural networks for extractive document summarization and perform a comprehensive qualitative analysis to investigate their benefits.",
"The code will be released on Github 1 .",
"Extractive document summarization aims to extract relevant sentences from the original documents and reorganize them as the summary.",
"Recent years have seen a resounding success in the use of deep neural networks on this task (Cheng and Lapata, 2016; Narayan et al., 2018; Arumae and Liu, 2018; Zhong et al., 2019a; Liu and Lapata, 2019b).",
"These existing models mainly follow the encoder-decoder framework in which each sentence will be encoded by neural components with different forms.",
"the cross-sentence relations.",
"Most current models capture cross-sentence relations with recurrent neural networks (RNNs) (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018).",
"However, RNNs-based models are usually hard to capture sentence-level long-distance dependency, especially in the case of the long document or multi-documents.",
"One more intuitive way is to model the relations of sentences using the graph structure.",
"Nevertheless, it is challenging to find an effective graph structure for summarization.",
"Efforts have been made in various ways.",
"Early traditional work makes use of inter-sentence cosine similarity to build the connectivity graph like LexRank (Erkan and Radev, 2004) and TextRank (Mihalcea and Tarau, 2004).",
"Recently, some works account for discourse inter-sentential relationships when building summarization graphs, such as the Approximate Discourse Graph (ADG) with sentence personalization features (Yasunaga et al., 2017) and Rhetorical Structure Theory (RST) graph (Xu et al., 2019).",
"However, they usually rely on external tools and need to take account of the error propagation problem.",
"A more straightforward way is to create a sentence-level fully-connected graph.",
"To some extent, the Transformer encoder (Vaswani et al., 2017) used in recent work(Zhong et al., 2019a; Liu and Lapata, 2019b) can be classified into this type, which learns the pairwise interaction between sentences.",
"Despite their success, how to construct an effective graph structure for summarization remains an open question.",
"In this paper, we propose a heterogeneous graph network for extractive summarization.",
"Instead of solely building graphs on sentence-level nodes, we introduce more semantic units as additional nodes in the graph to enrich the relationships between sentences.",
"These additional nodes act as the intermediary that connects sentences.",
"Namely, each additional node can be viewed as a special relationship between sentences containing it.",
"During the massage passing over the heterogeneous graph, these additional nodes will be iteratively updated as well as sentence nodes.",
"Although more advanced features can be used (e.g., entities or topics), for simplicity, we use words as the semantic units in this paper.",
"Each sentence is connected to its contained words.",
"There are no direct edges for all the sentence pairs and word pairs.",
"The constructed heterogeneous word-sentence graph has the following advantages:",
"(a) Different sentences can interact with each other in consideration of the explicit overlapping word information.",
"(b) The word nodes can also aggregate information from sentences and get updated.",
"Unlike ours, existing models usually keep the words unchanged as the embedding layer.",
"(c) Different granularities of information can be fully used through multiple message passing processes.",
"(d) Our heterogeneous graph network is expandable for more types of nodes.",
"For example, we can introduce document nodes for multi-document summarization.",
"We highlight our contributions as follows: (1) To our knowledge, we are the first one to construct a heterogeneous graph network for extractive document summarization to model the relations between sentences, which contains not only sentence nodes but also other semantic units.",
"Although we just use word nodes in this paper, more superior semantic units (e.g. entities) can be incorporated.",
"(2) Our proposed framework is very flexible in extension that can be easily adapt from single-document to multi-document summarization tasks.",
"(3) Our model can outperform all existing competitors on three benchmark datasets without the pre-trained language models 2 .",
"Ablation studies and qualitative analysis show the effectiveness of our models.",
"Extractive Document Summarization With the development of neural networks, great progress has been made in extractive document summarization.",
"Most of them focus on the encoder-decoder framework and use recurrent neural networks (Cheng and Lapata, 2016; Nallapati et al., 2017; Zhou et al., 2018) or Transformer encoders 2 Since our proposed model is orthogonal to the methods that using pre-trained models, we believe our model can be further boosted by taking the pre-trained models to initialize the node representations, which we reserve for the future.",
"(Zhong et al., 2019b; Wang et al., 2019a) for the sentential encoding.",
"Recently, pre-trained language models are also applied in summarization for contextual word representations (Zhong et al., 2019a; Liu and Lapata, 2019b; Xu et al., 2019; Zhong et al., 2020).",
"Another intuitive structure for extractive summarization is the graph, which can better utilize the statistical or linguistic information between sentences.",
"Early works focus on document graphs constructed with the content similarity among sentences, like LexRank (Erkan and Radev, 2004) and TextRank (Mihalcea and Tarau, 2004).",
"Some recent works aim to incorporate a relational priori into the encoder by graph neural networks (GNNs) (Yasunaga et al., 2017; Xu et al., 2019).",
"Methodologically, these works only use one type of nodes, which formulate each document as a homogeneous graph.",
"Heterogeneous Graph for NLP Graph neural networks and their associated learning methods (i.e. message passing (Gilmer et al., 2017), self-attention (Velickovic et al., 2017)) are originally designed for the homogeneous graph where the whole graph shares the same type of nodes.",
"However, the graph in the real-world application usually comes with multiple types of nodes (Shi et al., 2016), namely the heterogeneous graph.",
"To model these structures, recent works have made preliminary exploration.",
"Tu et al. (2019) introduced a heterogeneous graph neural network to encode documents, entities and candidates together for multihop reading comprehension.",
"Linmei et al. (2019) focused on semi-supervised short text classification and constructed a topic-entity heterogeneous neural graph.",
"For summarization, Wei (2012) proposes a heterogeneous graph consisting of topic, word and sentence nodes and uses the markov chain model for the iterative update.",
"Wang et al. (2019b) modify TextRank for their graph with keywords and sentences and thus put forward HeteroRank.",
"Inspired by the success of the heterogeneous graph-based neural network on other NLP tasks, we introduce it to extractive text summarization to learn a better node representation.",
"Given a document D = { s 1 , , s n } with n sentences, we can formulate extractive summarization as a sequence labeling task as (Narayan et al., 2018;",
"Liu and Lapata, 2019b).",
"Our goal is to predict a sequence of labels y 1 , , y n ( y i { 0 , 1 } ) for sentences, where y i = 1 represents the i -th sentence should be included in the summaries.",
"The ground truth labels, which we call ORACLE , is extracted using the greedy approach introduced by Nallapati et al. (2016) with the automatic metrics ROUGE (Lin and Hovy, 2003).",
"Generally speaking, our heterogeneous summarization graph consists of two types of nodes: basic semantic nodes (e.g. words, concepts, etc.) as relay nodes and other units of discourse (e.g. phrases, sentences, documents, etc.) as supernodes.",
"Each supernode connects with basic nodes contained in it and takes the importance of the relation as their edge feature.",
"Thus, high-level discourse nodes can establish relationships between each other via basic nodes.",
"In this paper, we use words as the basic semantic nodes for simplicity.",
"HETERSUMGRAPH in Section 3.1 is a special case which only contains one type of supernodes (sentences) for classification, while HETERDOCSUMGRAPH in Section 3.5 use two (documents and sentences).",
"Based on our framework, other types of supernodes (such as paragraphs) can also be introduced and the only difference lies in the graph structure.",
"Given a graph G = { V, E } , where V stands for a node set and E represents edges between nodes, our undirected heterogeneous graph can be formally defined as V = V w V s and E = { e 11 , , e mn } .",
"Here, V w = { w 1 , , w m } denotes m unique words of the document and V s = { s 1 , , s n } corresponds to the n sentences in the document.",
"E is a real-value edge weight matrix and e ij (cid:54) = 0 ( i { 1 , , m } , j { 1 , , n } ) indicates the j -th sentence contains the i -th word.",
"Figure 1 presents the overview of our model, which mainly consists of three parts: graph initializers for nodes and edges, the heterogeneous graph layer and the sentence selector .",
"The initializers first create nodes and edges and encode them for the document graph.",
"Then the heterogeneous graph updates these node representations by iteratively passing messages between word and sentence nodes via Graph Attention Network (GAT) (Velickovic et al., 2017).",
"Finally, the representations of sentence nodes are extracted to predict labels for summaries.",
"Let X w R m d w and X s R n d s represent the input feature matrix of word and sentence nodes respectively, where d w is the dimension of the word embedding and d s is the dimension of each sentence representation vector.",
"Specifically, we first use Convolutional Neural Networks (CNN) (Le-Cun et al., 1998) with different kernel sizes to capture the local n-gram feature for each sentence l j and then use the bidirectional Long Short-Term Memory (BiLSTM) (Hochreiter and Schmidhuber, 1997) layer to get the sentence-level feature g j .",
"The concatenation of the CNN local feature and the BiLSTM global feature is used as the sentence node feature X s j = [ l j ; g j ] .",
"To further include information about the importance of relationships between word and sentence nodes, we infuse TF-IDF values in the edge weights.",
"The term frequency (TF) is the number of times w i occurs in s j and the inverse document frequency (IDF) is made as the inverse function of the out-degree of w i .",
"Given a constructed graph G with node features X w X s and edge features E , we use graph attention networks (Velickovic et al., 2017) to update the representations of our semantic nodes.",
"We refer to h i R d h , i { 1 , , ( m + n ) } as the hidden states of input nodes and the graph attention (GAT) layer is designed as follows: z ij = LeakyReLU ( W a [ W q h i ; W k h j ]) , (1) ij = exp( z ij ) (cid:80) l N i exp( z il ) , (2) u i = ( (cid:88) j N i ij W v h j ) , (3) where W a , W q , W k , W v are trainable weights and ij is the attention weight between h i and h j .",
"The multi-head attention can be denoted as: u i = (cid:107) Kk =1 (cid:88) j N i kij W k h i .",
"Besides, we also add a residual connection to avoid gradient vanishing after several iterations.",
"Therefore, the final output can be represented as: h (cid:48) i = u i + h i .",
"We further modify the GAT layer to infuse the scalar edge weights e ij , which are mapped to the multi-dimensional embedding space e ij R mn d e .",
"Thus, Equal 1 is modified as follows: z ij = LeakyReLU ( W a [ W q h i ; W k h j ; e ij ]) .",
"After each graph attention layer, we introduce a position-wise feed-forward (FFN) layer consisting of two linear transformations just as Transformer (Vaswani et al., 2017).",
"Iterative updating To pass messages between word and sentence nodes, we define the information propagation as Figure 2.",
"Specifically, after the initialization, we update sentence nodes with their neighbor word nodes via the above GAT and FFN layer: U 1 s w = GAT( H 0 s , H 0 w , H 0 w ) , (7) H 1 s = FFN (cid:0) U 1 s w + H 0 s (cid:1) , (8) where H 1 w = H 0 w = X w , H 0 s = X s and U 1 s w R m d h .",
"GAT ( H 0 s , H 0 w , H 0 w ) denotes that H 0 s is used as the attention query and H 0 w is used as the key and value.",
"After that, we obtain new representations for word nodes using the updated sentence nods and further update sentence nodes iteratively.",
"Each iteration contains a sentence-to-word and a word-to-sentence update process.",
"For the t -th iteration, the process can be represented as: U t +1 w s = GAT( H tw , H ts , H ts ) , (9) H t +1 w = FFN (cid:0) U t +1 w s + H tw (cid:1) , (10) U t +1 s w = GAT( H ts , H t +1 w , H t +1 w ) , (11) H t +1 s = FFN (cid:0) U t +1 s w + H ts (cid:1) .",
"(12)",
"As Figure 2 shows, word nodes can aggregate the document-level information from sentences.",
"For example, the high degree of a word node indicates the word occurs in many sentences and is likely to be the keyword of the document.",
"Regarding sentence nodes, the one with more important words tends to be selected as the summary.",
"Finally, we need to extract sentence nodes included in the summary from the heterogeneous graph.",
"Therefore, we do node classification for sentences and cross-entropy loss is used as the training objective for the whole system.",
"Trigram blocking Following Paulus et al. (2017) and Liu and Lapata (2019b), we use Trigram Blocking for decoding, which is simple but powerful version of Maximal Marginal Relevance (Carbonell and Goldstein, 1998).",
"Specifically, we rank sentences by their scores and discard those which have trigram overlappings with their predecessors.",
"For multi-document summarization, the document-level relation is crucial for better understanding the core topic and most important content of this cluster.",
"However, most existing neural models ignore this hierarchical structure and concatenate documents to a single flat sequence(Liu et al., 2018; Fabbri et al., 2019).",
"Others try to model this relation by attention-based full-connected graph or take advantage of similarity or discourse relations(Liu and Lapata, 2019a).",
"Our framework can establish the document-level relationship in the same way as the sentence-level by just adding supernodes for documents(as Figure 3), which means it can be easily adapted from single-document to multi-document summarization.",
"The heterogeneous graph is then extended to three types of nodes: V = V w V s V d and V d = { d 1 , , d l } and l is the number of source documents.",
"We name it as HETERDOCSUMGRAPH .",
"As we can see in Figure 3, word nodes become the bridges between sentences and documents.",
"Sentences containing the same words connect with each other regardless of their distance across documents, while documents establish relationships based on their similar contents.",
"Document nodes can be viewed as a special type of sentence nodes: a document node connects with contained word nodes and the TF-IDF value is used as the edge weight.",
"Besides, document nodes also share the same update process as sentence nodes.",
"The differences lie in the initialization, where the document node takes the mean-pooling of its sentence node features as its initial state.",
"During the sentence selection, the sentence nodes are concatenated with the corresponding document representations to obtain the final scores for multi-document summarization.",
"We evaluate our models both on singleand multi-document summarization tasks.",
"Below, we start our experiment with the description of the datasets.",
"CNN/DailyMail The CNN/DailyMail question answering dataset (Hermann et al., 2015; Nallapati et al., 2016) is the most widely used benchmark dataset for single-document summarization.",
"The standard dataset split contains 287,227/13,368/11,490 examples for training, validation, and test.",
"For the data prepossessing, we follow Liu and Lapata (2019b), which use the non-anonymized version as See et al. (2017), to get ground-truth labels.",
"NYT50 NYT50 is also a single-document summarization dataset, which was collected from New York Times Annotated Corpus (Sandhaus, 2008) and preprocessed by Durrett et al. (2016).",
"It contains 110,540 articles with summaries and is split into 100,834 and 9706 for training and test.",
"Following Durrett et al. (2016), we use the last 4,000 examples from the training set as validation and filter test examples to 3,452.",
"Multi-News The Multi-News dataset is a large-scale multi-document summarization introduced by Fabbri et al. (2019).",
"It contains 56,216 articles-summary pairs and each example consists of 2-10 source documents and a human-written summary.",
"Following their experimental settings, we split the dataset into 44,972/5,622/5,622 for training, validation and test examples and truncate input articles to 500 tokens.",
"For both single-document and multi-document summarization, we limit the vocabulary to 50,000 and initialize tokens with 300-dimensional GloVe embeddings (Pennington et al., 2014).",
"We filter stop words and punctuations when creating word nodes and truncate the input document to a maximum length of 50 sentences.",
"To get rid of the noisy common words, we further remove 10% of the vocabulary with low TF-IDF values over the whole dataset.",
"We initialize sentence nodes with d s = 128 and edge features e ij in GAT e with d e = 50 .",
"Each GAT layer is 8 heads and the hidden size is d h = 64 , while the inner hidden size of FFN layers is 512.",
"During training, we use a batch size of 32 and apply Adam optimizer (Kingma and Ba, 2014) with a learning rate 5e-4.",
"An early stop is performed when valid loss does not descent for three continuous epochs.",
"We select the number of iterations t = 1 based on the performance on the validation set.",
"3 For decoding, we select top-3 sentences for CNN/DailyMail and NYT50 datasets and top-9 for Multi-New according to the average length of their human-written summaries.",
"Ext-BiLSTM Extractive summarizer with BiLSTM encoder learns the cross-sentence relation by regarding a document as a sequence of sentences.",
"For simplification, we directly take out the initialization of sentence nodes for classification, which includes a CNN encoder for the word level and 2-layer BiLSTM for sentence level.",
"This model can also be viewed as an ablation study of our HETERSUMGRAPH on the updating of sentence nodes.",
"Ext-Transformer Extractive summarizers with Transformer encoder learn the pairwise interaction (Vaswani et al., 2017) between sentences in a purely data-driven way with a fully connected priori.",
"Following (Liu and Lapata, 2019b), we implement a Transformer-based extractor as a baseline, which contains the same encoder for words followed by 12 Transformer encoder layers for sentences.",
"Ext-Transformer can be regarded as the sentence-level fully connected graph.",
"HETERSUMGRAPH Our heterogeneous summarization graph model relations between sentences based on their common words, which can be denoted as sentence-word-sentence relationships.",
"HETERSUMGRAPH directly selects sentences for the summary by node classification, while HETERSUMGRAPH with trigram blocking further utilizes the n-gram blocking to reduce redundancy.",
"We evaluate our single-document model on CNN/DailyMail and NYT50 and report the uni-gram, bigram and longest common subsequence overlap with reference summaries by R-1, R-2 and R-L.",
"Due to the limited computational resource, we don't apply pre-trained contextualized encoder (i.e. BERT (Devlin et al., 2018)) to our models, which we will regard as our future work.",
"Therefore, here, we only compare with models without BERT for the sake of fairness.",
"Results on CNN/DailyMail Table 1 shows the results on CNN/DailyMail.",
"The first part is the LEAD -3 baseline and ORACLE upper bound, while the second part includes other summarization models.",
"We present our models (described in Section 4.3) in the third part.",
"Compared with Ext-BiLSTM, our heterogeneous graphs achieve more than 0.6/0.51/0.7 improvements on R-1, R-2 and R-L, which indicates the cross-sentence relationships learned by our sentence-word-sentence structure is more powerful than the sequential structure.",
"Besides, Our models also outperform Ext-Transformer based on fully connected relationships.",
"This demonstrates that our graph structures effectively prune unnecessary connections between sentences and thus improve the performance of sentence node classification.",
"Compared with the second block of Figure 1, we observe that HETERSUMGRAPH outperforms all previous non-BERT-based summarization systems and trigram blocking leads to a great improvement on all ROUGE metrics.",
"Among them, HER (Luo et al., 2019) is a comparable competitor to our HETERSUMGRAPH , which formulated the extractive summarization task as a contextual-bandit problem and solved it with reinforcement learning.",
"Since the reinforcement learning and our trigram blocking plays a similar role in reorganizing sentences into a summary (Zhong et al., 2019a), we additionally compare HER without policy gradient with HETERSUMGRAPH .",
"Our HETERSUMGRAPH achieve 0.61 improvements on R-1 over HER without policy for sentence scoring, and HETERSUMGRAPH with trigram blocking outperforms by 0.65 over HER for the reorganized summaries.",
"Results on NYT50 Results on NYT50 are summarized in Table 2.",
"Note that we use limited-length ROUGE recall as Durrett et al. (2016), where the selected sentences are truncated to the length of the human-written summaries and the recall scores are used instead of F1.",
"The first two lines are baselines given by Durrett et al. (2016) and the next two lines are our baselines for extractive summarization.",
"The second and third part report the performance of other non-BERT-based works and our models respectively.",
"Again, we observe that our cross-sentence relationship modeling performs better than BiLSTM and Transformer.",
"Our models also have strong advantages over other non-BERT-based approaches on NYT50.",
"Meanwhile, we find trigram block doesn't work as well as shown on CNN/DailyMail, and we attribute the reason to the special formation of summaries of CNN/DailyMail dataset.",
"Ablation on CNN/DailyMail In order to better understand the contribution of different modules to the performance, we conduct ablation study using our proposed HETERSUMGRAPH model on CNN/DailyMail dataset.",
"First, we remove the filtering mechanism for low TF-IDF words and the edge weights respectively.",
"We also remove residual connections between GAT layers.",
"As a compensation, we concatenate the initial sentence feature after updating messages from nearby word nodes in Equal 8: H 1 s = FFN (cid:0) [ U 1 s w ; H 0 s ] (cid:1) .",
"Furthermore, we make iteration number t = 0 , which deletes the word updating and use the sentence representation H 1 s for classification.",
"Finally, we remove the BiLSTM layer in the initialization of sentence nodes.",
"As Table 3 shows, the removal of low TF-IDF words leads to increases on R-1 and R-L but drops on R-2.",
"We suspect that filtering noisy words enable the model to better focus on useful word nodes, at the cost of losing some bigram information.",
"The residual connection plays an important role in the combination of the original representation and the updating message from another type of nodes, which cannot be replaced by the concatenation.",
"Besides, the introduction of edge features, word update and BiLSTM initialization for sentences also show their effectiveness.",
"We first take the concatenation of the First-k sentences from each source document as the baseline and use the codes and model outputs 5 released by Fabbri et al. (2019) for other models.",
"To explore the adaptability of our model to multi-document summarization, we concatenate multi-source documents to a single mega-document and apply HETERSUMGRAPH as the baseline.",
"For comparison, we extend HETERSUMGRAPH to multi-document settings HETERDOCSUMGRAPH 4 Nallapati et al. (2016) concatenate summary bullets, which are written for different parts of the article and have few overlaps with each other, as a multi-sentence summary.",
"However, when human write summaries for the whole article (such as NYT50 and Multi-News), they will use key phrases repeatedly.",
"This means roughly removing sentences by n-gram overlaps will lead to loss of important information.",
"5 https://github.com/Alex-Fabbri/ Multi-News Model R-1 R-2 R-LHSG 42.31 19.51 38.74 filter words 42.24 19.56 38.68 edge feature 42.14 19.41 38.60 residual connection 41.59 19.08 38.05 sentence update 41.59 19.03 38.04 word update 41.70 19.16 38.15 BiLSTM 41.70 19.09 38.13 Table 3: Ablation studies on CNN/DailyMail test set.",
"as described in Section 3.5.",
"Our results are presented in Table 4.",
"Specifically, we observe that both of our HETERSUMGRAPH and HETERDOCSUMGRAPH outperform previous methods while HETERDOCSUMGRAPH achieves better performance improvements.",
"This demonstrates the introduction of document nodes can better model the document-document relationships and is beneficial for multi-document summarization.",
"As mentioned above, trigram blocking does not work for the Multi-News dataset, since summaries are written as a whole instead of the concatenations of summary bullets for each source document.",
"We further design several experiments to probe into how our HETERSUMGRAPH and HETERDOC 0",
"Degree of word nodes In HETERSUMGRAPH , the degree of a word node indicates its occurrence across sentences and thus can measure the redundancy of the document to some extent.",
"Meanwhile, words with a high degree can aggregate information from multiple sentences, which means that they can benefit more from the iteration process.",
"Therefore, it is important to explore the influence of the node degree of words on the summarization performance.",
"We first calculate the average degree of word nodes for each example based on the constructed graph.",
"Then the test set of CNN/DailyMail is divided into 5 intervals based on it (x-axis in Figure 4).",
"We evaluate the performance of HETERSUMGRAPH and Ext-BiLSTM in various parts and the mean score of R-1, R-2, R-L is drawn as lines (left y-axis R ).",
"The ROUGE increases with the increasing of the average degree of word nodes in the document, which means that articles with a high redundancy are easier for neural models to summarize.",
"To make R between models more obvious, we draw it with histograms (right y-axis).",
"From Figure 4, we can observe that HETERSUMGRAPH performs much better for documents with a higher average word node degree.",
"This proves that the benefit brought by word nodes lies in the aggregation of information from sentences and the propagation of their global representations.",
"investigate how the number of source documents influ-ences the performance of our model.",
"To this end, 2 3 4 5 6 28 30 32 34 36 Number of source documents R First-3HSGHDSG Figure 5: Relationship between number of source documents (x-axis) and R (y-axis).",
"we divide the test set of Multi-News into different parts by the number of source documents and discard parts with less than 100 examples.",
"Then, we take First-3 as the baseline, which concatenates the top-3 sentences of each source document as the summary.",
"In Figure 5, we can observe that the lead baseline raises while both of our model performance degrade and finally they converge to the baseline.",
"This is because it is more challenging for models to extract limited-number sentences that can cover the main idea of all source documents with the increasing number of documents.",
"However, the First-3 baseline is forced to take sentences from each document which can ensure the coverage.",
"Besides, the increase of document number enlarges the performance gap between HETERSUMGRAPH and HETERDOCSUMGRAPH .",
"This indicates the benefit of document nodes will become more significant for more complex document-document relationships.",
"In this paper, we propose a heterogeneous graph-based neural network for extractive summarization.",
"The introduction of more fine-grained semantic units in the summarization graph helps our model to build more complex relationships between sentences .",
"It is also convenient to adapt our single-document graph to multi-document with document nodes.",
"Furthermore, our models have achieved the best results on CNN/DailyMail compared with non-BERT-based models, and we will take the pretrained language models into account for better encoding representations of nodes in the future.",
"This work was supported by the National Natural Science Foundation of China (No. U1936214 and 61672162), Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01) and ZJLab."
] |
[
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"objective",
"result",
"result",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"other",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"abstain",
"result",
"abstain",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"method",
"abstain",
"result",
"other"
] |
[
"Aspect-based sentiment classification is a popular task aimed at identifying the corresponding emotion of a specific aspect.",
"One sentence may contain various sentiments for different aspects.",
"Many sophisticated methods such as attention mechanism and Convolutional Neural Networks (CNN) have been widely employed for handling this challenge.",
"Recently, semantic dependency tree implemented by Graph Convolutional Networks (GCN) is introduced to describe the inner connection between aspects and the associated emotion words.",
"But the improvement is limited due to the noise and instability of dependency trees.",
"To this end, we propose a dependency graph enhanced dual-transformer network (named DGEDT) by jointly considering the flat representations learnt from Transformer and graph-based representations learnt from the corresponding dependency graph in an iterative interaction manner.",
"Specifically, a dual-transformer structure is devised in DGEDT to support mutual reinforcement between the flat representation learning and graph-based representation learning.",
"The idea is to allow the dependency graph to guide the representation learning of the transformer encoder and vice versa.",
"The results on five datasets demonstrate that the proposed DGEDT outperforms all state-of-the-art alternatives with a large margin.",
"Aspect-based or aspect-level sentiment classification is a popular task with the purpose of identifying the sentiment polarity of the given aspect (Yang et al., 2017; Zhang and Liu, 2017; Zeng et al., 2019).",
"The goal is to predict the sentiment polarity of a given pair (sentence, aspect).",
"Aspects in our study are mostly noun phrases appearing in the Corresponding author.",
"input sentence.",
"As shown in Figure 1, where the comment is about the laptop review, the sentiment polarities of two aspects battery life and memory are positive and negative, respectively.",
"Giving a specific aspect is crucial for sentiment classification owing to the situation that one sentence sometimes contains several aspects, and these aspects may have different sentiment polarities.",
"Modern neural methods such as Recurrent Neural Networks (RNN), Convolutional Neural Networks (CNN) (Dong et al., 2014; Vo and Zhang, 2015) have already been widely applied to aspect-based sentiment classification.",
"Inspired by the work (Tang et al., 2016a) which demonstrates the importance of modeling the semantic connection between contextual words and aspects, RNN augmented by attention mechanism (Bahdanau et al., 2015; Luong et al., 2015; Xu et al., 2015) is widely utilized in recent methods for exploring the potentially relevant words with respect to the given aspect (Yang et al., 2017; Zhang and Liu, 2017; Zeng et al., 2019; Wang et al., 2016).",
"CNN based attention methods (Xue and Li, 2018; Li et al., 2018) are also proposed to enhance the phrase-level representation and achieved encouraging results.",
"Although attention-based models have achieved promising performance on several tasks, the limitation is still obvious because attention module may highlight the irrelevant words owing to the syntactical absence.",
"For example, given the sentence it has a bad memory but a great battery life. and aspect battery life , attention module may still assign a large weight to word bad rather than great , which adversely leads to a wrong sentiment polarity prediction.",
"To take advantages of syntactical information among aspects and contextual words, Zhang et al. (2019) proposed a novel aspect-based GCN method which incorporates dependency tree into the attention models.",
"Actually, using GCN (Kipf and Aspect: memory Sentiment: Negative Aspect: battery life Sentiment: Positive Figure1:Atypicalutterancesampleofaspect-basedsentimentclassificationtaskwithaproperdependencytree,noticethatdifferentaspectsmayhavedifferentsentimentpolarities. Welling,2017)toencodetheinformationconveyedbyadependencytreehasalreadybeeninvestigatedinseveralfields, e.g., modelingdocument-wordre-",
"(2018) used GCN over dependency trees in document dating and relation classification, respectively.",
"Yao et al. (2019) introduced GCN to text classification task with the guidance of document-word and word-word relations.",
"Furthermore, Zhang et al. (2019) introduced aspect-based GCN to cope with aspect-level sentiment classification task using dependency graphs.",
"On the other hand, Chen and Qian (2019) introduced and adapted Capsule Networks along with transfer learning to improve the performance of aspect-level sentiment classification.",
"Gao et al. (2019) introduced BERT into a target-based method, and Sun et al. (2019) constructed BERT-based auxiliary sentences to further improve the performance.",
"Since Transformer (Vaswani et al., 2017) and GCN are two crucial sub-modules in DGEDT, here we briefly introduce these two networks and illustrate the fact that GCN can be considered as a specialized Transformer.",
"Assume that there are three input matrices Q R n d k , K R m d k , V R m d v , which represent the queries, keys and values respectively.",
"n and m are the length of two inputs.",
"Q (cid:48) = Attention ( Q, K, V ) = softmax ( QKT d k ) V, (1) where Q (cid:48) R n d v , d k and d v are the dimension size of keys and values, respectively.",
"Actually, Transformer adopts multi-head attention mechanism to further enhance the representative ability as follows: h i = Attention ( QW Qi , KW Ki , V W Vi ) , (2) Q (cid:48) = Concat ([ h 1 , ... ]) WO , (3) where i [1 , H ] , H is the head size, W Qi R d k d k /H , W Ki R d k d k /H , W Vi R d v d v /H and WO R d v d v , and h i is the i -th head embedding.",
"Then, two normalization layers are employed to extract higher-level features as follows: Q (cid:48) 1 = Norm ( Q (cid:48) + Q ) , (4) Q (cid:48) 2 = Norm ( Q (cid:48) 1 + F F N ( Q (cid:48) 1 )) , (5) where F F N ( x ) = Relu ( xW 1 + b 1 ) W 2 + b 2 is a two-layer multi-layer perceptron (MLP) with the activation function Relu , Norm is a normalization Dual-transformer Structure Max-Pooling Classify Attention Module Dependency Graph (Aspect-modified) Aspect Representation Aspect Span Aspect Span SUMSUM Aspect Representation BiLSTM/BERT Input Figure 2: An overall demonstration of our proposed DGEDT.",
"layer, Q (cid:48) 2 is the output vector of this transformer layer.",
"Equations (1)-(5) can be repeated for T times.",
"Note that if Q = K = V , this operation can be considered as self alignment.",
"As for GCN, the computation can be conducted as follows when the adjacent matrix of each word in the input is explicitly provided.",
"Q (cid:48) = Norm ( Q + Relu ( 1 | A adj | A adj QW )) , (6) where A adj R n n is the adjacent matrix formed from the dependency graph, n is the number of words, Q R n d k , W R d k d k .",
"1 | A adj | A adj is similar to softmax ( QKT d k ) which is denoted as a generated alignment matrix, except for the main difference that A adj is fixed and discrete.",
"It is obvious that Equation (6) can be decomposed into Equations (1)-(4), and it can be also repeated for T times.",
"In our perspective, GCN is a specialized Transformer with the head size set to one and the generated alignment matrix replaced by a fixed adjacent matrix.",
"The network architecture of our proposed DGEDT is shown in Figure 2.",
"For a given input text, we Input Embedding Self Attention Feed Forward Add&Norm Add&Norm BiGCN Add&Norm Flat (with graph) Graph (with flat) MutualBiaffine Add&Norm Add&Norm T Figure 3: A simplified demonstration of dualtransformer structure, which consists of two submodules, one is a standard transformer, another is a transformer-like structure implemented by BiGCN withthesupervisionofdependencygraph.",
"the representations of two directions to produce the final output in each iteration, while other similar methods conduct the merging only in the last iteration.",
"BiGCN represents Equations (9)-(11).",
"We use a simple method to merge the adjacent matrix of the words in the same aspect span as follows: A (cid:48) adj i = MIN ( (cid:126) 1 , SUM ([ A adj spani ])) , (13) where A adj can be replaced by A outadj and A inadj , and we can thus get A outadj (cid:48) and A inadj (cid:48) .",
"Each span records the start and end position of the given aspect.",
"span i denotes the i-th span in original text.",
"BiAffine Module: Assume that there are two inputs S 1 R n h and S 2 R n (cid:48) h , we introduce a mutual BiAffine transformation process to interchange their relevant features as follows: A 1 = softmax ( S 1 W 1 ST 2 ) , (14) A 2 = softmax ( S 2 W 2 ST 1 ) , (15) S (cid:48) 1 = A 1 S 2 , (16) S (cid:48) 2 = A 2 S 1 , (17) S (cid:48) 1 , S (cid:48) 2 = Biaffine ( S 1 , S 2 ) , (18) where W 1 , W 2 R h h .",
"Here, S (cid:48) 1 can be considered as a projection from S 2 to S 1 , and S (cid:48) 2 follows the same principle.",
"Biaffine represents Equations (14)-(17).",
"A 1 and A 2 are temporary alignment matrices projecting from S 2 to S 1 and S 1 to S 2 , respectively.",
"The Whole Procedure: We can then assemble all the sub-modules mentioned above to construct our proposed dual-transformer structure, and the detailed procedures are listed below: S Tr (cid:48) t = T ransfomer ( S Trt ) , (19) SG (cid:48) t = BiGCN ( S Gt , A outadj (cid:48) , A inadj (cid:48) ) , (20) S Tr (cid:48)(cid:48) t , SG (cid:48)(cid:48) t = Biaffine ( S Tr (cid:48) t , SG (cid:48) t ) , (21) S Trt +1 = Norm ( S Tr (cid:48) t + S Tr (cid:48)(cid:48) t ) , (22) S Gt +1 = Norm ( SG (cid:48) t + SG (cid:48)(cid:48) t ) , (23) where S Tr 0 = SG 0 = H , and H RN s h denotes the contextual hidden representations { s 1 , ... } from the aspect-based encoder.",
"T ransfomer represents the process denoted by Equations (1)-(5).",
"Equations (19)-(23) can be repeatedly calculated for T times and t [0 , T ] .",
"We choose S TrT (flat (with graph) in Figure 3) as the last representation, because SGT (graph (with flat) in Figure 3) heavily depends on the dependency graph.",
"Given M aspect representations can be obtained through the above mentioned procedure, we can derive the final aspect representation by Max-Pooling operation.",
"Here, we utilize an attention mechanism to identify relevant words with respect to the aspect.",
"However, these would be M aspect representations which are all highly relevant to the aggregated aspect representation.",
"To avoid that these aspect mentions from being assigned with too high weight, we utilize a mask mechanism to explicitly set the attention values of aspect mentions to zeros.",
"Let I be the index set of these M aspect mentions, we form Mask vector as follows: Mask i = (cid:40) inf, if i I ; 0 , if other.",
"h f = MaxP ooling ([ S TrTi | i I ]) , (25) a f = softmax ( h f W 3 S TrTT + Mask ) , (26) h (cid:48) f = Relu ([ h f , a f S TrT ] W (cid:48) + b (cid:48) ) , (27) p = softmax ( h (cid:48) f W p + b p ) , (28)",
"where W 3 , W (cid:48) , W p and b (cid:48) , b p are learnable weights and biases, respectively.",
"The proposed DGEDT is optimized by the standard gradient descent algorithm with the cross-entropy loss and L2-regularization:",
"where D denotes the training dataset, y p is the ground-truth label and p y p means the y p -th element of p .",
"represents all trainable parameters, and is the coefficient of the regularization term.",
"Our experiments are conducted on five datasets, including one (Twitter) which is originally built by Dong et al. (2014), and the other four datasets (Lap14, Rest 14, Rest 15, Rest16) are respectively from SemEval 2014 task 4 (Pontiki et al., 2014), SemEval 2015 task 12 (Pontiki et al., 2015) and SemEval 2016 task 5 (Hercig et al., 2016), consisting",
"We compare the proposed DGEDT with a line of baselines and state-of-the-art alternatives, including LSTM, MemNet (Tang et al., 2016b), AOA (Huang et al., 2018), IAN (Ma et al., 2017), TNet-LF (Li et al., 2018), CAPSNet (Chen and Qian, 2019), Transfer-CAPS (Chen and Qian, 2019), TG-BERT (Gao et al., 2019), AS-CNN (Zhang et al., 2019) and AS-GCN (Zhang et al., 2019).",
"We conduct the experiments with our proposed DGEDT with BiLSTM as the aspect-based encoder, and DGEDT +BERT with BERT as the aspect-based encoder.",
"Several simplified variants of DGEDT are also investigated: DGEDT(Transformer) denotes that we keep standard Transformer and remove the BiGCN part, DGEDT(BiGCN) denotes that we keep BiGCN and remove the Transformer part.",
"The layer number or iteration number ( i.e., T ) of all available models is set to three for both Transformer and GCN.",
"We use Spacy toolkit to generate dependency trees.",
"We use BERT-base English version (Devlin et al., 2019), which contains 12 hidden layers and 768 hidden units for each layer.",
"We use Adam (Kingma and Ba, 2014) as the optimizer for BERT and our model with the learning rate initialized by 0.00001 and 0.001 respectively, and decay rate of learning is set as 0.98.",
"Except for the influence of decay rate, the learning rate decreases dynamically according to the current step number.",
"Batch shuffling available at https://github.com/tomsonsgs/DGEDT-senti-master.",
"is applied to the training set.",
"The hidden size of our basic BiLSTM is 256 and the size of all embeddings is set as 100.",
"The vocab size of BERT is 30,522.",
"The batch size of all model is set as 32.",
"As for regularization, dropout function is applied to word embeddings and the dropout rate is set as 0.3.",
"Besides, the coefficient for the L2-norm regularization is set as 0.0001.",
"We train our model up to 50 epochs and conduct the same experiment for 10 times with random initialization.",
"Accuracy and Macro-Averaged F1 are adopted as the evaluation metrics.",
"We follow the experimental setup in (Zhang et al., 2019; Chen and Qian, 2019) and report the average maximum value for all metrics on testing set.",
"If the model is not equipped with BERT, then we use word vectors that were pre-trained from Glove (Pennington et al., 2014).",
"As shown in Table 2, our model DGEDT outperforms all other alternatives on all five dataset.",
"BERT makes further improvement on the performance especially in Twitter, Rest14 and Rest 15.",
"We can conclude that traditional Transformer DGEDT(Transformer) obtains better performance than DGEDT(BiGCN) in the most datasets.",
"DGEDT employs and combines two sub-modules (traditional Transformer and dependency graph enhanced GCN) and outperforms any single submodule.",
"Using dependency tree indeed contributes to the performance when acting as a supplement rather than a single decisive module.",
"Note that the performance of individual modules is already reported in Table 2.",
"As shown in Table 3, we investigate and report four typical ablation conditions.",
"Mask' denotes that we remove the aspect-based attention mask mechanism, and MultiAspect' denotes that we only use the aspect representation of the first aspect mention instead of MaxPooling them.",
"We can see that these two procedures provide slight improvement.",
"BiGCN(+GCN)' means that we remove the bidirectional connection and only use original GCN, the results show that bidirectional GCN outperforms original GCN owing to the adequate connection information.",
"BiAffine' indicates that we remove the BiAffine process and use all the outputs of dual-transformer structure, we can thus conclude that BiAffine process is critical for our model, and utilizing simple concatenation of the Model Twitter Lap14 Rest14 Rest15 Rest16 Acc F1 Acc F1 Acc F1 Acc F1 Acc F1 LSTM 69.6 67.7 69.3 63.1 78.1 67.5 77.4 55.2 86.8 63.9 MemNet 71.5 69.9 70.6 65.2 79.6 69.6 77.3 58.3 85.4 66.0 AOA 72.3 70.2 72.6 67.5 80.0 70.4 78.2 57.0 87.5 66.2 IAN 72.5 70.8 72.1 67.4 79.3 70.1 78.6 52.7 84.7 55.2 TNet 73.0 71.4 74.6 70.1 80.4 71.0 78.5 59.5 89.1 70.4 AS-CNN 71.1 69.5 72.6 66.7 81.7 73.1 78.5 58.9 87.4 64.6 CAPSNet 72.7 68.8 78.8 69.7 Transfer-CAPS 73.9 70.2 79.3 70.9 AS-GCN 72.2 70.4 75.6 71.1 80.8 72.0 79.9 61.9 89.0 67.5 DGEDT(Transformer) 74.1 72.7 76.0 71.4 82.8 73.9 81.0 64.9 90.0 72.6 DGEDT(BiGCN) 72.8 71.0 76.2 71.8 81.8 72.5 80.4 62.9 89.4 70.4 DGEDT 74.8 73.4 76.8 72.3 83.9 75.1 82.1 65.9 90.8 73.8 TG-BERT 76.7 74.3 78.9 74.4 85.1 78.4 DGEDT-BERT 77.9 75.4 79.8 75.6 86.3 80.0 84.0 71.0 91.9 79.0 Table 2: Overall performance of accuracy and F1 on five datasets, AS means aspect-based.",
"outputs of Transformer and BiGCN is worse than DGEDT(Transformer).",
"As shown in Figure 4, we find that three is the best iteration number for Lap14 and Rest14.",
"Dependency information will not be fully broadcasted when the iteration number is too small.",
"The model will suffer from over-fitting and redundant information passing, which results in the performance drop when iteration number is too large.",
"So, numerous experiments need to be conducted to figure out a proper iteration number.",
"As shown in Figure 5, DGEDT and DGEDT(BiGCN) output correct prediction Negative while DGEDT(Transformer) fails for the sentence The management was less than accommodating .",
"To figure out the essential cause, we demonstrate the attention of self alignment in Figure",
"5. We can see that for the aspect management , DGEDT(Transformer) mainly focuses on accommodating , which is a positive word at document level.",
"Thus, DGEDT(Transformer) obtains an incorrect prediction Positive .",
"In the dependency tree, less which is often regarded as a negative word has a more related connection with aspect management , so DGEDT(BiGCN) outputs right sentiment Negative .",
"With the assistance of supplementary dependency graph, DGEDT also obtains right prediction Negative owing to the high attention value between management and less .",
"As shown in Figure 6, DGEDT and DGEDT(Transformer) output correct prediction Positive while DGEDT(BiGCN) fails for the sentence This little place is wonderfully warm welcoming .",
"To figure out the essential cause, we demonstrate the attention of self alignment and dependency tree in Figure",
"6. We can see that for the aspect place , DGEDT(Transformer) mainly focuses on wonderfully , which is a positive word at document level.",
"Thus, DGEDT(Transformer) obtains a correct prediction Positive .",
"In the dependency tree, little which is often regarded as a negative word has a more related connection with aspect place , so DGEDT(BiGCN) outputs incorrect sentiment Negative .",
"With the disturbance of inappropriate dependency tree, DGEDT still Aspect: management Golden: Negative DGEDT(Transformer): Positive DGEDT(BiGCN): Negative DGEDT: Negative",
"obtains right prediction Positive owing to the high attention value between place and wonderfully .",
"We can see from two examples above that DGEDT is capable of achieving the proper bal-ance between dependency graph enhanced BiGCN and traditional Transformer according to different situations.",
"Recently neural structures with syntactical information such as semantic dependency tree and constituent tree are widely employed to enhance the word-level representation of traditional neural networks.",
"These structures are often modeled and described by TreeLSTMs or GCNs.",
"To introduce Transformer into our task and diminish the error induced by incorrect dependency trees, we propose a dual-transformer structure which considers the connections in dependency tree as a supplementary GCN module and a Transformer-like structure for self alignment in traditional Transformer.",
"The results on five datasets demonstrate that dependency tree indeed promotes the final performance when utilized as a sub-module for dual-transformer structure.",
"In future work, we can further improve our method in the following aspects.",
"First, the edge information of the dependency trees needs to be exploited in later work.",
"We plan to employ an edge-aware graph neural network considering the edge labels.",
"Second and last, domain-specific knowledge can be incorporated into our method as an external learning source.",
"We thank the reviewers for their valuable comments.",
"This work is supported through the grants from National Natural Science Foundation of China (NSFC-61772378), the National Key research and Development Program of China (No.2017YFC1200500) and the Major Projects of the National Social Science Foundation of China (No.11&ZD189)."
] |
[
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"method",
"abstain",
"method",
"abstain",
"method",
"method",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"result",
"result",
"other",
"abstain",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"objective",
"abstain",
"result",
"abstain",
"method",
"abstain",
"other",
"other"
] |
[
"Grapheme to phoneme (G2P) conversion is an integral part in various text and speech processing systems, such as: Text to Speech system, Speech Recognition system, etc.",
"The existing methodologies for G2P conversion in Bangla language are mostly rule-based.",
"However, data-driven approaches have proved their superiority over rule-based approaches for large-scale G2P conversion in other languages, such as: English, German, etc.",
"As the performance of data-driven approaches for G2P conversion depend largely on pronunciation lexicon on which the system is trained, in this paper, we investigate on developing an improved training lexicon by identifying and categorizing the critical cases in Bangla language and include those critical cases in training lexicon for developing a robust G2P conversion system in Bangla language.",
"Additionally, we have incorporated nasal vowels in our proposed phoneme list.",
"Our methodology outperforms other state-of-the-art approaches for G2P conversion in Bangla language.",
"Grapheme to phoneme (G2P) conversion provides a mapping between a word and its pronunciation.",
"Such mapping provides opportunity for a nonnative person to learn the correct pronunciation of words of a foreign language.",
"Moreover, in modern Text to Speech (TTS) and Automatic Speech Recognition (ASR) systems, G2P conversion is an integral task.",
"The task of G2P conversion is generally language specific due to language specific conventions, rules, pronunciation constraints, etc.",
"In this paper, we focus on Modern Standard Bangla.",
"An example of G2P conversion in Bangla language: phonetic transcription of (practice) is /o n u sh i l O n/.",
"(Please refer to Table 1 for our phoneme symbols.)",
"The simplest means of G2P conversion is to build up a lexicon or dictionary containing the mapping from words to their corresponding pronunciations.",
"However, it fails to provide pronunciations for unknown words and inclusion of newer words increases memory requirement.",
"In another approach (Mosaddeque et al., 2006), there are predefined rules for the conversion of a word to its pronunciation.",
"Though such rule-based approach can work for any word, the system becomes complex when it tries to formulate rules for incorporating all irregularities of pronunciation in a language.",
"Clearly, these approaches are not feasible for large-scale G2P conversion which is necessary in any modern TTS or ASR system.",
"Data-driven machine learning approaches have great potential in such large-scale G2P conversion (Rao et al., 2015).",
"In such an approach, a machine learning model predicts the phoneme conversion of a grapheme, being trained on a lexicon.",
"A predominant work following such approach in Bangla language is by Google (Gutkin et al., 2016), where they train their system using 37 K words and achieve word-level accuracy of 81 : 5 % .",
"However, a system trained on their lexicon will face several shortcomings, such as: (mud) and (to cry) are pronounced differently but will have same phoneme representation in their system as: /k a d a/.",
"Similarly, (fairy) and (cid:278) (to read) are pronounced differently but will have same phoneme representation in their system as: /p o r i/.",
"Moreover, G2P system trained on their lexicon performs poorly on our identified critical cases from the most frequent 100 K words (Table 3).",
"Being motivated to increase the accuracy of grapheme to phoneme conversion in Bangla language, which will also perform well for critical inputs, we have developed a customized and robust G2P system for Bangla language.",
"Our major contributions are as follows:",
"(i) We identify and categorize the critical cases for grapheme to phoneme (G2P) conversion in Bangla language by analyzing the most frequent 100 K words.",
"(ii) We enrich the training lexicon for developing a robust G2P conversion system in Bangla language that performs much better for critical cases compared to other state-of-the-art G2P systems.",
"(iii) We perform phonetic transcriptions considering nasal vowels as separate phonemes.",
"(iv) We perform extensive simulations on large-scale dataset and show that our methodology outperforms other state-of-the-art approaches for G2P conversion in Bangla language by providing word-level accuracy of 90 : 2 % .",
"The rest of the paper is organized as follows: we discuss the previous works in Section 2, our phoneme list in Section 3, identification of critical cases and categorization of errors in Section 4, development of our system in Section 5, experimental results in Section 6, and conclusion and future works in Section 7.",
"The research works for G2P in English are quite extensive.",
"Chen (2003) investigate machine learning based systems for G2P in English.",
"They experiment with joint maximum entropy n-gram model, conditional maximum entropy model, etc.",
"Yao and Zweig (2015) utilize bi-directional LSTM (Long Short Term Memory) recurrent neural network for G2P and achieve 5 : 45 % PER on CMU dictionary (air, 2015).",
"Thu et al. (2016) show comparisons among various machine learning algorithms for G2P in Burmese language.",
"Joint sequence n-gram models aim to discover joint vocabulary consisting of graphemes and phonemes through the alignment of graphemes and phonemes.",
"Bisani and Ney (2008) develop a joint-sequence model for G2P.",
"Novak (2012), Novak et al. (2013) are other prominent works working on this model.",
"Neural sequence to sequence models are popular for G2P conversion.",
"Some prominent works on such models are: Caruana (1997), Jiampojamarn et al. (2007), Sutskever et al. (2014), Yao and Zweig (2015), Jiampojamarn et al. (2007), Rao et al. (2015), Yao and Zweig (2015), Schnober et al. (2016), Tsvetkov et al. (2016), He et al. (2016), Wu et al. (2016), Johnson et al. (2016), Toshniwal and Livescu (2016), and Vaswani et al. (2017).",
"Again, another line of research deals with G2P conversion for more than one language.",
"Such works include: Mana et al. (2001), Kim and Snyder (2012), Deri and Knight (2016), and Milde et al. (2017).",
"Most of the works related to G2P conversion that are focused on Bangla language, follow rule-based approach.",
"Rule-based approach of Mosad-deque et al. (2006) provides accuracy of 97 : 01 % on a previously seen corpus containing 736 words, but the system's accuracy is 81 : 95 % on an previously unobserved corpus containing 8399 words.",
"This work was extended by Alam et al. (2011) describing 3880 rules with an accuracy of 89 : 48 % on another corpus.",
"Basu et al. (2009) discuss a rule-based approach considering several information: parts-of-speech, subsequent context, etc.",
"Their work describes only 21 rules and provides an accuracy of 91 : 48 % on a corpus of 9294 words.",
"Ghosh et al. (2010) provide a heuristic for G2P that takes into account parts-of-speech, orthographic, and contextual information.",
"Their work provides 70 % accuracy on a corpus containing of 755 words.",
"A prominent work for data driven G2P in Bangla language is by Google (Gutkin et al. (2016)).",
"They develop a lexicon and achieve word-level accuracy of 81 : 5 % .",
"Chowdhury et al. (2017) use conditional random field for G2P in Bangla.",
"They report 14 : 88 % phoneme error rate on Google lexicon.",
"Our Phoneme symbols are provided in Table 1.",
"This table is a good reference for the 47 phoneme symbols that we have followed in this paper and their corresponding International Phonetic Alphabet (IPA) symbols.",
"Throughout the paper, we use these 47 phoneme symbols, not the IPA symbols.",
"There is disagreement between linguists whether nasal vowels should be considered as separate phonemes (Barman, 2009).",
"We added nasal vowels in our phoneme list to differentiate between a word with its nasalized counterpart, such as the word (to cry) and (mud).",
"Here, /a/, /e/, /u/, /i/, /o/, /O/, /E/, /an/, /en/, /un/, /in/, /on/, /On/, /En/ are normal vowels, /ew/, /ow/, /uw/, /iw/ are weak vowels, and the rest are consonants.",
"We envision of developing a robust G2P system that will perform reasonably well on any word in Bangla language.",
"A G2P system that performs well on the most frequent words, should also do well on other words.",
"With this motivation, we focus on increasing accuracy on the most frequent words.",
"Especially, we are concerned about those words that are among the most frequent words but non-trivial or critical for phonetic transcription, i.e., current state-of-the-art methodologies perform poorly on these critical words.",
"We investigate on identifying and categorizing such nontrivial or critical cases so that future research works can give special focus on developing methods for improving phonetic transcriptions of these critical words.",
"To get a hold of contemporary usage of Bangla language, we do extensive crawling.",
"We crawled 42 websites of various Bangla newspapers, blogs, e-book libraries, wikipedia, etc. covering various domains such as: politics, economics, sports, drama, novel, story, education, entertainment, general knowledge, history, etc.",
"After data cleaning and data normalization, we had about 10 M sentences.",
"We counted how many times each of the unique words appeared in those sentences.",
"We then consider the most frequent 100 K words and aim to identify the critical cases for phonetic transcription among these most frequent words.",
"Transcription After changing the Google lexicon (of size 60 K (around)) according to our phoneme symbols (Ta-ble 1), we prepare 4 versions of Google's lexicon of size 12 K , 24 K , 40 K , and 60 K respectively for identifying the critical cases for phonetic transcription.",
"Algorithm 1 shows prefix comparing algorithm that we use for compressing a phonetic lexicon or dictionary of grapheme sequence to phoneme sequence.",
"The algorithm matches the prefix of consecutive words (grapheme sequence) of a sorted dictionary (sorted according to ascending order of grapheme sequence of a word) and keeps a word (with its corresponding phoneme sequence) only if it does not share its prefix with any other words.",
"We run the algorithm successively 3 times, i.e., we use the destination dictionary of one iteration as the source dictionary of next iteration.",
"Each iteration produces a minimized version of the basic lexicon (Google lexicon).",
"After 3 iterations, the dictionary does not get any more compressed.",
"We find the phonetic transcriptions of each of the 100 K most frequent words using models trained on each of the 4 versions of Google's lexicon (ba-sic + 3 minimized).",
"So, from 4 models (each model trained on a version of the basic Google lexicon), we get 4 sets of transcriptions for the most frequent 100 K words.",
"For most of the words (around 70 K words), we observe that the phonetic transcriptions are exactly same in each of the 4 set.",
"However, for the remaining 30 K words ( 29105 words to be ex-act), we observe that at least one set provides different transcription.",
"We take these 30 K words to be the critical cases.",
"Our intuition is that if two G2P systems: one trained on a smaller version of the basic lexicon, and another trained on a larger version of the basic lexicon provide the same transcription for a word, then the word is a trivial case for phonetic transcription.",
"We then manually verify the phonetic transcriptions of these 30 K words taking help from 3 linguists and following Chowdhury (2016), and consider these 30 K words as critical cases for phonetic transcription.",
"We categorize the critical cases into 7 categories and observe the distribution of the critical transcriptions into these 7 categories.",
"These 7 cate-Algorithm 1 Algorithm for Compressing a Dictionary or Lexicon 1: sd sorted sourceDictionary 2: dd sorted destinationDictionary 3: a:grs grapheme sequence of lexicon 4: entry a 5: add sd [0] to dd 6: i = 1 7: while i = length ( sd ) do 8: pw = sd [ i (cid:0) 1] 9: cw = sd [ i ] 10: if length ( pw:grs ) (cid:21) 3 & pw:grs is prefix of cw:grs then 11: continue 12: else 13: add cw to dd 14: i i + 1 gories capture most of the errors.",
"The categories are: Open Close Vowel Confusion: G2P system provides pronunciation as close vowel that should be pronounced as open vowel ideally, and vice-versa.",
"For example, correct phoneme of (cid:300) (frog) is /b E n g/, but if G2P system provides output /b e n g/, then it is an error under this category as in the place of open vowel (here, /E/), G2P system is giving close vowel (here, /e/).",
"Inherent Vowel Confusion: G2P system does not provide inherent vowel as output where there should be an inherent vowel ideally.",
"For example, correct phoneme of (morning) is /sh O k a l/, but if G2P system provides output /sh k a l/, then it is an error under this category as the output of G2P does not give inherent vowel (here, /O/).",
"Diphthong Confusion: G2P system does not provide falling diphthong in output where there should be a falling diphthong ideally.",
"Or, system does not provide rising diphthong in output where there should be a rising diphthong ideally.",
"For example, correct phoneme of (friend) is /sh o iw/, but if G2P system provides output /sh o i/, then it is an error under this category as the output of G2P does not capture the falling diphthong (here, /o iw/).",
"be /sh/, and vice-versa.",
"For example, correct phoneme sequence of (organization) is /sh O N g O Th o n/, but if G2P system provides output /s O N g O Th o n/, then it is an error under this category as the output of G2P gives /s/ in place of /sh/.",
"s or ch Confusion: G2P system provides /s/ in phonetic transcription, where there should be /ch/, and vice-versa.",
"For example, correct phoneme sequence of (umbrella) is /ch a t a/, but if G2P system provides output /s a t a/, then it is an error under this category as the output of G2P gives /s/ in place of /ch/.",
"Nasal Confusion: G2P system does not provide any nasal vowel where there should be a nasal vowel, and vice-versa.",
"For example, correct phoneme sequence of (moon) is /c an d/, but if G2P system provides output /c a d/, then it is an error under this category as the output of G2P gives /a/ in place of /an/.",
"Other Vowel Confusion: G2P system provides completely different vowel than the corresponding vowel that should ideally be in that position of the phoneme sequence.",
"Note that, in the other error categories, for each position in the phoneme sequence, the generated and ideal phonemes were somehow related.",
"But in this category, at a specific position of the phoneme sequence, the generated and ideal phonemes are completely different.",
"For example, correct phoneme sequence of (cid:300) (perseverance) is /o d dh o b O sh a ew/, but if G2P system provides output /o d dh a b O sh a ew/, then it is an error under this category as the output of G2P gives /a/ in place of /o/ (fourth phoneme).",
"Algorithm 2 compares a machine-generated lexicon with a reference lexicon (manually verified), where both the lexicons have same grapheme sequences, but the corresponding phoneme sequences may be different.",
"This algorithm counts how many errors of each category are there in the machine generated lexicon.",
"The algorithm takes each entry of the generated lexicon and increases the count of the corresponding error category (if an error is present there).",
"We train attention mechanism based Transformer model on each of the 4 lexicons and get Algorithm 2 Comparing a Generated Lexicon ( gl ) with Reference Lexicon ( rl ) 1: N total number of entries in each lexicon 2: A; B; C; D; E; F; G are Open Close Vowel, s or sh, s or ch, Nasal, Dipthong, Other Vowel, and Inherent confusions, respectively, all initially zero 3: H denotes other errors not captured by the 7 categories, initially zero 4: vl and wl are lists of vowels and weak vowels respectively 5: a:phs phoneme sequence of lexicon 6: entry a 7: i 0 8: while i = N do 9: g = gl [ i ] :phs 10: r = rl [ i ] :phs 11: M = min ( length ( g ) ; length ( r )) 12: j 0 13: while j = M do 14: x = g [ j ] 15: y = r [ j ] 16: if x = y then 17: continue 18: ( x; y ) sorted ( x; y ) 19: T otal _ error = T otal _ error + 1 20: if ocConfusion ( x; y ) then 21: A A + 1 22: else if ( x; y ) = ( s ; sh ) then 23: B B + 1 24: else if ( x; y ) = ( ch ; s ) then 25: C C + 1 26: else if x + n = y then 27: D D + 1 28: else if x in wl or y in wl then 29: E E + 1 30: else if x in vl and y in vl then 31: F F + 1 32: else if removeV owel ( g ) = removeV owel ( r ) then 33: G G + 1 34: else 35: H H + 1 36: j j + 1 37: i i + 1 4 G2P models. We find the phoneme representation of 30 K critical cases using each of the 4 G2P models. Using Algorithm 2, we count the errors of each category for each of the 4 models. We report Algorithm 3 Procedure: ocConfusion ( x; y ) (Checks if open close vowel confusion) 1: ocSet [( O ; o ) ; ( E ; e ) ; 2: ( On ; on ) ; ( En ; en )] 3: if ( x; y ) in ocSet then 4: return True 5: else 6: return False Algorithm 4 Procedure: removeVowel (phoneme_sequence) 1: return phoneme_sequence removing all vowels from it the results in Table 2 and Figure 1. Here, other error denotes the errors that are not captured by these 7 categories. We see from these results that most of the errors are under Open Close vowel, s or sh, Diphthong, and Inherent Vowel confusions. 5 Developing an Improved G2P System for Bangla Language We develop an improved lexicon and use two machine learning based models trained on our lexicon to develop an improved G2P system for Bangla language. 5.1 Developing an Improved Lexicon For developing an improved lexicon, we include with Google's lexicon the manually verified 30 K non-trivial or critical entries. Also, we include the 70 K entries in which all of the 4 models (trained on each of the 4 versions of Google lexicon) unanimously agreed. After removing the repeated entries with same grapheme sequence, our lexicon consists of around 90 K entries. In case of repeated entries having same grapheme sequence, we carefully keep only the entry that has been manually verified. 5.2 G2P Models We use Neural Sequence to Sequence models. In these models, a conditional distribution of a sequence (here, phoneme sequence) is learned conditioned on another sequence (here, grapheme se-quence). We train two following Sequence to Sequence models on our lexicon for G2P conversion: LSTM-RNN : This is a plain Sequence to Sequence model that incorporates an encoder and decoder mechanism. Recurrent Neural Network Figure 1: Categorization of errors in critical cases, here each of the 60 K; 40 K; 24 K; and 12 K denotes the model trained on that particular lexicon. (RNN) is usually utilized in encoder and decoder design. For addressing vanishing gradient problem in RNN, Long-Short Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997) is used. We follow Yao and Zweig (2015) for implementation. 
Transformer Model: the Transformer uses an attention mechanism, which improves on the plain sequence-to-sequence model by easing the flow of information from the source sequence to the target sequence. We follow Vaswani et al. (2017) for the implementation. We report the performance of both models in Section 6 and observe that the Transformer Model achieves higher token-level accuracy (lower Word Error Rate) than the LSTM-RNN.

6 Experimental Results. We run extensive simulations and use two measures to evaluate the systems. Word Error Rate (WER): WER = E / T, where E denotes the number of words whose generated phoneme sequence disagrees with the reference phoneme sequence, and T denotes the total number of words. Phoneme Error Rate (PER): PER = (I + S + D) / T, where I, S, and D denote the total numbers of insertion, substitution, and deletion operations needed to align each word's generated phoneme sequence with its reference phoneme sequence, and T denotes the total number of phonemes present in all the words.

Our best-performing model is the Transformer Model. We use a batch size of 4096, and our neural network has 3 hidden layers, each containing 256 nodes. We run all simulations on a computer with 8GB RAM, an Intel Core i7 CPU, and an Nvidia GeForce 1050 GPU; for each model, we run around 110K iterations, taking around 5 hours.

6.1 Performance on Critical Cases. We report the results of Google's lexicon on the critical cases in Table 3. We do not report our lexicon here, as the critical cases are already included in it.

Table 2: Error classification of the 30K critical cases; each of the four rightmost columns denotes the model trained on that particular lexicon.
Error Type                   | 60K  | 40K  | 24K  | 12K
Total error                  | 6415 | 7222 | 8087 | 10400
Open Close Confusion (%)     | 32.0 | 30.7 | 34.8 | 23.6
Inherent Vowel Confusion (%) | 28.4 | 36.7 | 32.0 | 40.1
s or sh confusion (%)        | 14.6 | 15.3 | 14.2 | 12.8
Diphthong confusion (%)      | 11.6 | 9.8  | 7.6  | 9.4
Other Vowel Confusion (%)    | 2.1  | 1.3  | 4.2  | 3.1
s or ch confusion (%)        | 0.8  | 0.7  | 0.2  | 0.5
Nasal Confusion (%)          | 0.2  | 0.1  | 0    | 0
Other Error (%)              | 10.4 | 5.4  | 7.1  | 10.6

Table 3: Performance on critical cases.
Lexicon | Model             | WER (%) | PER (%)
Google  | LSTM-RNN          | 25.7    | 3.26
Google  | Transformer Model | 23.6    | 2.71

6.2 Performance Comparison in General. To compare the models trained on our lexicon and on Google's lexicon, we randomly take 9000 entries from our manually verified 30K critical cases as a test set and use it to evaluate all the models. Although our lexicon contains these 9000 entries, we hold them out during the experiments so that the lexicons are evaluated fairly. For both lexicons, we keep 90% of the entries in the train set and the remaining 10% in the validation set. Table 4 shows the results: models trained on our lexicon outperform those trained on Google's lexicon by a significant margin, and the Transformer Model again performs better than the LSTM-RNN.

Figure 2 and Table 5 categorize the errors of the systems trained on the 3 types of lexicons (the romanized version of our lexicon is discussed in Section 6.3) using Algorithm 2; we report only Transformer Model results here, as it has performed better than the LSTM-RNN in our experiments.
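The WER and PER definitions above translate directly into code. Below is a minimal Python sketch, assuming phoneme sequences are lists of phoneme strings; the Levenshtein helper is our own illustration of the alignment step, not the paper's implementation.

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum insertions, substitutions, and
    deletions needed to turn sequence a into sequence b."""
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (a[i - 1] != b[j - 1]))   # substitution
            prev = cur
    return dp[-1]

def wer(generated, reference):
    # E / T: fraction of words whose phoneme sequence is wrong anywhere.
    errors = sum(g != r for g, r in zip(generated, reference))
    return errors / len(reference)

def per(generated, reference):
    # (I + S + D) / T: total edit operations over total reference phonemes.
    ops = sum(edit_distance(g, r) for g, r in zip(generated, reference))
    total = sum(len(r) for r in reference)
    return ops / total

ref = [["sh", "O", "k", "a", "l"], ["b", "E", "n", "g"]]
gen = [["sh", "k", "a", "l"], ["b", "E", "n", "g"]]
print(wer(gen, ref), per(gen, ref))  # 0.5 and ~0.111 (1 bad word of 2; 1 edit over 9 phonemes)
```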
We observe most of the errors are related to Open Close vowel, s or sh, Diphthong, and Inherent Vowel confusions this finding also conforms to Figure 1 and Table 2, which were error catego-Lexicon Model WER (%) PER (%) Google LSTM-RNN 17.1 2.32 Transformer Model 14.8 1.88 Our Lexicon LSTM-RNN 10.5 1.42 Transformer Model 9.8 1.33 Table 4: Performance Comparison In General Error Type Our Lexicon Romanized Lexicon Google Lexicon Total Error 1337 1406 2961 Inherent Vowel Confusion (%) 35.6 34.5 33.8 Open Close Confusion (%) 30.1 30.4 27.9 s or sh confusion (%) 13.3 13.0 12.1 Diphthong confusion (%) 10.7 9.8 13.1 Other Vowel Confusion (%) 1.9 4.3 2.7 s or ch confusion (%) 0.3 0.2 0.2 Nasal Confusion (%) 0.2 0.43 0.6 Other Error (%) 7.9 7.4 9.7 Table 5: Performance comparison on different error categories. Here each of the three rightmost columns denotes the model trained on that particular lexicon. rization of critical cases. 6.3 Effect of Romanization For doing experiments on the effect of romanization on G2P conversion, we romanized all grapheme sequences in our lexicon to prepare romanized counterpart of our lexicon. During romanization, each grapheme symbol in Bangla is replaced with a single English letter except that if a consonant grapheme is not followed by a vowel grapheme, O was added after the romanized symbol of that consonant as the roman symbol for ``'' , which is usually inherently pronounced in such cases. All the symbols used for romanization were completely disjoint to avoid any ambiguity in the lexicon. Figure 3 shows the WER and PER Figure 2: Performance Comparison on Different Error Categories Figure 3: Comparison in terms of WER and PER of systems trained on 3 types of lexicons. Here, we report the results of Transformer Model only as it has been better performing than LSTM-RNN in our experiments. Both versions of our lexicon perform better (lower WER and lower PER) than Google's lexicon. Also romanization does not significantly increase or decrease the performance. Figures 4 and 5 show respectively the Word Recognition Accuracy ( 1 (cid:0) WER) and Phoneme Recognition Accuracy ( 1 (cid:0) PER) with respect to number of iterations run during simulation. Both versions of our lexicon perform better (higher Word Recognition Accuracy and higher Phoneme Recognition Accuracy) than Google's lexicon. Figure 6 shows Negative Log Perplexity vs number of iterations. Both versions of our lexicon provide higher negative log perplexity than Google's lexicon. Figure 4: Word Recognition Accuracy vs Iteration 6.4 Effectiveness of Our Identified Critical Cases In this section, we want to establish that the improved performance of our lexicon comes not only from the increase in number of training samples, but also due to the fact that the critical cases identified by our novel methodology have been added as training samples. For this, we prepare a new training lexicon by combining a portion of the Google lexicon with a portion of our identified critical cases. As we have kept 9 K entries from the critical cases as our test set, we take the remaining 21 K critical cases and combine them with the randomly taken 39 K entries from Google lexicon to prepare a new lexicon of size 60 K . While taking entries from Google lexicon, we ensure that we do not take any repeated entry that has already been in the critical cases and added to the new lexicon. 
Figure 5: Phoneme Recognition Accuracy vs. iteration.

Figure 6: Negative log perplexity vs. iteration.

Table 6: Effectiveness of critical cases. Both lexicons are of size 60K; the New Lexicon consists of the 21K critical cases and 39K entries from the Google lexicon.
Lexicon     | Model             | WER (%) | PER (%)
Google      | LSTM-RNN          | 17.1    | 2.32
Google      | Transformer Model | 14.8    | 1.88
New Lexicon | LSTM-RNN          | 12.6    | 1.54
New Lexicon | Transformer Model | 11.2    | 1.49

We then compare this new lexicon against the Google lexicon, both of the same size (60K), on our test set; the results are in Table 6. They clearly show that even with lexicons of the same size, our identified critical cases significantly improve performance, as evidenced by the lower WER and PER relative to the Google lexicon.

7 Conclusion and Future Works. In this paper, we have identified the critical cases, categorized the errors, and improved on the current state-of-the-art accuracy in G2P conversion for the Bangla language by developing an improved lexicon. In future work, we will classify errors using unsupervised clustering approaches. Tables 2 and 5 both show that most errors fall into the Open Close Vowel, s or sh, Diphthong, and Inherent Vowel confusion categories; we will therefore focus on each category, devise methodology to mitigate its errors, and thereby further increase the accuracy of Bangla G2P.

Acknowledgements. This research was conducted at the Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology (BUET) and was supported by Samsung Research.

References
cmudict-0.7b. 2015. https://svn.code.sf.net/p/cmusphinx/code/branches/cmudict/cmudict-0.7b.
Firoj Alam, SM Murtoza Habib, and Mumit Khan. 2011. Bangla text to speech using Festival. In Conference on Human Language Technology for Development, pages 154–161.
Binoy Barman. 2009. A contrastive analysis of English and Bangla phonemics. Dhaka University Journal of Linguistics, 2(4):19–42.
Joyanta Basu, Tulika Basu, Mridusmita Mitra, and Shyamal Kr Das Mandal. 2009. Grapheme to phoneme (G2P) conversion for Bangla. In Speech Database and Assessments, 2009 Oriental COCOSDA International Conference on, pages 66–71. IEEE.
Maximilian Bisani and Hermann Ney. 2008. Joint-sequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434–451.
Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75.
Stanley F Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. In Eighth European Conference on Speech Communication and Technology.
Jamil Chowdhury, editor. 2016. Adhunik Bangla Ovidhan. Bangla Academy, Dhaka-1000.
Shammur Absar Chowdhury, Firoj Alam, Naira Khan, and Sheak RH Noori. 2017. Bangla grapheme to phoneme conversion using conditional random fields. In Computer and Information Technology (ICCIT), 2017 20th International Conference of, pages 1–6. IEEE.
Aliya Deri and Kevin Knight. 2016. Grapheme-to-phoneme models for (almost) any language. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 399–408.
Krishnendu Ghosh, Ramu V Reddy, NP Narendra, S Maity, SG Koolagudi, and KS Rao. 2010. Grapheme to phoneme conversion in Bengali for Festival based TTS framework. In 8th International Conference on Natural Language Processing (ICON). Macmillan Publishers.
Alexander Gutkin, Linne Ha, Martin Jansche, Knot Pipatsrisawat, and Richard Sproat. 2016.
TTS for low resource languages: A Bangla synthesizer. In LREC.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden Markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372–379.
Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. 2016. Google's multilingual neural machine translation system: Enabling zero-shot translation. arXiv preprint arXiv:1611.04558.
Young-Bum Kim and Benjamin Snyder. 2012. Universal grapheme-to-phoneme prediction over Latin alphabets. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 332–343. Association for Computational Linguistics.
Franco Mana, Paolo Massimino, and Alberto Pacchiotti. 2001. Using machine learning techniques for grapheme to phoneme transcription. In Seventh European Conference on Speech Communication and Technology.
Benjamin Milde, Christoph Schmidt, and Joachim Köhler. 2017. Multitask sequence-to-sequence models for grapheme-to-phoneme conversion. Proc. Interspeech 2017, pages 2536–2540.
Ayesha Binte Mosaddeque, Naushad UzZaman, and Mumit Khan. 2006. Rule based automated pronunciation generator.
J. R. Novak. 2012. https://github.com/AdolfVonKleist/Phonetisaurus.
Josef R Novak, Nobuaki Minematsu, and Keikichi Hirose. 2013. Failure transitions for joint n-gram models and G2P conversion. In INTERSPEECH, pages 1821–1825.
Kanishka Rao, Fuchun Peng, Haim Sak, and Françoise Beaufays. 2015. Grapheme-to-phoneme conversion using long short-term memory recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 4225–4229. IEEE.
Carsten Schnober, Steffen Eger, Erik-Lân Do Dinh, and Iryna Gurevych. 2016. Still not there? Comparing traditional sequence-to-sequence models to encoder-decoder neural networks on monotone string translation tasks. arXiv preprint arXiv:1610.07796.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Ye Kyaw Thu, Win Pa Pa, Yoshinori Sagisaka, and Naoto Iwahashi. 2016. Comparison of grapheme-to-phoneme conversion methods on a Myanmar pronunciation dictionary. In Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 11–22.
Shubham Toshniwal and Karen Livescu. 2016. Jointly learning to align and convert graphemes to phonemes with neural attention models. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 76–82. IEEE.
Yulia Tsvetkov, Sunayana Sitaram, Manaal Faruqui, Guillaume Lample, Patrick Littell, David Mortensen, Alan W Black, Lori Levin, and Chris Dyer. 2016. Polyglot neural language models: A case study in cross-lingual phonetic representation learning. arXiv preprint arXiv:1605.03832.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation."
] |
[
"abstain",
"abstain",
"abstain",
"objective",
"objective",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"objective",
"objective",
"objective",
"objective",
"method",
"result",
"result",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"method",
"method",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"abstain",
"method",
"abstain",
"abstain",
"method",
"method",
"abstain",
"abstain",
"other",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"abstain",
"other"
] |