Dataset columns:
id: string, lengths 40 to 40
pid: string, lengths 42 to 42
input: string, lengths 8.37k to 169k
output: string, lengths 1 to 1.63k
cb370692fe0beef90cdaa9c8e43a0aab6f0e117a
cb370692fe0beef90cdaa9c8e43a0aab6f0e117a_0
Q: Do they report results only on English data?

Text:

Introduction

Lately, there has been an enormous increase in User Generated Content (UGC) on online platforms such as newsgroups, blogs, online forums and social networking websites. According to a January 2018 report, the numbers of active users on Facebook, YouTube, WhatsApp, Facebook Messenger and WeChat were more than 2.1, 1.5, 1.3, 1.3 and 0.98 billion, respectively BIBREF1. UGC is most of the time helpful, but it is sometimes in bad taste, usually posted by trolls, spammers and bullies. According to a study by McAfee, 87% of teens have observed cyberbullying online BIBREF2. The Futures Company found that 54% of teens witnessed cyberbullying on social media platforms BIBREF3. Another study found that 27% of all American internet users self-censor their online postings out of fear of online harassment BIBREF4. Filtering toxic comments is a challenge for content providers, as their appearance results in the loss of subscriptions. In this paper, we use the terms toxic and abusive interchangeably to represent comments that are inappropriate, disrespectful, threatening or discriminatory.

Toxic comment classification on online channels is conventionally carried out either by moderators or with the help of text classification tools BIBREF5. With recent advances in Deep Learning (DL) techniques, researchers are exploring whether DL can be used for the comment classification task. Jigsaw launched Perspective (www.perspectiveapi.com), which uses ML to automatically attach a confidence score to a comment indicating the extent to which the comment is considered toxic. Kaggle also recently hosted an online competition on toxic comment classification BIBREF6.

Text transformation is the very first step in any form of text classification. Online comments are generally in non-standard English and contain many spelling mistakes, partly because of typos (resulting from the small screens of mobile devices) but more importantly because of deliberate attempts to write abusive comments in creative ways to dodge automatic filters. In this paper we identify 20 different atomic transformations (plus 15 sequences of transformations) to preprocess the texts. We apply four different ML models that are considered among the best to see how much we gain by performing those transformations. The rest of the paper is organized as follows: Section 2 focuses on relevant research in the area of toxic comment classification. Section 3 covers the preprocessing methods considered in this paper. Section 4 describes the ML methods used. Section 5 is dedicated to results, and Section 6 is discussion and future work.

Relevant Research

A large number of studies have been done on comment classification in news, finance and similar domains. One such study classified comments from the news domain using a mixture of features, such as the length of comments, uppercase and punctuation frequencies, and lexical features such as spelling, profanity and readability, by applying linear and tree-based classifiers BIBREF7. FastText, developed by the Facebook AI Research (FAIR) team, is a text classification tool suitable for modeling text involving out-of-vocabulary (OOV) words BIBREF8 BIBREF9. Zhang et al. showed that character-level CNNs work well for text classification without the need for words BIBREF10.
Abusive/toxic comment classification

Toxic comment classification is a relatively new field, and in recent years different studies have been carried out to automatically classify toxic comments. Yin et al. proposed a supervised classification method with n-grams and manually developed regular expression patterns to detect abusive language BIBREF11. Sood et al. used predefined blacklist words and an edit distance metric to detect profanity, which allowed them to catch words such as sh!+ or @ss as profane BIBREF12. Warner and Hirschberg detected hate speech by annotating a corpus of websites and user comments geared towards detecting anti-semitic hate BIBREF13. Nobata et al. used manually labeled online user comments from Yahoo! Finance and news websites for detecting hate speech BIBREF5. Chen et al. performed feature engineering for classification of comments into abusive, non-abusive and undecided BIBREF14. Georgakopoulos and Plagianakos compared the performance of five different classifiers, namely word embeddings with a CNN, a BoW-based SVM, NB, k-Nearest Neighbors (kNN) and Linear Discriminant Analysis (LDA), and found that the CNN outperforms all other methods in classifying toxic comments BIBREF15.

Preprocessing of online comments

We found few dedicated papers that address the effect of incorporating different text transformations on model accuracy for sentiment classification. Uysal and Gunal showed the impact of transformation on text classification by taking into account four transformations and all their possible combinations on news and email domains and observing the classification accuracy. Their experimental analyses showed that choosing an appropriate combination may result in a significant improvement in classification accuracy BIBREF16. Nobata et al. used normalization of numbers and replaced very long unknown words and repeated punctuation with the same token BIBREF5. Haddi et al. explained the role of transformation in sentiment analysis and demonstrated, with the help of an SVM on a movie review database, that accuracy improves significantly with appropriate transformation and feature selection. They used transformation methods such as white space removal, abbreviation expansion, stemming, stop word removal and negation handling BIBREF17.

Other papers focus more on modeling than on transformation. For example, Wang and Manning filter out anything from the corpus that is not alphabetic. However, this would filter out all the numbers, symbols, Instant Message (IM) codes and acronyms such as $#!+, 13itch, </3 (broken heart) and a$$, which either changes the meaning of the words completely or misses out on a lot of information. In another sentiment analysis study, Bao et al. used five transformations, namely URL feature reservation, negation transformation, repeated letter normalization, stemming and lemmatization, on Twitter data and applied a linear classifier available in the WEKA machine learning tool. They found that classification accuracy increases when URL feature reservation, negation transformation and repeated letter normalization are employed, while it decreases when stemming and lemmatization are applied BIBREF18. Jianqiang and Xiaolin also looked at the effect of transformation on five different Twitter datasets for sentiment classification and found that removal of URLs, stop words and numbers has minimal effect on accuracy, whereas replacing negation and expanding acronyms can improve the accuracy.
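To make two of the atomic transformations discussed above concrete, here is a minimal Python sketch of repeated-letter normalization and simple negation handling. The function names, the regular expression, and the small contraction table are illustrative assumptions, not the exact rules used in the cited studies.

```python
import re

# Hypothetical, minimal contraction table for negation handling
NEGATIONS = {"don't": "do not", "can't": "can not", "won't": "will not", "isn't": "is not"}

def normalize_repeated_letters(text: str) -> str:
    # Collapse runs of three or more identical characters down to two ("sooooo" -> "soo")
    return re.sub(r"(.)\1{2,}", r"\1\1", text)

def expand_negations(text: str) -> str:
    # Expand negated contractions so the token "not" survives tokenization
    for contraction, expansion in NEGATIONS.items():
        text = re.sub(re.escape(contraction), expansion, text, flags=re.IGNORECASE)
    return text

print(normalize_repeated_letters("this is soooo baaaad"))          # this is soo baad
print(expand_negations("I don't like it, you can't post that"))    # I do not like it, you can not post that
```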
Most of the exploration regarding the application of transformations has been around sentiment classification on Twitter data, which is length-restricted. The length of online comments varies and may range from a couple of words to a few paragraphs. Most of those authors used conventional ML models such as SVM, LR, RF and NB. We expand the candidate pool of transformations and use recent state-of-the-art models: LR, NBSVM, XGBoost and a Bidirectional LSTM model using fastText's skipgram word vectors.

Preprocessing tasks

The most intimidating challenge with online comment data is that the words are non-standard English, full of typos and spurious characters. The number of words in the corpora is inflated many-fold for different reasons, including comments originating from mobile devices, use of acronyms, leetspeak words (http://1337.me/), and intentional obfuscation of words to avoid filters by inserting spurious characters, using phonemes, dropping characters, etc. Having several forms of the same word results in feature explosion, making it difficult for the model to train. Therefore, it seems natural to perform some transformation before feeding the data to the learning algorithm. To explore how helpful these transformations are, we incorporated 20 simple transformations and 15 additional sequences of transformations in our experiment and measured their effect on different metrics for four different ML models (see Figure FIGREF3).

The preprocessing steps are usually performed as a sequence of multiple transformations. In this work, we considered 15 combinations of the above transformations that seemed natural to us. Preprocess-order-1 through 15 in the above table represent composite transformations. For instance, PPO-11-LWTN-CoAcBkPrCm represents the following sequence of transformations applied to the raw text in order: change to lower case → remove white spaces → trim word length → remove non-printable characters → replace contractions → replace acronyms → replace blacklisted words using regex → replace profane words using fuzzy matching → replace common words using fuzzy matching.
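A minimal sketch of how such a composite transformation can be composed as an ordered pipeline follows. The individual steps (lowercasing, whitespace squeezing, long-word trimming, non-printable stripping, contraction and acronym replacement, regex-based blacklist replacement) are simplified stand-ins for the paper's actual implementations, the lookup tables are hypothetical, and the two fuzzy-matching steps are omitted for brevity.

```python
import re

# Hypothetical lookup tables; real lists would be much larger
CONTRACTIONS = {"can't": "can not", "won't": "will not", "i'm": "i am"}
ACRONYMS = {"idk": "i do not know", "imo": "in my opinion"}
BLACKLIST_PATTERNS = [r"\bf+u+c*k+\b"]  # illustrative obfuscation-tolerant pattern

def to_lower(text):
    return text.lower()

def squeeze_whitespace(text):
    return re.sub(r"\s+", " ", text).strip()

def trim_long_words(text, max_len=30):
    # Drop implausibly long tokens (often keyboard mashing or pasted noise)
    return " ".join(w for w in text.split() if len(w) <= max_len)

def strip_non_printable(text):
    return "".join(c for c in text if c.isprintable())

def replace_contractions(text):
    for contraction, expansion in CONTRACTIONS.items():
        text = text.replace(contraction, expansion)
    return text

def replace_acronyms(text):
    return " ".join(ACRONYMS.get(w, w) for w in text.split())

def replace_blacklist(text):
    for pattern in BLACKLIST_PATTERNS:
        text = re.sub(pattern, "<profane>", text)
    return text

# Ordered pipeline in the spirit of PPO-11-LWTN-CoAcBkPrCm (fuzzy steps omitted)
PIPELINE = [to_lower, squeeze_whitespace, trim_long_words, strip_non_printable,
            replace_contractions, replace_acronyms, replace_blacklist]

def preprocess(text):
    for step in PIPELINE:
        text = step(text)
    return text

print(preprocess("IDK   why you CAN'T stop posting this FUuuCK nonsense!!!"))
# -> "i do not know why you can not stop posting this <profane> nonsense!!!"
```

Applying the steps as a single ordered list makes it easy to reorder or drop transformations when testing different composite sequences.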
Datasets

We downloaded the data for our experiment from Kaggle's toxic comment classification challenge, sponsored by Jigsaw (an incubator within Alphabet). The dataset contains comments from Wikipedia talk page edits which have been labeled by human raters for toxicity. Although there are six classes in all ('toxic', 'severe toxic', 'obscene', 'threat', 'insult' and 'identity hate'), to simplify the problem we combined all the labels and created another label, 'abusive'. If a comment is labeled with any one of the six classes, it is categorized as 'abusive'; otherwise the comment is considered clean or non-abusive. We only used the training data for our experiment, which has 159,571 labeled comments.

Models Used

We used four classification algorithms: 1) logistic regression, which is conventionally used in sentiment classification, and three other algorithms which are relatively new and have shown great results on sentiment classification problems: 2) Naive Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost) and 4) the fastText algorithm with a Bidirectional LSTM (FastText-BiLSTM).

Linear models such as logistic regression are used by many researchers for Twitter sentiment analysis BIBREF7 BIBREF18 BIBREF19 BIBREF20. Naveed et al. used logistic regression for predicting the interestingness of a tweet and the likelihood of a tweet being retweeted. Wang and Manning found that logistic regression's performance is on par with SVM for sentiment and topic classification purposes BIBREF21.

Wang and Manning also showed that a variant of NB and SVM gave the best results for sentiment classification: NB did a good job on short texts, while the SVM worked better on relatively longer texts BIBREF21. Inclusion of bigrams produced consistent gains compared to methods such as multinomial NB, SVM and BoW-SVM (bag-of-words SVM). Considering these advantages, we decided to include NBSVM in our analyses, as the length of online comments varies from a few words to a few paragraphs. The features are generated the same way as for the logit model above.

Extreme Gradient Boosting (XGBoost) is a highly scalable tree-based supervised classifier BIBREF22 based on gradient boosting, proposed by Friedman BIBREF23. These boosted models are ensembles of shallow trees, which are weak learners with high bias and low variance. Although boosting in general has been used by many researchers for text classification BIBREF24 BIBREF25, the XGBoost implementation is relatively new, and some winners of ML competitions have used XGBoost BIBREF26 in their winning solutions. We set the parameters of XGBoost as follows: number of rounds, evaluation metric, learning rate and maximum tree depth at 500, logloss, 0.01 and 6, respectively.

FastText BIBREF9 is an open-source library for word vector representation and text classification. It is highly memory efficient and significantly faster than other deep learning algorithms such as Char-CNN (days vs. a few seconds) and VDCNN (hours vs. a few seconds), while producing comparable accuracy BIBREF27. fastText uses both the skipgram method (words represented as bags of character n-grams) and the continuous bag-of-words (CBOW) method. FastText is suitable for modeling text involving out-of-vocabulary (OOV) or rare words, which makes it well suited for detecting obscure words in online comments BIBREF9.

The Long Short-Term Memory network (LSTM) BIBREF28, proposed by Hochreiter and Schmidhuber (1997), is a variant of the RNN with an additional memory output for the self-looping connections and has the capability to remember inputs nearly 1,000 time steps away. The Bidirectional LSTM (BiLSTM) is a further improvement on the LSTM in which the network can see the context in either direction and can be trained using all available input information in the past and future of a specific time frame BIBREF29 BIBREF30. We trained our BiLSTM model on fastText skipgram embeddings (FastText-BiLSTM) obtained using Facebook's fastText algorithm. Using the fastText algorithm, we created an embedding matrix of width 100 and used a Bidirectional LSTM followed by GlobalMaxPool1D, Dropout(0.2), Dense(50, activation = 'relu'), Dropout(0.2), Dense(1, activation = 'sigmoid').
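A minimal Keras sketch of the FastText-BiLSTM architecture described above follows, assuming the 100-dimensional fastText skipgram vectors have already been loaded into an embedding matrix. The vocabulary size, sequence length and LSTM width are placeholders, not settings reported in the paper.

```python
import numpy as np
from tensorflow.keras import Input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Embedding, Bidirectional, LSTM,
                                     GlobalMaxPooling1D, Dropout, Dense)
from tensorflow.keras.initializers import Constant

# Placeholder sizes; the paper uses 100-dimensional fastText skipgram vectors
vocab_size, embed_dim, max_len = 50000, 100, 200
embedding_matrix = np.random.rand(vocab_size, embed_dim)  # stand-in for the pretrained vectors

model = Sequential([
    Input(shape=(max_len,)),
    Embedding(vocab_size, embed_dim,
              embeddings_initializer=Constant(embedding_matrix), trainable=False),
    Bidirectional(LSTM(64, return_sequences=True)),  # LSTM width is an assumption
    GlobalMaxPooling1D(),
    Dropout(0.2),
    Dense(50, activation="relu"),
    Dropout(0.2),
    Dense(1, activation="sigmoid"),  # binary abusive vs. clean output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Freezing the embedding layer (trainable=False) keeps the fastText vectors fixed during training; unfreezing it is a reasonable variation to try.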
Results

We performed 10-fold cross validation by dividing the entire set of 159,571 comments into 10 nearly equal parts. We trained each of the four models mentioned above on nine folds, tested on the remaining fold, and repeated the same process for the other folds. Eventually, we obtained out-of-fold (OOF) metrics for all 10 parts, and we calculated the average OOF CV metrics (accuracy, F1-score, logloss, number of misclassified samples) over all 10 folds.

As the data distribution is highly skewed (16,225 out of 159,571 comments, about 10%, are abusive), the accuracy metric here is for reference purposes only, as predicting only the majority class every single time would yield 90% accuracy. The transformation 'Raw' represents the actual data free from any transformation and can be considered the baseline for comparison purposes.

Overall, the algorithms showed a similar trend across all the transformations and sequences of transformations. NBSVM and FastText-BiLSTM showed similar accuracy, with a slight edge to FastText-BiLSTM (see the logloss plot in Fig. FIGREF15). For atomic transformations, NBSVM seemed to work better than fastText-BiLSTM, and for composite transformations fastText-BiLSTM was better. Logistic regression performed better than the XGBoost algorithm, and we suspect that XGBoost might be overfitting the data. A similar trend can be seen in the corresponding F1-scores as well. One advantage of NBSVM is that it is blazingly fast compared to FastText-BiLSTM.

We also calculated the total number of misclassified comments (see Fig. FIGREF16). The transformation Convert_to_lower resulted in reduced accuracy for Logit and NBSVM and higher accuracy for fastText-BiLSTM and XGBoost. Similarly, removing_whitespaces had no effect on Logit, NBSVM and XGBoost, but the result of fastText-BiLSTM got worse. Only XGBoost benefited from the replacing_acronyms and replace_contractions transformations. Both remove_stopwords and remove_rare_words resulted in worse performance for all four algorithms. The transformation remove_words_containing_non_alpha led to a drop in accuracy for all four algorithms; this step might be dropping some useful words (sh**, sh1t, hello123, etc.) from the data, resulting in worse performance. The widely used transformation Remove_non_alphabet_chars (strip all non-alphabet characters from the text) led to lower performance for all models except fastText-BiLSTM, where the number of misclassified comments dropped from 6,229 to 5,794. Stemming seemed to perform better than lemmatization for fastText-BiLSTM and XGBoost.

For logistic regression and XGBoost, the best result was achieved with PPO-15, where the number of misclassified comments was reduced from 6,992 to 6,816 and from 9,864 to 8,919, respectively. For NBSVM, the best result was achieved using fuzzy_common_mapping (5,946 to 5,933), and for fastText-BiLSTM, the best result was with PPO-8 (6,217 to 5,715) (see Table 2). This shows that NBSVM is not helped significantly by the transformations; in contrast, the transformations did help fastText-BiLSTM significantly.

The interpretation of the F1-score differs based on how the classes are distributed. For toxic data, the toxic class is more important than the clean comments, as content providers do not want toxic comments to be shown to their users. Therefore, we want the negative-class (toxic) comments to have high F1-scores compared to the clean comments.
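The out-of-fold evaluation described above can be sketched as follows. The stratified splitting, TF-IDF features and logistic regression settings are illustrative assumptions rather than the paper's exact configuration, but the per-fold refitting and the OOF metric computation mirror the procedure described.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, log_loss, precision_score, recall_score

def oof_evaluate(texts, labels, n_splits=10, seed=42):
    # labels are assumed to be 0 (clean) / 1 (abusive) integers
    texts, labels = np.asarray(texts), np.asarray(labels)
    oof_pred = np.zeros(len(labels))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(texts, labels):
        vec = TfidfVectorizer(ngram_range=(1, 2), min_df=2)  # illustrative features
        X_tr = vec.fit_transform(texts[train_idx])           # refit per fold to avoid leakage
        X_te = vec.transform(texts[test_idx])
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_tr, labels[train_idx])
        oof_pred[test_idx] = clf.predict_proba(X_te)[:, 1]
    hard = (oof_pred >= 0.5).astype(int)
    return {
        "logloss": log_loss(labels, oof_pred),
        "f1_abusive": f1_score(labels, hard, pos_label=1),
        "precision_abusive": precision_score(labels, hard, pos_label=1),
        "recall_abusive": recall_score(labels, hard, pos_label=1),
        "misclassified": int((hard != labels).sum()),
    }
```

Running this once per preprocessing variant and comparing the returned dictionaries reproduces the kind of per-transformation comparison reported in the tables and figures.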
We also looked at the effect of the transformations on the precision and recall of the negative class. The F1-score for the negative class is around 0.8 for NBSVM and fastText-BiLSTM; for Logit this value is around 0.74, and for XGBoost it is around 0.57. The fastText-BiLSTM and NBSVM performed consistently well for most of the transformations compared to Logit and XGBoost. The precision for XGBoost was the highest and its recall the lowest among the four algorithms, pointing to the fact that there is not enough negative-class data for this algorithm and that its parameters need to be tuned.

Discussion and Future Work

We spent quite a bit of time on transformations of the toxic dataset in the hope that they would ultimately increase the accuracy of our classifiers. However, we empirically found that our intuition, to a large extent, was wrong: most of the transformations resulted in reduced accuracy for Logit and NBSVM. We considered a total of 35 different ways to transform the data. Since there is an exponential number of possible transformation sequences to try, we selected only 15 that we thought reasonable; changing the order could produce a different outcome as well. Most of the papers on sentiment classification that we reviewed reported better accuracy after applying some of these transformations; however, for us this was not entirely true. We are not sure about the reason, but our best guess is that Twitter data is character-limited, while our comment data has no restriction on size.

The toxic data is unbalanced, and we did not try to balance the classes in this experiment. It would be interesting to know what happens when we apply oversampling BIBREF31 of the minority class, under-sampling of the majority class, or a combination of both. Pseudo-labeling BIBREF32 can also be used to mitigate the class imbalance problem to some extent. We did not tune the parameters of the different algorithms presented in our experiment. It would also be interesting to use word2vec/GloVe word embeddings to see how they behave under the above transformations. Since the words in these word embeddings are mostly clean and without any spurious or special characters, we cannot use the pretrained word vectors on raw data; for an apples-to-apples comparison, the embedding vectors would need to be trained on the corpora from scratch, which is time consuming. Also, we only considered 15 composite transformations, which is not comprehensive in any way, and we will be taking this issue up in the future. We also looked only at Jigsaw's Wikipedia data.

This paper gives NLP researchers an idea of the worth of spending time on transformations of toxic data. Based on our results, our recommendation is not to spend too much time on transformations but rather to focus on selecting the best algorithms. All the code, data and results can be found here: https://github.com/ifahim/toxic-preprocess

Acknowledgements

We would like to thank Joseph Batz and Christine Cheng for reviewing the draft and providing valuable feedback. We are also immensely grateful to Sasi Kuppanagari and Phani Vadali for their continued support and encouragement throughout this project.
Unanswerable
d0c636fa9ef99c4f44ab39e837a680217b140269
d0c636fa9ef99c4f44ab39e837a680217b140269_0
Q: Do the authors offer any hypothesis as to why the transformations sometimes disimproved performance? Text: (same article text as in the first record above)
No
c47f593a5b92abc2e3c536fe2baaca226913688b
c47f593a5b92abc2e3c536fe2baaca226913688b_0
Q: What preprocessing techniques are used in the experiments? Text: Introduction Lately, there has been enormous increase in User Generated Contents (UGC) on the online platforms such as newsgroups, blogs, online forums and social networking websites. According to the January 2018 report, the number of active users in Facebook, YouTube, WhatsApp, Facebook Messenger and WeChat was more than 2.1, 1.5, 1.3, 1.3 and 0.98 billions respectively BIBREF1 . The UGCs, most of the times, are helpful but sometimes, they are in bad taste usually posted by trolls, spammers and bullies. According to a study by McAfee, 87% of the teens have observed cyberbullying online BIBREF2 . The Futures Company found that 54% of the teens witnessed cyber bullying on social media platforms BIBREF3 . Another study found 27% of all American internet users self-censor their online postings out of fear of online harassment BIBREF4 . Filtering toxic comments is a challenge for the content providers as their appearances result in the loss of subscriptions. In this paper, we will be using toxic and abusive terms interchangeably to represent comments which are inappropriate, disrespectful, threat or discriminative. Toxic comment classification on online channels is conventionally carried out either by moderators or with the help of text classification tools BIBREF5 . With recent advances in Deep Learning (DL) techniques, researchers are exploring if DL can be used for comment classification task. Jigsaw launched Perspective (www.perspectiveapi.com), which uses ML to automatically attach a confidence score to a comment to show the extent to which a comment is considered toxic. Kaggle also hosted an online competition on toxic classification challenge recently BIBREF6 . Text transformation is the very first step in any form of text classification. The online comments are generally in non-standard English and contain lots of spelling mistakes partly because of typos (resulting from small screens of the mobile devices) but more importantly because of the deliberate attempt to write the abusive comments in creative ways to dodge the automatic filters. In this paper we have identified 20 different atomic transformations (plus 15 sequence of transformations) to preprocess the texts. We will apply four different ML models which are considered among the best to see how much we gain by performing those transformations. The rest of the paper is organized as follows: Section 2 focuses on the relevant research in the area of toxic comment classification. Section 3 focuses on the preprocessing methods which are taken into account in this paper. Section 4 is on ML methods used. Section 5 is dedicated to results and section 6 is discussion and future work. Relevant Research A large number of studies have been done on comment classification in the news, finance and similar other domains. One such study to classify comments from news domain was done with the help of mixture of features such as the length of comments, uppercase and punctuation frequencies, lexical features such as spelling, profanity and readability by applying applied linear and tree based classifier BIBREF7 . FastText, developed by the Facebook AI research (FAIR) team, is a text classification tool suitable to model text involving out-of-vocabulary (OOV) words BIBREF8 BIBREF9 . Zhang et al shown that character level CNN works well for text classification without the need for words BIBREF10 . 
Abusive/toxic comment classification Toxic comment classification is relatively new field and in recent years, different studies have been carried out to automatically classify toxic comments.Yin et.al. proposed a supervised classification method with n-grams and manually developed regular expressions patterns to detect abusive language BIBREF11 . Sood et. al. used predefined blacklist words and edit distance metric to detect profanity which allowed them to catch words such as sh!+ or @ss as profane BIBREF12 . Warner and Hirschberg detected hate speech by annotating corpus of websites and user comments geared towards detecting anti-semitic hate BIBREF13 . Nobata et. al. used manually labeled online user comments from Yahoo! Finance and news website for detecting hate speech BIBREF5 . Chen et. al. performed feature engineering for classification of comments into abusive, non-abusive and undecided BIBREF14 . Georgakopoulos and Plagianakos compared performance of five different classifiers namely; Word embeddings and CNN, BoW approach SVM, NB, k-Nearest Neighbor (kNN) and Linear Discriminated Analysis (LDA) and found that CNN outperform all other methods in classifying toxic comments BIBREF15 . Preprocessing of online comments We found few dedicated papers that address the effect of incorporating different text transformations on the model accuracy for sentiment classification. Uysal and Gunal shown the impact of transformation on text classification by taking into account four transformations and their all possible combination on news and email domain to observe the classification accuracy. Their experimental analyses shown that choosing appropriate combination may result in significant improvement on classification accuracy BIBREF16 . Nobata et. al. used normalization of numbers, replacing very long unknown words and repeated punctuations with the same token BIBREF5 . Haddi et. al. explained the role of transformation in sentiment analyses and demonstrated with the help of SVM on movie review database that the accuracies improve significantly with the appropriate transformation and feature selection. They used transformation methods such as white space removal, expanding abbreviation, stemming, stop words removal and negation handling BIBREF17 . Other papers focus more on modeling as compared to transformation. For example, Wang and manning filter out anything from corpus that is not alphabet. However, this would filter out all the numbers, symbols, Instant Messages (IM) codes, acronyms such as $#!+, 13itch, </3 (broken heart), a$$ which gives completely different meaning to the words or miss out a lot of information. In another sentiment analyses study, Bao et. al. used five transformations namely URLs features reservation, negation transformation, repeated letters normalization, stemming and lemmatization on twitter data and applied linear classifier available in WEKA machine learning tool. They found the accuracy of the classification increases when URLs features reservation, negation transformation and repeated letters normalization are employed while decreases when stemming and lemmatization are applied BIBREF18 . Jianqiang and Xiaolin also looked at the effect of transformation on five different twitter datasets in order to perform sentiment classification and found that removal of URLs, the removal of stop words and the removal of numbers have minimal effect on accuracy whereas replacing negation and expanding acronyms can improve the accuracy. 
Most of the exploration regarding application of the transformation has been around the sentiment classification on twitter data which is length-restricted. The length of online comments varies and may range from a couple of words to a few paragraphs. Most of the authors used conventional ML models such as SVM, LR, RF and NB. We are expanding our candidate pool for transformations and using latest state-of-the-art models such as LR, NBSVM, XGBoost and Bidirectional LSTM model using fastText’s skipgram word vector. Preprocessing tasks The most intimidating challenge with the online comments data is that the words are non-standard English full of typos and spurious characters. The number of words in corpora are multi-folds because of different reasons including comments originating from mobile devices, use of acronyms, leetspeak words (http://1337.me/), or intentionally obfuscating words to avoid filters by inserting spurious characters, using phonemes, dropping characters etc. Having several forms of the same word result in feature explosion making it difficult for the model to train. Therefore, it seems natural to perform some transformation before feeding the data to the learning algorithm. To explore how helpful these transformations are, we incorporated 20 simple transformations and 15 additional sequences of transformations in our experiment to see their effect on different type of metrics on four different ML models (See Figure FIGREF3 ). The preprocessing steps are usually performed in sequence of multiple transformations. In this work, we considered 15 combinations of the above transformations that seemed natural to us: Preprocess-order-1 through 15 in the above table represent composite transformations. For instance, PPO-11-LWTN-CoAcBkPrCm represents sequence of the following transformations of the raw text in sequence: Change to lower case INLINEFORM0 remove white spaces INLINEFORM1 trim words len INLINEFORM2 remove Non Printable characters INLINEFORM3 replace contraction INLINEFORM4 replace acronym INLINEFORM5 replace blacklist using regex INLINEFORM6 replace profane words using fuzzy INLINEFORM7 replace common words using fuzzy. Datasets We downloaded the data for our experiment from the Kaggle’s toxic comment classification challenge sponsored by Jigsaw (An incubator within Alphabet). The dataset contains comments from Wikipedia’s talk page edits which have been labeled by human raters for toxicity. Although there are six classes in all: ‘toxic’, ‘severe toxic’, ‘obscene’, ‘threat’, ‘insult’ and ‘identity hate’, to simplify the problem, we combined all the labels and created another label ‘abusive’. A comment is labeled in any one of the six class, then it is categorized as ‘abusive’ else the comment is considered clean or non-abusive. We only used training data for our experiment which has 159,571 labeled comments. Models Used We used four classification algorithms: 1) Logistic regression, which is conventionally used in sentiment classification. Other three algorithms which are relatively new and has shown great results on sentiment classification types of problems are: 2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost) and 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM). The linear models such as logistic regression or classifiers are used by many researchers for Twitter comments sentiment analyses BIBREF7 BIBREF18 BIBREF19 BIBREF20 . Naveed et. al. 
used logistic regression for finding interestingness of tweet and the likelihood of a tweet being retweeted. Wang and Manning found that the logistic regression’s performance is at par with SVM for sentiment and topic classification purposes BIBREF21 . Wang and Manning, shown the variant of NB and SVM gave them the best result for sentiment classification. The NB did a good job on short texts while the SVM worked better on relatively longer texts BIBREF21 . Inclusion of bigrams produced consistent gains compared to methods such as Multinomial NB, SVM and BoWSVM (Bag of Words SVM). Considering these advantages, we decided to include NBSVM in our analyses as the length of online comments vary, ranging from few words to few paragraphs. The features are generated the way it is generated for the logit model above. Extreme Gradient Boosting (XGBoost) is a highly scalable tree-based supervised classifier BIBREF22 based on gradient boosting, proposed by Friedman BIBREF23 . This boosted models are ensemble of shallow trees which are weak learners with high bias and low variance. Although boosting in general has been used by many researchers for text classification BIBREF24 BIBREF25 , XGBoost implementation is relatively new and some of the winners of the ML competitions have used XGBoost BIBREF26 in their winning solution. We set the parameters of XGBoost as follows: number of round, evaluation metric, learning rate and maximum depth of the tree at 500, logloss, 0.01 and 6 respectively. FastText BIBREF9 is an open source library for word vector representation and text classification. It is highly memory efficient and significantly faster compared to other deep learning algorithms such as Char-CNN (days vs few seconds) and VDCNN (hours vs few seconds) and produce comparable accuracy BIBREF27 . The fastText uses both skipgram (words represented as bag of character n-grams) and continuous Bag of Words (CBOW) method. FastText is suitable to model text involving out-of-vocabulary (OOV) or rare words more suitable for detecting obscure words in online comments BIBREF9 . The Long Short Term Memory networks (LSTM) BIBREF28 , proposed by Hochreiter & Schmidhuber (1997), is a variant of RNN with an additional memory output for the self-looping connections and has the capability to remember inputs nearly 1000 time steps away. The Bidirectional LSTM (BiLSTM) is a further improvement on the LSTM where the network can see the context in either direction and can be trained using all available input information in the past and future of a specific time frame BIBREF29 BIBREF30 . We will be training our BiLSTM model on FastText skipgram (FastText-BiLSTM) embedding obtained using Facebook’s fastText algorithm. Using fastText algorithm, we created embedding matrix having width 100 and used Bidirectional LSTM followd by GlobalMaxPool1D, Dropout(0.2), Dense (50, activation = ‘relu’), Dropout(0.2), Dense (1, activation = ‘sigmoid’). Results We performed 10-fold cross validation by dividing the entire 159,571 comments into nearly 10 equal parts. We trained each of the four models mentioned above on nine folds and tested on the remaining tenth fold and repeated the same process for other folds as well. Eventually, we have Out-of-Fold (OOF) metrics for all 10 parts. We calculated average OOF CV metrics (accuracy, F1-score, logloss, number of misclassified samples) of all 10 folds. 
As the data distribution is highly skewed (16,225 out of 159,571 ( 10%) are abusive), the accuracy metric here is for reference purpose only as predicting only the majority class every single time can get us 90% accuracy. The transformation, ‘Raw’, represents the actual data free from any transformation and can be considered the baseline for comparison purposes. Overall, the algorithms showed similar trend for all the transformations or sequence of transformations. The NBSVM and FastText-BiLSTM showed similar accuracy with a slight upper edge to the FastText-BiLSTM (See the logloss plot in Fig. FIGREF15 ). For atomic transformations, NBSVM seemed to work better than fastText-BiLSTM and for composite transformations fastText-BiLSTM was better. Logistic regression performed better than the XGBoost algorithm and we guess that the XGBoost might be overfitting the data. A similar trend can be seen in the corresponding F1-score as well. One advantage about the NBSVM is that it is blazingly fast compared to the FastText-BiLSTM. We also calculated total number of misclassified comments (see Fig. FIGREF16 ). The transformation, Convert_to_lower, resulted in reduced accuracy for Logit and NBSVM and higher accuracy for fastText-BiLSTM and XGBoost. Similarly, removing_whitespaces had no effect on Logit, NBSM and XGBoost but the result of fastText-BiLSTM got worse. Only XGBoost was benefitted from replacing_acronyms and replace_contractions transformation. Both, remove_stopwords and remove_rare_words resulted in worse performance for all four algorithms. The transformation, remove_words_containing_non_alpha leads to drop in accuracy in all the four algorithms. This step might be dropping some useful words (sh**, sh1t, hello123 etc.) from the data and resulted in the worse performance. The widely used transformation, Remove_non_alphabet_chars (strip all non-alphabet characters from text), leads to lower performance for all except fastText-BiLSTM where the number of misclassified comments dropped from 6,229 to 5,794. The transformation Stemming seemed to be performing better compared with the Lemmatization for fastText-BiLSTM and XGBoost. For logistic regression and the XGBoost, the best result was achieved with PPO-15, where the number of misclassified comments reduced from 6,992 to 6,816 and from 9,864 to 8,919 respectively. For NBSVM, the best result was achieved using fuzzy_common_mapping (5,946 to 5,933) and for fastText-BiLSTM, the best result was with PPO-8 (6,217 to 5,715) (See Table 2). This shows that the NBSVM are not helped significantly by transformations. In contrast, transformations did help the fastText-BiLSTM significantly. We also looked at the effect of the transformations on the precision and recall the negative class. The fastText-BiLSTM and NBSVM performed consistently well for most of the transformations compared to the Logit and XGBoost. The precision for the XGBoost was the highest and the recall was lowest among the four algorithm pointing to the fact that the negative class data is not enough for this algorithm and the algorithm parameters needs to be tuned. The interpretation of F1-score is different based on the how the classes are distributed. For toxic data, toxic class is more important than the clean comments as the content providers do not want toxic comments to be shown to their users. Therefore, we want the negative class comments to have high F1-scores as compared to the clean comments. 
Discussion and Future Work We spent quite a bit of time on transformations of the toxic data set in the hope that they would ultimately increase the accuracy of our classifiers. However, we empirically found that our intuition, to a large extent, was wrong. Most of the transformations resulted in reduced accuracy for Logit and NBSVM. We considered a total of 35 different ways to transform the data. Since there is an exponential number of possible transformation sequences to try, we selected only 15 that we thought reasonable; changing the order could lead to a different outcome as well. Most of the papers on sentiment classification that we reviewed reported better accuracy after applying some of these transformations; however, for us this was not entirely true. We are not sure about the reason, but our best guess is that Twitter data is character-limited while our comment data has no restriction on size. The toxic data is unbalanced and we did not try to balance the classes in this experiment. It would be interesting to know what happens when we oversample BIBREF31 the minority class, under-sample the majority class, or do a combination of both. Pseudo-labeling BIBREF32 can also be used to mitigate the class imbalance problem to some extent. We did not tune the parameters of the different algorithms presented in our experiment. It will also be interesting to use word2vec/GloVe word embeddings to see how they behave under the above transformations. Since the words in these word embeddings are mostly clean and without any spurious/special characters, we cannot use the pre-trained word vectors on raw data; to compare apples to apples, the embedding vectors need to be trained on the corpora from scratch, which is time consuming. Also, we only considered six composite transformations, which is not comprehensive in any way, and we will be taking this issue up in the future. We also looked only at Jigsaw's Wikipedia data. This paper gives NLP researchers an idea of the worth of spending time on transformations of toxic data. Based on the results we have, our recommendation is not to spend too much time on the transformations but rather to focus on the selection of the best algorithms. All the code, data and results can be found here: https://github.com/ifahim/toxic-preprocess Acknowledgements We would like to thank Joseph Batz and Christine Cheng for reviewing the draft and providing valuable feedback. We are also immensely grateful to Sasi Kuppanagari and Phani Vadali for their continued support and encouragement throughout this project.
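The re-balancing ideas floated in the discussion (oversampling the minority class, under-sampling the majority) were not part of this study; purely as an illustration, a minimal imbalanced-learn sketch would look like the following, assuming X_train is a 2-D feature matrix (e.g. TF-IDF) and y_train holds the 0/1 abusive labels.

from collections import Counter
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler

ros = RandomOverSampler(random_state=0)            # replicate abusive examples
X_over, y_over = ros.fit_resample(X_train, y_train)

rus = RandomUnderSampler(random_state=0)           # drop clean examples
X_under, y_under = rus.fit_resample(X_train, y_train)

print(Counter(y_over), Counter(y_under))           # both resampled sets are now balanced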
See Figure FIGREF3
c3a9732599849ba4a9f07170ce1e50867cf7d7bf
c3a9732599849ba4a9f07170ce1e50867cf7d7bf_0
Q: What state of the art models are used in the experiments? Text: Introduction Lately, there has been enormous increase in User Generated Contents (UGC) on the online platforms such as newsgroups, blogs, online forums and social networking websites. According to the January 2018 report, the number of active users in Facebook, YouTube, WhatsApp, Facebook Messenger and WeChat was more than 2.1, 1.5, 1.3, 1.3 and 0.98 billions respectively BIBREF1 . The UGCs, most of the times, are helpful but sometimes, they are in bad taste usually posted by trolls, spammers and bullies. According to a study by McAfee, 87% of the teens have observed cyberbullying online BIBREF2 . The Futures Company found that 54% of the teens witnessed cyber bullying on social media platforms BIBREF3 . Another study found 27% of all American internet users self-censor their online postings out of fear of online harassment BIBREF4 . Filtering toxic comments is a challenge for the content providers as their appearances result in the loss of subscriptions. In this paper, we will be using toxic and abusive terms interchangeably to represent comments which are inappropriate, disrespectful, threat or discriminative. Toxic comment classification on online channels is conventionally carried out either by moderators or with the help of text classification tools BIBREF5 . With recent advances in Deep Learning (DL) techniques, researchers are exploring if DL can be used for comment classification task. Jigsaw launched Perspective (www.perspectiveapi.com), which uses ML to automatically attach a confidence score to a comment to show the extent to which a comment is considered toxic. Kaggle also hosted an online competition on toxic classification challenge recently BIBREF6 . Text transformation is the very first step in any form of text classification. The online comments are generally in non-standard English and contain lots of spelling mistakes partly because of typos (resulting from small screens of the mobile devices) but more importantly because of the deliberate attempt to write the abusive comments in creative ways to dodge the automatic filters. In this paper we have identified 20 different atomic transformations (plus 15 sequence of transformations) to preprocess the texts. We will apply four different ML models which are considered among the best to see how much we gain by performing those transformations. The rest of the paper is organized as follows: Section 2 focuses on the relevant research in the area of toxic comment classification. Section 3 focuses on the preprocessing methods which are taken into account in this paper. Section 4 is on ML methods used. Section 5 is dedicated to results and section 6 is discussion and future work. Relevant Research A large number of studies have been done on comment classification in the news, finance and similar other domains. One such study to classify comments from news domain was done with the help of mixture of features such as the length of comments, uppercase and punctuation frequencies, lexical features such as spelling, profanity and readability by applying applied linear and tree based classifier BIBREF7 . FastText, developed by the Facebook AI research (FAIR) team, is a text classification tool suitable to model text involving out-of-vocabulary (OOV) words BIBREF8 BIBREF9 . Zhang et al shown that character level CNN works well for text classification without the need for words BIBREF10 . 
Abusive/toxic comment classification Toxic comment classification is relatively new field and in recent years, different studies have been carried out to automatically classify toxic comments.Yin et.al. proposed a supervised classification method with n-grams and manually developed regular expressions patterns to detect abusive language BIBREF11 . Sood et. al. used predefined blacklist words and edit distance metric to detect profanity which allowed them to catch words such as sh!+ or @ss as profane BIBREF12 . Warner and Hirschberg detected hate speech by annotating corpus of websites and user comments geared towards detecting anti-semitic hate BIBREF13 . Nobata et. al. used manually labeled online user comments from Yahoo! Finance and news website for detecting hate speech BIBREF5 . Chen et. al. performed feature engineering for classification of comments into abusive, non-abusive and undecided BIBREF14 . Georgakopoulos and Plagianakos compared performance of five different classifiers namely; Word embeddings and CNN, BoW approach SVM, NB, k-Nearest Neighbor (kNN) and Linear Discriminated Analysis (LDA) and found that CNN outperform all other methods in classifying toxic comments BIBREF15 . Preprocessing of online comments We found few dedicated papers that address the effect of incorporating different text transformations on the model accuracy for sentiment classification. Uysal and Gunal shown the impact of transformation on text classification by taking into account four transformations and their all possible combination on news and email domain to observe the classification accuracy. Their experimental analyses shown that choosing appropriate combination may result in significant improvement on classification accuracy BIBREF16 . Nobata et. al. used normalization of numbers, replacing very long unknown words and repeated punctuations with the same token BIBREF5 . Haddi et. al. explained the role of transformation in sentiment analyses and demonstrated with the help of SVM on movie review database that the accuracies improve significantly with the appropriate transformation and feature selection. They used transformation methods such as white space removal, expanding abbreviation, stemming, stop words removal and negation handling BIBREF17 . Other papers focus more on modeling as compared to transformation. For example, Wang and manning filter out anything from corpus that is not alphabet. However, this would filter out all the numbers, symbols, Instant Messages (IM) codes, acronyms such as $#!+, 13itch, </3 (broken heart), a$$ which gives completely different meaning to the words or miss out a lot of information. In another sentiment analyses study, Bao et. al. used five transformations namely URLs features reservation, negation transformation, repeated letters normalization, stemming and lemmatization on twitter data and applied linear classifier available in WEKA machine learning tool. They found the accuracy of the classification increases when URLs features reservation, negation transformation and repeated letters normalization are employed while decreases when stemming and lemmatization are applied BIBREF18 . Jianqiang and Xiaolin also looked at the effect of transformation on five different twitter datasets in order to perform sentiment classification and found that removal of URLs, the removal of stop words and the removal of numbers have minimal effect on accuracy whereas replacing negation and expanding acronyms can improve the accuracy. 
Most of the exploration regarding application of the transformation has been around the sentiment classification on twitter data which is length-restricted. The length of online comments varies and may range from a couple of words to a few paragraphs. Most of the authors used conventional ML models such as SVM, LR, RF and NB. We are expanding our candidate pool for transformations and using latest state-of-the-art models such as LR, NBSVM, XGBoost and Bidirectional LSTM model using fastText’s skipgram word vector. Preprocessing tasks The most intimidating challenge with the online comments data is that the words are non-standard English full of typos and spurious characters. The number of words in corpora are multi-folds because of different reasons including comments originating from mobile devices, use of acronyms, leetspeak words (http://1337.me/), or intentionally obfuscating words to avoid filters by inserting spurious characters, using phonemes, dropping characters etc. Having several forms of the same word result in feature explosion making it difficult for the model to train. Therefore, it seems natural to perform some transformation before feeding the data to the learning algorithm. To explore how helpful these transformations are, we incorporated 20 simple transformations and 15 additional sequences of transformations in our experiment to see their effect on different type of metrics on four different ML models (See Figure FIGREF3 ). The preprocessing steps are usually performed in sequence of multiple transformations. In this work, we considered 15 combinations of the above transformations that seemed natural to us: Preprocess-order-1 through 15 in the above table represent composite transformations. For instance, PPO-11-LWTN-CoAcBkPrCm represents sequence of the following transformations of the raw text in sequence: Change to lower case INLINEFORM0 remove white spaces INLINEFORM1 trim words len INLINEFORM2 remove Non Printable characters INLINEFORM3 replace contraction INLINEFORM4 replace acronym INLINEFORM5 replace blacklist using regex INLINEFORM6 replace profane words using fuzzy INLINEFORM7 replace common words using fuzzy. Datasets We downloaded the data for our experiment from the Kaggle’s toxic comment classification challenge sponsored by Jigsaw (An incubator within Alphabet). The dataset contains comments from Wikipedia’s talk page edits which have been labeled by human raters for toxicity. Although there are six classes in all: ‘toxic’, ‘severe toxic’, ‘obscene’, ‘threat’, ‘insult’ and ‘identity hate’, to simplify the problem, we combined all the labels and created another label ‘abusive’. A comment is labeled in any one of the six class, then it is categorized as ‘abusive’ else the comment is considered clean or non-abusive. We only used training data for our experiment which has 159,571 labeled comments. Models Used We used four classification algorithms: 1) Logistic regression, which is conventionally used in sentiment classification. Other three algorithms which are relatively new and has shown great results on sentiment classification types of problems are: 2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost) and 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM). The linear models such as logistic regression or classifiers are used by many researchers for Twitter comments sentiment analyses BIBREF7 BIBREF18 BIBREF19 BIBREF20 . Naveed et. al. 
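To make the composite-transformation idea concrete, a minimal sketch of how such a pipeline could be wired together is shown below. It is illustrative only: just two of the atomic steps are written out, the remaining step names are placeholders matching the sequence quoted for PPO-11-LWTN-CoAcBkPrCm, and the label collapse at the end assumes the standard column names of the Kaggle Jigsaw release.

import re
from functools import reduce

def to_lower(text):
    return text.lower()

def remove_whitespaces(text):
    return re.sub(r'\s+', ' ', text).strip()

def compose(*steps):
    # chain atomic transformations left-to-right into one composite transformation
    return lambda text: reduce(lambda t, step: step(t), steps, text)

# e.g. PPO-11-LWTN-CoAcBkPrCm would chain nine atomic steps in the stated order:
# ppo_11 = compose(to_lower, remove_whitespaces, trim_word_len, remove_non_printable,
#                  replace_contraction, replace_acronym, replace_blacklist_regex,
#                  replace_profane_fuzzy, replace_common_fuzzy)

clean_ish = compose(to_lower, remove_whitespaces)
print(clean_ish("  WIKIPEDIA   Talk  PAGE "))      # -> "wikipedia talk page"

# Collapsing the six Jigsaw labels into one 'abusive' flag, given a pandas DataFrame df:
# label_cols = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# df['abusive'] = (df[label_cols].sum(axis=1) > 0).astype(int)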
used logistic regression for finding interestingness of tweet and the likelihood of a tweet being retweeted. Wang and Manning found that the logistic regression’s performance is at par with SVM for sentiment and topic classification purposes BIBREF21 . Wang and Manning, shown the variant of NB and SVM gave them the best result for sentiment classification. The NB did a good job on short texts while the SVM worked better on relatively longer texts BIBREF21 . Inclusion of bigrams produced consistent gains compared to methods such as Multinomial NB, SVM and BoWSVM (Bag of Words SVM). Considering these advantages, we decided to include NBSVM in our analyses as the length of online comments vary, ranging from few words to few paragraphs. The features are generated the way it is generated for the logit model above. Extreme Gradient Boosting (XGBoost) is a highly scalable tree-based supervised classifier BIBREF22 based on gradient boosting, proposed by Friedman BIBREF23 . This boosted models are ensemble of shallow trees which are weak learners with high bias and low variance. Although boosting in general has been used by many researchers for text classification BIBREF24 BIBREF25 , XGBoost implementation is relatively new and some of the winners of the ML competitions have used XGBoost BIBREF26 in their winning solution. We set the parameters of XGBoost as follows: number of round, evaluation metric, learning rate and maximum depth of the tree at 500, logloss, 0.01 and 6 respectively. FastText BIBREF9 is an open source library for word vector representation and text classification. It is highly memory efficient and significantly faster compared to other deep learning algorithms such as Char-CNN (days vs few seconds) and VDCNN (hours vs few seconds) and produce comparable accuracy BIBREF27 . The fastText uses both skipgram (words represented as bag of character n-grams) and continuous Bag of Words (CBOW) method. FastText is suitable to model text involving out-of-vocabulary (OOV) or rare words more suitable for detecting obscure words in online comments BIBREF9 . The Long Short Term Memory networks (LSTM) BIBREF28 , proposed by Hochreiter & Schmidhuber (1997), is a variant of RNN with an additional memory output for the self-looping connections and has the capability to remember inputs nearly 1000 time steps away. The Bidirectional LSTM (BiLSTM) is a further improvement on the LSTM where the network can see the context in either direction and can be trained using all available input information in the past and future of a specific time frame BIBREF29 BIBREF30 . We will be training our BiLSTM model on FastText skipgram (FastText-BiLSTM) embedding obtained using Facebook’s fastText algorithm. Using fastText algorithm, we created embedding matrix having width 100 and used Bidirectional LSTM followd by GlobalMaxPool1D, Dropout(0.2), Dense (50, activation = ‘relu’), Dropout(0.2), Dense (1, activation = ‘sigmoid’). Results We performed 10-fold cross validation by dividing the entire 159,571 comments into nearly 10 equal parts. We trained each of the four models mentioned above on nine folds and tested on the remaining tenth fold and repeated the same process for other folds as well. Eventually, we have Out-of-Fold (OOF) metrics for all 10 parts. We calculated average OOF CV metrics (accuracy, F1-score, logloss, number of misclassified samples) of all 10 folds. 
As the data distribution is highly skewed (16,225 out of 159,571 ( 10%) are abusive), the accuracy metric here is for reference purpose only as predicting only the majority class every single time can get us 90% accuracy. The transformation, ‘Raw’, represents the actual data free from any transformation and can be considered the baseline for comparison purposes. Overall, the algorithms showed similar trend for all the transformations or sequence of transformations. The NBSVM and FastText-BiLSTM showed similar accuracy with a slight upper edge to the FastText-BiLSTM (See the logloss plot in Fig. FIGREF15 ). For atomic transformations, NBSVM seemed to work better than fastText-BiLSTM and for composite transformations fastText-BiLSTM was better. Logistic regression performed better than the XGBoost algorithm and we guess that the XGBoost might be overfitting the data. A similar trend can be seen in the corresponding F1-score as well. One advantage about the NBSVM is that it is blazingly fast compared to the FastText-BiLSTM. We also calculated total number of misclassified comments (see Fig. FIGREF16 ). The transformation, Convert_to_lower, resulted in reduced accuracy for Logit and NBSVM and higher accuracy for fastText-BiLSTM and XGBoost. Similarly, removing_whitespaces had no effect on Logit, NBSM and XGBoost but the result of fastText-BiLSTM got worse. Only XGBoost was benefitted from replacing_acronyms and replace_contractions transformation. Both, remove_stopwords and remove_rare_words resulted in worse performance for all four algorithms. The transformation, remove_words_containing_non_alpha leads to drop in accuracy in all the four algorithms. This step might be dropping some useful words (sh**, sh1t, hello123 etc.) from the data and resulted in the worse performance. The widely used transformation, Remove_non_alphabet_chars (strip all non-alphabet characters from text), leads to lower performance for all except fastText-BiLSTM where the number of misclassified comments dropped from 6,229 to 5,794. The transformation Stemming seemed to be performing better compared with the Lemmatization for fastText-BiLSTM and XGBoost. For logistic regression and the XGBoost, the best result was achieved with PPO-15, where the number of misclassified comments reduced from 6,992 to 6,816 and from 9,864 to 8,919 respectively. For NBSVM, the best result was achieved using fuzzy_common_mapping (5,946 to 5,933) and for fastText-BiLSTM, the best result was with PPO-8 (6,217 to 5,715) (See Table 2). This shows that the NBSVM are not helped significantly by transformations. In contrast, transformations did help the fastText-BiLSTM significantly. We also looked at the effect of the transformations on the precision and recall the negative class. The fastText-BiLSTM and NBSVM performed consistently well for most of the transformations compared to the Logit and XGBoost. The precision for the XGBoost was the highest and the recall was lowest among the four algorithm pointing to the fact that the negative class data is not enough for this algorithm and the algorithm parameters needs to be tuned. The interpretation of F1-score is different based on the how the classes are distributed. For toxic data, toxic class is more important than the clean comments as the content providers do not want toxic comments to be shown to their users. Therefore, we want the negative class comments to have high F1-scores as compared to the clean comments. 
We also looked at the effect of the transformations on the precision and recall of the negative class. The F1-score for negative class is somewhere around 0.8 for NBSVM and fastText-BiLSTM, for logit this value is around 0.74 and for XGBoost, the value is around 0.57. The fastText-BiLSTM and NBSVM performed consistently well for most of the transformations compared to the Logit and XGBoost. The precision for the XGBoost was the highest and the recall was lowest among the four algorithm pointing to the fact that the negative class data is not enough for this algorithm and the algorithm parameters needs to be tuned. Discussion and Future Work We spent quite a bit of time on transformation of the toxic data set in the hope that it will ultimately increase the accuracy of our classifiers. However, we empirically found that our intuition, to a large extent, was wrong. Most of the transformations resulted in reduced accuracy for Logit and NBSVM. We considered a total of 35 different ways to transform the data. Since, there will be exponential number of possible transformation sequences to try, we selected only 15 that we thought reasonable. Changing the order can have a different outcome as well. Most of the papers on sentiment classification, that we reviewed, resulted in better accuracy after application of some of these transformations, however, for us it was not completely true. We are not sure about the reason but out best guess is that the twitter data is character-limited while our comment data has no restriction on the size. The toxic data is unbalanced and we did not try to balance the classes in this experiment. It would be interesting to know what happens when we do oversampling BIBREF31 of the minority class or under-sampling of majority class or a combination of both. Pseudo-labeling BIBREF32 can also be used to mitigate the class imbalance problem to some extent. We did not tune the parameters of different algorithms presented in our experiment. It will also be interesting to use word2vec/GloVe word embedding to see how they behave during the above transformations. Since the words in these word embedding are mostly clean and without any spurious/special characters, we can't use the pre-trained word vectors on raw data. To compare apple to apple, the embedding vectors needs to be trained on the corpora from scratch which is time consuming. Also, we only considered six composite transformations which is not comprehensive in any way and will be taking this issue up in the future. We also looked only at the Jigsaw's Wikipedia data only. This paper gives an idea to the NLP researchers on the worth of spending time on transformations of toxic data. Based on the results we have, our recommendation is not to spend too much time on the transformations rather focus on the selection of the best algorithms. All the codes, data and results can be found here: https://github.com/ifahim/toxic-preprocess Acknowledgements We would like to thank Joseph Batz and Christine Cheng for reviewing the draft and providing valuable feedback. We are also immensely grateful to Sasi Kuppanagari and Phani Vadali for their continued support and encouragement throughout this project.
2) Naïve Bayes with SVM (NBSVM), 3) Extreme Gradient Boosting (XGBoost), 4) FastText algorithm with Bidirectional LSTM (FastText-BiLSTM)
0fd678d24c86122b9ab27b73ef20216bbd9847d1
0fd678d24c86122b9ab27b73ef20216bbd9847d1_0
Q: What evaluation metrics are used? Text: Introduction The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, such as skip-gram BIBREF3 , GloVe BIBREF4 , etc. There are also pre-trained word embeddings, which can easily used in downstream tasks. However, on sentence level, there is still no generic sentence representation which is suitable for various NLP tasks. Currently, most of sentence encoding models are trained specifically for a certain task in a supervised way, which results to different representations for the same sentence in different tasks. Taking the following sentence as an example for domain classification task and sentiment classification task, general text classification models always learn two representations separately. For domain classification, the model can learn a better representation of “infantile cart” while for sentiment classification, the model is able to learn a better representation of “easy to use”. However, to train a good task-specific sentence representation from scratch, we always need to prepare a large dataset which is always unavailable or costly. To alleviate this problem, one approach is pre-training the model on large unlabeled corpora by unsupervised learning tasks, such as language modeling BIBREF0 . This unsupervised pre-training may be helpful to improve the final performance, but the improvement is not guaranteed since it does not directly optimize the desired task. Another approach is multi-task learning BIBREF5 , which is an effective approach to improve the performance of a single task with the help of other related tasks. However, most existing models on multi-task learning attempt to divide the representation of a sentence into private and shared spaces. The shared representation is used in all tasks, and the private one is different for each task. The two typical information sharing schemes are stacked shared-private scheme and parallel shared-private scheme (as shown in Figure SECREF2 and SECREF3 respectively). However, we cannot guarantee that a good sentence encoding model is learned by the shared layer. To learn a better shareable sentence representation, we propose a new information-sharing scheme for multi-task learning in this paper. In our proposed scheme, the representation of every sentence is fully shared among all different tasks. To extract the task-specific feature, we utilize the attention mechanism and introduce a task-dependent query vector to select the task-specific information from the shared sentence representation. The query vector of each task can be regarded as learnable parameters (static) or be generated dynamically. If we take the former example, in our proposed model these two classification tasks share the same representation which includes both domain information and sentiment information. On top of this shared representation, a task-specific query vector will be used to focus “infantile cart” for domain classification and “easy to use” for sentiment classification. The contributions of this papers can be summarized as follows. Neural Sentence Encoding Model The primary role of sentence encoding models is to represent the variable-length sentence or paragraphs as fixed-length dense vector (distributed representation). 
Currently, the effective neural sentence encoding models include neural Bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF2 , BIBREF6 , convolutional neural networks (CNN) BIBREF1 , BIBREF7 , BIBREF8 , and syntactic-based compositional model BIBREF9 , BIBREF10 , BIBREF11 . Given a text sequence INLINEFORM0 , we first use a lookup layer to get the vector representation (word embedding) INLINEFORM1 of each word INLINEFORM2 . Then we can use CNN or RNN to calculate the hidden state INLINEFORM3 of each position INLINEFORM4 . The final representation of a sentence could be either the final hidden state of the RNN or the max (or average) pooling from all hidden states of RNN (or CNN). We use bidirectional LSTM (BiLSTM) to gain some dependency between adjacent words. The update rule of each LSTM unit can be written as follows: DISPLAYFORM0 where INLINEFORM0 represents all the parameters of BiLSTM. The representation of the whole sequence is the average of the hidden states of all the positions, where INLINEFORM1 denotes the concatenation operation. Shared-Private Scheme in Multi-task Learning Multi-task Learning BIBREF5 utilizes the correlation between related tasks to improve classification by learning tasks in parallel, which has been widely used in various natural language processing tasks, such as text classification BIBREF12 , semantic role labeling BIBREF13 , machine translation BIBREF14 , and so on. To facilitate this, we give some explanation for notations used in this paper. Formally, we refer to INLINEFORM0 as a dataset with INLINEFORM1 samples for task INLINEFORM2 . Specifically, DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denote a sentence and corresponding label for task INLINEFORM2 . A common information sharing scheme is to divide the feature spaces into two parts: one is used to store task-specific features, the other is used to capture task-invariant features. As shown in Figure SECREF2 and SECREF3 , there are two schemes: stacked shared-private (SSP) scheme and parallel shared-private (PSP) scheme. In stacked scheme, the output of the shared LSTM layer is fed into the private LSTM layer, whose output is the final task-specific sentence representation. In parallel scheme, the final task-specific sentence representation is the concatenation of outputs from the shared LSTM layer and the private LSTM layer. For a sentence INLINEFORM0 and its label INLINEFORM1 in task INLINEFORM2 , its final representation is ultimately fed into the corresponding task-specific softmax layer for classification or other tasks. DISPLAYFORM0 where INLINEFORM0 is prediction probabilities; INLINEFORM1 is the final task-specific representation; INLINEFORM2 and INLINEFORM3 are task-specific weight matrix and bias vector respectively. The total loss INLINEFORM0 can be computed as: DISPLAYFORM0 where INLINEFORM0 (usually set to 1) is the weights for each task INLINEFORM1 respectively; INLINEFORM2 is the cross-entropy of the predicted and true distributions. A New Information-Sharing Scheme for Multi-task Learning The key factor of multi-task learning is the information sharing scheme in latent representation space. Different from the traditional shared-private scheme, we introduce a new scheme for multi-task learning on NLP tasks, in which the sentence representation is shared among all the tasks, the task-specific information is selected by attention mechanism. 
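A compact PyTorch sketch of the shared encoder and the summed cross-entropy objective described above. The 200-dimensional sizes follow the experimental section later in the text; everything else, including anything lost in the INLINEFORM/DISPLAYFORM placeholders, is filled in by assumption.

import torch
import torch.nn as nn

class SharedBiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=200, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        states, _ = self.bilstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden_dim)
        return states, states.mean(dim=1)                # all hidden states, averaged sentence vector

# One softmax head per task; the total loss is the weighted sum of per-task cross-entropies.
def total_loss(heads, batches, lambdas, loss_fn=nn.CrossEntropyLoss()):
    # batches: list of (task_representation, labels) pairs, one entry per task
    return sum(lam * loss_fn(heads[k](rep), y)
               for k, (lam, (rep, y)) in enumerate(zip(lambdas, batches)))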
In a certain task, not all information of a sentence is useful for the task, therefore we just need to select the key information from the sentence. Attention mechanism BIBREF15 , BIBREF16 is an effective method to select related information from a set of candidates. The attention mechanism can effectively solve the capacity problem of sequence models, thereby is widely used in many NLP tasks, such as machine translation BIBREF17 , textual entailment BIBREF18 and summarization BIBREF19 . Static Task-Attentive Sentence Encoding We first introduce the static task-attentive sentence encoding model, in which the task query vector is a static learnable parameter. As shown in Figure FIGREF19 , our model consists of one shared BiLSTM layer and an attention layer. Formally, for a sentence in task INLINEFORM0 , we first use BiLSTM to calculate the shared representation INLINEFORM1 . Then we use attention mechanism to select the task-specific information from a generic task-independent sentence representation. Following BIBREF17 , we use the dot-product attention to compute the attention distribution. We introduce a task-specific query vector INLINEFORM2 to calculate the attention distribution INLINEFORM3 over all positions. DISPLAYFORM0 where the task-specific query vector INLINEFORM0 is a learned parameter. The final task-specific representation INLINEFORM1 is summarized by DISPLAYFORM0 At last, a task-specific fully connected layer followed by a softmax non-linear layer processes the task-specific context INLINEFORM0 and predicts the probability distribution over classes. Dynamic Task-Attentive Sentence Encoding Different from the static task-attentive sentence encoding model, the query vectors of the dynamic task-attentive sentence encoding model are generated dynamically. When each task belongs to a different domain, we can introduce an auxiliary domain classifier to predict the domain (or task) of the specific sentence. Thus, the domain information is also included in the shared sentence representation, which can be used to generate the task-specific query vector of attention. The original tasks and the auxiliary task of domain classification (DC) are joint learned in our multi-task learning framework. The query vector INLINEFORM0 of DC task is static and needs be learned in training phrase. The domain information is also selected with attention mechanism. DISPLAYFORM0 where INLINEFORM0 is attention distribution of auxiliary DC task, and INLINEFORM1 is the attentive information for DC task, which is fed into the final classifier to predict its domain INLINEFORM2 . Since INLINEFORM0 contains the domain information, we can use it to generate a more flexible query vector DISPLAYFORM0 where INLINEFORM0 is a shared learnable weight matrix and INLINEFORM1 is a task-specific bias vector. When we set INLINEFORM2 , the dynamic query is equivalent to the static one. Experiment In this section, we investigate the empirical performances of our proposed architectures on three experiments. Exp I: Sentiment Classification We first conduct a multi-task experiment on sentiment classification. We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets. All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 . 
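Continuing the PyTorch sketch, the static task-attentive pooling reduces to dot-product attention with one learnable query per task; the dynamic variant (not shown in full) would replace self.queries[task_id] with a query computed from the domain-classification context via a shared linear map and a task-specific bias. The initialisation scale is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StaticTaskAttention(nn.Module):
    def __init__(self, num_tasks, state_dim):
        super().__init__()
        self.queries = nn.Parameter(0.1 * torch.randn(num_tasks, state_dim))

    def forward(self, states, task_id):
        # states: (batch, seq_len, state_dim) from the shared BiLSTM
        q = self.queries[task_id]                          # (state_dim,)
        alpha = F.softmax(states.matmul(q), dim=1)         # attention over positions
        return (alpha.unsqueeze(-1) * states).sum(dim=1)   # task-specific sentence vector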
We compare our proposed two information sharing schemes, static attentive sentence encoding (SA-MTL) and dynamic attentive sentence encoding (DA-MTL), with the following multi-task learning frameworks. FS-MTL: This model is a combination of a fully shared BiLSTM and a classifier. SSP-MTL: This is the stacked shared-private model as shown in Figure SECREF2 whose output of the shared BiLSTM layer is fed into the private BiLSTM layer. PSP-MTL: The is the parallel shared-private model as shown in Figure SECREF3 . The final sentence representation is the concatenation of both private and shared BiLSTM. ASP-MTL: This model is proposed by BIBREF20 based on PSP-MTL with uni-directional LSTM. The model uses adversarial training to separate task-invariant and task-specific features from different tasks. We initialize word embeddings with the 200d GloVe vectors (840B token version, BIBREF4 ). The other parameters are initialized by randomly sampling from uniform distribution in [-0.1, 0.1]. The mini-batch size is set to 32. For each task, we take hyperparameters which achieve the best performance on the development set via a small grid search. We use ADAM optimizer BIBREF21 with the learning rate of INLINEFORM0 . The BiLSTM models have 200 dimensions in each direction, and dropout with probability of INLINEFORM1 . During the training step of multi-task models, we select different tasks randomly. After the training step, we fix the parameters of the shared BiLSTM and fine tune every task. Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models. We also present the convergence properties of our models on the development datasets compared to other multi-task models in Figure FIGREF36 . We can see that PSP-MTL converges much more slowly than the rest four models because each task-specific classifier should consider the output of shared layer which is quite unstable during the beginning of training phrase. Moreover, benefit from the attention mechanism which is useful in feature extraction, SA-TML and DA-MTL are converged much more quickly than the rest of models. Since all the tasks share the same sentence encoding layer, the query vector INLINEFORM0 of each task determines which part of the sentence to attend. Thus, similar tasks should have the similar query vectors. Here we simply calculate the Frobenius norm of each pair of tasks' INLINEFORM1 as the similarity. Figure FIGREF38 shows the similarity matrix of different task's query vector INLINEFORM2 in static attentive model. A darker cell means the higher similarity of the two task's INLINEFORM3 . 
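One plausible reading of the joint training procedure described above (random task selection, mini-batches of 32, Adam), reusing the encoder and attention sketches from before. heads, loaders and the learning rate are assumptions; the exact rate is lost in the INLINEFORM placeholder.

import random
import torch
import torch.nn as nn

def train_multitask(encoder, attention, heads, loaders, num_steps=10000, lr=1e-3):
    params = (list(encoder.parameters()) + list(attention.parameters())
              + [p for head in heads for p in head.parameters()])
    optimizer = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    iters = [iter(dl) for dl in loaders]
    for _ in range(num_steps):
        k = random.randrange(len(loaders))     # pick one of the tasks at random
        try:
            x, y = next(iters[k])
        except StopIteration:                  # restart that task's data loader
            iters[k] = iter(loaders[k])
            x, y = next(iters[k])
        states, _ = encoder(x)
        logits = heads[k](attention(states, k))
        loss = loss_fn(logits, y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()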
Since the cells on the diagonal of the matrix denote the similarity of a task with itself, we leave them blank because they are meaningless. It is easy to see that the INLINEFORM4 of “DVD”, “Video” and “IMDB” have very high similarity. This makes sense because they are all reviews related to movies. However, another movie review task, “MR”, has very low similarity to these three tasks, probably because the text in “MR” is very short, which makes it different from these tasks. The similarity of INLINEFORM5 between “Books” and “Video” is also very high because these two datasets share a lot of similar sentiment expressions. As shown in Figure FIGREF40 , we also show the attention distributions on a real example selected from the book review dataset. This piece of text involves two domains: the review is negative in the book domain while it is positive from the perspective of a movie review. In our SA-MTL model, the “Books” review classifier focuses on the negative aspect of the book and evaluates the text as negative. In contrast, the “DVD” review classifier focuses on the positive part of the movie and produces a positive result. In the case of DA-MTL, the model first focuses on the two domain words “book” and “movie” and judges the text to be a book review because “book” has a higher weight. Then, the model dynamically generates a query INLINEFORM0 and focuses on the book-review part of this text, thereby finally predicting a negative sentiment. Exp II: Transferability of Shared Sentence Representation With the attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered off-the-shelf knowledge and then be used for unseen new tasks. To test the transferability of our learned shared representation, we also design an experiment shown in Table TABREF46 . The multi-task learning results are derived by training the first 6 tasks in general multi-task learning. For transfer learning, we choose the last 10 tasks to train our model with multi-task learning, then the learned shared sentence encoding layer is kept frozen and transferred to train the first 6 tasks. As shown in Table TABREF46 , SA-MTL and DA-MTL achieve better transfer learning performance than SSP-MTL and PSP-MTL. The reason is that, by using the attention mechanism, richer information can be captured in the shared representation layer, thereby benefiting the other tasks. Exp III: Introducing Sequence Labeling as Auxiliary Task A good sentence representation should include its linguistic information. Therefore, we incorporate a sequence labeling task (such as POS Tagging or Chunking) as an auxiliary task into the multi-task learning framework, trained jointly with the primary tasks (the above 16 sentiment classification tasks). The auxiliary task shares the sentence encoding layer with the primary tasks and is connected to a private fully connected layer followed by a softmax non-linear layer that processes every hidden state INLINEFORM0 and predicts the labels. We use the CoNLL 2000 BIBREF22 sequence labeling dataset for both the POS Tagging and Chunking tasks. There are 8774 sentences in the training data, 500 sentences in the development data and 1512 sentences in the test data. The average sentence length is 24 and the total vocabulary size is about 17k. The experiment results are shown in Table TABREF51 . We use the same hyperparameters and training procedure as in the former experiments.
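A short sketch of the transfer protocol above: after multi-task training on the source tasks, the shared encoder is frozen and only a fresh task query and classifier head are fitted on each target task. The 400-dimensional state size continues the 200-per-direction assumption from the earlier sketches; the learning rate is likewise an assumption.

import torch
import torch.nn as nn

def make_transfer_head(encoder, num_classes, state_dim=400):
    for p in encoder.parameters():
        p.requires_grad = False                           # shared BiLSTM stays fixed
    query = nn.Parameter(0.1 * torch.randn(state_dim))    # new task's attention query
    head = nn.Linear(state_dim, num_classes)              # new task's classifier
    optimizer = torch.optim.Adam([query, *head.parameters()], lr=1e-3)
    return query, head, optimizer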
The result shows that by leveraging auxiliary tasks, the performances of SA-MTL and DA-MTL achieve more improvement than PSP-MTL and SSP-MTL. For further analysis, Figure FIGREF53 shows the attention distribution produced by models trained with and without Chunking task on two pieces of texts. In the first piece of text, both of the models attend to the first “like” because it represents positive sentiment on the book. The model trained with Chunking task also labels the three “like” as 'B-VP' (beginning of verb phrase) correctly. However, in the second piece of text, the same work “like” denotes a preposition and has no sentiment meaning. The model trained without Chunking task fails to tell the difference with the former text and focuses on it and produces the result as positive. Meanwhile, the model trained with Chunking task successfully labels the “like” as 'B-PP' (beginning of prepositional phrase) and pays little attention to it and produces the right answer as negative. This example shows how the model trained with auxiliary task helps the primary tasks. Related Work Neural networks based multi-task learning has been proven effective in many NLP problems BIBREF13 , BIBREF23 , BIBREF12 , BIBREF20 , BIBREF24 In most of these models, there exists a task-dependent private layer separated from the shared layer. The private layers play more important role in these models. Different from them, our model encodes all information into a shared representation layer, and uses attention mechanism to select the task-specific information from the shared representation layer. Thus, our model can learn a better generic sentence representation, which also has a strong transferability. Some recent work have also proposed sentence representation using attention mechanism. BIBREF25 uses a 2-D matrix, whose each row attending on a different part of the sentence, to represent the embedding. BIBREF26 introduces multi-head attention to jointly attend to information from different representation subspaces at different positions. BIBREF27 introduces human reading time as attention weights to improve sentence representation. Different from these work, we use attention vector to select the task-specific information from a shared sentence representation. Thus the learned sentence representation is much more generic and easy to transfer information to new tasks. Conclusion In this paper, we propose a new information-sharing scheme for multi-task learning, which uses attention mechanism to select the task-specific information from a shared sentence encoding layer. We conduct extensive experiments on 16 different sentiment classification tasks, which demonstrates the benefits of our models. Moreover, the shared sentence encoding model can be transferred to other tasks, which can be further boosted by introducing auxiliary tasks.
Accuracy on each dataset and the average accuracy on all datasets.
b556fd3a9e0cff0b33c63fa1aef3aed825f13e28
b556fd3a9e0cff0b33c63fa1aef3aed825f13e28_0
Q: What dataset did they use? Text: Introduction The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, such as skip-gram BIBREF3 , GloVe BIBREF4 , etc. There are also pre-trained word embeddings, which can easily used in downstream tasks. However, on sentence level, there is still no generic sentence representation which is suitable for various NLP tasks. Currently, most of sentence encoding models are trained specifically for a certain task in a supervised way, which results to different representations for the same sentence in different tasks. Taking the following sentence as an example for domain classification task and sentiment classification task, general text classification models always learn two representations separately. For domain classification, the model can learn a better representation of “infantile cart” while for sentiment classification, the model is able to learn a better representation of “easy to use”. However, to train a good task-specific sentence representation from scratch, we always need to prepare a large dataset which is always unavailable or costly. To alleviate this problem, one approach is pre-training the model on large unlabeled corpora by unsupervised learning tasks, such as language modeling BIBREF0 . This unsupervised pre-training may be helpful to improve the final performance, but the improvement is not guaranteed since it does not directly optimize the desired task. Another approach is multi-task learning BIBREF5 , which is an effective approach to improve the performance of a single task with the help of other related tasks. However, most existing models on multi-task learning attempt to divide the representation of a sentence into private and shared spaces. The shared representation is used in all tasks, and the private one is different for each task. The two typical information sharing schemes are stacked shared-private scheme and parallel shared-private scheme (as shown in Figure SECREF2 and SECREF3 respectively). However, we cannot guarantee that a good sentence encoding model is learned by the shared layer. To learn a better shareable sentence representation, we propose a new information-sharing scheme for multi-task learning in this paper. In our proposed scheme, the representation of every sentence is fully shared among all different tasks. To extract the task-specific feature, we utilize the attention mechanism and introduce a task-dependent query vector to select the task-specific information from the shared sentence representation. The query vector of each task can be regarded as learnable parameters (static) or be generated dynamically. If we take the former example, in our proposed model these two classification tasks share the same representation which includes both domain information and sentiment information. On top of this shared representation, a task-specific query vector will be used to focus “infantile cart” for domain classification and “easy to use” for sentiment classification. The contributions of this papers can be summarized as follows. Neural Sentence Encoding Model The primary role of sentence encoding models is to represent the variable-length sentence or paragraphs as fixed-length dense vector (distributed representation). 
Currently, the effective neural sentence encoding models include neural Bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF2 , BIBREF6 , convolutional neural networks (CNN) BIBREF1 , BIBREF7 , BIBREF8 , and syntactic-based compositional model BIBREF9 , BIBREF10 , BIBREF11 . Given a text sequence INLINEFORM0 , we first use a lookup layer to get the vector representation (word embedding) INLINEFORM1 of each word INLINEFORM2 . Then we can use CNN or RNN to calculate the hidden state INLINEFORM3 of each position INLINEFORM4 . The final representation of a sentence could be either the final hidden state of the RNN or the max (or average) pooling from all hidden states of RNN (or CNN). We use bidirectional LSTM (BiLSTM) to gain some dependency between adjacent words. The update rule of each LSTM unit can be written as follows: DISPLAYFORM0 where INLINEFORM0 represents all the parameters of BiLSTM. The representation of the whole sequence is the average of the hidden states of all the positions, where INLINEFORM1 denotes the concatenation operation. Shared-Private Scheme in Multi-task Learning Multi-task Learning BIBREF5 utilizes the correlation between related tasks to improve classification by learning tasks in parallel, which has been widely used in various natural language processing tasks, such as text classification BIBREF12 , semantic role labeling BIBREF13 , machine translation BIBREF14 , and so on. To facilitate this, we give some explanation for notations used in this paper. Formally, we refer to INLINEFORM0 as a dataset with INLINEFORM1 samples for task INLINEFORM2 . Specifically, DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denote a sentence and corresponding label for task INLINEFORM2 . A common information sharing scheme is to divide the feature spaces into two parts: one is used to store task-specific features, the other is used to capture task-invariant features. As shown in Figure SECREF2 and SECREF3 , there are two schemes: stacked shared-private (SSP) scheme and parallel shared-private (PSP) scheme. In stacked scheme, the output of the shared LSTM layer is fed into the private LSTM layer, whose output is the final task-specific sentence representation. In parallel scheme, the final task-specific sentence representation is the concatenation of outputs from the shared LSTM layer and the private LSTM layer. For a sentence INLINEFORM0 and its label INLINEFORM1 in task INLINEFORM2 , its final representation is ultimately fed into the corresponding task-specific softmax layer for classification or other tasks. DISPLAYFORM0 where INLINEFORM0 is prediction probabilities; INLINEFORM1 is the final task-specific representation; INLINEFORM2 and INLINEFORM3 are task-specific weight matrix and bias vector respectively. The total loss INLINEFORM0 can be computed as: DISPLAYFORM0 where INLINEFORM0 (usually set to 1) is the weights for each task INLINEFORM1 respectively; INLINEFORM2 is the cross-entropy of the predicted and true distributions. A New Information-Sharing Scheme for Multi-task Learning The key factor of multi-task learning is the information sharing scheme in latent representation space. Different from the traditional shared-private scheme, we introduce a new scheme for multi-task learning on NLP tasks, in which the sentence representation is shared among all the tasks, the task-specific information is selected by attention mechanism. 
In a certain task, not all information of a sentence is useful for the task, therefore we just need to select the key information from the sentence. Attention mechanism BIBREF15 , BIBREF16 is an effective method to select related information from a set of candidates. The attention mechanism can effectively solve the capacity problem of sequence models, thereby is widely used in many NLP tasks, such as machine translation BIBREF17 , textual entailment BIBREF18 and summarization BIBREF19 . Static Task-Attentive Sentence Encoding We first introduce the static task-attentive sentence encoding model, in which the task query vector is a static learnable parameter. As shown in Figure FIGREF19 , our model consists of one shared BiLSTM layer and an attention layer. Formally, for a sentence in task INLINEFORM0 , we first use BiLSTM to calculate the shared representation INLINEFORM1 . Then we use attention mechanism to select the task-specific information from a generic task-independent sentence representation. Following BIBREF17 , we use the dot-product attention to compute the attention distribution. We introduce a task-specific query vector INLINEFORM2 to calculate the attention distribution INLINEFORM3 over all positions. DISPLAYFORM0 where the task-specific query vector INLINEFORM0 is a learned parameter. The final task-specific representation INLINEFORM1 is summarized by DISPLAYFORM0 At last, a task-specific fully connected layer followed by a softmax non-linear layer processes the task-specific context INLINEFORM0 and predicts the probability distribution over classes. Dynamic Task-Attentive Sentence Encoding Different from the static task-attentive sentence encoding model, the query vectors of the dynamic task-attentive sentence encoding model are generated dynamically. When each task belongs to a different domain, we can introduce an auxiliary domain classifier to predict the domain (or task) of the specific sentence. Thus, the domain information is also included in the shared sentence representation, which can be used to generate the task-specific query vector of attention. The original tasks and the auxiliary task of domain classification (DC) are joint learned in our multi-task learning framework. The query vector INLINEFORM0 of DC task is static and needs be learned in training phrase. The domain information is also selected with attention mechanism. DISPLAYFORM0 where INLINEFORM0 is attention distribution of auxiliary DC task, and INLINEFORM1 is the attentive information for DC task, which is fed into the final classifier to predict its domain INLINEFORM2 . Since INLINEFORM0 contains the domain information, we can use it to generate a more flexible query vector DISPLAYFORM0 where INLINEFORM0 is a shared learnable weight matrix and INLINEFORM1 is a task-specific bias vector. When we set INLINEFORM2 , the dynamic query is equivalent to the static one. Experiment In this section, we investigate the empirical performances of our proposed architectures on three experiments. Exp I: Sentiment Classification We first conduct a multi-task experiment on sentiment classification. We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets. All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 . 
We compare our proposed two information sharing schemes, static attentive sentence encoding (SA-MTL) and dynamic attentive sentence encoding (DA-MTL), with the following multi-task learning frameworks. FS-MTL: This model is a combination of a fully shared BiLSTM and a classifier. SSP-MTL: This is the stacked shared-private model as shown in Figure SECREF2 whose output of the shared BiLSTM layer is fed into the private BiLSTM layer. PSP-MTL: The is the parallel shared-private model as shown in Figure SECREF3 . The final sentence representation is the concatenation of both private and shared BiLSTM. ASP-MTL: This model is proposed by BIBREF20 based on PSP-MTL with uni-directional LSTM. The model uses adversarial training to separate task-invariant and task-specific features from different tasks. We initialize word embeddings with the 200d GloVe vectors (840B token version, BIBREF4 ). The other parameters are initialized by randomly sampling from uniform distribution in [-0.1, 0.1]. The mini-batch size is set to 32. For each task, we take hyperparameters which achieve the best performance on the development set via a small grid search. We use ADAM optimizer BIBREF21 with the learning rate of INLINEFORM0 . The BiLSTM models have 200 dimensions in each direction, and dropout with probability of INLINEFORM1 . During the training step of multi-task models, we select different tasks randomly. After the training step, we fix the parameters of the shared BiLSTM and fine tune every task. Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models. We also present the convergence properties of our models on the development datasets compared to other multi-task models in Figure FIGREF36 . We can see that PSP-MTL converges much more slowly than the rest four models because each task-specific classifier should consider the output of shared layer which is quite unstable during the beginning of training phrase. Moreover, benefit from the attention mechanism which is useful in feature extraction, SA-TML and DA-MTL are converged much more quickly than the rest of models. Since all the tasks share the same sentence encoding layer, the query vector INLINEFORM0 of each task determines which part of the sentence to attend. Thus, similar tasks should have the similar query vectors. Here we simply calculate the Frobenius norm of each pair of tasks' INLINEFORM1 as the similarity. Figure FIGREF38 shows the similarity matrix of different task's query vector INLINEFORM2 in static attentive model. A darker cell means the higher similarity of the two task's INLINEFORM3 . 
Since the cells in the diagnose of the matrix denotes the similarity of one task, we leave them blank because they are meaningless. It's easy to find that INLINEFORM4 of “DVD”, “Video” and “IMDB” have very high similarity. It makes sense because they are all reviews related to movie. However, another movie review “MR” has very low similarity to these three task. It's probably that the text in “MR” is very short that makes it different from these tasks. The similarity of INLINEFORM5 from “Books” and “Video” is also very high because these two datasets share a lot of similar sentiment expressions. As shown in Figure FIGREF40 , we also show the attention distributions on a real example selected from the book review dataset. This piece of text involves two domains. The review is negative in the book domain while it is positive from the perspective of movie review. In our SA-MTL model, the “Books” review classifier from SA-MTL focus on the negative aspect of the book and evaluate the text as negative. In contrast, the “DVD” review classifier focuses on the positive part of the movie and produce the result as positive. In case of DA-MTL, the model first focuses on the two domain words “book” and “movie” and judge the text is a book review because “book” has a higher weight. Then, the model dynamically generates a query INLINEFORM0 and focuses on the part of the book review in this text, thereby finally predicting a negative sentiment. Exp II: Transferability of Shared Sentence Representation With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks. To test the transferability of our learned shared representation, we also design an experiment shown in Table TABREF46 . The multi-task learning results are derived by training the first 6 tasks in general multi-task learning. For transfer learning, we choose the last 10 tasks to train our model with multi-task learning, then the learned shared sentence encoding layer are kept frozen and transferred to train the first 6 tasks. As shown in Table TABREF46 , we can see that SA-MTL and DA-MTL achieves better transfer learning performances compared to SSP-MTL and PSP-MTL. The reason is that by using attention mechanism, richer information can be captured into the shared representation layer, thereby benefiting the other task. Exp III: Introducing Sequence Labeling as Auxiliary Task A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). The auxiliary task shares the sentence encoding layer with the primary tasks and connected to a private fully connected layer followed by a softmax non-linear layer to process every hidden state INLINEFORM0 and predicts the labels. We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k. The experiment results are shown in Table TABREF51 . We use the same hyperparameters and training procedure as the former experiments. 
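A hedged PyTorch sketch of the auxiliary sequence-labeling head described above: every shared BiLSTM hidden state is passed through a task-private fully connected layer and scored with token-level cross-entropy. The representation size and tag-set size are assumed examples, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceLabelingHead(nn.Module):
    def __init__(self, rep_dim=400, num_tags=23):   # tag-set size is an assumed example
        super().__init__()
        self.proj = nn.Linear(rep_dim, num_tags)    # private fully connected layer

    def forward(self, h):                           # h: (batch, seq_len, rep_dim)
        return self.proj(h)                         # per-token tag logits

def labeling_loss(logits, tags):
    # logits: (batch, seq_len, num_tags), tags: (batch, seq_len) gold tag ids
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)), tags.reshape(-1))

# toy usage with random shared states
head = SequenceLabelingHead()
h = torch.randn(2, 10, 400)
print(labeling_loss(head(h), torch.randint(0, 23, (2, 10))))
```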
The result shows that by leveraging auxiliary tasks, the performances of SA-MTL and DA-MTL achieve more improvement than PSP-MTL and SSP-MTL. For further analysis, Figure FIGREF53 shows the attention distribution produced by models trained with and without Chunking task on two pieces of texts. In the first piece of text, both of the models attend to the first “like” because it represents positive sentiment on the book. The model trained with Chunking task also labels the three “like” as 'B-VP' (beginning of verb phrase) correctly. However, in the second piece of text, the same work “like” denotes a preposition and has no sentiment meaning. The model trained without Chunking task fails to tell the difference with the former text and focuses on it and produces the result as positive. Meanwhile, the model trained with Chunking task successfully labels the “like” as 'B-PP' (beginning of prepositional phrase) and pays little attention to it and produces the right answer as negative. This example shows how the model trained with auxiliary task helps the primary tasks. Related Work Neural networks based multi-task learning has been proven effective in many NLP problems BIBREF13 , BIBREF23 , BIBREF12 , BIBREF20 , BIBREF24 In most of these models, there exists a task-dependent private layer separated from the shared layer. The private layers play more important role in these models. Different from them, our model encodes all information into a shared representation layer, and uses attention mechanism to select the task-specific information from the shared representation layer. Thus, our model can learn a better generic sentence representation, which also has a strong transferability. Some recent work have also proposed sentence representation using attention mechanism. BIBREF25 uses a 2-D matrix, whose each row attending on a different part of the sentence, to represent the embedding. BIBREF26 introduces multi-head attention to jointly attend to information from different representation subspaces at different positions. BIBREF27 introduces human reading time as attention weights to improve sentence representation. Different from these work, we use attention vector to select the task-specific information from a shared sentence representation. Thus the learned sentence representation is much more generic and easy to transfer information to new tasks. Conclusion In this paper, we propose a new information-sharing scheme for multi-task learning, which uses attention mechanism to select the task-specific information from a shared sentence encoding layer. We conduct extensive experiments on 16 different sentiment classification tasks, which demonstrates the benefits of our models. Moreover, the shared sentence encoding model can be transferred to other tasks, which can be further boosted by introducing auxiliary tasks.
16 different datasets from several popular review corpora used in BIBREF20, CoNLL 2000 BIBREF22
0db1ba66a7e75e91e93d78c31f877364c3724a65
0db1ba66a7e75e91e93d78c31f877364c3724a65_0
Q: What tasks did they experiment with? Text: Introduction The distributed representation plays an important role in deep learning based natural language processing (NLP) BIBREF0 , BIBREF1 , BIBREF2 . On word level, many successful methods have been proposed to learn a good representation for single word, which is also called word embedding, such as skip-gram BIBREF3 , GloVe BIBREF4 , etc. There are also pre-trained word embeddings, which can easily used in downstream tasks. However, on sentence level, there is still no generic sentence representation which is suitable for various NLP tasks. Currently, most of sentence encoding models are trained specifically for a certain task in a supervised way, which results to different representations for the same sentence in different tasks. Taking the following sentence as an example for domain classification task and sentiment classification task, general text classification models always learn two representations separately. For domain classification, the model can learn a better representation of “infantile cart” while for sentiment classification, the model is able to learn a better representation of “easy to use”. However, to train a good task-specific sentence representation from scratch, we always need to prepare a large dataset which is always unavailable or costly. To alleviate this problem, one approach is pre-training the model on large unlabeled corpora by unsupervised learning tasks, such as language modeling BIBREF0 . This unsupervised pre-training may be helpful to improve the final performance, but the improvement is not guaranteed since it does not directly optimize the desired task. Another approach is multi-task learning BIBREF5 , which is an effective approach to improve the performance of a single task with the help of other related tasks. However, most existing models on multi-task learning attempt to divide the representation of a sentence into private and shared spaces. The shared representation is used in all tasks, and the private one is different for each task. The two typical information sharing schemes are stacked shared-private scheme and parallel shared-private scheme (as shown in Figure SECREF2 and SECREF3 respectively). However, we cannot guarantee that a good sentence encoding model is learned by the shared layer. To learn a better shareable sentence representation, we propose a new information-sharing scheme for multi-task learning in this paper. In our proposed scheme, the representation of every sentence is fully shared among all different tasks. To extract the task-specific feature, we utilize the attention mechanism and introduce a task-dependent query vector to select the task-specific information from the shared sentence representation. The query vector of each task can be regarded as learnable parameters (static) or be generated dynamically. If we take the former example, in our proposed model these two classification tasks share the same representation which includes both domain information and sentiment information. On top of this shared representation, a task-specific query vector will be used to focus “infantile cart” for domain classification and “easy to use” for sentiment classification. The contributions of this papers can be summarized as follows. Neural Sentence Encoding Model The primary role of sentence encoding models is to represent the variable-length sentence or paragraphs as fixed-length dense vector (distributed representation). 
Currently, the effective neural sentence encoding models include neural Bag-of-words (NBOW), recurrent neural networks (RNN) BIBREF2 , BIBREF6 , convolutional neural networks (CNN) BIBREF1 , BIBREF7 , BIBREF8 , and syntactic-based compositional model BIBREF9 , BIBREF10 , BIBREF11 . Given a text sequence INLINEFORM0 , we first use a lookup layer to get the vector representation (word embedding) INLINEFORM1 of each word INLINEFORM2 . Then we can use CNN or RNN to calculate the hidden state INLINEFORM3 of each position INLINEFORM4 . The final representation of a sentence could be either the final hidden state of the RNN or the max (or average) pooling from all hidden states of RNN (or CNN). We use bidirectional LSTM (BiLSTM) to gain some dependency between adjacent words. The update rule of each LSTM unit can be written as follows: DISPLAYFORM0 where INLINEFORM0 represents all the parameters of BiLSTM. The representation of the whole sequence is the average of the hidden states of all the positions, where INLINEFORM1 denotes the concatenation operation. Shared-Private Scheme in Multi-task Learning Multi-task Learning BIBREF5 utilizes the correlation between related tasks to improve classification by learning tasks in parallel, which has been widely used in various natural language processing tasks, such as text classification BIBREF12 , semantic role labeling BIBREF13 , machine translation BIBREF14 , and so on. To facilitate this, we give some explanation for notations used in this paper. Formally, we refer to INLINEFORM0 as a dataset with INLINEFORM1 samples for task INLINEFORM2 . Specifically, DISPLAYFORM0 where INLINEFORM0 and INLINEFORM1 denote a sentence and corresponding label for task INLINEFORM2 . A common information sharing scheme is to divide the feature spaces into two parts: one is used to store task-specific features, the other is used to capture task-invariant features. As shown in Figure SECREF2 and SECREF3 , there are two schemes: stacked shared-private (SSP) scheme and parallel shared-private (PSP) scheme. In stacked scheme, the output of the shared LSTM layer is fed into the private LSTM layer, whose output is the final task-specific sentence representation. In parallel scheme, the final task-specific sentence representation is the concatenation of outputs from the shared LSTM layer and the private LSTM layer. For a sentence INLINEFORM0 and its label INLINEFORM1 in task INLINEFORM2 , its final representation is ultimately fed into the corresponding task-specific softmax layer for classification or other tasks. DISPLAYFORM0 where INLINEFORM0 is prediction probabilities; INLINEFORM1 is the final task-specific representation; INLINEFORM2 and INLINEFORM3 are task-specific weight matrix and bias vector respectively. The total loss INLINEFORM0 can be computed as: DISPLAYFORM0 where INLINEFORM0 (usually set to 1) is the weights for each task INLINEFORM1 respectively; INLINEFORM2 is the cross-entropy of the predicted and true distributions. A New Information-Sharing Scheme for Multi-task Learning The key factor of multi-task learning is the information sharing scheme in latent representation space. Different from the traditional shared-private scheme, we introduce a new scheme for multi-task learning on NLP tasks, in which the sentence representation is shared among all the tasks, the task-specific information is selected by attention mechanism. 
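Before introducing the attentive selection, the shared BiLSTM sentence representation and the weighted multi-task objective defined above can be sketched as follows. This is a minimal PyTorch illustration with assumed sizes, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMEncoder(nn.Module):
    """Generic sentence encoder: the representation is the average of all
    BiLSTM hidden states, as described above."""
    def __init__(self, vocab_size=10000, emb_dim=200, hid_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.embed(tokens))  # (batch, seq_len, 2*hid_dim)
        return h.mean(dim=1)                    # average of all hidden states

def multitask_loss(per_task_logits, per_task_labels, lambdas):
    """Weighted sum of per-task cross-entropies (lambda_t is usually set to 1)."""
    total = 0.0
    for logits, labels, lam in zip(per_task_logits, per_task_labels, lambdas):
        total = total + lam * F.cross_entropy(logits, labels)
    return total

# toy usage
enc = BiLSTMEncoder()
reps = enc(torch.randint(0, 10000, (4, 12)))    # (4, 400) sentence vectors
```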
In a certain task, not all information of a sentence is useful for the task, therefore we just need to select the key information from the sentence. Attention mechanism BIBREF15 , BIBREF16 is an effective method to select related information from a set of candidates. The attention mechanism can effectively solve the capacity problem of sequence models, thereby is widely used in many NLP tasks, such as machine translation BIBREF17 , textual entailment BIBREF18 and summarization BIBREF19 . Static Task-Attentive Sentence Encoding We first introduce the static task-attentive sentence encoding model, in which the task query vector is a static learnable parameter. As shown in Figure FIGREF19 , our model consists of one shared BiLSTM layer and an attention layer. Formally, for a sentence in task INLINEFORM0 , we first use BiLSTM to calculate the shared representation INLINEFORM1 . Then we use attention mechanism to select the task-specific information from a generic task-independent sentence representation. Following BIBREF17 , we use the dot-product attention to compute the attention distribution. We introduce a task-specific query vector INLINEFORM2 to calculate the attention distribution INLINEFORM3 over all positions. DISPLAYFORM0 where the task-specific query vector INLINEFORM0 is a learned parameter. The final task-specific representation INLINEFORM1 is summarized by DISPLAYFORM0 At last, a task-specific fully connected layer followed by a softmax non-linear layer processes the task-specific context INLINEFORM0 and predicts the probability distribution over classes. Dynamic Task-Attentive Sentence Encoding Different from the static task-attentive sentence encoding model, the query vectors of the dynamic task-attentive sentence encoding model are generated dynamically. When each task belongs to a different domain, we can introduce an auxiliary domain classifier to predict the domain (or task) of the specific sentence. Thus, the domain information is also included in the shared sentence representation, which can be used to generate the task-specific query vector of attention. The original tasks and the auxiliary task of domain classification (DC) are joint learned in our multi-task learning framework. The query vector INLINEFORM0 of DC task is static and needs be learned in training phrase. The domain information is also selected with attention mechanism. DISPLAYFORM0 where INLINEFORM0 is attention distribution of auxiliary DC task, and INLINEFORM1 is the attentive information for DC task, which is fed into the final classifier to predict its domain INLINEFORM2 . Since INLINEFORM0 contains the domain information, we can use it to generate a more flexible query vector DISPLAYFORM0 where INLINEFORM0 is a shared learnable weight matrix and INLINEFORM1 is a task-specific bias vector. When we set INLINEFORM2 , the dynamic query is equivalent to the static one. Experiment In this section, we investigate the empirical performances of our proposed architectures on three experiments. Exp I: Sentiment Classification We first conduct a multi-task experiment on sentiment classification. We use 16 different datasets from several popular review corpora used in BIBREF20 . These datasets consist of 14 product review datasets and two movie review datasets. All the datasets in each task are partitioned randomly into training set, development set and testing set with the proportion of 70%, 10% and 20% respectively. The detailed statistics about all the datasets are listed in Table TABREF27 . 
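As a companion to the equations above, the dynamic query generation can be sketched in numpy as below: the attentive output of the auxiliary domain-classification (DC) task is linearly transformed into the query of task k, and with an all-zero weight matrix it degenerates to a static, bias-only query. All shapes are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_task_query(H, q_dc, W, b_k):
    """Generate a task-specific query from the auxiliary DC attention.

    H    : (seq_len, dim) shared BiLSTM states
    q_dc : (dim,)         static query vector of the auxiliary DC task
    W    : (dim, dim)     shared weight matrix
    b_k  : (dim,)         task-specific bias vector of task k
    """
    alpha_dc = softmax(H @ q_dc)   # attention distribution of the DC task
    s_dc = alpha_dc @ H            # attentive domain information
    return s_dc @ W + b_k          # dynamic query; reduces to b_k when W is zero

# toy usage: the task then attends with its dynamically generated query
H = np.random.randn(6, 8)
q_k = dynamic_task_query(H, np.random.randn(8), np.random.randn(8, 8), np.random.randn(8))
s_k = softmax(H @ q_k) @ H
```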
We compare our proposed two information sharing schemes, static attentive sentence encoding (SA-MTL) and dynamic attentive sentence encoding (DA-MTL), with the following multi-task learning frameworks. FS-MTL: This model is a combination of a fully shared BiLSTM and a classifier. SSP-MTL: This is the stacked shared-private model as shown in Figure SECREF2 whose output of the shared BiLSTM layer is fed into the private BiLSTM layer. PSP-MTL: The is the parallel shared-private model as shown in Figure SECREF3 . The final sentence representation is the concatenation of both private and shared BiLSTM. ASP-MTL: This model is proposed by BIBREF20 based on PSP-MTL with uni-directional LSTM. The model uses adversarial training to separate task-invariant and task-specific features from different tasks. We initialize word embeddings with the 200d GloVe vectors (840B token version, BIBREF4 ). The other parameters are initialized by randomly sampling from uniform distribution in [-0.1, 0.1]. The mini-batch size is set to 32. For each task, we take hyperparameters which achieve the best performance on the development set via a small grid search. We use ADAM optimizer BIBREF21 with the learning rate of INLINEFORM0 . The BiLSTM models have 200 dimensions in each direction, and dropout with probability of INLINEFORM1 . During the training step of multi-task models, we select different tasks randomly. After the training step, we fix the parameters of the shared BiLSTM and fine tune every task. Table TABREF34 shows the performances of the different methods. From the table, we can see that the performances of most tasks can be improved with the help of multi-task learning. FS-MTL shows the minimum performance gain from multi-task learning since it puts all private and shared information into a unified space. SSP-MTL and PSP-MTL achieve similar performance and are outperformed by ASP-MTL which can better separate the task-specific and task-invariant features by using adversarial training. Our proposed models (SA-MTL and DA-MTL) outperform ASP-MTL because we model a richer representation from these 16 tasks. Compared to SA-MTL, DA-MTL achieves a further improvement of INLINEFORM0 accuracy with the help of the dynamic and flexible query vector. It is noteworthy that our models are also space efficient since the task-specific information is extracted by using only a query vector, instead of a BiLSTM layer in the shared-private models. We also present the convergence properties of our models on the development datasets compared to other multi-task models in Figure FIGREF36 . We can see that PSP-MTL converges much more slowly than the rest four models because each task-specific classifier should consider the output of shared layer which is quite unstable during the beginning of training phrase. Moreover, benefit from the attention mechanism which is useful in feature extraction, SA-TML and DA-MTL are converged much more quickly than the rest of models. Since all the tasks share the same sentence encoding layer, the query vector INLINEFORM0 of each task determines which part of the sentence to attend. Thus, similar tasks should have the similar query vectors. Here we simply calculate the Frobenius norm of each pair of tasks' INLINEFORM1 as the similarity. Figure FIGREF38 shows the similarity matrix of different task's query vector INLINEFORM2 in static attentive model. A darker cell means the higher similarity of the two task's INLINEFORM3 . 
Since the cells in the diagnose of the matrix denotes the similarity of one task, we leave them blank because they are meaningless. It's easy to find that INLINEFORM4 of “DVD”, “Video” and “IMDB” have very high similarity. It makes sense because they are all reviews related to movie. However, another movie review “MR” has very low similarity to these three task. It's probably that the text in “MR” is very short that makes it different from these tasks. The similarity of INLINEFORM5 from “Books” and “Video” is also very high because these two datasets share a lot of similar sentiment expressions. As shown in Figure FIGREF40 , we also show the attention distributions on a real example selected from the book review dataset. This piece of text involves two domains. The review is negative in the book domain while it is positive from the perspective of movie review. In our SA-MTL model, the “Books” review classifier from SA-MTL focus on the negative aspect of the book and evaluate the text as negative. In contrast, the “DVD” review classifier focuses on the positive part of the movie and produce the result as positive. In case of DA-MTL, the model first focuses on the two domain words “book” and “movie” and judge the text is a book review because “book” has a higher weight. Then, the model dynamically generates a query INLINEFORM0 and focuses on the part of the book review in this text, thereby finally predicting a negative sentiment. Exp II: Transferability of Shared Sentence Representation With attention mechanism, the shared sentence encoder in our proposed models can generate more generic task-invariant representations, which can be considered as off-the-shelf knowledge and then be used for unseen new tasks. To test the transferability of our learned shared representation, we also design an experiment shown in Table TABREF46 . The multi-task learning results are derived by training the first 6 tasks in general multi-task learning. For transfer learning, we choose the last 10 tasks to train our model with multi-task learning, then the learned shared sentence encoding layer are kept frozen and transferred to train the first 6 tasks. As shown in Table TABREF46 , we can see that SA-MTL and DA-MTL achieves better transfer learning performances compared to SSP-MTL and PSP-MTL. The reason is that by using attention mechanism, richer information can be captured into the shared representation layer, thereby benefiting the other task. Exp III: Introducing Sequence Labeling as Auxiliary Task A good sentence representation should include its linguistic information. Therefore, we incorporate sequence labeling task (such as POS Tagging and Chunking) as an auxiliary task into the multi-task learning framework, which is trained jointly with the primary tasks (the above 16 tasks of sentiment classification). The auxiliary task shares the sentence encoding layer with the primary tasks and connected to a private fully connected layer followed by a softmax non-linear layer to process every hidden state INLINEFORM0 and predicts the labels. We use CoNLL 2000 BIBREF22 sequence labeling dataset for both POS Tagging and Chunking tasks. There are 8774 sentences in training data, 500 sentences in development data and 1512 sentences in test data. The average sentence length is 24 and has a total vocabulary size as 17k. The experiment results are shown in Table TABREF51 . We use the same hyperparameters and training procedure as the former experiments. 
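The transfer setting of Exp II described above amounts to freezing the shared sentence encoder and training only a new task's query vector and classifier. A hedged PyTorch sketch of that setup follows; model sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=200, hid_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.bilstm = nn.LSTM(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        h, _ = self.bilstm(self.embed(tokens))  # (batch, seq_len, 2*hid_dim)
        return h

class NewTaskHead(nn.Module):
    def __init__(self, rep_dim=400, num_classes=2):
        super().__init__()
        self.query = nn.Parameter(torch.randn(rep_dim))    # task-specific query q_k
        self.classifier = nn.Linear(rep_dim, num_classes)

    def forward(self, h):                                  # h: (batch, seq_len, rep_dim)
        alpha = torch.softmax(h @ self.query, dim=-1)      # (batch, seq_len)
        s = torch.einsum("bl,bld->bd", alpha, h)           # attentive sentence vector
        return self.classifier(s)

encoder = SharedEncoder()              # pretend this was trained on the other 10 tasks
for p in encoder.parameters():         # keep the shared sentence encoder frozen
    p.requires_grad = False
head = NewTaskHead()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)   # only the new head is updated
```

Exp III then extends the multi-task setup with the auxiliary labeling task, whose effect is analyzed next.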
The result shows that by leveraging auxiliary tasks, the performances of SA-MTL and DA-MTL achieve more improvement than PSP-MTL and SSP-MTL. For further analysis, Figure FIGREF53 shows the attention distribution produced by models trained with and without Chunking task on two pieces of texts. In the first piece of text, both of the models attend to the first “like” because it represents positive sentiment on the book. The model trained with Chunking task also labels the three “like” as 'B-VP' (beginning of verb phrase) correctly. However, in the second piece of text, the same work “like” denotes a preposition and has no sentiment meaning. The model trained without Chunking task fails to tell the difference with the former text and focuses on it and produces the result as positive. Meanwhile, the model trained with Chunking task successfully labels the “like” as 'B-PP' (beginning of prepositional phrase) and pays little attention to it and produces the right answer as negative. This example shows how the model trained with auxiliary task helps the primary tasks. Related Work Neural networks based multi-task learning has been proven effective in many NLP problems BIBREF13 , BIBREF23 , BIBREF12 , BIBREF20 , BIBREF24 In most of these models, there exists a task-dependent private layer separated from the shared layer. The private layers play more important role in these models. Different from them, our model encodes all information into a shared representation layer, and uses attention mechanism to select the task-specific information from the shared representation layer. Thus, our model can learn a better generic sentence representation, which also has a strong transferability. Some recent work have also proposed sentence representation using attention mechanism. BIBREF25 uses a 2-D matrix, whose each row attending on a different part of the sentence, to represent the embedding. BIBREF26 introduces multi-head attention to jointly attend to information from different representation subspaces at different positions. BIBREF27 introduces human reading time as attention weights to improve sentence representation. Different from these work, we use attention vector to select the task-specific information from a shared sentence representation. Thus the learned sentence representation is much more generic and easy to transfer information to new tasks. Conclusion In this paper, we propose a new information-sharing scheme for multi-task learning, which uses attention mechanism to select the task-specific information from a shared sentence encoding layer. We conduct extensive experiments on 16 different sentiment classification tasks, which demonstrates the benefits of our models. Moreover, the shared sentence encoding model can be transferred to other tasks, which can be further boosted by introducing auxiliary tasks.
Sentiment Classification, Transferability of Shared Sentence Representation, Introducing Sequence Labeling as Auxiliary Task
b44ce9aae8b1479820555b99ce234443168dc1fe
b44ce9aae8b1479820555b99ce234443168dc1fe_0
Q: What multilingual parallel data is used for training proposed model? Text: Introduction Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. Although the pivoting approach works in general, there are several intrinsic defects. First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. 
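For contrast, the round-trip pivoting baseline criticized above can be written as a two-stage pipeline. The translation functions below are placeholders standing in for any pair of MT systems, not a real API.

```python
# Schematic sketch of round-trip pivoting: translate into a pivot language,
# then back-translate, and treat the back-translations as paraphrases.
def round_trip_paraphrases(sentence, translate_en_fr, translate_fr_en, n_pivots=5):
    """translate_en_fr / translate_fr_en are assumed to return lists of sampled
    translations; any two MT systems could be plugged in here."""
    paraphrases = []
    for pivot in translate_en_fr(sentence, num_samples=n_pivots):   # sample Z ~ P(Z|X)
        paraphrases.extend(translate_fr_en(pivot, num_samples=1))   # sample Y ~ P(Y|Z)
    # Only a handful of pivots Z are explored, which is exactly the crude
    # approximation of the marginal P(Y|X) that leads to semantic drift.
    return paraphrases
```

The single-step model avoids this two-system pipeline entirely; the following paragraphs describe how it is built and trained.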
Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. Methodology ::: Transformer-based Language Model Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". 
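The layout of a training instance can be reproduced with a few lines of Python; the special-token spellings below are illustrative (the paper writes them as ⟨bos⟩, ⟨delim⟩, and so on).

```python
def make_training_sequence(src, tgt, src_lang="en", tgt_lang="fr"):
    """Format one parallel pair as a single token sequence for the bilingual LM,
    following the layout described above."""
    return (["<bos>", f"<{src_lang}>"] + src.split()
            + ["<delim>", f"<{tgt_lang}>"] + tgt.split()
            + ["<eos>"])

print(make_training_sequence("cat sat on the mat", "chat assis sur le tapis"))
# ['<bos>', '<en>', 'cat', ..., '<delim>', '<fr>', 'chat', ..., '<eos>']
```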
At inference stage, the model predicts the next word as the conventional auto-regressive model: Methodology ::: Zero-shot Paraphrase Generation We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. Methodology ::: Zero-shot Paraphrase Generation ::: Language Embeddings The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: We empirically demonstrate that the language embedding added to each tokens can effectively guide the model to generate sentences in the required language. Note that we still let the model to learn the output distribution for each language rather than simply restricting the vocabularies of output space. This offers flexibility to handle coding switching cases commonly seen in real-world data, e.g., English words could also appear in French sentences. 
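A minimal PyTorch sketch of this language-embedding conditioning: a per-language vector is concatenated with each Transformer output state before the output projection, so every prediction is explicitly conditioned on the target language. The embedding size, vocabulary size and exact projection layout are assumptions.

```python
import torch
import torch.nn as nn

class LanguageConditionedOutput(nn.Module):
    def __init__(self, hidden=768, lang_dim=32, vocab=50000, n_langs=4):
        super().__init__()
        self.lang_emb = nn.Embedding(n_langs, lang_dim)
        self.out = nn.Linear(hidden + lang_dim, vocab)

    def forward(self, h, lang_id):            # h: (batch, seq_len, hidden)
        a = self.lang_emb(lang_id)            # (batch, lang_dim), one vector per language
        a = a.unsqueeze(1).expand(-1, h.size(1), -1)
        return self.out(torch.cat([h, a], dim=-1))   # logits conditioned on language

logits = LanguageConditionedOutput()(torch.randn(2, 7, 768), torch.tensor([0, 1]))
print(logits.shape)    # torch.Size([2, 7, 50000])
```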
Methodology ::: Zero-shot Paraphrase Generation ::: Pre-Training on Monolingual Corpora Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. Methodology ::: Zero-shot Paraphrase Generation ::: Denoising Auto-Encoder We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications. 1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat." 2) Insertion: We insert a random token into source sentences in 1% random positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 3) Reordering: We randomly swap 1% tokens in source sentences, and keep the distance between tokens being swapped within 5. “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noises into the input sentences while keeping the target sentences clean in training, our model can be more stable in generating paraphrases and generalisable to unseen sentences in the training corpus. The training objective with DAE becomes Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. Experiments ::: Datasets We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. 
A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10000 sentences respectively from each language pair. The rest data are used for training. For monolingual pre-training, we use English Wikipedia corpus, which contains 2,500M words. Experiments ::: Experimental Settings We implement our model in Tensorflow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model is constituted by 12 layers of Transformer blocks. Number of dimension of token embedding, position embedding and transformer hidden state are 768, while that of states in position-wise feed-forward networks are 3072. The number of attention heads is 12. Models are train using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. We use top-k truncated random sampling strategy for inference that only sample from k candidate words with highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data between the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model. Experiments ::: Automatic Evaluation We evaluate the relevance between input and generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation. For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data-point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate both relevant and diverse paraphrases, the model with curve lying towards the up-right corner is regarded as with good performance. Experiments ::: Automatic Evaluation ::: Comparison with Baseline First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence. Note that in Figure FIGREF15 (a), there is a cross point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We particularly investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). It means our bilingual model is semantically drifting faster than the baseline model as the Distinct-2 diversity increases. The round-trip translation performs two-round of supervised translations, while the zero-shot paraphrasing performs single-round unsupervised `translation' (paraphrasing). 
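For concreteness, the two diversity metrics used above can be computed as in the sketch below, with nltk's sentence-level BLEU standing in for Self-BLEU; the paper does not specify its BLEU settings, so the smoothing choice is an assumption.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def distinct_2(sentences):
    """Ratio of unique bigrams to total bigrams over a set of generated paraphrases."""
    bigrams = [tuple(s[i:i + 2]) for s in sentences for i in range(len(s) - 1)]
    return len(set(bigrams)) / max(len(bigrams), 1)

def inverse_self_bleu(sentences):
    """1 - Self-BLEU: each generation is scored against the others as references."""
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([r for j, r in enumerate(sentences) if j != i], s,
                            smoothing_function=smooth)
              for i, s in enumerate(sentences)]
    return 1.0 - sum(scores) / len(scores)

gens = [s.split() for s in ["the cat sat on the mat",
                            "a cat is sitting on the mat",
                            "the cat sits on a rug"]]
print(distinct_2(gens), inverse_self_bleu(gens))
```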
We suspect that the unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies the latent, language-agnostic representation may be not well learned in our bilingual model. While on the other hand, our multilingual model alleviate this insufficiency. We further verify and analyze it as follows. Experiments ::: Automatic Evaluation ::: Multilingual Models As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural method is to introduce multilingual corpus, which consists of various translation directions. Training over multilingual corpus forces the model to decouple the language type and semantic representation. Empirical results shows that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrates a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the setting with the reasonable relevance scores. Experiments ::: Automatic Evaluation ::: Denoising Auto-Encoder To verify the effectiveness of DAE in our model, various experiments with different hyper-parameters were conducted. We find that DAE works the best when uniformly perturbing input sentences with probability 0.01, using only Deletion and Reordering operations. We investigate DAE over both bilingual and multilingual models as plotted in Figure FIGREF15 (c)(d). Curves with the yellow circles represent models with DAE training. Results in the Figure FIGREF15 (c)(d) demonstrate positive effects of DAE in either bilingual or multilingual models. It is worth to note that, while DAE have marginal impact on multilingual model, it improves bilingual model significantly. This is an evidence indicating that DAE can improve the model in learning a more robust representation. More specifically, since Deletion forces model to focus on sentence-level semantics rather than word-level meaning while Reordering forces model to focus more on meaning rather than their positions, it would be more difficult for a model to learn shortcuts (e.g. copy words). In other words, DAE improves models' capability in extracting deep semantic representation, which has a similar effect to introducing multilingual data. Experiments ::: Automatic Evaluation ::: Monolingual Pre-Training As shown in Figure FIGREF15 (a)(b), the model with language model pre-training almost performs equally to its contemporary without pre-training. However, evaluations on fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using a n-grams language model trained on 14k public domain books. As depicted in Table TABREF25, models with language model pre-training stably achieves greater log-probabilities than the model without pre-training. Namely, language model pre-training brings better fluency. Experiments ::: Human Evaluation 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate models in semantic relevance and fluency. A test example consists of one input sentence, one generated sentence from baseline model and one generated sentence from our model. We randomly permute a pair of generated sentences to reduce annotators' bias on a certain model. 
Each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators. Both round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies. Experiments ::: Case Studies We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using randomly sampling. For both baseline and multilingual model, we tune their sampling temperatures to control the Distinct-2 and the inverse Self-BLEU at 0.31 and 0.47 respectively. In the case studies, we find that our method usually generates sentences with better relevance to source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a good feature that it maintains the meaning and even a proper noun $guide$ unchanged while modifies the source sentence by both changing and reordering words. This feature may be introduced by DAE perturbation strategies which improves model's robustness and diversity simultaneously. These results evidence that our methods outperforms the baseline in both relevance and diversity. Related Work Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models tried to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 starts a deep learning line of paraphrase generation through introducing stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopts variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level. Several works tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits Markov Network model to extract paraphrase tables from monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpus through clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such round-trip translation BIBREF7 and back-translation BIBREF32 are explored. However, to the best knowledge of us, none of these paraphrase generation models has been trained directly from parallel translation corpora as a single-round end-to-end model. Conclusions In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amount of off-the-shelf translation corpora. Moreover, we improve generation fluency of our model with language model pre-training. Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored. For instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve the generation diversity without sacrificing relevance. 
We plan to tackle these challenging yet valuable problems in future work.
MultiUN BIBREF20, OpenSubtitles BIBREF21
b9c0049a7a5639c33efdb6178c2594b8efdefabb
b9c0049a7a5639c33efdb6178c2594b8efdefabb_0
Q: How much better are results of proposed model compared to pivoting method? Text: Introduction Paraphrasing is to express the same meaning using different expressions. Paraphrase generation plays an important role in various natural language processing (NLP) tasks such as response diversification in dialogue system, query reformulation in information retrieval, and data augmentation in machine translation. Recently, models based on Seq2Seq learning BIBREF1 have achieved the state-of-the-art results on paraphrase generation. Most of these models BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6 focus on training the paraphrasing models based on a paraphrase corpus, which contains a number of pairs of paraphrases. However, high-quality paraphrases are usually difficult to acquire in practice, which becomes the major limitation of these methods. Therefore, we focus on zero-shot paraphrase generation approach in this paper, which aims to generate paraphrases without requiring a paraphrase corpus. A natural choice is to leverage the bilingual or multilingual parallel data used in machine translation, which are of great quantity and quality. The basic assumption is that if two sentences in one language (e.g., English) have the same translation in another language (e.g., French), they are assumed to have the same meaning, i.e., they are paraphrases of each other. Therefore, one typical solution for paraphrasing in one language is to pivot over a translation in another language. Specifically, it is implemented as the round-trip translation, where the input sentence is translated into a foreign sentence, then back-translated into a sentence in the same language as input BIBREF7. The process is shown in Figure FIGREF1. Apparently, two machine translation systems (English$\rightarrow $French and French$\leftarrow $English) are needed to conduct the generation of a paraphrase. Although the pivoting approach works in general, there are several intrinsic defects. First, the round-trip system can hardly explore all the paths of paraphrasing, since it is pivoted through the finite intermedia outputs of a translation system. More formally, let $Z$ denote the meaning representation of a sentence $X$, and finding paraphrases of $X$ can be treated as sampling another sentence $Y$ conditioning on the representation $Z$. Ideally, paraphrases should be generated by following $P(Y|X) = \int _{Z} P(Y|Z)P(Z|X)dZ$, which is marginalized over all possible values of $Z$. However, in the round-trip translation, only one or several $Z$s are sampled from the machine translation system $P(Z|X)$, which can lead to an inaccurate approximation of the whole distribution and is prone to the problem of semantic drift due to the sampling variances. Second, the results are determined by the pre-existing translation systems, and it is difficult to optimize the pipeline end-to-end. Last, the system is not efficient especially at the inference stage, because it needs two rounds of translation decoding. To address these issues, we propose a single-step zero-shot paraphrase generation model, which can be trained on machine translation corpora in an end-to-end fashion. Unlike the pivoting approach, our proposed model does not involve explicit translation between multiple languages. Instead, it directly learns the paraphrasing distribution $P(Y|X)$ from the parallel data sampled from $P(Z|X)$ and $P(Y|Z)$. 
Specifically, we build a Transformer-based BIBREF8 language model, which is trained on the concatenated bilingual parallel sentences with language indicators. At inference stage, given a input sentence in a particular language, the model is guided to generate sentences in the same language, which are deemed as paraphrases of the input. Our model is simple and compact, and can empirically reduce the risk of semantic drift to a large extent. Moreover, we can initialize our model with generative pre-training (GPT) BIBREF0 on monolingual data, which can benefit the generation in low-resource languages. Finally, we borrow the idea of denoising auto-encoder (DAE) to further enhance robustness in paraphrase generation. We conduct experiments on zero-shot paraphrase generation task, and find that the proposed model significantly outperforms the pivoting approach in terms of both automatic and human evaluations. Meanwhile, the training and inference cost are largely reduced compared to the pivot-based methods which involves multiple systems. Methodology ::: Transformer-based Language Model Transformer-based language model (TLM) is a neural language model constructed with a stack of Transformer decoder layers BIBREF8. Given a sequence of tokens, TLM is trained with maximizing the likelihood: where $X=[x_1,x_2,\ldots ,x_n]$ is a sentence in a language (e.g., English), and $\theta $ denotes the parameters of the model. Each Transformer layer is composed of multi-head self-attention, layer normalization and a feed-forward network. We refer reader to the original paper for details of each component. Formally, the decoding probability is given by where $x_i$ denotes the token embedding, $p_i$ denote the positional embedding and $h_i$ denotes the output states of the $i$-th token, and $W_e$ and $W_o$ are the input and output embedding matrices. Although TLM is normally employed to model monolingual sequences, there is no barrier to utilize TLM to model sequences in multiple languages. In this paper, inspired by BIBREF9, we concatenate pairs of sentences from bilingual parallel corpora (e.g., English$\rightarrow $French) as training instances to the model. Let $X$ and $Y$ denote the parallel sentences in two different languages, the training objective becomes This bilingual language model can be regarded as the decoder-only model compared to the traditional encoder-decoder model. It has been proved to work effectively on monolingual text-to-text generation tasks such as summarization BIBREF10. The advantages of such architecture include less model parameters, easier optimization and potential better performance for longer sequences. Furthermore, it naturally integrates with language models pre-training on monolingual corpus. For each input sequence of concatenated sentences, we add special tokens $\langle $bos$\rangle $ and $\langle $eos$\rangle $ at the beginning and the end, and $\langle $delim$\rangle $ in between the sentences. Moreover, at the beginning of each sentence, we add a special token as its language identifier, for instance, $\langle $en$\rangle $ for English, $\langle $fr$\rangle $ for French. One example of English$\rightarrow $French training sequence is “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $ $\langle $fr$\rangle $ chat assis sur le tapis $\langle $eos$\rangle $". 
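Training on such concatenated sequences is ordinary teacher-forced language modeling. A hedged PyTorch sketch of the per-token objective follows; the `model` callable is a placeholder for the Transformer decoder stack.

```python
import torch
import torch.nn.functional as F

def lm_loss(model, token_ids):
    """token_ids: (batch, seq_len) ids of '<bos> <en> ... <delim> <fr> ... <eos>'.
    The sequence is shifted by one position and scored with cross-entropy."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)                       # (batch, seq_len - 1, vocab)
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
```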
At inference stage, the model predicts the next word as the conventional auto-regressive model: Methodology ::: Zero-shot Paraphrase Generation We train the bilingual language model on multiple bilingual corpora, for example, English$\leftrightarrow $French and German$\leftrightarrow $Chinese. Once the language model has been trained, we can conduct zero-shot paraphrase generation based on the model. Specifically, given an input sentence that is fed into the language model, we set the output language identifier the same as input, and then simply conduct decoding to generate paraphrases of the input sentence. Figure FIGREF2 illustrates the training and decoding process of our model. In the training stage, the model is trained to sequentially generate the input sentence and its translation in a specific language. Training is conducted in the way of teacher-forcing. In the decoding stage, after an English sentence “$\langle $bos$\rangle $ $\langle $en$\rangle $ cat sat on the mat $\langle $delim$\rangle $" is fed to the model, we intentionally set the output language identifier as “$\langle $en$\rangle $", in order to guide the model to continue to generate English words. At the same time, since the model has been trained on translation corpus, it implicitly learns to keep the semantic meaning of the output sentence the same as the input. Accordingly, the model will probably generate the paraphrases of the input sentence, such as “the cat sitting on the carpet $\langle $eos$\rangle $". It should be noted our model can obviously be trained on parallel paraphrase data without any modification. But in this paper, we will mainly focus on the research and evaluation in the zero-shot learning setting. In the preliminary experiments of zero-shot paraphrasing, we find the model does not perform consistently well and sometimes fails to generate the words in the correct language as indicated by the language identifier. Similar phenomenon has been observed in the research of zero-shot neural machine translation BIBREF11, BIBREF12, BIBREF13, which is referred as the degeneracy problem by BIBREF13. To address these problems in zero-shot paraphrase generation, we propose several techniques to improve the quality and diversity of the model as follows. Methodology ::: Zero-shot Paraphrase Generation ::: Language Embeddings The language identifier prior to the sentence does not always guarantee the language of the sequences generated by the model. In order to keep the language consistency, we introduce language embeddings, where each language is assigned a specific vector representation. Supposing that the language embedding for the $i$-th token in a sentence is $a_i$, we concatenate the language embedding with the Transformer output states and feed it to the softmax layer for predicting each token: We empirically demonstrate that the language embedding added to each tokens can effectively guide the model to generate sentences in the required language. Note that we still let the model to learn the output distribution for each language rather than simply restricting the vocabularies of output space. This offers flexibility to handle coding switching cases commonly seen in real-world data, e.g., English words could also appear in French sentences. 
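As a rough illustration of this decoding procedure, the sketch below samples a paraphrase with the output language identifier deliberately set to the input language. Here `lm_step` and `vocab` stand in for the trained model (with the language embeddings described above folded into its output layer) and its vocabulary; they are assumed interfaces, not real APIs.

```python
import numpy as np

# Minimal zero-shot decoding sketch. `lm_step(tokens)` is an assumed wrapper
# that returns a probability distribution over the vocabulary for the next
# token; it is not a real library call.

def generate_paraphrase(src_tokens, lm_step, vocab, lang="en",
                        max_len=50, k=10, seed=0):
    rng = np.random.default_rng(seed)
    # The output language identifier is set to the SAME language as the
    # input, which guides the model to keep generating in that language.
    prefix = ["<bos>", f"<{lang}>"] + src_tokens + ["<delim>", f"<{lang}>"]
    out = []
    for _ in range(max_len):
        probs = lm_step(prefix + out)        # next-token distribution
        top = np.argsort(probs)[-k:]         # top-k truncated sampling
        p = probs[top] / probs[top].sum()
        token = vocab[int(rng.choice(top, p=p))]
        if token == "<eos>":
            break
        out.append(token)
    return out
```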
Methodology ::: Zero-shot Paraphrase Generation ::: Pre-Training on Monolingual Corpora Language model pre-training has shown its effectiveness in language generation tasks such as machine translation, text summarization and generative question answering BIBREF14, BIBREF15, BIBREF16. It is particularly helpful to the low/zero-resource tasks since the knowledge learned from large-scale monolingual corpus can be transferred to downstream tasks via the pre-training-then-fine-tuning approach. Since our model for paraphrase generation shares the same architecture as the language model, we are able to pre-train the model on massive monolingual data. Pre-training on monolingual data is conducted in the same way as training on parallel data, except that each training example contains only one sentence with the beginning/end of sequence tokens and the language identifier. The language embeddings are also employed. The pre-training objective is the same as Equation (DISPLAY_FORM4). In our experiments, we first pre-train the model on monolingual corpora of multiple languages respectively, and then fine-tune the model on parallel corpora. Methodology ::: Zero-shot Paraphrase Generation ::: Denoising Auto-Encoder We adopt the idea of denoising auto-encoder (DAE) to further improve the robustness of our paraphrasing model. DAE is originally proposed to learn intermediate representations that are robust to partial corruption of the inputs in training auto-encoders BIBREF17. Specifically, the initial input $X$ is first partially corrupted as $\tilde{X}$, which can be treated as sampling from a noise distribution $\tilde{X}\sim {q(\tilde{X}|X)}$. Then, an auto-encoder is trained to recover the original $X$ from the noisy input $\tilde{X}$ by minimizing the reconstruction error. In the applications of text generation BIBREF18 and machine translation BIBREF19, DAE has shown to be able to learn representations that are more robust to input noises and also generalize to unseen examples. Inspired by BIBREF19, we directly inject three different types of noises into input sentence that are commonly encountered in real applications. 1) Deletion: We randomly delete 1% tokens from source sentences, for example, “cat sat on the mat $\mapsto $ cat on the mat." 2) Insertion: We insert a random token into source sentences in 1% random positions, for example, “cat sat on the mat $\mapsto $ cat sat on red the mat." 3) Reordering: We randomly swap 1% tokens in source sentences, and keep the distance between tokens being swapped within 5. “cat sat on the mat $\mapsto $ mat sat on the cat." By introducing such noises into the input sentences while keeping the target sentences clean in training, our model can be more stable in generating paraphrases and generalisable to unseen sentences in the training corpus. The training objective with DAE becomes Once the model is trained, we generate paraphrases of a given sentence based on $P(Y|X;\theta )$. Experiments ::: Datasets We adopt the mixture of two multilingual translation corpus as our training data: MultiUN BIBREF20 and OpenSubtitles BIBREF21. MultiUN consists of 463,406 official documents in six languages, containing around 300M words for each language. OpenSubtitles is a corpus consisting of movie and TV subtitles, which contains 2.6B sentences over 60 languages. We select four shared languages of the two corpora: English, Spanish, Russian and Chinese. Statistics of the training corpus are shown in Table TABREF14. Sentences are tokenized by Wordpiece as in BERT. 
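(As a concrete illustration of the three corruption operations described in the denoising auto-encoder paragraph above, a minimal sketch follows. The 1% rate and the swap-distance limit of 5 come from the text; the implementation details are assumptions.)

```python
import random

# Minimal sketch (not the authors' code) of the three DAE corruptions:
# token deletion, random-token insertion and local reordering.

def corrupt(tokens, p=0.01, max_swap_dist=5, vocab=None, seed=0):
    rng = random.Random(seed)
    # 1) Deletion: drop each token with probability p.
    out = [t for t in tokens if rng.random() >= p]
    # 2) Insertion: insert a random vocabulary token at ~p of the positions.
    if vocab:
        for i in range(len(out), -1, -1):
            if rng.random() < p:
                out.insert(i, rng.choice(vocab))
    # 3) Reordering: swap a token with a neighbour at distance <= max_swap_dist.
    for i in range(len(out) - 1):
        if rng.random() < p:
            j = min(len(out) - 1, i + rng.randint(1, max_swap_dist))
            out[i], out[j] = out[j], out[i]
    return out

print(corrupt("cat sat on the mat".split(), p=0.3, vocab=["red", "blue"]))
```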
A multilingual vocabulary of 50K tokens is used. For validation and testing, we randomly sample 10000 sentences respectively from each language pair. The rest data are used for training. For monolingual pre-training, we use English Wikipedia corpus, which contains 2,500M words. Experiments ::: Experimental Settings We implement our model in Tensorflow BIBREF22. The size of our Transformer model is identical to BERT-base BIBREF23. The model is constituted by 12 layers of Transformer blocks. Number of dimension of token embedding, position embedding and transformer hidden state are 768, while that of states in position-wise feed-forward networks are 3072. The number of attention heads is 12. Models are train using Adam optimization BIBREF24 with a learning rate up to $1e-4$, $\beta _1=0.9$, $\beta _2=0.999$ and $L2$ weight decay of 0.01. We use top-k truncated random sampling strategy for inference that only sample from k candidate words with highest probabilities. Throughout our experiments, we train and evaluate two models for paraphrase generation: the bilingual model and the multilingual model. The bilingual models are trained only with English$\leftrightarrow $Chinese, while the multilingual models are trained with all the data between the four languages. The round-trip translation baseline is based on the Transformer-based neural translation model. Experiments ::: Automatic Evaluation We evaluate the relevance between input and generated paraphrase as well as the diversity among multiple generated paraphrases from the same input. For relevance, we use the cosine similarity between the sentential representations BIBREF25. Specifically, we use the Glove-840B embeddings BIBREF26 for word representation and Vector Extrema BIBREF25 for sentential representation. For generation diversity, we employ two evaluation metrics: Distinct-2 and inverse Self-BLEU (defined as: $1-$Self-BLEU) BIBREF27. Larger values of Distinct-2 and inverse Self-BLEU indicate higher diversity of the generation. For each model, we draw curves in Figure FIGREF15 with the aforementioned metrics as coordinates, and each data-point is obtained at a specific sampling temperature. Since a good paraphrasing model should generate both relevant and diverse paraphrases, the model with curve lying towards the up-right corner is regarded as with good performance. Experiments ::: Automatic Evaluation ::: Comparison with Baseline First we compare our models with the conventional pivoting method, i.e., round-trip translation. As shown in Figure FIGREF15 (a)(b), either the bilingual or the multilingual model is better than the baseline in terms of relevance and diversity in most cases. In other words, with the same generation diversity (measured by both Distinct-2 and Self-BLEU), our models can generate paraphrase with more semantically similarity to the input sentence. Note that in Figure FIGREF15 (a), there is a cross point between the curve of the bilingual model and the baseline curve when relevance is around 0.71. We particularly investigate generated paraphrases around this point and find that the baseline actually achieves better relevance when Distinct-2 is at a high level ($>$0.3). It means our bilingual model is semantically drifting faster than the baseline model as the Distinct-2 diversity increases. The round-trip translation performs two-round of supervised translations, while the zero-shot paraphrasing performs single-round unsupervised `translation' (paraphrasing). 
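(For concreteness, the two diversity metrics used in the curves above can be approximated as follows. Tokenization, the n-gram order for Self-BLEU and the smoothing method are not specified in the text, so this is a sketch rather than the authors' evaluation code; the relevance metric additionally needs pre-trained word embeddings and is omitted here.)

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def distinct_2(generations):
    """Ratio of unique bigrams to total bigrams over a set of generations."""
    bigrams, total = set(), 0
    for toks in generations:
        for bg in zip(toks, toks[1:]):
            bigrams.add(bg)
            total += 1
    return len(bigrams) / max(total, 1)

def inverse_self_bleu(generations):
    """1 - Self-BLEU: each generation is scored against the remaining ones."""
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu([r for j, r in enumerate(generations) if j != i],
                            hyp, smoothing_function=smooth)
              for i, hyp in enumerate(generations)]
    return 1.0 - sum(scores) / len(scores)

gens = [s.split() for s in ["the cat sat on the mat",
                            "a cat sitting on the carpet",
                            "the cat is on the mat"]]
print(distinct_2(gens), inverse_self_bleu(gens))
```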
We suspect that the unsupervised paraphrasing can be more sensitive to the decoding strategy. It also implies the latent, language-agnostic representation may be not well learned in our bilingual model. While on the other hand, our multilingual model alleviate this insufficiency. We further verify and analyze it as follows. Experiments ::: Automatic Evaluation ::: Multilingual Models As mentioned above, our bilingual model can be unstable in some cases due to the lack of a well-learned language-agnostic semantic representation. A natural method is to introduce multilingual corpus, which consists of various translation directions. Training over multilingual corpus forces the model to decouple the language type and semantic representation. Empirical results shows that our multilingual model performs significantly better than the bilingual model. The red and blue curves in Figure FIGREF15 (a)(b) demonstrates a great improvement of our multilingual model over the bilingual model. In addition, the multilingual model also significantly outperforms the baseline in the setting with the reasonable relevance scores. Experiments ::: Automatic Evaluation ::: Denoising Auto-Encoder To verify the effectiveness of DAE in our model, various experiments with different hyper-parameters were conducted. We find that DAE works the best when uniformly perturbing input sentences with probability 0.01, using only Deletion and Reordering operations. We investigate DAE over both bilingual and multilingual models as plotted in Figure FIGREF15 (c)(d). Curves with the yellow circles represent models with DAE training. Results in the Figure FIGREF15 (c)(d) demonstrate positive effects of DAE in either bilingual or multilingual models. It is worth to note that, while DAE have marginal impact on multilingual model, it improves bilingual model significantly. This is an evidence indicating that DAE can improve the model in learning a more robust representation. More specifically, since Deletion forces model to focus on sentence-level semantics rather than word-level meaning while Reordering forces model to focus more on meaning rather than their positions, it would be more difficult for a model to learn shortcuts (e.g. copy words). In other words, DAE improves models' capability in extracting deep semantic representation, which has a similar effect to introducing multilingual data. Experiments ::: Automatic Evaluation ::: Monolingual Pre-Training As shown in Figure FIGREF15 (a)(b), the model with language model pre-training almost performs equally to its contemporary without pre-training. However, evaluations on fluency uncover the value of pre-training. We evaluate a group of models over our test set in terms of fluency, using a n-grams language model trained on 14k public domain books. As depicted in Table TABREF25, models with language model pre-training stably achieves greater log-probabilities than the model without pre-training. Namely, language model pre-training brings better fluency. Experiments ::: Human Evaluation 200 sentences are sampled from our test set for human evaluation. The human evaluation guidance generally follows that of BIBREF5 but with a compressed scoring range from [1, 5] to [1, 4]. We recruit five human annotators to evaluate models in semantic relevance and fluency. A test example consists of one input sentence, one generated sentence from baseline model and one generated sentence from our model. We randomly permute a pair of generated sentences to reduce annotators' bias on a certain model. 
Each example is evaluated by two annotators. As shown in Table TABREF28, our method outperforms the baseline in both relevance and fluency significantly. We further calculate agreement (Cohen's kappa) between two annotators. Both round-trip translation and our method performs well as to fluency. But the huge gap of relevance between the two systems draw much attention of us. We investigate the test set in details and find that round-trip approach indeed generate more noise as shown in case studies. Experiments ::: Case Studies We further study some generated cases from different models. All results in Table TABREF30 are generated over our test set using randomly sampling. For both baseline and multilingual model, we tune their sampling temperatures to control the Distinct-2 and the inverse Self-BLEU at 0.31 and 0.47 respectively. In the case studies, we find that our method usually generates sentences with better relevance to source inputs, while the round-trip translation method can sometimes run into serious semantic drift. In the second case, our model demonstrates a good feature that it maintains the meaning and even a proper noun $guide$ unchanged while modifies the source sentence by both changing and reordering words. This feature may be introduced by DAE perturbation strategies which improves model's robustness and diversity simultaneously. These results evidence that our methods outperforms the baseline in both relevance and diversity. Related Work Generating paraphrases based on deep neural networks, especially Seq2Seq models, has become the mainstream approach. A majority of neural paraphrasing models tried to improve generation quality and diversity with high-quality paraphrase corpora. BIBREF2 starts a deep learning line of paraphrase generation through introducing stacked residual LSTM network. A word constraint model proposed by BIBREF3 improves both generation quality and diversity. BIBREF4 adopts variational auto-encoder to further improve generation diversity. BIBREF5 utilize neural reinforcement learning and adversarial training to promote generation quality. BIBREF6 decompose paraphrase generation into phrase-level and sentence-level. Several works tried to generate paraphrases from monolingual non-parallel or translation corpora. BIBREF28 exploits Markov Network model to extract paraphrase tables from monolingual corpus. BIBREF29, BIBREF30 and BIBREF31 create paraphrase corpus through clustering and aligning paraphrases from crawled articles or headlines. With parallel translation corpora, pivoting approaches such round-trip translation BIBREF7 and back-translation BIBREF32 are explored. However, to the best knowledge of us, none of these paraphrase generation models has been trained directly from parallel translation corpora as a single-round end-to-end model. Conclusions In this work, we have proposed a Transformer-based model for zero-shot paraphrase generation, which can leverage huge amount of off-the-shelf translation corpora. Moreover, we improve generation fluency of our model with language model pre-training. Empirical results from both automatic and human evaluation demonstrate that our model surpasses the conventional pivoting approaches in terms of relevance, diversity, fluency and efficiency. Nevertheless, there are some interesting directions to be explored. For instance, how to obtain a better latent semantic representation with multi-modal data and how to further improve the generation diversity without sacrificing relevance. 
We plan to tackle these challenging yet valuable problems in the future.
Our method significantly outperforms the baseline in both relevance and fluency.
99c50d51a428db09edaca0d07f4dab0503af1b94
99c50d51a428db09edaca0d07f4dab0503af1b94_0
Q: What kind of Youtube video transcripts did they use? Text: Introduction The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging, sentiment analysis and automatic text summarization; originally developed to work with formal written texts, can be applied over the transcripts made by ASR systems BIBREF1 , BIBREF2 , BIBREF3 . However, before applying any of these NLP tasks a segmentation process called Sentence Boundary Detection (SBD) should be performed over ASR transcripts to reach a minimal syntactic information in the text. To measure the performance of a SBD system, the automatically segmented transcript is evaluated against a single reference normally done by a human. But given a transcript, does it exist a unique reference? Or, is it possible that the same transcript could be segmented in five different ways by five different people in the same conditions? If so, which one is correct; and more important, how to fairly evaluate the automatically segmented transcript? These questions are the foundations of Window-based Sentence Boundary Evaluation (WiSeBE), a new semi-supervised metric for evaluating SBD systems based on multi-reference (dis)agreement. The rest of this article is organized as follows. In Section SECREF2 we set the frame of SBD and how it is normally evaluated. WiSeBE is formally described in Section SECREF3 , followed by a multi-reference evaluation in Section SECREF4 . Further analysis of WiSeBE and discussion over the method and alternative multi-reference evaluation is presented in Section SECREF5 . Finally, Section SECREF6 concludes the paper. Sentence Boundary Detection Sentence Boundary Detection (SBD) has been a major research topic science ASR moved to more general domains as conversational speech BIBREF4 , BIBREF5 , BIBREF6 . Performance of ASR systems has improved over the years with the inclusion and combination of new Deep Neural Networks methods BIBREF7 , BIBREF8 , BIBREF0 . As a general rule, the output of ASR systems lacks of any syntactic information such as capitalization and sentence boundaries, showing the interst of ASR systems to obtain the correct sequence of words with almost no concern of the overall structure of the document BIBREF9 . Similar to SBD is the Punctuation Marks Disambiguation (PMD) or Sentence Boundary Disambiguation. This task aims to segment a formal written text into well formed sentences based on the existent punctuation marks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . In this context a sentence is defined (for English) by the Cambridge Dictionary as: “a group of words, usually containing a verb, that expresses a thought in the form of a statement, question, instruction, or exclamation and starts with a capital letter when written”. PMD carries certain complications, some given the ambiguity of punctuation marks within a sentence. A period can denote an acronym, an abbreviation, the end of the sentence or a combination of them as in the following example: The U.S. president, Mr. Donald Trump, is meeting with the F.B.I. director Christopher A. Wray next Thursday at 8p.m. However its difficulties, DPM profits of morphological and lexical information to achieve a correct sentence segmentation. 
By contrast, segmenting an ASR transcript should be done without any (or almost any) lexical information and a flurry definition of sentence. The obvious division in spoken language may be considered speaker utterances. However, in a normal conversation or even in a monologue, the way ideas are organized differs largely from written text. This differences, added to disfluencies like revisions, repetitions, restarts, interruptions and hesitations make the definition of a sentence unclear thus complicating the segmentation task BIBREF14 . Table TABREF2 exemplifies some of the difficulties that are present when working with spoken language. Stolcke & Shriberg BIBREF6 considered a set of linguistic structures as segments including the following list: In BIBREF4 , Meteer & Iyer divided speaker utterances into segments, consisting each of a single independent clause. A segment was considered to begin either at the beginning of an utterance, or after the end of the preceding segment. Any dysfluency between the end of the previous segments and the begging of current one was considered part of the current segments. Rott & Červa BIBREF15 aimed to summarize news delivered orally segmenting the transcripts into “something that is similar to sentences”. They used a syntatic analyzer to identify the phrases within the text. A wide study focused in unbalanced data for the SBD task was performed by Liu et al. BIBREF16 . During this study they followed the segmentation scheme proposed by the Linguistic Data Consortium on the Simple Metadata Annotation Specification V5.0 guideline (SimpleMDE_V5.0) BIBREF14 , dividing the transcripts in Semantic Units. A Semantic Unit (SU) is considered to be an atomic element of the transcript that manages to express a complete thought or idea on the part of the speaker BIBREF14 . Sometimes a SU corresponds to the equivalent of a sentence in written text, but other times (the most part of them) a SU corresponds to a phrase or a single word. SUs seem to be an inclusive conception of a segment, they embrace different previous segment definitions and are flexible enough to deal with the majority of spoken language troubles. For these reasons we will adopt SUs as our segment definition. Sentence Boundary Evaluation SBD research has been focused on two different aspects; features and methods. Regarding the features, some work focused on acoustic elements like pauses duration, fundamental frequencies, energy, rate of speech, volume change and speaker turn BIBREF17 , BIBREF18 , BIBREF19 . The other kind of features used in SBD are textual or lexical features. They rely on the transcript content to extract features like bag-of-word, POS tags or word embeddings BIBREF20 , BIBREF18 , BIBREF21 , BIBREF22 , BIBREF15 , BIBREF6 , BIBREF23 . Mixture of acoustic and lexical features have also been explored BIBREF24 , BIBREF25 , BIBREF19 , BIBREF26 , which is advantageous when both audio signal and transcript are available. With respect to the methods used for SBD, they mostly rely on statistical/neural machine translation BIBREF18 , BIBREF27 , language models BIBREF9 , BIBREF16 , BIBREF22 , BIBREF6 , conditional random fields BIBREF21 , BIBREF28 , BIBREF23 and deep neural networks BIBREF29 , BIBREF20 , BIBREF13 . Despite their differences in features and/or methodology, almost all previous cited research share a common element; the evaluation methodology. 
Metrics as Precision, Recall, F1-score, Classification Error Rate and Slot Error Rate (SER) are used to evaluate the proposed system against one reference. As discussed in Section SECREF1 , further NLP tasks rely on the result of SBD, meaning that is crucial to have a good segmentation. But comparing the output of a system against a unique reference will provide a reliable score to decide if the system is good or bad? Bohac et al. BIBREF24 compared the human ability to punctuate recognized spontaneous speech. They asked 10 people (correctors) to punctuate about 30 minutes of ASR transcripts in Czech. For an average of 3,962 words, the punctuation marks placed by correctors varied between 557 and 801; this means a difference of 244 segments for the same transcript. Over all correctors, the absolute consensus for period (.) was only 4.6% caused by the replacement of other punctuation marks as semicolons (;) and exclamation marks (!). These results are understandable if we consider the difficulties presented previously in this section. To our knowledge, the amount of studies that have tried to target the sentence boundary evaluation with a multi-reference approach is very small. In BIBREF24 , Bohac et al. evaluated the overall punctuation accuracy for Czech in a straightforward multi-reference framework. They considered a period (.) valid if at least five of their 10 correctors agreed on its position. Kolář & Lamel BIBREF25 considered two independent references to evaluate their system and proposed two approaches. The fist one was to calculate the SER for each of one the two available references and then compute their mean. They found this approach to be very strict because for those boundaries where no agreement between references existed, the system was going to be partially wrong even the fact that it has correctly predicted the boundary. Their second approach tried to moderate the number of unjust penalizations. For this case, a classification was considered incorrect only if it didn't match either of the two references. These two examples exemplify the real need and some straightforward solutions for multi-reference evaluation metrics. However, we think that it is possible to consider in a more inclusive approach the similarities and differences that multiple references could provide into a sentence boundary evaluation protocol. Window-Based Sentence Boundary Evaluation Window-Based Sentence Boundary Evaluation (WiSeBE) is a semi-automatic multi-reference sentence boundary evaluation protocol which considers the performance of a candidate segmentation over a set of segmentation references and the agreement between those references. Let INLINEFORM0 be the set of all available references given a transcript INLINEFORM1 , where INLINEFORM2 is the INLINEFORM3 word in the transcript; a reference INLINEFORM4 is defined as a binary vector in terms of the existent SU boundaries in INLINEFORM5 . DISPLAYFORM0 where INLINEFORM0 Given a transcript INLINEFORM0 , the candidate segmentation INLINEFORM1 is defined similar to INLINEFORM2 . DISPLAYFORM0 where INLINEFORM0 General Reference and Agreement Ratio A General Reference ( INLINEFORM0 ) is then constructed to calculate the agreement ratio between all references in. It is defined by the boundary frequencies of each reference INLINEFORM1 . DISPLAYFORM0 where DISPLAYFORM0 The Agreement Ratio ( INLINEFORM0 ) is needed to get a numerical value of the distribution of SU boundaries over INLINEFORM1 . 
A value of INLINEFORM2 close to 0 means a low agreement between references in INLINEFORM3 , while INLINEFORM4 means a perfect agreement ( INLINEFORM5 ) in INLINEFORM6 . DISPLAYFORM0 In the equation above, INLINEFORM0 corresponds to the ponderated common boundaries of INLINEFORM1 and INLINEFORM2 to its hypothetical maximum agreement. DISPLAYFORM0 DISPLAYFORM1 Window-Boundaries Reference In Section SECREF2 we discussed about how disfluencies complicate SU segmentation. In a multi-reference environment this causes disagreement between references around a same SU boundary. The way WiSeBE handle disagreements produced by disfluencies is with a Window-boundaries Reference ( INLINEFORM0 ) defined as: DISPLAYFORM0 where each window INLINEFORM0 considers one or more boundaries INLINEFORM1 from INLINEFORM2 with a window separation limit equal to INLINEFORM3 . DISPLAYFORM0 WiSeBEWiSeBE WiSeBE is a normalized score dependent of 1) the performance of INLINEFORM0 over INLINEFORM1 and 2) the agreement between all references in INLINEFORM2 . It is defined as: DISPLAYFORM0 where INLINEFORM0 corresponds to the harmonic mean of precision and recall of INLINEFORM1 with respect to INLINEFORM2 (equation EQREF23 ), while INLINEFORM3 is the agreement ratio defined in ( EQREF15 ). INLINEFORM4 can be interpreted as a scaling factor; a low value will penalize the overall WiSeBE score given the low agreement between references. By contrast, for a high agreement in INLINEFORM5 ( INLINEFORM6 ), INLINEFORM7 . DISPLAYFORM0 DISPLAYFORM1 Equations EQREF24 and EQREF25 describe precision and recall of INLINEFORM0 with respect to INLINEFORM1 . Precision is the number of boundaries INLINEFORM2 inside any window INLINEFORM3 from INLINEFORM4 divided by the total number of boundaries INLINEFORM5 in INLINEFORM6 . Recall corresponds to the number of windows INLINEFORM7 with at least one boundary INLINEFORM8 divided by the number of windows INLINEFORM9 in INLINEFORM10 . Evaluating with WiSeBEWiSeBE To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 . In a first glance we performed the evaluation of the systems against each one of the references independently. Then, we implemented a multi-reference evaluation with INLINEFORM0 . Dataset We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. To encourage the diversity of content format we included newscasts, interviews, reports and round tables. During the transcription phase we opted for a manual transcription process because we observed that using transcripts from an ASR system will difficult in a large degree the manual segmentation process. The number of words per transcript oscilate between 271 and 1,602 with a total number of 8,080. We gave clear instructions to three evaluators ( INLINEFORM0 ) of how segmentation was needed to be perform, including the SU concept and how punctuation marks were going to be taken into account. 
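Since the equations in this copy of the text survive only as placeholders, the sketch below reconstructs the WiSeBE computation from the prose: binary boundary vectors per reference, a general reference of boundary frequencies, an agreement ratio (read here as the weighted share of boundaries marked by two or more references over the hypothetical maximum where all references agree, which may differ in detail from the original formula), window grouping of nearby boundaries, and the final F1-times-agreement score.

```python
# Reconstruction from the prose, NOT the authors' code. The agreement ratio
# in particular is an interpretation of "weighted common boundaries over the
# hypothetical maximum agreement".

def boundary_vector(boundary_positions, n_words):
    """Binary vector with 1 at word positions followed by a SU boundary."""
    positions = set(boundary_positions)
    return [1 if i in positions else 0 for i in range(n_words)]

def general_reference(references):
    """Per-position boundary frequencies summed over all references."""
    return [sum(col) for col in zip(*references)]

def agreement_ratio(general_ref, num_refs):
    """0 when no boundary is shared between references, 1 for perfect agreement."""
    marked = sum(1 for c in general_ref if c > 0)
    common = sum(c for c in general_ref if c >= 2)
    return common / (num_refs * marked) if marked else 0.0

def windows(general_ref, sep_limit=2):
    """Group nearby boundary positions of the general reference into windows."""
    positions = [i for i, c in enumerate(general_ref) if c > 0]
    wins, cur = [], []
    for p in positions:
        if cur and p - cur[-1] > sep_limit:
            wins.append((cur[0], cur[-1]))
            cur = []
        cur.append(p)
    if cur:
        wins.append((cur[0], cur[-1]))
    return wins

def wisebe(candidate, references, sep_limit=2):
    g = general_reference(references)
    wins = windows(g, sep_limit)
    cand = [i for i, b in enumerate(candidate) if b == 1]
    inside = sum(any(lo <= p <= hi for lo, hi in wins) for p in cand)
    hit = sum(any(lo <= p <= hi for p in cand) for lo, hi in wins)
    precision = inside / len(cand) if cand else 0.0
    recall = hit / len(wins) if wins else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1 * agreement_ratio(g, len(references))

refs = [boundary_vector({4, 9}, 12),
        boundary_vector({4, 10}, 12),
        boundary_vector({3, 9}, 12)]
cand = boundary_vector({4, 9}, 12)
print(wisebe(cand, refs))
```

In this toy example the candidate matches the majority boundaries and obtains a perfect F1 against the windowed reference, yet the final score is scaled down by the modest agreement between the three references, which is exactly the behaviour the protocol is designed to capture.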
Periods (.), question marks (?), exclamation marks (!) and semicolons (;) were considered SU delimiters (boundaries) while colons (:) and commas (,) were considered as internal SU marks. The number of segments per transcript and reference can be seen in Table TABREF27 . An interesting remark is that INLINEFORM1 assigns about INLINEFORM2 less boundaries than the mean of the other two references. Evaluation We ran both systems (S1 & S2) over the manually transcribed videos obtaining the number of boundaries shown in Table TABREF29 . In general, it can be seen that S1 predicts INLINEFORM0 more segments than S2. This difference can affect the performance of S1, increasing its probabilities of false positives. Table TABREF30 condenses the performance of both systems evaluated against each one of the references independently. If we focus on F1 scores, performance of both systems varies depending of the reference. For INLINEFORM0 , S1 was better in 5 occasions with respect of S2; S1 was better in 2 occasions only for INLINEFORM1 ; S1 overperformed S2 in 3 occasions concerning INLINEFORM2 and in 4 occasions for INLINEFORM3 (bold). Also from Table TABREF30 we can observe that INLINEFORM0 has a bigger similarity to S1 in 5 occasions compared to other two references, while INLINEFORM1 is more similar to S2 in 7 transcripts (underline). After computing the mean F1 scores over the transcripts, it can be concluded that in average S2 had a better performance segmenting the dataset compared to S1, obtaining a F1 score equal to 0.510. But... What about the complexity of the dataset? Regardless all references have been considered, nor agreement or disagreement between them has been taken into account. All values related to the INLINEFORM0 score are displayed in Table TABREF31 . The Agreement Ratio ( INLINEFORM1 ) between references oscillates between 0.525 for INLINEFORM2 and 0.767 for INLINEFORM3 . The lower the INLINEFORM4 , the bigger the penalization INLINEFORM5 will give to the final score. A good example is S2 for transcript INLINEFORM6 where INLINEFORM7 reaches a value of 0.800, but after considering INLINEFORM8 the INLINEFORM9 score falls to 0.462. It is feasible to think that if all references are taken into account at the same time during evaluation ( INLINEFORM0 ), the score will be bigger compared to an average of independent evaluations ( INLINEFORM1 ); however this is not always true. That is the case of S1 in INLINEFORM2 , which present a slight decrease for INLINEFORM3 compared to INLINEFORM4 . An important remark is the behavior of S1 and S2 concerning INLINEFORM0 . If evaluated without considering any (dis)agreement between references ( INLINEFORM1 ), S2 overperforms S1; this is inverted once the systems are evaluated with INLINEFORM2 . R G AR R_{G_{AR}} and Fleiss' Kappa correlation In Section SECREF3 we described the INLINEFORM0 score and how it relies on the INLINEFORM1 value to scale the performance of INLINEFORM2 over INLINEFORM3 . INLINEFORM4 can intuitively be consider an agreement value over all elements of INLINEFORM5 . To test this hypothesis, we computed the Pearson correlation coefficient ( INLINEFORM6 ) BIBREF32 between INLINEFORM7 and the Fleiss' Kappa BIBREF33 of each video in the dataset ( INLINEFORM8 ). A linear correlation between INLINEFORM0 and INLINEFORM1 can be observed in Table TABREF33 . This is confirmed by a INLINEFORM2 value equal to INLINEFORM3 , which means a very strong positive linear correlation between them. F1 mean F1_{mean} vs. 
WiSeBE Results from Table TABREF31 may give the idea that WiSeBE is just a scaled mean F1 score. While it is true that the two show a linear correlation, WiSeBE may produce a different system ranking than the mean F1 given the integral multi-reference principle it follows. However, what we consider most valuable about WiSeBE is the twofold use it makes of all available references. First, the window-boundaries reference provides a more inclusive reference against which the candidate is evaluated; then, the agreement ratio scales the result depending on the agreement between references. Conclusions In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol motivated by the need for a more reliable way of evaluating the SBD task. We showed that WiSeBE is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. From our point of view, this inclusivity is very important given the difficulties present when working with spoken language and the disagreements that a task like SBD can provoke. WiSeBE is shown to correlate with standard SBD metrics; however, we still want to measure its correlation with extrinsic evaluation techniques like automatic summarization and machine translation. Acknowledgments We would like to acknowledge the support of CHIST-ERA for funding this work through the Access Multilingual Information opinionS (AMIS) (France - Europe) project. We also acknowledge the support given by Prof. Hanifa Boucheneb from the VERIFORM Laboratory (École Polytechnique de Montréal).
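Referring back to the correlation analysis between the agreement ratio and Fleiss' Kappa in the evaluation section above, here is a small sketch of that check; the per-video numbers are placeholders, and only `scipy.stats.pearsonr` is a real library call.

```python
import numpy as np
from scipy.stats import pearsonr

# Sketch of the agreement-ratio vs. Fleiss' Kappa correlation check.
# All input numbers are placeholders, not values from the paper.

def fleiss_kappa(counts):
    """counts: items x categories matrix of annotator votes (equal raters)."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]
    p_cat = counts.sum(axis=0) / (n_items * n_raters)
    p_item = ((counts ** 2).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_exp = p_item.mean(), (p_cat ** 2).sum()
    return (p_bar - p_exp) / (1 - p_exp)

# Per-word (boundary, no-boundary) votes from three annotators for one video.
votes = [[3, 0], [0, 3], [2, 1], [0, 3], [3, 0], [1, 2]]
print(fleiss_kappa(votes))

# Correlating per-video kappas with per-video agreement ratios (placeholders).
kappas = [0.62, 0.71, 0.55, 0.68]
agreement_ratios = [0.60, 0.73, 0.53, 0.67]
r, p_value = pearsonr(kappas, agreement_ratios)
```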
YouTube video transcripts of news videos covering different topics like technology, human rights, terrorism and politics
d1747b1b56fddb05bb1225e98fd3c4c043d74592
d1747b1b56fddb05bb1225e98fd3c4c043d74592_0
Q: Which SBD systems did they compare? Text: Introduction The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging, sentiment analysis and automatic text summarization; originally developed to work with formal written texts, can be applied over the transcripts made by ASR systems BIBREF1 , BIBREF2 , BIBREF3 . However, before applying any of these NLP tasks a segmentation process called Sentence Boundary Detection (SBD) should be performed over ASR transcripts to reach a minimal syntactic information in the text. To measure the performance of a SBD system, the automatically segmented transcript is evaluated against a single reference normally done by a human. But given a transcript, does it exist a unique reference? Or, is it possible that the same transcript could be segmented in five different ways by five different people in the same conditions? If so, which one is correct; and more important, how to fairly evaluate the automatically segmented transcript? These questions are the foundations of Window-based Sentence Boundary Evaluation (WiSeBE), a new semi-supervised metric for evaluating SBD systems based on multi-reference (dis)agreement. The rest of this article is organized as follows. In Section SECREF2 we set the frame of SBD and how it is normally evaluated. WiSeBE is formally described in Section SECREF3 , followed by a multi-reference evaluation in Section SECREF4 . Further analysis of WiSeBE and discussion over the method and alternative multi-reference evaluation is presented in Section SECREF5 . Finally, Section SECREF6 concludes the paper. Sentence Boundary Detection Sentence Boundary Detection (SBD) has been a major research topic science ASR moved to more general domains as conversational speech BIBREF4 , BIBREF5 , BIBREF6 . Performance of ASR systems has improved over the years with the inclusion and combination of new Deep Neural Networks methods BIBREF7 , BIBREF8 , BIBREF0 . As a general rule, the output of ASR systems lacks of any syntactic information such as capitalization and sentence boundaries, showing the interst of ASR systems to obtain the correct sequence of words with almost no concern of the overall structure of the document BIBREF9 . Similar to SBD is the Punctuation Marks Disambiguation (PMD) or Sentence Boundary Disambiguation. This task aims to segment a formal written text into well formed sentences based on the existent punctuation marks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . In this context a sentence is defined (for English) by the Cambridge Dictionary as: “a group of words, usually containing a verb, that expresses a thought in the form of a statement, question, instruction, or exclamation and starts with a capital letter when written”. PMD carries certain complications, some given the ambiguity of punctuation marks within a sentence. A period can denote an acronym, an abbreviation, the end of the sentence or a combination of them as in the following example: The U.S. president, Mr. Donald Trump, is meeting with the F.B.I. director Christopher A. Wray next Thursday at 8p.m. However its difficulties, DPM profits of morphological and lexical information to achieve a correct sentence segmentation. 
By contrast, segmenting an ASR transcript should be done without any (or almost any) lexical information and a flurry definition of sentence. The obvious division in spoken language may be considered speaker utterances. However, in a normal conversation or even in a monologue, the way ideas are organized differs largely from written text. This differences, added to disfluencies like revisions, repetitions, restarts, interruptions and hesitations make the definition of a sentence unclear thus complicating the segmentation task BIBREF14 . Table TABREF2 exemplifies some of the difficulties that are present when working with spoken language. Stolcke & Shriberg BIBREF6 considered a set of linguistic structures as segments including the following list: In BIBREF4 , Meteer & Iyer divided speaker utterances into segments, consisting each of a single independent clause. A segment was considered to begin either at the beginning of an utterance, or after the end of the preceding segment. Any dysfluency between the end of the previous segments and the begging of current one was considered part of the current segments. Rott & Červa BIBREF15 aimed to summarize news delivered orally segmenting the transcripts into “something that is similar to sentences”. They used a syntatic analyzer to identify the phrases within the text. A wide study focused in unbalanced data for the SBD task was performed by Liu et al. BIBREF16 . During this study they followed the segmentation scheme proposed by the Linguistic Data Consortium on the Simple Metadata Annotation Specification V5.0 guideline (SimpleMDE_V5.0) BIBREF14 , dividing the transcripts in Semantic Units. A Semantic Unit (SU) is considered to be an atomic element of the transcript that manages to express a complete thought or idea on the part of the speaker BIBREF14 . Sometimes a SU corresponds to the equivalent of a sentence in written text, but other times (the most part of them) a SU corresponds to a phrase or a single word. SUs seem to be an inclusive conception of a segment, they embrace different previous segment definitions and are flexible enough to deal with the majority of spoken language troubles. For these reasons we will adopt SUs as our segment definition. Sentence Boundary Evaluation SBD research has been focused on two different aspects; features and methods. Regarding the features, some work focused on acoustic elements like pauses duration, fundamental frequencies, energy, rate of speech, volume change and speaker turn BIBREF17 , BIBREF18 , BIBREF19 . The other kind of features used in SBD are textual or lexical features. They rely on the transcript content to extract features like bag-of-word, POS tags or word embeddings BIBREF20 , BIBREF18 , BIBREF21 , BIBREF22 , BIBREF15 , BIBREF6 , BIBREF23 . Mixture of acoustic and lexical features have also been explored BIBREF24 , BIBREF25 , BIBREF19 , BIBREF26 , which is advantageous when both audio signal and transcript are available. With respect to the methods used for SBD, they mostly rely on statistical/neural machine translation BIBREF18 , BIBREF27 , language models BIBREF9 , BIBREF16 , BIBREF22 , BIBREF6 , conditional random fields BIBREF21 , BIBREF28 , BIBREF23 and deep neural networks BIBREF29 , BIBREF20 , BIBREF13 . Despite their differences in features and/or methodology, almost all previous cited research share a common element; the evaluation methodology. 
Metrics as Precision, Recall, F1-score, Classification Error Rate and Slot Error Rate (SER) are used to evaluate the proposed system against one reference. As discussed in Section SECREF1 , further NLP tasks rely on the result of SBD, meaning that is crucial to have a good segmentation. But comparing the output of a system against a unique reference will provide a reliable score to decide if the system is good or bad? Bohac et al. BIBREF24 compared the human ability to punctuate recognized spontaneous speech. They asked 10 people (correctors) to punctuate about 30 minutes of ASR transcripts in Czech. For an average of 3,962 words, the punctuation marks placed by correctors varied between 557 and 801; this means a difference of 244 segments for the same transcript. Over all correctors, the absolute consensus for period (.) was only 4.6% caused by the replacement of other punctuation marks as semicolons (;) and exclamation marks (!). These results are understandable if we consider the difficulties presented previously in this section. To our knowledge, the amount of studies that have tried to target the sentence boundary evaluation with a multi-reference approach is very small. In BIBREF24 , Bohac et al. evaluated the overall punctuation accuracy for Czech in a straightforward multi-reference framework. They considered a period (.) valid if at least five of their 10 correctors agreed on its position. Kolář & Lamel BIBREF25 considered two independent references to evaluate their system and proposed two approaches. The fist one was to calculate the SER for each of one the two available references and then compute their mean. They found this approach to be very strict because for those boundaries where no agreement between references existed, the system was going to be partially wrong even the fact that it has correctly predicted the boundary. Their second approach tried to moderate the number of unjust penalizations. For this case, a classification was considered incorrect only if it didn't match either of the two references. These two examples exemplify the real need and some straightforward solutions for multi-reference evaluation metrics. However, we think that it is possible to consider in a more inclusive approach the similarities and differences that multiple references could provide into a sentence boundary evaluation protocol. Window-Based Sentence Boundary Evaluation Window-Based Sentence Boundary Evaluation (WiSeBE) is a semi-automatic multi-reference sentence boundary evaluation protocol which considers the performance of a candidate segmentation over a set of segmentation references and the agreement between those references. Let INLINEFORM0 be the set of all available references given a transcript INLINEFORM1 , where INLINEFORM2 is the INLINEFORM3 word in the transcript; a reference INLINEFORM4 is defined as a binary vector in terms of the existent SU boundaries in INLINEFORM5 . DISPLAYFORM0 where INLINEFORM0 Given a transcript INLINEFORM0 , the candidate segmentation INLINEFORM1 is defined similar to INLINEFORM2 . DISPLAYFORM0 where INLINEFORM0 General Reference and Agreement Ratio A General Reference ( INLINEFORM0 ) is then constructed to calculate the agreement ratio between all references in. It is defined by the boundary frequencies of each reference INLINEFORM1 . DISPLAYFORM0 where DISPLAYFORM0 The Agreement Ratio ( INLINEFORM0 ) is needed to get a numerical value of the distribution of SU boundaries over INLINEFORM1 . 
A value of INLINEFORM2 close to 0 means a low agreement between references in INLINEFORM3 , while INLINEFORM4 means a perfect agreement ( INLINEFORM5 ) in INLINEFORM6 . DISPLAYFORM0 In the equation above, INLINEFORM0 corresponds to the ponderated common boundaries of INLINEFORM1 and INLINEFORM2 to its hypothetical maximum agreement. DISPLAYFORM0 DISPLAYFORM1 Window-Boundaries Reference In Section SECREF2 we discussed about how disfluencies complicate SU segmentation. In a multi-reference environment this causes disagreement between references around a same SU boundary. The way WiSeBE handle disagreements produced by disfluencies is with a Window-boundaries Reference ( INLINEFORM0 ) defined as: DISPLAYFORM0 where each window INLINEFORM0 considers one or more boundaries INLINEFORM1 from INLINEFORM2 with a window separation limit equal to INLINEFORM3 . DISPLAYFORM0 WiSeBEWiSeBE WiSeBE is a normalized score dependent of 1) the performance of INLINEFORM0 over INLINEFORM1 and 2) the agreement between all references in INLINEFORM2 . It is defined as: DISPLAYFORM0 where INLINEFORM0 corresponds to the harmonic mean of precision and recall of INLINEFORM1 with respect to INLINEFORM2 (equation EQREF23 ), while INLINEFORM3 is the agreement ratio defined in ( EQREF15 ). INLINEFORM4 can be interpreted as a scaling factor; a low value will penalize the overall WiSeBE score given the low agreement between references. By contrast, for a high agreement in INLINEFORM5 ( INLINEFORM6 ), INLINEFORM7 . DISPLAYFORM0 DISPLAYFORM1 Equations EQREF24 and EQREF25 describe precision and recall of INLINEFORM0 with respect to INLINEFORM1 . Precision is the number of boundaries INLINEFORM2 inside any window INLINEFORM3 from INLINEFORM4 divided by the total number of boundaries INLINEFORM5 in INLINEFORM6 . Recall corresponds to the number of windows INLINEFORM7 with at least one boundary INLINEFORM8 divided by the number of windows INLINEFORM9 in INLINEFORM10 . Evaluating with WiSeBEWiSeBE To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference enviroment. The first system (S1) employs a Convolutional Neural Network to determine if the middle word of a sliding window corresponds to a SU boundary or not BIBREF30 . The second approach (S2) by contrast, introduces a bidirectional Recurrent Neural Network model with attention mechanism for boundary detection BIBREF31 . In a first glance we performed the evaluation of the systems against each one of the references independently. Then, we implemented a multi-reference evaluation with INLINEFORM0 . Dataset We focused evaluation over a small but diversified dataset composed by 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics with a length variation between 2 and 10 minutes. To encourage the diversity of content format we included newscasts, interviews, reports and round tables. During the transcription phase we opted for a manual transcription process because we observed that using transcripts from an ASR system will difficult in a large degree the manual segmentation process. The number of words per transcript oscilate between 271 and 1,602 with a total number of 8,080. We gave clear instructions to three evaluators ( INLINEFORM0 ) of how segmentation was needed to be perform, including the SU concept and how punctuation marks were going to be taken into account. 
Periods (.), question marks (?), exclamation marks (!) and semicolons (;) were considered SU delimiters (boundaries) while colons (:) and commas (,) were considered as internal SU marks. The number of segments per transcript and reference can be seen in Table TABREF27 . An interesting remark is that INLINEFORM1 assigns about INLINEFORM2 less boundaries than the mean of the other two references. Evaluation We ran both systems (S1 & S2) over the manually transcribed videos obtaining the number of boundaries shown in Table TABREF29 . In general, it can be seen that S1 predicts INLINEFORM0 more segments than S2. This difference can affect the performance of S1, increasing its probabilities of false positives. Table TABREF30 condenses the performance of both systems evaluated against each one of the references independently. If we focus on F1 scores, performance of both systems varies depending of the reference. For INLINEFORM0 , S1 was better in 5 occasions with respect of S2; S1 was better in 2 occasions only for INLINEFORM1 ; S1 overperformed S2 in 3 occasions concerning INLINEFORM2 and in 4 occasions for INLINEFORM3 (bold). Also from Table TABREF30 we can observe that INLINEFORM0 has a bigger similarity to S1 in 5 occasions compared to other two references, while INLINEFORM1 is more similar to S2 in 7 transcripts (underline). After computing the mean F1 scores over the transcripts, it can be concluded that in average S2 had a better performance segmenting the dataset compared to S1, obtaining a F1 score equal to 0.510. But... What about the complexity of the dataset? Regardless all references have been considered, nor agreement or disagreement between them has been taken into account. All values related to the INLINEFORM0 score are displayed in Table TABREF31 . The Agreement Ratio ( INLINEFORM1 ) between references oscillates between 0.525 for INLINEFORM2 and 0.767 for INLINEFORM3 . The lower the INLINEFORM4 , the bigger the penalization INLINEFORM5 will give to the final score. A good example is S2 for transcript INLINEFORM6 where INLINEFORM7 reaches a value of 0.800, but after considering INLINEFORM8 the INLINEFORM9 score falls to 0.462. It is feasible to think that if all references are taken into account at the same time during evaluation ( INLINEFORM0 ), the score will be bigger compared to an average of independent evaluations ( INLINEFORM1 ); however this is not always true. That is the case of S1 in INLINEFORM2 , which present a slight decrease for INLINEFORM3 compared to INLINEFORM4 . An important remark is the behavior of S1 and S2 concerning INLINEFORM0 . If evaluated without considering any (dis)agreement between references ( INLINEFORM1 ), S2 overperforms S1; this is inverted once the systems are evaluated with INLINEFORM2 . R G AR R_{G_{AR}} and Fleiss' Kappa correlation In Section SECREF3 we described the INLINEFORM0 score and how it relies on the INLINEFORM1 value to scale the performance of INLINEFORM2 over INLINEFORM3 . INLINEFORM4 can intuitively be consider an agreement value over all elements of INLINEFORM5 . To test this hypothesis, we computed the Pearson correlation coefficient ( INLINEFORM6 ) BIBREF32 between INLINEFORM7 and the Fleiss' Kappa BIBREF33 of each video in the dataset ( INLINEFORM8 ). A linear correlation between INLINEFORM0 and INLINEFORM1 can be observed in Table TABREF33 . This is confirmed by a INLINEFORM2 value equal to INLINEFORM3 , which means a very strong positive linear correlation between them. F1 mean F1_{mean} vs. 
WiSeBEWiSeBE Results form Table TABREF31 may give an idea that INLINEFORM0 is just an scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider the most profitable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 to provide a more inclusive reference against to whom be evaluated and then, the computation of INLINEFORM6 , which scales the result depending of the agreement between references. Conclusions In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the necessity of having a more reliable way for evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. According to your point of view, this inclusivity is very important given the difficulties that are present when working with spoken language and the possible disagreements that a task like SBD could provoke. INLINEFORM0 shows to be correlated with standard SBD metrics, however we want to measure its correlation with extrinsic evaluations techniques like automatic summarization and machine translation. Acknowledgments We would like to acknowledge the support of CHIST-ERA for funding this work through the Access Multilingual Information opinionS (AMIS), (France - Europe) project. We also like to acknowledge the support given by the Prof. Hanifa Boucheneb from VERIFORM Laboratory (École Polytechnique de Montréal).
Convolutional Neural Network, bidirectional Recurrent Neural Network model with attention mechanism
5a29b1f9181f5809e2b0f97b4d0e00aea8996892
5a29b1f9181f5809e2b0f97b4d0e00aea8996892_0
Q: What makes it a more reliable metric? Text: Introduction The goal of Automatic Speech Recognition (ASR) is to transform spoken data into a written representation, thus enabling natural human-machine interaction BIBREF0 with further Natural Language Processing (NLP) tasks. Machine translation, question answering, semantic parsing, POS tagging, sentiment analysis and automatic text summarization; originally developed to work with formal written texts, can be applied over the transcripts made by ASR systems BIBREF1 , BIBREF2 , BIBREF3 . However, before applying any of these NLP tasks a segmentation process called Sentence Boundary Detection (SBD) should be performed over ASR transcripts to reach a minimal syntactic information in the text. To measure the performance of a SBD system, the automatically segmented transcript is evaluated against a single reference normally done by a human. But given a transcript, does it exist a unique reference? Or, is it possible that the same transcript could be segmented in five different ways by five different people in the same conditions? If so, which one is correct; and more important, how to fairly evaluate the automatically segmented transcript? These questions are the foundations of Window-based Sentence Boundary Evaluation (WiSeBE), a new semi-supervised metric for evaluating SBD systems based on multi-reference (dis)agreement. The rest of this article is organized as follows. In Section SECREF2 we set the frame of SBD and how it is normally evaluated. WiSeBE is formally described in Section SECREF3 , followed by a multi-reference evaluation in Section SECREF4 . Further analysis of WiSeBE and discussion over the method and alternative multi-reference evaluation is presented in Section SECREF5 . Finally, Section SECREF6 concludes the paper. Sentence Boundary Detection Sentence Boundary Detection (SBD) has been a major research topic science ASR moved to more general domains as conversational speech BIBREF4 , BIBREF5 , BIBREF6 . Performance of ASR systems has improved over the years with the inclusion and combination of new Deep Neural Networks methods BIBREF7 , BIBREF8 , BIBREF0 . As a general rule, the output of ASR systems lacks of any syntactic information such as capitalization and sentence boundaries, showing the interst of ASR systems to obtain the correct sequence of words with almost no concern of the overall structure of the document BIBREF9 . Similar to SBD is the Punctuation Marks Disambiguation (PMD) or Sentence Boundary Disambiguation. This task aims to segment a formal written text into well formed sentences based on the existent punctuation marks BIBREF10 , BIBREF11 , BIBREF12 , BIBREF13 . In this context a sentence is defined (for English) by the Cambridge Dictionary as: “a group of words, usually containing a verb, that expresses a thought in the form of a statement, question, instruction, or exclamation and starts with a capital letter when written”. PMD carries certain complications, some given the ambiguity of punctuation marks within a sentence. A period can denote an acronym, an abbreviation, the end of the sentence or a combination of them as in the following example: The U.S. president, Mr. Donald Trump, is meeting with the F.B.I. director Christopher A. Wray next Thursday at 8p.m. However its difficulties, DPM profits of morphological and lexical information to achieve a correct sentence segmentation. 
By contrast, segmenting an ASR transcript should be done without any (or almost any) lexical information and with only a blurry definition of a sentence. The obvious division in spoken language may be considered to be speaker utterances. However, in a normal conversation or even in a monologue, the way ideas are organized differs largely from written text. These differences, added to disfluencies like revisions, repetitions, restarts, interruptions and hesitations, make the definition of a sentence unclear, thus complicating the segmentation task BIBREF14 . Table TABREF2 exemplifies some of the difficulties that are present when working with spoken language. Stolcke & Shriberg BIBREF6 considered a set of linguistic structures as segments, including the following list: In BIBREF4 , Meteer & Iyer divided speaker utterances into segments, each consisting of a single independent clause. A segment was considered to begin either at the beginning of an utterance, or after the end of the preceding segment. Any disfluency between the end of the previous segment and the beginning of the current one was considered part of the current segment. Rott & Červa BIBREF15 aimed to summarize news delivered orally by segmenting the transcripts into “something that is similar to sentences”. They used a syntactic analyzer to identify the phrases within the text. A wide study focused on unbalanced data for the SBD task was performed by Liu et al. BIBREF16 . During this study they followed the segmentation scheme proposed by the Linguistic Data Consortium in the Simple Metadata Annotation Specification V5.0 guideline (SimpleMDE_V5.0) BIBREF14 , dividing the transcripts into Semantic Units. A Semantic Unit (SU) is considered to be an atomic element of the transcript that manages to express a complete thought or idea on the part of the speaker BIBREF14 . Sometimes an SU corresponds to the equivalent of a sentence in written text, but most of the time an SU corresponds to a phrase or a single word. SUs seem to be an inclusive conception of a segment; they embrace different previous segment definitions and are flexible enough to deal with the majority of spoken language difficulties. For these reasons we will adopt SUs as our segment definition. Sentence Boundary Evaluation SBD research has focused on two different aspects: features and methods. Regarding the features, some work focused on acoustic elements like pause duration, fundamental frequencies, energy, rate of speech, volume change and speaker turn BIBREF17 , BIBREF18 , BIBREF19 . The other kind of features used in SBD are textual or lexical features. They rely on the transcript content to extract features like bag-of-words, POS tags or word embeddings BIBREF20 , BIBREF18 , BIBREF21 , BIBREF22 , BIBREF15 , BIBREF6 , BIBREF23 . Mixtures of acoustic and lexical features have also been explored BIBREF24 , BIBREF25 , BIBREF19 , BIBREF26 , which is advantageous when both the audio signal and the transcript are available. With respect to the methods used for SBD, they mostly rely on statistical/neural machine translation BIBREF18 , BIBREF27 , language models BIBREF9 , BIBREF16 , BIBREF22 , BIBREF6 , conditional random fields BIBREF21 , BIBREF28 , BIBREF23 and deep neural networks BIBREF29 , BIBREF20 , BIBREF13 . Despite their differences in features and/or methodology, almost all previously cited research shares a common element: the evaluation methodology. 
Metrics such as Precision, Recall, F1-score, Classification Error Rate and Slot Error Rate (SER) are used to evaluate the proposed system against one reference. As discussed in Section SECREF1 , further NLP tasks rely on the result of SBD, meaning that it is crucial to have a good segmentation. But will comparing the output of a system against a unique reference provide a reliable score to decide whether the system is good or bad? Bohac et al. BIBREF24 compared the human ability to punctuate recognized spontaneous speech. They asked 10 people (correctors) to punctuate about 30 minutes of ASR transcripts in Czech. For an average of 3,962 words, the punctuation marks placed by correctors varied between 557 and 801; this means a difference of 244 segments for the same transcript. Over all correctors, the absolute consensus for the period (.) was only 4.6%, caused by its replacement with other punctuation marks such as semicolons (;) and exclamation marks (!). These results are understandable if we consider the difficulties presented previously in this section. To our knowledge, the number of studies that have tried to target sentence boundary evaluation with a multi-reference approach is very small. In BIBREF24 , Bohac et al. evaluated the overall punctuation accuracy for Czech in a straightforward multi-reference framework. They considered a period (.) valid if at least five of their 10 correctors agreed on its position. Kolář & Lamel BIBREF25 considered two independent references to evaluate their system and proposed two approaches. The first one was to calculate the SER for each of the two available references and then compute their mean. They found this approach to be very strict because, for those boundaries where no agreement between references existed, the system was going to be partially wrong even though it had correctly predicted the boundary. Their second approach tried to moderate the number of unjustified penalizations. In this case, a classification was considered incorrect only if it did not match either of the two references. These two examples exemplify the real need for, and some straightforward solutions to, multi-reference evaluation metrics. However, we think that it is possible to incorporate, in a more inclusive approach, the similarities and differences that multiple references can provide into a sentence boundary evaluation protocol. Window-Based Sentence Boundary Evaluation Window-Based Sentence Boundary Evaluation (WiSeBE) is a semi-automatic multi-reference sentence boundary evaluation protocol which considers the performance of a candidate segmentation over a set of segmentation references and the agreement between those references. Let INLINEFORM0 be the set of all available references given a transcript INLINEFORM1 , where INLINEFORM2 is the INLINEFORM3 word in the transcript; a reference INLINEFORM4 is defined as a binary vector in terms of the existing SU boundaries in INLINEFORM5 . DISPLAYFORM0 where INLINEFORM0 Given a transcript INLINEFORM0 , the candidate segmentation INLINEFORM1 is defined similarly to INLINEFORM2 . DISPLAYFORM0 where INLINEFORM0 General Reference and Agreement Ratio A General Reference ( INLINEFORM0 ) is then constructed to calculate the agreement ratio between all the references. It is defined by the boundary frequencies of each reference INLINEFORM1 . DISPLAYFORM0 where DISPLAYFORM0 The Agreement Ratio ( INLINEFORM0 ) is needed to get a numerical value of the distribution of SU boundaries over INLINEFORM1 . 
A value of INLINEFORM2 close to 0 means a low agreement between references in INLINEFORM3 , while INLINEFORM4 means a perfect agreement ( INLINEFORM5 ) in INLINEFORM6 . DISPLAYFORM0 In the equation above, INLINEFORM0 corresponds to the weighted common boundaries of INLINEFORM1 and INLINEFORM2 to its hypothetical maximum agreement. DISPLAYFORM0 DISPLAYFORM1 Window-Boundaries Reference In Section SECREF2 we discussed how disfluencies complicate SU segmentation. In a multi-reference environment this causes disagreement between references around the same SU boundary. The way WiSeBE handles disagreements produced by disfluencies is with a Window-boundaries Reference ( INLINEFORM0 ), defined as: DISPLAYFORM0 where each window INLINEFORM0 considers one or more boundaries INLINEFORM1 from INLINEFORM2 with a window separation limit equal to INLINEFORM3 . DISPLAYFORM0 WiSeBE WiSeBE is a normalized score dependent on 1) the performance of INLINEFORM0 over INLINEFORM1 and 2) the agreement between all references in INLINEFORM2 . It is defined as: DISPLAYFORM0 where INLINEFORM0 corresponds to the harmonic mean of precision and recall of INLINEFORM1 with respect to INLINEFORM2 (equation EQREF23 ), while INLINEFORM3 is the agreement ratio defined in ( EQREF15 ). INLINEFORM4 can be interpreted as a scaling factor; a low value will penalize the overall WiSeBE score given the low agreement between references. By contrast, for a high agreement in INLINEFORM5 ( INLINEFORM6 ), INLINEFORM7 . DISPLAYFORM0 DISPLAYFORM1 Equations EQREF24 and EQREF25 describe precision and recall of INLINEFORM0 with respect to INLINEFORM1 . Precision is the number of boundaries INLINEFORM2 inside any window INLINEFORM3 from INLINEFORM4 divided by the total number of boundaries INLINEFORM5 in INLINEFORM6 . Recall corresponds to the number of windows INLINEFORM7 with at least one boundary INLINEFORM8 divided by the number of windows INLINEFORM9 in INLINEFORM10 . Evaluating with WiSeBE To exemplify the INLINEFORM0 score we evaluated and compared the performance of two different SBD systems over a set of YouTube videos in a multi-reference environment. The first system (S1) employs a Convolutional Neural Network to determine whether the middle word of a sliding window corresponds to an SU boundary or not BIBREF30 . The second approach (S2), by contrast, introduces a bidirectional Recurrent Neural Network model with an attention mechanism for boundary detection BIBREF31 . First, we performed the evaluation of the systems against each one of the references independently. Then, we implemented a multi-reference evaluation with INLINEFORM0 . Dataset We focused the evaluation on a small but diversified dataset composed of 10 YouTube videos in the English language in the news context. The selected videos cover different topics like technology, human rights, terrorism and politics, with lengths varying between 2 and 10 minutes. To encourage diversity of content format we included newscasts, interviews, reports and round tables. During the transcription phase we opted for a manual transcription process because we observed that using transcripts from an ASR system would greatly complicate the manual segmentation process. The number of words per transcript oscillates between 271 and 1,602, with a total of 8,080 words. We gave clear instructions to three evaluators ( INLINEFORM0 ) on how segmentation needed to be performed, including the SU concept and how punctuation marks were going to be taken into account. 
Periods (.), question marks (?), exclamation marks (!) and semicolons (;) were considered SU delimiters (boundaries), while colons (:) and commas (,) were considered internal SU marks. The number of segments per transcript and reference can be seen in Table TABREF27 . An interesting remark is that INLINEFORM1 assigns about INLINEFORM2 fewer boundaries than the mean of the other two references. Evaluation We ran both systems (S1 & S2) over the manually transcribed videos, obtaining the number of boundaries shown in Table TABREF29 . In general, it can be seen that S1 predicts INLINEFORM0 more segments than S2. This difference can affect the performance of S1, increasing its probability of producing false positives. Table TABREF30 condenses the performance of both systems evaluated against each one of the references independently. If we focus on F1 scores, the performance of both systems varies depending on the reference. For INLINEFORM0 , S1 was better than S2 on 5 occasions; S1 was better on only 2 occasions for INLINEFORM1 ; S1 outperformed S2 on 3 occasions concerning INLINEFORM2 and on 4 occasions for INLINEFORM3 (bold). Also from Table TABREF30 we can observe that INLINEFORM0 is more similar to S1 on 5 occasions compared to the other two references, while INLINEFORM1 is more similar to S2 on 7 transcripts (underline). After computing the mean F1 scores over the transcripts, it can be concluded that on average S2 performed better at segmenting the dataset than S1, obtaining an F1 score equal to 0.510. But... what about the complexity of the dataset? Although all references have been considered, neither the agreement nor the disagreement between them has been taken into account. All values related to the INLINEFORM0 score are displayed in Table TABREF31 . The Agreement Ratio ( INLINEFORM1 ) between references oscillates between 0.525 for INLINEFORM2 and 0.767 for INLINEFORM3 . The lower the INLINEFORM4 , the bigger the penalization INLINEFORM5 applies to the final score. A good example is S2 for transcript INLINEFORM6 , where INLINEFORM7 reaches a value of 0.800, but after considering INLINEFORM8 the INLINEFORM9 score falls to 0.462. It is feasible to think that if all references are taken into account at the same time during evaluation ( INLINEFORM0 ), the score will be higher compared to an average of independent evaluations ( INLINEFORM1 ); however, this is not always true. That is the case of S1 in INLINEFORM2 , which presents a slight decrease for INLINEFORM3 compared to INLINEFORM4 . An important remark is the behavior of S1 and S2 concerning INLINEFORM0 . If evaluated without considering any (dis)agreement between references ( INLINEFORM1 ), S2 outperforms S1; this is inverted once the systems are evaluated with INLINEFORM2 . R_{G_{AR}} and Fleiss' Kappa correlation In Section SECREF3 we described the INLINEFORM0 score and how it relies on the INLINEFORM1 value to scale the performance of INLINEFORM2 over INLINEFORM3 . INLINEFORM4 can intuitively be considered an agreement value over all elements of INLINEFORM5 . To test this hypothesis, we computed the Pearson correlation coefficient ( INLINEFORM6 ) BIBREF32 between INLINEFORM7 and the Fleiss' Kappa BIBREF33 of each video in the dataset ( INLINEFORM8 ). A linear correlation between INLINEFORM0 and INLINEFORM1 can be observed in Table TABREF33 . This is confirmed by a INLINEFORM2 value equal to INLINEFORM3 , which means a very strong positive linear correlation between them. F1_{mean} vs. WiSeBE 
Results from Table TABREF31 may give the idea that INLINEFORM0 is just a scaled INLINEFORM1 . While it is true that they show a linear correlation, INLINEFORM2 may produce a different system ranking than INLINEFORM3 given the integral multi-reference principle it follows. However, what we consider most valuable about INLINEFORM4 is the twofold inclusion of all available references it performs. First, the construction of INLINEFORM5 provides a more inclusive reference against which to evaluate, and then the computation of INLINEFORM6 scales the result depending on the agreement between references. Conclusions In this paper we presented WiSeBE, a semi-automatic multi-reference sentence boundary evaluation protocol based on the necessity of having a more reliable way of evaluating the SBD task. We showed how INLINEFORM0 is an inclusive metric which not only evaluates the performance of a system against all references, but also takes into account the agreement between them. From our point of view, this inclusivity is very important given the difficulties that are present when working with spoken language and the possible disagreements that a task like SBD could provoke. INLINEFORM0 is shown to be correlated with standard SBD metrics; however, we want to measure its correlation with extrinsic evaluation techniques like automatic summarization and machine translation. Acknowledgments We would like to acknowledge the support of CHIST-ERA for funding this work through the Access Multilingual Information opinionS (AMIS) (France - Europe) project. We would also like to acknowledge the support given by Prof. Hanifa Boucheneb from the VERIFORM Laboratory (École Polytechnique de Montréal).
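The WiSeBE protocol described above can be sketched directly from the prose, even though the exact formulas are elided in this extract. The snippet below is a minimal illustration, not the authors' code: references and the candidate are binary boundary vectors, the agreement ratio is assumed to be the observed boundary mass over its maximum possible value, windows group reference boundaries whose positions differ by at most `d_s`, and the final score is F1 times the agreement ratio, following the precision/recall definitions given in the text.

```python
from typing import List, Tuple

def wisebe(candidate: List[int], references: List[List[int]], d_s: int = 1) -> float:
    """Sketch of the WiSeBE score for binary boundary vectors of equal length."""
    n, m = len(candidate), len(references)

    # General reference: boundary frequency at each word position.
    general = [sum(ref[i] for ref in references) for i in range(n)]

    # Agreement ratio (assumed form): observed boundary mass over the maximum
    # possible mass if every reference agreed on every marked position.
    marked = [f for f in general if f > 0]
    ar = sum(marked) / (m * len(marked)) if marked else 0.0

    # Window-boundaries reference: merge nearby general-reference boundaries.
    positions = [i for i, f in enumerate(general) if f > 0]
    windows: List[Tuple[int, int]] = []
    for p in positions:
        if windows and p - windows[-1][1] <= d_s:
            windows[-1] = (windows[-1][0], p)       # extend the current window
        else:
            windows.append((p, p))                  # open a new window

    cand_pos = [i for i, b in enumerate(candidate) if b]
    in_window = sum(any(lo <= p <= hi for lo, hi in windows) for p in cand_pos)
    hit_windows = sum(any(lo <= p <= hi for p in cand_pos) for lo, hi in windows)

    precision = in_window / len(cand_pos) if cand_pos else 0.0
    recall = hit_windows / len(windows) if windows else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1 * ar
```

As a sanity check on the sketch, three references that all place their only boundary at the same position give an agreement ratio of 1.0, and a candidate that marks exactly that position scores 1.0; a candidate boundary that falls outside every window lowers precision and therefore the final score.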
It takes into account the agreement between different systems
f5db12cd0a8cd706a232c69d94b2258596aa068c
f5db12cd0a8cd706a232c69d94b2258596aa068c_0
Q: How much in experiments is performance improved for models trained with generated adversarial examples? Text: Introduction Paraphrase identification is to determine whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 or text summarization BIBREF4. Paraphrase identification can be viewed as a sentence matching problem. Many deep models have recently been proposed and their performance has been greatly advanced on benchmark datasets BIBREF5, BIBREF6, BIBREF7. However, previous research shows that deep models are vulnerable to adversarial examples BIBREF8, BIBREF9 which are particularly constructed to make models fail. Adversarial examples are of high value for revealing the weakness and robustness issues of models, and can thereby be utilized to improve the model performance for challenging cases, robustness, and also security. In this paper, we propose a novel algorithm to generate a new type of adversarial examples for paraphrase identification. To generate an adversarial example that consists of a sentence pair, we first sample an original sentence pair from the dataset, and then adversarially replace some word pairs with difficult common words respectively. Here each pair of words consists of two words from the two sentences respectively. And difficult common words are words that we adversarially select to appear in both sentences such that the example becomes harder for the target model. The target model is likely to be distracted by difficult common words and fail to judge the similarity or difference in the context, thereby making a wrong prediction. Our adversarial examples are motivated by two observations. Firstly, for a sentence pair with a label matched, when some common word pairs are replaced with difficult common words respectively, models can be fooled to predict an incorrect label unmatched. As the first example in Figure FIGREF1 shows, we can replace two pairs of common words, “purpose” and “life”, with another common words “measure” and “value” respectively. The modified sentence pair remains matched but fools the target model. It is mainly due to the bias between different words and some words are more difficult for the model. When such words appear in the example, the model fails to combine them with the unmodified context and judge the overall similarity of the sentence pair. Secondly, for an unmatched sentence pair, when some word pairs, not necessarily common words, are replaced with difficult common words, models can be fooled to predict an incorrect label matched. As the second example in Figure FIGREF1 shows, we can replace words “Gmail” and “school” with a common word “credit”, and replace words “account” and “management” with ”score”. The modified sentences remain unmatched, but the target model can be fooled to predict matched for being distracted by the common words while ignoring the difference in the unmodified context. Following these observations, we focus on robustness issues regarding capturing semantic similarity or difference in the unmodified part when distracted by difficult common words in the modified part. We try to modify an original example into an adversarial one with multiple steps. 
In each step, for a matched example, we replace some pair of common words together, with another word adversarially selected from the vocabulary; and for an unmatched example, we replace some word pair, not necessarily a common word pair, with a common word. In this way, we replace a pair of words together from two sentences respectively with an adversarially selected word in each step. To preserve the original label and grammaticality, we impose a few heuristic constraints on replaceable positions, and apply a language model to generate substitution words that are compatible with the context. We aim to adversarially find a word replacement solution that maximizes the target model loss and makes the model fail, using beam search. We generate valid adversarial examples that are substantially different from those in previous work for paraphrase identification. Our adversarial examples are not limited to be semantically equivalent to original sentences and the unmodified parts of the two sentences are of low lexical similarity. To the best of our knowledge, none of previous work is able to generate such kind of adversarial examples. We further discuss our difference with previous work in Section 2.2. In summary, we mainly make the following contributions: We propose an algorithm to generate new adversarial examples for paraphrase identification. Our adversarial examples focus on robustness issues that are substantially different from those in previous work. We reveal a new type of robustness issues in deep paraphrase identification models regarding difficult common words. Experiments show that the target models have a severe performance drop on the adversarial examples, while human annotators are much less affected and most modified sentences retain a good grammaticality. Using our adversarial examples in adversarial training can mitigate the robustness issues, and these examples can foster future research. Related Work ::: Deep Paraphrase Identification Paraphrase identification can be viewed as a problem of sentence matching. Recently, many deep models for sentence matching have been proposed and achieved great advancements on benchmark datasets. Among those, some approaches encode each sentence independently and apply a classifier on the embeddings of two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models make strong interactions between two sentences by jointly encoding and matching sentences BIBREF5, BIBREF13, BIBREF14 or hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples for revealing their robustness issues and improving their robustness. Related Work ::: Adversarial Examples for NLP Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules. 
However, for some tasks including paraphrase identification, adversarial examples can be semantically different from original sentences, to study other robustness issues tailored to the corresponding tasks. For sentence matching and paraphrase identification, other types of adversarial examples can be obtained by considering the relation and the correspondence between two sentences. BIBREF25 considered logical rules of sentence relations but can only generate unlabelled adversarial examples. BIBREF26 and BIBREF27 generated a sentence pair by modifying a single original sentence. They combined both original and modified sentences to form a pair. They modified the original sentence using back translation, word swapping, or single word replacement with lexical knowledge. Among them, back translation still aimed to produce semantically equivalent sentences; the others generated pairs of sentences with large Bag-of-Words (BOW) similarities, and the unmodified parts of the two sentences are exactly the same, so these same unmodified parts required little matching by target models. By contrast, we generate new adversarial examples with targeted labels by modifying a pair of original sentences together, using difficult common words. The modified sentences can be semantically different from original ones but still valid. The generated sentence pairs have much lower BOW similarities, and the unmodified parts are lexically diverse to reveal robustness issues regarding matching these parts when distracted by difficult common words in the modified parts. Thereby we study a new kind of robustness issues in paraphrase identification. Related Work ::: Adversarial Example Generation For a certain type of adversarial examples, adversarial attacks or adversarial example generation aim to find examples that are within the defined type and make existing models fail. Some work has no access to the target model until an adversarial dataset is generated BIBREF28, BIBREF26, BIBREF23, BIBREF24, BIBREF29, BIBREF27. However, in many cases including ours, finding successful adversarial examples, i.e. examples on which the target model fails, is challenging, and employing an attack algorithm with access to the target model during generation is often necessary to ensure a high success rate. Some prior work used gradient-based methods BIBREF30, BIBREF19, BIBREF31, requiring the model gradients to be accessible in addition to the output, and thus are inapplicable in black-box settings BIBREF21 where only model outputs are accessible. Though, the beam search in BIBREF19 can be adapted to black-box settings. Gradient-free methods for NLP generally construct adversarial examples by querying the target model for output scores and making generation decisions to maximize the model loss. BIBREF25 searched in the solution space. One approach in BIBREF28 greedily made word replacements and queried the target model in several steps. BIBREF21 employed a genetic algorithm. BIBREF32 proposed a two-stage greedy algorithm and a method with gumbel softmax to improve the efficiency. In this work, we also focus on a black-box setting, which is more challenging than white-box settings. We use a two-stage beam search to find adversarial examples in multiple steps. We clarify that the major focus of this work is on studying new robustness issues and a new type of adversarial examples, instead of attack algorithms for an existing certain type of adversarial examples. 
Therefore, the choice of the attack algorithm is minor for this work as long as the success rates are sufficiently high. Methodology ::: Task Definition Paraphrase identification can be formulated as follows: given two sentences $P=p_1p_2\cdots p_n$ and $Q=q_1q_2\cdots q_m$, the goal is to predict whether $P$ and $Q$ are paraphrases of each other, by estimating a probability distribution where $y\in \mathcal {Y} = \lbrace matched, unmatched \rbrace $. For each label $y$, the model outputs a score $[Z (P, Q)]_{y}$ which is the predicted probability of this label. We aim to generate an adversarial example by adversarially modifying an original sentence pair $(P, Q)$ while preserving the label and grammaticality. The goal is to make the target model fail on the adversarially modified example $(\hat{P}, \hat{Q})$: where $y$ indicates the gold label and $\overline{y}$ is the wrong label opposite to the gold one. Methodology ::: Algorithm Framework Figure FIGREF12 illustrates the work flow of our algorithm. We generate an adversarial example by firstly sampling an original example from the corpus and then constructing adversarial modifications. We use beam search and take multiple steps to modify the example, until the target model fails or the step number limit is reached. In each step, we modify the sentences by replacing a word pair with a difficult common word. There are two stages in deciding the word replacements. We first determine the best replaceable position pairs in the sentence pair, and next determine the best substitution words for the corresponding positions. We evaluate different options according to the target model loss they raise, and we retain $B$ best options after each stage of each step during beam search. Finally, the adversarially modified example is returned. Methodology ::: Original Example Sampling To sample an original example from the dataset for subsequent adversarial modifications, we consider two different cases regarding whether the label is unmatched or matched. For the unmatched case, we sample two different sentence pairs $(P_1, Q_1)$ and $(P_2, Q_2)$ from the original data, and then form an unmatched example $(P_1, Q_2, unmatched)$ with sentences from two sentence pairs respectively. We also limit the length difference $||P_1|-|Q_2||$ and resample until the limit is satisfied, since sentence pairs with large length difference inherently tend to be unmatched and are too easy for models. By sampling two sentences from different examples, the two sentences tend to have less in common originally, which can help better preserve the label during adversarial modifications, while this also makes it more challenging for our algorithm to make the target model fail. On the other hand, matched examples cannot be sampled in this way, and thus for the matched case, we simply sample an example with a matched label from the dataset, namely, $(P, Q, matched)$. Methodology ::: Replaceable Position Pairs During adversarial modifications, we replace a word pair at each step. We set heuristic rules on replaceable position pairs to preserve the label and grammaticality. First of all, we require the words on the replaceable positions to be one of nouns, verbs, or adjectives, and not stopwords meanwhile. We also require a pair of replaceable words to have similar Part-of-Speech (POS) tags, i.e. the two words are both nouns, both verbs, or both adjectives. For a matched example, we further require the two words on each replaceable position pair to be exactly the same. 
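To make the position-pair rules above concrete, here is a minimal sketch (ours, not the authors' code). It assumes tokens arrive pre-tagged with coarse POS tags from any off-the-shelf tagger; the CONTENT_TAGS set mirrors the noun/verb/adjective restriction in the text, while the STOPWORDS list below is only a toy placeholder.

```python
from typing import List, Tuple

CONTENT_TAGS = {"NOUN", "VERB", "ADJ"}   # content-word classes allowed for replacement
STOPWORDS = {"the", "a", "an", "of", "is", "what", "how", "i", "can", "get"}  # toy list

def replaceable_pairs(
    p: List[Tuple[str, str]],      # sentence P as (token, coarse_pos) pairs
    q: List[Tuple[str, str]],      # sentence Q as (token, coarse_pos) pairs
    matched: bool,                 # gold label of the original example
) -> List[Tuple[int, int]]:
    """Return index pairs (i, j) whose words may be replaced together."""
    pairs = []
    for i, (w_p, t_p) in enumerate(p):
        if t_p not in CONTENT_TAGS or w_p.lower() in STOPWORDS:
            continue
        for j, (w_q, t_q) in enumerate(q):
            if t_q not in CONTENT_TAGS or w_q.lower() in STOPWORDS:
                continue
            if t_p != t_q:                      # require similar POS tags
                continue
            if matched and w_p.lower() != w_q.lower():
                continue                        # matched case: common words only
            pairs.append((i, j))
    return pairs
```

On an unmatched pair like the "Gmail account" / "school management software" example, this enumeration pairs each content noun of the first sentence with each content noun of the second, which is consistent with the replaceable pairs listed in the following figure discussion.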
Figure FIGREF15 shows two examples of determining replaceable positions. For the first example (matched), only common words “purpose” and “life” can be replaced. And since they are replaced simultaneously with another common words, the modified sentences are likely to talk about another same thing, e.g. changing from “purpose of life” to “measure of value”, and thereby the new sentences tend to remain matched. As for the second example (unmatched), each noun in the first sentence, “Gmail” and “account”, can form replaceable word pairs with each noun in the second sentence, “school”, “management” and “software”. The irreplaceable part determines that the modified sentences are “How can I get $\cdots $ back ? ” and “What is the best $\cdots $ ?” respectively. Sentences based on these two templates are likely to discuss about different things or different aspects, even when filled with common words, and thus they are likely to remain unmatched. In this way, the labels can be preserved in most cases. Methodology ::: Candidate Substitution Word Generation For a pair of replaceable positions, we generate candidate substitution words that can replace the current words on the two positions. To preserve the grammaticality and keep the modified sentences like human language, substitution words should be compatible with the context. Therefore, we apply a BERT language model BIBREF7 to generate candidate substitution words. Specifically, when some words in a text are masked, the BERT masked language model can predict the masked words based on the context. For a sentence $x_1x_2\cdots x_l$ where the $k$-th token is masked, the BERT masked language model gives the following probability distribution: Thereby, to replace word $p_i$ and $q_j$ from the two sentences respectively, we mask $p_i$ and $q_j$ and present each sentence to the BERT masked language model. We aim to replace $p_i$ and $q_j$ with a common word $w$, which can be regarded as the masked word to be predicted. From the language model output, we obtain a joint probability distribution as follows: We rank all the words within the vocabulary of the target model and choose top $K$ words with the largest probabilities, as the candidate substitution words for the corresponding positions. Methodology ::: Beam Search for Finding Adversarial Examples Once the replaceable positions and candidate substitution words can be determined, we use beam search with beam size $B$ to find optimal adversarial modifications in multiple steps. At step $t$, we perform a modification in two stages to determine replaceable positions and the corresponding substitution words respectively, based on the two-stage greedy framework by BIBREF32. To determine the best replaceable positions, we enumerate all the possible position pairs, and obtain a set of candidate intermediate examples, $C_{pos}^{(t)}$, by replacing words on each position pair with a special token [PAD] respectively. We then query the target model with the examples in $C_{pos}^{(t)}$ to obtain the model output. We take top $B$ examples that maximize the output score of the opposite label $\overline{y}$ (we define this operation as $\mathop {\arg {\rm top}B}$), obtaining a set of intermediate examples $\lbrace (\hat{P}_{pos}^{(t,k)}, \hat{Q}_{pos}^{(t,k)}) \rbrace _{k=1}^{B}$, as follows: We then determine difficult common words to replace the [PAD] placeholders. 
For each example in $\lbrace (\hat{P}_{pos}^{(t, k)}, \hat{Q}_{pos}^{(t, k)}) \rbrace _{k=1}^B$, we enumerate all the words in the candidate substitution word set of the corresponding positions with [PAD]. We obtain a set of candidate examples, $C^{(t)}$, by replacing the [PAD] placeholders with each candidate substitution word respectively. Similarly to the first stage, we take top $B$ examples that maximize the output score of the opposite label $\overline{y}$. This yields a set of modified example after step $t$, $\lbrace (\hat{P}^{(t, k)}, \hat{Q}^{(t, k)}) \rbrace _{k=1}^{B}$, as follows: After $t$ steps, for some modified example $(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})$, if the label predicted by the target model is already $\overline{y}$, i.e. $[Z(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})]_{\overline{y}} > [Z(\hat{P}^{(t,k)},\hat{Q}^{(t,k)})]_y$, this example is a successful adversarial example and thus we terminate the modification process. Otherwise, we continue taking another step, until the step number limit $S$ is reached and in case of that an unsuccessful adversarial example is returned. Experiments ::: Datasets We adopt the following two datasets: Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test set respectively. MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test set respectively. Experiments ::: Target Models We adopt the following typical deep models as the target models in our experiments: BiMPM BIBREF5, the Bilateral Multi-Perspective Matching model, matches two sentences on all combinations of time stamps from multiple perspectives, with BiLSTM layers to encode the sentences and aggregate matching results. DIIN BIBREF6, the Densely Interactive Inference Network, creates a word-by-word interaction matrix by computing similarities on sentence representations encoded by a highway network and self-attention, and then adopts DenseNet BIBREF35 to extract interaction features for matching. BERT BIBREF7, the Bidirectional Encoder Representations from Transformers, is pre-trained on large-scale corpora, and then fine-tuned on this task. The matching result is obtained by applying a classifier on the encoded hidden states of the two sentences. Experiments ::: Implementation Details We adopt existing open source codes for target models BiMPM, DIIN and BERT, and also the BERT masked language model. For Quora, the step number limit $S$ is set to 5; the number of candidate substitution words generated using the language model $K$ and the beam size $B$ are both set to 25. $S$, $K$ and $B$ are doubled for MRPC where sentences are generally longer. The length difference between unmatched sentence pairs is limited to be no more than 3. Experiments ::: Main Results We train each target model on the original training data, and then generate adversarial examples for the target models. For each dataset, we sample 1,000 original examples with balanced labels from the corresponding test set, and adversarially modify them for each target model. We evaluate the accuracies of target models on the corresponding adversarial examples, compared with their accuracies on the original examples. 
Let $s$ be the success rate of generating adversarial examples that the target model fails, the accuracy of the target model on the returned adversarial examples is $1-s$. Table TABREF18 presents the results. The target models have high overall accuracies on the original examples, especially on the sampled ones since we form an unmatched original example with independently sampled sentences. The models have relatively lower accuracies on the unmatched examples in the full original test set of MRPC because MRPC is relatively small while the two labels are imbalanced in the original data (3,900 matched examples and 1,901 unmatched examples). Therefore, we generate adversarial examples with balanced labels instead of following the original distribution. After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. Particularly, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and also revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix. We notice that the original models are more vulnerable to unmatched adversarial examples, because there are generally more replaceable position choices during the generation. Nevertheless, the results of the matched case are also sufficiently strong to reveal the robustness issues. We do not quantitatively compare the performance drop of the target models on the adversarial examples with previous work, because we generate a new type of adversarial examples that previous methods are not capable of. We have different experiment settings, including original example sampling and constraints on adversarial modifications, which are tailored to the robustness issues we study. Performance drop on different kinds of adversarial examples with little overlap is not comparable, and thus surpassing other adversarial examples on model performance drop is unnecessary and irrelevant to support our contributions. Therefore, such comparisons are not included in this paper. Experiments ::: Manual Evaluation To verify the validity our generated adversarial examples, we further perform a manual evaluation. For each dataset, using BERT as the target model, we randomly sample 100 successful adversarial examples on which the target model fails, with balanced labels. We blend these adversarial examples with the corresponding original examples, and present each example to three workers on Amazon Mechanical Turk. We ask the workers to label the examples and also rate the grammaticality of the sentences with a scale of 1/2/3 (3 for no grammar error, 2 for minor errors, and 1 for vital errors). We integrate annotations from different workers with majority voting for labels and averaging for grammaticality. Table TABREF35 shows the results. Unlike target models whose performance drops dramatically on adversarial examples, human annotators retain high accuracies with a much smaller drop, while the accuracies of the target models are 0 on these adversarial examples. This demonstrates that the labels of most adversarial examples are successfully preserved to be consistent with original examples. 
Results also show that the grammaticality difference between the original examples and adversarial examples is also small, suggesting that most adversarial examples retain a good grammaticality. This verifies the validity of our adversarial examples. Experiments ::: Adversarial Training Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% in a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is minor in adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18. After adversarial training, the performance of all the target models raises significantly, while that on the original examples remain comparable. Note that since the focus of this paper is on model robustness which can hardly be reflected in original data, we do not expect performance improvement on original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without remarkably hurting the performance on original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN which do not use the BERT language model can also significantly benefit from the adversarial examples, further demonstrating the effectiveness of our method. Experiments ::: Sentence Pair BOW Similarity To quantitatively demonstrate the difference between the adversarial examples we generate and those by previous work BIBREF26, BIBREF27, we compute the average BOW cosine similarity between the generated pairs of sentences. We only compare with previous methods that also aim to generate labeled adversarial examples that are not limited to be semantically equivalent to original sentences. Results are shown in Table TABREF38. Each pair of adversarial sentences by BIBREF26 differ by only one word. And in BIBREF27, sentence pairs generated with word swapping have exactly the same BOW. These two approaches both have high BOW similarities. By contrast, our method generates sentence pairs with much lower BOW similarities. This demonstrates a significant difference between our examples and the others. Unlike previous methods, we generate adversarial examples that can focus on robustness issues regarding the distraction from modified words that are the same for both sentences, towards matching the unmodified parts that are diverse for two sentences. Experiments ::: Effectiveness of Paired Common Words We further analyse the necessity and effectiveness of modifying sentences with paired common words. We consider another version that replaces one single word independently at each step without using paired common words, namely the unpaired version. Firstly, for matched adversarial examples that can be semantically different from original sentences, the unpaired version is inapplicable, because the matched label can be easily broken if common words from two sentences are changed into other words independently. And for the unmatched case, we show that the unpaired version is much less effective. 
For a fairer comparison, we double the step number limit for the unpaired version. As shown in Table TABREF41, the performance of target models on unmatched examples generated by the unpaired version, particularly that of BERT, is mostly much higher than on those generated by our full algorithm, except for BiMPM on MRPC, whose accuracies have almost reached 0 (0.0% for unpaired and 0.2% for paired). This demonstrates that our algorithm using paired common words is more effective in generating adversarial examples, on which the performance of the target model is generally much lower. An advantage of using difficult common words for unmatched examples is that such words tend to make target models over-confident about common words and distract the models from recognizing the semantic difference in the unmodified part. Our algorithm explicitly utilizes this property and thus can well reveal such a robustness issue. Moreover, although there is no such property for the matched case, replacing existing common words with more difficult ones can still distract the target model from judging the semantic similarity in the unmodified part, due to the bias between different words learned by the model, and thus our algorithm for generating adversarial examples with difficult common words works for both matched and unmatched cases. Conclusion In this paper, we propose a novel algorithm to generate new adversarial examples for paraphrase identification, by adversarially modifying original examples with difficult common words. We generate labeled adversarial examples that can be semantically different from original sentences, and the BOW similarity between each pair of sentences is generally low. Such examples reveal robustness issues that previous methods are not able to reveal. The accuracies of the target models drop dramatically on our adversarial examples, while human annotators are much less affected and the modified sentences retain good grammaticality. We also show that model robustness can be improved using adversarial training with our adversarial examples. Moreover, our adversarial examples can foster future research for further improving model robustness.
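The candidate-substitution step described in the Methodology can be sketched with an off-the-shelf masked language model. The snippet below is an illustration rather than the released implementation: it combines the two masked-position distributions by an element-wise product (one plausible reading of the joint distribution described in the text) and, for simplicity, ranks over the language model's own vocabulary instead of intersecting it with the target model's vocabulary. Running it downloads the `bert-base-uncased` weights from the Hugging Face hub.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def masked_distribution(tokens, position):
    """Probability distribution over the vocabulary for one masked position."""
    masked = list(tokens)
    masked[position] = tok.mask_token
    inputs = tok(" ".join(masked), return_tensors="pt")
    mask_index = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_index]
    return torch.softmax(logits, dim=-1)

def common_word_candidates(p_tokens, q_tokens, i, j, k=25):
    """Top-k candidate words to place simultaneously at P[i] and Q[j]."""
    joint = masked_distribution(p_tokens, i) * masked_distribution(q_tokens, j)
    top = torch.topk(joint, k)
    return [tok.convert_ids_to_tokens(idx.item()) for idx in top.indices]

# Toy usage on a pair resembling the matched example discussed in the paper.
print(common_word_candidates(
    "what is the purpose of life".split(),
    "what is the purpose of living".split(),
    i=3, j=3))
```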
Answer with content missing: (Table 1) The performance of all the target models raises significantly, while that on the original examples remain comparable (e.g. the overall accuracy of BERT on modified examples raises from 24.1% to 66.0% on Quora)
2c8d5e3941a6cc5697b242e64222f5d97dba453c
2c8d5e3941a6cc5697b242e64222f5d97dba453c_0
Q: How much dramatically results drop for models on generated adversarial examples? Text: Introduction Paraphrase identification is to determine whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 or text summarization BIBREF4. Paraphrase identification can be viewed as a sentence matching problem. Many deep models have recently been proposed and their performance has been greatly advanced on benchmark datasets BIBREF5, BIBREF6, BIBREF7. However, previous research shows that deep models are vulnerable to adversarial examples BIBREF8, BIBREF9 which are particularly constructed to make models fail. Adversarial examples are of high value for revealing the weakness and robustness issues of models, and can thereby be utilized to improve the model performance for challenging cases, robustness, and also security. In this paper, we propose a novel algorithm to generate a new type of adversarial examples for paraphrase identification. To generate an adversarial example that consists of a sentence pair, we first sample an original sentence pair from the dataset, and then adversarially replace some word pairs with difficult common words respectively. Here each pair of words consists of two words from the two sentences respectively. And difficult common words are words that we adversarially select to appear in both sentences such that the example becomes harder for the target model. The target model is likely to be distracted by difficult common words and fail to judge the similarity or difference in the context, thereby making a wrong prediction. Our adversarial examples are motivated by two observations. Firstly, for a sentence pair with a label matched, when some common word pairs are replaced with difficult common words respectively, models can be fooled to predict an incorrect label unmatched. As the first example in Figure FIGREF1 shows, we can replace two pairs of common words, “purpose” and “life”, with another common words “measure” and “value” respectively. The modified sentence pair remains matched but fools the target model. It is mainly due to the bias between different words and some words are more difficult for the model. When such words appear in the example, the model fails to combine them with the unmodified context and judge the overall similarity of the sentence pair. Secondly, for an unmatched sentence pair, when some word pairs, not necessarily common words, are replaced with difficult common words, models can be fooled to predict an incorrect label matched. As the second example in Figure FIGREF1 shows, we can replace words “Gmail” and “school” with a common word “credit”, and replace words “account” and “management” with ”score”. The modified sentences remain unmatched, but the target model can be fooled to predict matched for being distracted by the common words while ignoring the difference in the unmodified context. Following these observations, we focus on robustness issues regarding capturing semantic similarity or difference in the unmodified part when distracted by difficult common words in the modified part. We try to modify an original example into an adversarial one with multiple steps. 
In each step, for a matched example, we replace some pair of common words together, with another word adversarially selected from the vocabulary; and for an unmatched example, we replace some word pair, not necessarily a common word pair, with a common word. In this way, we replace a pair of words together from two sentences respectively with an adversarially selected word in each step. To preserve the original label and grammaticality, we impose a few heuristic constraints on replaceable positions, and apply a language model to generate substitution words that are compatible with the context. We aim to adversarially find a word replacement solution that maximizes the target model loss and makes the model fail, using beam search. We generate valid adversarial examples that are substantially different from those in previous work for paraphrase identification. Our adversarial examples are not limited to be semantically equivalent to original sentences and the unmodified parts of the two sentences are of low lexical similarity. To the best of our knowledge, none of previous work is able to generate such kind of adversarial examples. We further discuss our difference with previous work in Section 2.2. In summary, we mainly make the following contributions: We propose an algorithm to generate new adversarial examples for paraphrase identification. Our adversarial examples focus on robustness issues that are substantially different from those in previous work. We reveal a new type of robustness issues in deep paraphrase identification models regarding difficult common words. Experiments show that the target models have a severe performance drop on the adversarial examples, while human annotators are much less affected and most modified sentences retain a good grammaticality. Using our adversarial examples in adversarial training can mitigate the robustness issues, and these examples can foster future research. Related Work ::: Deep Paraphrase Identification Paraphrase identification can be viewed as a problem of sentence matching. Recently, many deep models for sentence matching have been proposed and achieved great advancements on benchmark datasets. Among those, some approaches encode each sentence independently and apply a classifier on the embeddings of two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models make strong interactions between two sentences by jointly encoding and matching sentences BIBREF5, BIBREF13, BIBREF14 or hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples for revealing their robustness issues and improving their robustness. Related Work ::: Adversarial Examples for NLP Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules. 
However, for some tasks including paraphrase identification, adversarial examples can be semantically different from original sentences, to study other robustness issues tailored to the corresponding tasks. For sentence matching and paraphrase identification, other types of adversarial examples can be obtained by considering the relation and the correspondence between two sentences. BIBREF25 considered logical rules of sentence relations but can only generate unlabelled adversarial examples. BIBREF26 and BIBREF27 generated a sentence pair by modifying a single original sentence. They combined both original and modified sentences to form a pair. They modified the original sentence using back translation, word swapping, or single word replacement with lexical knowledge. Among them, back translation still aimed to produce semantically equivalent sentences; the others generated pairs of sentences with large Bag-of-Words (BOW) similarities, and the unmodified parts of the two sentences are exactly the same, so these same unmodified parts required little matching by target models. By contrast, we generate new adversarial examples with targeted labels by modifying a pair of original sentences together, using difficult common words. The modified sentences can be semantically different from original ones but still valid. The generated sentence pairs have much lower BOW similarities, and the unmodified parts are lexically diverse to reveal robustness issues regarding matching these parts when distracted by difficult common words in the modified parts. Thereby we study a new kind of robustness issues in paraphrase identification. Related Work ::: Adversarial Example Generation For a certain type of adversarial examples, adversarial attacks or adversarial example generation aim to find examples that are within the defined type and make existing models fail. Some work has no access to the target model until an adversarial dataset is generated BIBREF28, BIBREF26, BIBREF23, BIBREF24, BIBREF29, BIBREF27. However, in many cases including ours, finding successful adversarial examples, i.e. examples on which the target model fails, is challenging, and employing an attack algorithm with access to the target model during generation is often necessary to ensure a high success rate. Some prior work used gradient-based methods BIBREF30, BIBREF19, BIBREF31, requiring the model gradients to be accessible in addition to the output, and thus are inapplicable in black-box settings BIBREF21 where only model outputs are accessible. Though, the beam search in BIBREF19 can be adapted to black-box settings. Gradient-free methods for NLP generally construct adversarial examples by querying the target model for output scores and making generation decisions to maximize the model loss. BIBREF25 searched in the solution space. One approach in BIBREF28 greedily made word replacements and queried the target model in several steps. BIBREF21 employed a genetic algorithm. BIBREF32 proposed a two-stage greedy algorithm and a method with gumbel softmax to improve the efficiency. In this work, we also focus on a black-box setting, which is more challenging than white-box settings. We use a two-stage beam search to find adversarial examples in multiple steps. We clarify that the major focus of this work is on studying new robustness issues and a new type of adversarial examples, instead of attack algorithms for an existing certain type of adversarial examples. 
Therefore, the choice of the attack algorithm is minor for this work as long as the success rates are sufficiently high. Methodology ::: Task Definition Paraphrase identification can be formulated as follows: given two sentences $P=p_1p_2\cdots p_n$ and $Q=q_1q_2\cdots q_m$, the goal is to predict whether $P$ and $Q$ are paraphrases of each other, by estimating a probability distribution where $y\in \mathcal {Y} = \lbrace matched, unmatched \rbrace $. For each label $y$, the model outputs a score $[Z (P, Q)]_{y}$ which is the predicted probability of this label. We aim to generate an adversarial example by adversarially modifying an original sentence pair $(P, Q)$ while preserving the label and grammaticality. The goal is to make the target model fail on the adversarially modified example $(\hat{P}, \hat{Q})$: where $y$ indicates the gold label and $\overline{y}$ is the wrong label opposite to the gold one. Methodology ::: Algorithm Framework Figure FIGREF12 illustrates the work flow of our algorithm. We generate an adversarial example by firstly sampling an original example from the corpus and then constructing adversarial modifications. We use beam search and take multiple steps to modify the example, until the target model fails or the step number limit is reached. In each step, we modify the sentences by replacing a word pair with a difficult common word. There are two stages in deciding the word replacements. We first determine the best replaceable position pairs in the sentence pair, and next determine the best substitution words for the corresponding positions. We evaluate different options according to the target model loss they raise, and we retain $B$ best options after each stage of each step during beam search. Finally, the adversarially modified example is returned. Methodology ::: Original Example Sampling To sample an original example from the dataset for subsequent adversarial modifications, we consider two different cases regarding whether the label is unmatched or matched. For the unmatched case, we sample two different sentence pairs $(P_1, Q_1)$ and $(P_2, Q_2)$ from the original data, and then form an unmatched example $(P_1, Q_2, unmatched)$ with sentences from two sentence pairs respectively. We also limit the length difference $||P_1|-|Q_2||$ and resample until the limit is satisfied, since sentence pairs with large length difference inherently tend to be unmatched and are too easy for models. By sampling two sentences from different examples, the two sentences tend to have less in common originally, which can help better preserve the label during adversarial modifications, while this also makes it more challenging for our algorithm to make the target model fail. On the other hand, matched examples cannot be sampled in this way, and thus for the matched case, we simply sample an example with a matched label from the dataset, namely, $(P, Q, matched)$. Methodology ::: Replaceable Position Pairs During adversarial modifications, we replace a word pair at each step. We set heuristic rules on replaceable position pairs to preserve the label and grammaticality. First of all, we require the words on the replaceable positions to be one of nouns, verbs, or adjectives, and not stopwords meanwhile. We also require a pair of replaceable words to have similar Part-of-Speech (POS) tags, i.e. the two words are both nouns, both verbs, or both adjectives. For a matched example, we further require the two words on each replaceable position pair to be exactly the same. 
Figure FIGREF15 shows two examples of determining replaceable positions. For the first example (matched), only the common words “purpose” and “life” can be replaced. Since they are replaced simultaneously with other common words, the two modified sentences are likely to again talk about the same thing, e.g. changing from “purpose of life” to “measure of value”, and thereby the new sentences tend to remain matched. As for the second example (unmatched), each noun in the first sentence, “Gmail” and “account”, can form a replaceable word pair with each noun in the second sentence, “school”, “management” and “software”. The irreplaceable part determines that the modified sentences are “How can I get $\cdots $ back ?” and “What is the best $\cdots $ ?” respectively. Sentences based on these two templates are likely to discuss different things or different aspects, even when filled with common words, and thus they are likely to remain unmatched. In this way, the labels can be preserved in most cases. Methodology ::: Candidate Substitution Word Generation For a pair of replaceable positions, we generate candidate substitution words that can replace the current words on the two positions. To preserve grammaticality and keep the modified sentences close to natural language, substitution words should be compatible with the context. Therefore, we apply a BERT language model BIBREF7 to generate candidate substitution words. Specifically, when some words in a text are masked, the BERT masked language model can predict the masked words based on the context. For a sentence $x_1x_2\cdots x_l$ where the $k$-th token is masked, the BERT masked language model gives a probability distribution $P(x_k \mid x_1, \cdots , x_{k-1}, x_{k+1}, \cdots , x_l)$ over the masked word. Thereby, to replace words $p_i$ and $q_j$ from the two sentences respectively, we mask $p_i$ and $q_j$ and present each sentence to the BERT masked language model. We aim to replace $p_i$ and $q_j$ with a common word $w$, which can be regarded as the masked word to be predicted. From the language model outputs for the two sentences, we obtain a joint probability distribution over the common substitution word $w$ by multiplying the two predicted distributions for the masked positions. We rank all the words within the vocabulary of the target model and choose the top $K$ words with the largest joint probabilities as the candidate substitution words for the corresponding positions. Methodology ::: Beam Search for Finding Adversarial Examples Once the replaceable positions and candidate substitution words are determined, we use beam search with beam size $B$ to find optimal adversarial modifications in multiple steps. At step $t$, we perform a modification in two stages that determine the replaceable positions and the corresponding substitution words respectively, based on the two-stage greedy framework of BIBREF32. To determine the best replaceable positions, we enumerate all the possible position pairs and obtain a set of candidate intermediate examples, $C_{pos}^{(t)}$, by replacing the words on each position pair with a special token [PAD]. We then query the target model with the examples in $C_{pos}^{(t)}$ to obtain the model output. We take the top $B$ examples that maximize the output score of the opposite label $\overline{y}$ (we define this operation as $\mathop {\arg {\rm top}B}$), obtaining a set of intermediate examples $\lbrace (\hat{P}_{pos}^{(t,k)}, \hat{Q}_{pos}^{(t,k)}) \rbrace _{k=1}^{B} = \mathop {\arg {\rm top}B}_{(P^{\prime }, Q^{\prime }) \in C_{pos}^{(t)}}\, [Z(P^{\prime }, Q^{\prime })]_{\overline{y}}$. We then determine difficult common words to replace the [PAD] placeholders.
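Before the second beam-search stage below, here is a minimal sketch of the candidate substitution word generation described above, using the HuggingFace transformers masked-LM API; the choice of bert-base-uncased, the whole-word filtering, and ranking over the language model's own vocabulary (rather than the target model's vocabulary, as stated in the text) are simplifying assumptions for illustration.

```python
# Sketch: joint masked-LM scoring of a common substitution word for positions i in P and j in Q.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def masked_log_probs(tokens, position):
    """Log-probabilities over the vocabulary for one masked position of a tokenized sentence."""
    masked = list(tokens)
    masked[position] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_index]
    return torch.log_softmax(logits, dim=-1)

def candidate_common_words(p_tokens, i, q_tokens, j, k=25):
    """Top-k words w ranked by the product of the two masked-word probabilities."""
    joint = masked_log_probs(p_tokens, i) + masked_log_probs(q_tokens, j)  # log of the product
    top_ids = torch.topk(joint, k * 4).indices.tolist()  # over-generate, then filter
    words = [tokenizer.convert_ids_to_tokens(t) for t in top_ids]
    words = [w for w in words if w.isalpha()]  # drop sub-word pieces, punctuation and special tokens
    return words[:k]
```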
For each example in $\lbrace (\hat{P}_{pos}^{(t, k)}, \hat{Q}_{pos}^{(t, k)}) \rbrace _{k=1}^B$, we enumerate all the words in the candidate substitution word set of the positions currently holding [PAD]. We obtain a set of candidate examples, $C^{(t)}$, by replacing the [PAD] placeholders with each candidate substitution word in turn. As in the first stage, we take the top $B$ examples that maximize the output score of the opposite label $\overline{y}$, which yields the set of modified examples after step $t$, $\lbrace (\hat{P}^{(t, k)}, \hat{Q}^{(t, k)}) \rbrace _{k=1}^{B} = \mathop {\arg {\rm top}B}_{(P^{\prime }, Q^{\prime }) \in C^{(t)}}\, [Z(P^{\prime }, Q^{\prime })]_{\overline{y}}$. After $t$ steps, if for some modified example $(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})$ the label predicted by the target model is already $\overline{y}$, i.e. $[Z(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})]_{\overline{y}} > [Z(\hat{P}^{(t,k)},\hat{Q}^{(t,k)})]_y$, this example is a successful adversarial example and we terminate the modification process. Otherwise, we take another step, until the step number limit $S$ is reached, in which case an unsuccessful adversarial example is returned. Experiments ::: Datasets We adopt the following two datasets: Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test sets respectively. MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test sets respectively. Experiments ::: Target Models We adopt the following typical deep models as the target models in our experiments: BiMPM BIBREF5, the Bilateral Multi-Perspective Matching model, matches two sentences on all combinations of time steps from multiple perspectives, with BiLSTM layers to encode the sentences and aggregate matching results. DIIN BIBREF6, the Densely Interactive Inference Network, creates a word-by-word interaction matrix by computing similarities on sentence representations encoded by a highway network and self-attention, and then adopts DenseNet BIBREF35 to extract interaction features for matching. BERT BIBREF7, the Bidirectional Encoder Representations from Transformers, is pre-trained on large-scale corpora and then fine-tuned on this task; the matching result is obtained by applying a classifier on the encoded hidden states of the two sentences. Experiments ::: Implementation Details We adopt existing open-source code for the target models BiMPM, DIIN and BERT, and also for the BERT masked language model. For Quora, the step number limit $S$ is set to 5; the number of candidate substitution words generated with the language model, $K$, and the beam size $B$ are both set to 25. $S$, $K$ and $B$ are doubled for MRPC, where sentences are generally longer. The length difference between unmatched sentence pairs is limited to be no more than 3. Experiments ::: Main Results We train each target model on the original training data and then generate adversarial examples for the target models. For each dataset, we sample 1,000 original examples with balanced labels from the corresponding test set and adversarially modify them for each target model. We evaluate the accuracies of the target models on the corresponding adversarial examples, compared with their accuracies on the original examples.
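Putting the two beam-search stages described above together, the following is a rough sketch of one search step under this framework; score_opposite is an assumed black-box query returning the target model's probability of the wrong label, and position_pairs / candidate_words stand for the two procedures sketched earlier, so none of these names come from the original text.

```python
# Sketch of a single two-stage beam-search step (stage 1: positions via [PAD]; stage 2: words).
import heapq

def arg_top_b(candidates, score_opposite, b):
    """Keep the B candidate sentence pairs with the largest opposite-label score."""
    return heapq.nlargest(b, candidates, key=lambda pair: score_opposite(*pair))

def beam_step(beam, label, score_opposite, position_pairs, candidate_words, b=25):
    # Stage 1: choose replaceable position pairs by masking them with [PAD].
    stage1 = []
    for p, q in beam:
        for i, j in position_pairs(p, q, label):
            p_pad, q_pad = list(p), list(q)
            p_pad[i], q_pad[j] = "[PAD]", "[PAD]"
            stage1.append((tuple(p_pad), tuple(q_pad)))
    stage1 = arg_top_b(stage1, score_opposite, b)

    # Stage 2: fill both [PAD] placeholders with the same candidate common word.
    stage2 = []
    for p_pad, q_pad in stage1:
        i, j = p_pad.index("[PAD]"), q_pad.index("[PAD]")
        for w in candidate_words(p_pad, i, q_pad, j):
            p_new, q_new = list(p_pad), list(q_pad)
            p_new[i], q_new[j] = w, w
            stage2.append((tuple(p_new), tuple(q_new)))
    return arg_top_b(stage2, score_opposite, b)
```

An outer loop would call beam_step repeatedly, stopping as soon as some beam entry flips the predicted label to the opposite one or the step limit $S$ is reached.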
Let $s$ be the success rate of generating adversarial examples on which the target model fails; the accuracy of the target model on the returned adversarial examples is then $1-s$. Table TABREF18 presents the results. The target models have high overall accuracies on the original examples, especially on the sampled ones, since we form each unmatched original example from independently sampled sentences. The models have relatively lower accuracies on the unmatched examples in the full original test set of MRPC because MRPC is relatively small and the two labels are imbalanced in the original data (3,900 matched examples and 1,901 unmatched examples). Therefore, we generate adversarial examples with balanced labels instead of following the original distribution. After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. In particular, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and for revealing the corresponding robustness issues. We present some generated adversarial examples in the appendix. We notice that the original models are more vulnerable to unmatched adversarial examples, because there are generally more choices of replaceable positions during generation. Nevertheless, the results for the matched case are also sufficiently strong to reveal the robustness issues. We do not quantitatively compare the performance drop of the target models on our adversarial examples with previous work, because we generate a new type of adversarial examples that previous methods cannot produce. We have different experimental settings, including original example sampling and constraints on adversarial modifications, which are tailored to the robustness issues we study. Performance drops on different kinds of adversarial examples with little overlap are not comparable, so surpassing other adversarial examples in model performance drop is unnecessary and irrelevant to our contributions; such comparisons are therefore not included in this paper. Experiments ::: Manual Evaluation To verify the validity of our generated adversarial examples, we further perform a manual evaluation. For each dataset, using BERT as the target model, we randomly sample 100 successful adversarial examples on which the target model fails, with balanced labels. We blend these adversarial examples with the corresponding original examples and present each example to three workers on Amazon Mechanical Turk. We ask the workers to label the examples and also rate the grammaticality of the sentences on a scale of 1/2/3 (3 for no grammar errors, 2 for minor errors, and 1 for vital errors). We aggregate the annotations from different workers with majority voting for labels and averaging for grammaticality. Table TABREF35 shows the results. Unlike the target models, whose performance drops dramatically on adversarial examples, human annotators retain high accuracies with a much smaller drop, while the accuracies of the target models are 0 on these adversarial examples. This demonstrates that the labels of most adversarial examples are successfully preserved to be consistent with the original examples.
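The aggregation of worker annotations described above is simple; the sketch below assumes each example carries three worker labels and three grammaticality ratings in {1, 2, 3}, with the data layout being hypothetical rather than taken from the original text.

```python
# Sketch: majority voting for labels, averaging for grammaticality ratings.
from collections import Counter

def aggregate(annotated_examples):
    """annotated_examples: list of dicts like {"labels": [...], "grammar": [...]}."""
    results = []
    for example in annotated_examples:
        label = Counter(example["labels"]).most_common(1)[0][0]      # majority vote
        grammar = sum(example["grammar"]) / len(example["grammar"])  # average rating
        results.append({"label": label, "grammar": grammar})
    return results

# Example: one adversarial example rated by three workers.
print(aggregate([{"labels": ["matched", "matched", "unmatched"], "grammar": [3, 2, 3]}]))
```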
Results also show that the grammaticality difference between the original examples and the adversarial examples is small, suggesting that most adversarial examples retain good grammaticality. This verifies the validity of our adversarial examples. Experiments ::: Adversarial Training Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models with adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels; the adversarial examples account for around 10% of a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since a high generation success rate is less important during adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18. After adversarial training, the performance of all the target models on adversarial examples rises significantly, while their performance on the original examples remains comparable. Note that since the focus of this paper is on model robustness, which can hardly be reflected in the original data, we do not expect performance improvements on the original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without noticeably hurting performance on the original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN, which do not use the BERT language model, also benefit significantly from the adversarial examples, further demonstrating the effectiveness of our method. Experiments ::: Sentence Pair BOW Similarity To quantitatively demonstrate the difference between the adversarial examples we generate and those of previous work BIBREF26, BIBREF27, we compute the average BOW cosine similarity between the generated pairs of sentences. We only compare with previous methods that also aim to generate labeled adversarial examples that are not limited to being semantically equivalent to the original sentences. Results are shown in Table TABREF38. Each pair of adversarial sentences by BIBREF26 differs by only one word, and in BIBREF27 sentence pairs generated with word swapping have exactly the same BOW; both approaches thus have high BOW similarities. By contrast, our method generates sentence pairs with much lower BOW similarities, demonstrating a significant difference between our examples and the others. Unlike previous methods, we generate adversarial examples that focus on robustness issues regarding the distraction from modified words that are the same in both sentences, towards matching the unmodified parts that are diverse between the two sentences. Experiments ::: Effectiveness of Paired Common Words We further analyse the necessity and effectiveness of modifying sentences with paired common words. We consider another version that replaces one single word independently at each step without using paired common words, namely the unpaired version. Firstly, for matched adversarial examples that can be semantically different from the original sentences, the unpaired version is inapplicable, because the matched label can easily be broken if common words in the two sentences are changed into other words independently. For the unmatched case, we show that the unpaired version is much less effective.
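Returning to the BOW similarity analysis above, a minimal sketch of the metric is shown below; raw term counts over lowercased whitespace tokens are an assumption, since the exact tokenization and weighting are not specified in the text.

```python
# Sketch: average Bag-of-Words cosine similarity between the two sentences of each generated pair.
import math
from collections import Counter

def bow_cosine(sentence_a, sentence_b):
    a, b = Counter(sentence_a.lower().split()), Counter(sentence_b.lower().split())
    dot = sum(count * b[word] for word, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def average_pair_similarity(sentence_pairs):
    return sum(bow_cosine(p, q) for p, q in sentence_pairs) / len(sentence_pairs)
```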
For a fairer comparison, we double the step number limit for the unpaired version. As shown in Table TABREF41, the performance of target models on unmatched examples generated by the unpaired version, particularly that of BERT, is mostly much higher than on those generated by our full algorithm, except for BiMPM on MRPC, whose accuracies have almost reached 0 in both cases (0.0% for unpaired and 0.2% for paired). This demonstrates that our algorithm using paired common words is more effective in generating adversarial examples, on which the performance of the target model is generally much lower. An advantage of using difficult common words for unmatched examples is that such words tend to make target models over-confident about the common words and distract the models from recognizing the semantic difference in the unmodified part. Our algorithm explicitly exploits this property and thus can well reveal such robustness issues. Moreover, although there is no such property for the matched case, replacing existing common words with more difficult ones can still distract the target model from judging the semantic similarity in the unmodified part, due to the bias between different words learned by the model; thus our algorithm for generating adversarial examples with difficult common words works for both matched and unmatched cases. Conclusion In this paper, we propose a novel algorithm to generate new adversarial examples for paraphrase identification by adversarially modifying original examples with difficult common words. We generate labeled adversarial examples that can be semantically different from the original sentences, and the BOW similarity between each pair of sentences is generally low. Such examples reveal robustness issues that previous methods are not able to reveal. The accuracies of the target models drop dramatically on our adversarial examples, while human annotators are much less affected and the modified sentences retain good grammaticality. We also show that model robustness can be improved with adversarial training on our adversarial examples. Moreover, our adversarial examples can foster future research on further improving model robustness.
BERT on Quora drops from 94.6% to 24.1%
78102422a5dc99812739b8dd2541e4fdb5fe3c7a
78102422a5dc99812739b8dd2541e4fdb5fe3c7a_0
Q: What is the discriminator in this generative adversarial setup? Text: Introduction Paraphrase identification is the task of determining whether a pair of sentences are paraphrases of each other BIBREF0. It is important for applications such as duplicate post matching on social media BIBREF1, plagiarism detection BIBREF2, and automatic evaluation for machine translation BIBREF3 or text summarization BIBREF4. Paraphrase identification can be viewed as a sentence matching problem. Many deep models have recently been proposed, and their performance has been greatly advanced on benchmark datasets BIBREF5, BIBREF6, BIBREF7. However, previous research shows that deep models are vulnerable to adversarial examples BIBREF8, BIBREF9, which are particularly constructed to make models fail. Adversarial examples are of high value for revealing the weaknesses and robustness issues of models, and can thereby be utilized to improve model performance for challenging cases, robustness, and also security. In this paper, we propose a novel algorithm to generate a new type of adversarial examples for paraphrase identification. To generate an adversarial example that consists of a sentence pair, we first sample an original sentence pair from the dataset, and then adversarially replace some word pairs with difficult common words. Here each pair of words consists of one word from each of the two sentences. Difficult common words are words that we adversarially select to appear in both sentences such that the example becomes harder for the target model. The target model is likely to be distracted by difficult common words and fail to judge the similarity or difference in the context, thereby making a wrong prediction. Our adversarial examples are motivated by two observations. Firstly, for a sentence pair with the label matched, when some common word pairs are replaced with difficult common words, models can be fooled into predicting the incorrect label unmatched. As the first example in Figure FIGREF1 shows, we can replace two pairs of common words, “purpose” and “life”, with other common words, “measure” and “value”, respectively. The modified sentence pair remains matched but fools the target model. This is mainly due to the bias between different words: some words are more difficult for the model, and when such words appear in the example, the model fails to combine them with the unmodified context and judge the overall similarity of the sentence pair. Secondly, for an unmatched sentence pair, when some word pairs, not necessarily common words, are replaced with difficult common words, models can be fooled into predicting the incorrect label matched. As the second example in Figure FIGREF1 shows, we can replace the words “Gmail” and “school” with a common word “credit”, and replace the words “account” and “management” with “score”. The modified sentences remain unmatched, but the target model can be fooled into predicting matched, being distracted by the common words while ignoring the difference in the unmodified context. Following these observations, we focus on robustness issues regarding capturing semantic similarity or difference in the unmodified part when distracted by difficult common words in the modified part. We modify an original example into an adversarial one in multiple steps.
In each step, for a matched example, we replace a pair of common words with another word adversarially selected from the vocabulary; for an unmatched example, we replace a word pair, not necessarily a common word pair, with a common word. In this way, in each step we replace a pair of words, one from each sentence, with an adversarially selected word. To preserve the original label and grammaticality, we impose a few heuristic constraints on replaceable positions and apply a language model to generate substitution words that are compatible with the context. We aim to adversarially find a word replacement solution that maximizes the target model loss and makes the model fail, using beam search. We generate valid adversarial examples that are substantially different from those in previous work on paraphrase identification. Our adversarial examples are not limited to being semantically equivalent to the original sentences, and the unmodified parts of the two sentences have low lexical similarity. To the best of our knowledge, no previous work is able to generate this kind of adversarial examples. We further discuss the differences from previous work in Section 2.2. In summary, we mainly make the following contributions: We propose an algorithm to generate new adversarial examples for paraphrase identification. Our adversarial examples focus on robustness issues that are substantially different from those in previous work. We reveal a new type of robustness issue in deep paraphrase identification models regarding difficult common words. Experiments show that the target models suffer a severe performance drop on the adversarial examples, while human annotators are much less affected and most modified sentences retain good grammaticality. Using our adversarial examples in adversarial training can mitigate the robustness issues, and these examples can foster future research. Related Work ::: Deep Paraphrase Identification Paraphrase identification can be viewed as a problem of sentence matching. Recently, many deep models for sentence matching have been proposed and have achieved great advances on benchmark datasets. Among them, some approaches encode each sentence independently and apply a classifier on the embeddings of the two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models create strong interactions between two sentences by jointly encoding and matching the sentences BIBREF5, BIBREF13, BIBREF14, or by hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples to reveal their robustness issues and improve their robustness. Related Work ::: Adversarial Examples for NLP Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules.
current model
930c51b9f3936d936ee745716536a4b40f531c7f
930c51b9f3936d936ee745716536a4b40f531c7f_0
Q: What are benchmark datasets for paraphrase identification?
In each step, for a matched example, we replace some pair of common words together, with another word adversarially selected from the vocabulary; and for an unmatched example, we replace some word pair, not necessarily a common word pair, with a common word. In this way, we replace a pair of words together from two sentences respectively with an adversarially selected word in each step. To preserve the original label and grammaticality, we impose a few heuristic constraints on replaceable positions, and apply a language model to generate substitution words that are compatible with the context. We aim to adversarially find a word replacement solution that maximizes the target model loss and makes the model fail, using beam search. We generate valid adversarial examples that are substantially different from those in previous work for paraphrase identification. Our adversarial examples are not limited to be semantically equivalent to original sentences and the unmodified parts of the two sentences are of low lexical similarity. To the best of our knowledge, none of previous work is able to generate such kind of adversarial examples. We further discuss our difference with previous work in Section 2.2. In summary, we mainly make the following contributions: We propose an algorithm to generate new adversarial examples for paraphrase identification. Our adversarial examples focus on robustness issues that are substantially different from those in previous work. We reveal a new type of robustness issues in deep paraphrase identification models regarding difficult common words. Experiments show that the target models have a severe performance drop on the adversarial examples, while human annotators are much less affected and most modified sentences retain a good grammaticality. Using our adversarial examples in adversarial training can mitigate the robustness issues, and these examples can foster future research. Related Work ::: Deep Paraphrase Identification Paraphrase identification can be viewed as a problem of sentence matching. Recently, many deep models for sentence matching have been proposed and achieved great advancements on benchmark datasets. Among those, some approaches encode each sentence independently and apply a classifier on the embeddings of two sentences BIBREF10, BIBREF11, BIBREF12. In addition, some models make strong interactions between two sentences by jointly encoding and matching sentences BIBREF5, BIBREF13, BIBREF14 or hierarchically extracting matching features from the interaction space of the sentence pair BIBREF15, BIBREF16, BIBREF6. Notably, BERT pre-trained on large-scale corpora achieved even better results BIBREF7. In this paper, we study the robustness of recent typical deep models for paraphrase identification and generate new adversarial examples for revealing their robustness issues and improving their robustness. Related Work ::: Adversarial Examples for NLP Many methods have been proposed to find different types of adversarial examples for NLP tasks. We focus on those that can be applied to paraphrase identification. Some of them generate adversarial examples by adding semantic-preserving perturbations to the input sentences. BIBREF17 added perturbations to word embeddings. BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22 employed several character-level or word-level manipulations. BIBREF23 used syntactically controlled paraphrasing, and BIBREF24 paraphrased sentences with extracted rules. 
However, for some tasks including paraphrase identification, adversarial examples can be semantically different from original sentences, to study other robustness issues tailored to the corresponding tasks. For sentence matching and paraphrase identification, other types of adversarial examples can be obtained by considering the relation and the correspondence between two sentences. BIBREF25 considered logical rules of sentence relations but can only generate unlabelled adversarial examples. BIBREF26 and BIBREF27 generated a sentence pair by modifying a single original sentence. They combined both original and modified sentences to form a pair. They modified the original sentence using back translation, word swapping, or single word replacement with lexical knowledge. Among them, back translation still aimed to produce semantically equivalent sentences; the others generated pairs of sentences with large Bag-of-Words (BOW) similarities, and the unmodified parts of the two sentences are exactly the same, so these same unmodified parts required little matching by target models. By contrast, we generate new adversarial examples with targeted labels by modifying a pair of original sentences together, using difficult common words. The modified sentences can be semantically different from original ones but still valid. The generated sentence pairs have much lower BOW similarities, and the unmodified parts are lexically diverse to reveal robustness issues regarding matching these parts when distracted by difficult common words in the modified parts. Thereby we study a new kind of robustness issues in paraphrase identification. Related Work ::: Adversarial Example Generation For a certain type of adversarial examples, adversarial attacks or adversarial example generation aim to find examples that are within the defined type and make existing models fail. Some work has no access to the target model until an adversarial dataset is generated BIBREF28, BIBREF26, BIBREF23, BIBREF24, BIBREF29, BIBREF27. However, in many cases including ours, finding successful adversarial examples, i.e. examples on which the target model fails, is challenging, and employing an attack algorithm with access to the target model during generation is often necessary to ensure a high success rate. Some prior work used gradient-based methods BIBREF30, BIBREF19, BIBREF31, requiring the model gradients to be accessible in addition to the output, and thus are inapplicable in black-box settings BIBREF21 where only model outputs are accessible. Though, the beam search in BIBREF19 can be adapted to black-box settings. Gradient-free methods for NLP generally construct adversarial examples by querying the target model for output scores and making generation decisions to maximize the model loss. BIBREF25 searched in the solution space. One approach in BIBREF28 greedily made word replacements and queried the target model in several steps. BIBREF21 employed a genetic algorithm. BIBREF32 proposed a two-stage greedy algorithm and a method with gumbel softmax to improve the efficiency. In this work, we also focus on a black-box setting, which is more challenging than white-box settings. We use a two-stage beam search to find adversarial examples in multiple steps. We clarify that the major focus of this work is on studying new robustness issues and a new type of adversarial examples, instead of attack algorithms for an existing certain type of adversarial examples. 
Therefore, the choice of the attack algorithm is minor for this work as long as the success rates are sufficiently high. Methodology ::: Task Definition Paraphrase identification can be formulated as follows: given two sentences $P=p_1p_2\cdots p_n$ and $Q=q_1q_2\cdots q_m$, the goal is to predict whether $P$ and $Q$ are paraphrases of each other, by estimating a probability distribution where $y\in \mathcal {Y} = \lbrace matched, unmatched \rbrace $. For each label $y$, the model outputs a score $[Z (P, Q)]_{y}$ which is the predicted probability of this label. We aim to generate an adversarial example by adversarially modifying an original sentence pair $(P, Q)$ while preserving the label and grammaticality. The goal is to make the target model fail on the adversarially modified example $(\hat{P}, \hat{Q})$: where $y$ indicates the gold label and $\overline{y}$ is the wrong label opposite to the gold one. Methodology ::: Algorithm Framework Figure FIGREF12 illustrates the work flow of our algorithm. We generate an adversarial example by firstly sampling an original example from the corpus and then constructing adversarial modifications. We use beam search and take multiple steps to modify the example, until the target model fails or the step number limit is reached. In each step, we modify the sentences by replacing a word pair with a difficult common word. There are two stages in deciding the word replacements. We first determine the best replaceable position pairs in the sentence pair, and next determine the best substitution words for the corresponding positions. We evaluate different options according to the target model loss they raise, and we retain $B$ best options after each stage of each step during beam search. Finally, the adversarially modified example is returned. Methodology ::: Original Example Sampling To sample an original example from the dataset for subsequent adversarial modifications, we consider two different cases regarding whether the label is unmatched or matched. For the unmatched case, we sample two different sentence pairs $(P_1, Q_1)$ and $(P_2, Q_2)$ from the original data, and then form an unmatched example $(P_1, Q_2, unmatched)$ with sentences from two sentence pairs respectively. We also limit the length difference $||P_1|-|Q_2||$ and resample until the limit is satisfied, since sentence pairs with large length difference inherently tend to be unmatched and are too easy for models. By sampling two sentences from different examples, the two sentences tend to have less in common originally, which can help better preserve the label during adversarial modifications, while this also makes it more challenging for our algorithm to make the target model fail. On the other hand, matched examples cannot be sampled in this way, and thus for the matched case, we simply sample an example with a matched label from the dataset, namely, $(P, Q, matched)$. Methodology ::: Replaceable Position Pairs During adversarial modifications, we replace a word pair at each step. We set heuristic rules on replaceable position pairs to preserve the label and grammaticality. First of all, we require the words on the replaceable positions to be one of nouns, verbs, or adjectives, and not stopwords meanwhile. We also require a pair of replaceable words to have similar Part-of-Speech (POS) tags, i.e. the two words are both nouns, both verbs, or both adjectives. For a matched example, we further require the two words on each replaceable position pair to be exactly the same. 
Figure FIGREF15 shows two examples of determining replaceable positions. For the first example (matched), only the common words “purpose” and “life” can be replaced. And since both occurrences are replaced simultaneously with another common word, the modified sentences are likely to talk about the same new topic, e.g. changing from “purpose of life” to “measure of value”, and thereby the new sentences tend to remain matched. As for the second example (unmatched), each noun in the first sentence, “Gmail” and “account”, can form a replaceable word pair with each noun in the second sentence, “school”, “management” and “software”. The irreplaceable part determines that the modified sentences are “How can I get $\cdots $ back ?” and “What is the best $\cdots $ ?” respectively. Sentences based on these two templates are likely to discuss different things or different aspects, even when filled with common words, and thus they are likely to remain unmatched. In this way, the labels can be preserved in most cases. Methodology ::: Candidate Substitution Word Generation For a pair of replaceable positions, we generate candidate substitution words that can replace the current words on the two positions. To preserve grammaticality and keep the modified sentences close to natural language, substitution words should be compatible with the context. Therefore, we apply a BERT language model BIBREF7 to generate candidate substitution words. Specifically, when some words in a text are masked, the BERT masked language model can predict the masked words based on the context. For a sentence $x_1x_2\cdots x_l$ where the $k$-th token is masked, the BERT masked language model gives a probability distribution $p_{\rm LM}(x_k \mid x_1 \cdots x_{k-1}, x_{k+1} \cdots x_l)$ over the vocabulary for the masked position. Thereby, to replace words $p_i$ and $q_j$ from the two sentences respectively, we mask $p_i$ and $q_j$ and present each sentence to the BERT masked language model. We aim to replace $p_i$ and $q_j$ with a common word $w$, which can be regarded as the masked word to be predicted. From the language model outputs on the two masked sentences, we obtain a joint probability for each candidate word $w$ as the product of the two masked-language-model probabilities, $p_{\rm LM}(w \mid P_{\backslash i}) \cdot p_{\rm LM}(w \mid Q_{\backslash j})$, where $P_{\backslash i}$ denotes $P$ with $p_i$ masked and $Q_{\backslash j}$ denotes $Q$ with $q_j$ masked. We rank all the words within the vocabulary of the target model and choose the top $K$ words with the largest joint probabilities as the candidate substitution words for the corresponding positions. Methodology ::: Beam Search for Finding Adversarial Examples Once the replaceable positions and candidate substitution words are determined, we use beam search with beam size $B$ to find optimal adversarial modifications in multiple steps. At step $t$, we perform a modification in two stages to determine the replaceable positions and the corresponding substitution words respectively, based on the two-stage greedy framework by BIBREF32. To determine the best replaceable positions, we enumerate all possible position pairs and obtain a set of candidate intermediate examples, $C_{pos}^{(t)}$, by replacing the words on each position pair with a special token [PAD]. We then query the target model with the examples in $C_{pos}^{(t)}$ to obtain the model output, and take the top $B$ examples that maximize the output score of the opposite label $\overline{y}$ (we define this operation as $\mathop {\arg {\rm top}B}$), obtaining a set of intermediate examples $\lbrace (\hat{P}_{pos}^{(t,k)}, \hat{Q}_{pos}^{(t,k)}) \rbrace _{k=1}^{B} = \mathop {\arg {\rm top}B}_{(\hat{P}, \hat{Q}) \in C_{pos}^{(t)}} [Z(\hat{P}, \hat{Q})]_{\overline{y}}$. We then determine difficult common words to replace the [PAD] placeholders.
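A minimal sketch of the candidate substitution word generation described above, using a Hugging Face BERT masked language model, is shown below; this is our own illustrative code rather than the authors' implementation, and for simplicity it ranks words over BERT's wordpiece vocabulary instead of the target model's vocabulary.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased")
mlm.eval()

def masked_distribution(tokens, position):
    """Distribution over the wordpiece vocabulary for `tokens` with the word at `position` masked."""
    masked = list(tokens)
    masked[position] = tokenizer.mask_token
    enc = tokenizer(" ".join(masked), return_tensors="pt")
    mask_idx = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    with torch.no_grad():
        logits = mlm(**enc).logits[0, mask_idx]
    return torch.softmax(logits, dim=-1)

def candidate_common_words(p_tokens, i, q_tokens, j, k=25):
    """Top-k common substitution words for positions p_i and q_j.

    The joint score is the product of the two masked-LM distributions, as described above.
    Note that top-ranked wordpieces may not always be full words.
    """
    joint = masked_distribution(p_tokens, i) * masked_distribution(q_tokens, j)
    top = torch.topk(joint, k)
    return [tokenizer.convert_ids_to_tokens(int(idx)) for idx in top.indices]

# Example usage on a hypothetical matched pair:
p = ["What", "is", "the", "purpose", "of", "life", "?"]
q = ["What", "is", "the", "purpose", "of", "life", "?"]
print(candidate_common_words(p, 3, q, 3, k=10))
```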
For each example in $\lbrace (\hat{P}_{pos}^{(t, k)}, \hat{Q}_{pos}^{(t, k)}) \rbrace _{k=1}^B$, we enumerate all the words in the candidate substitution word set of the positions currently holding [PAD]. We obtain a set of candidate examples, $C^{(t)}$, by replacing the [PAD] placeholders with each candidate substitution word in turn. Similarly to the first stage, we take the top $B$ examples that maximize the output score of the opposite label $\overline{y}$. This yields the set of modified examples after step $t$, $\lbrace (\hat{P}^{(t, k)}, \hat{Q}^{(t, k)}) \rbrace _{k=1}^{B} = \mathop {\arg {\rm top}B}_{(\hat{P}, \hat{Q}) \in C^{(t)}} [Z(\hat{P}, \hat{Q})]_{\overline{y}}$. After $t$ steps, if for some modified example $(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})$ the label predicted by the target model is already $\overline{y}$, i.e. $[Z(\hat{P}^{(t,k)}, \hat{Q}^{(t,k)})]_{\overline{y}} > [Z(\hat{P}^{(t,k)},\hat{Q}^{(t,k)})]_y$, this example is a successful adversarial example and we terminate the modification process. Otherwise, we continue taking another step, until the step number limit $S$ is reached, in which case an unsuccessful adversarial example is returned. Experiments ::: Datasets We adopt the following two datasets: Quora BIBREF1: The Quora Question Pairs dataset contains question pairs annotated with labels indicating whether the two questions are paraphrases. We use the same dataset partition as BIBREF5, with 384,348/10,000/10,000 pairs in the training/development/test sets respectively. MRPC BIBREF34: The Microsoft Research Paraphrase Corpus consists of sentence pairs collected from online news. Each pair is annotated with a label indicating whether the two sentences are semantically equivalent. There are 4,076/1,725 pairs in the training/test sets respectively. Experiments ::: Target Models We adopt the following typical deep models as the target models in our experiments: BiMPM BIBREF5, the Bilateral Multi-Perspective Matching model, matches two sentences on all combinations of time steps from multiple perspectives, with BiLSTM layers to encode the sentences and aggregate matching results. DIIN BIBREF6, the Densely Interactive Inference Network, creates a word-by-word interaction matrix by computing similarities on sentence representations encoded by a highway network and self-attention, and then adopts DenseNet BIBREF35 to extract interaction features for matching. BERT BIBREF7, the Bidirectional Encoder Representations from Transformers, is pre-trained on large-scale corpora and then fine-tuned on this task. The matching result is obtained by applying a classifier on the encoded hidden states of the two sentences. Experiments ::: Implementation Details We adopt existing open source code for the target models BiMPM, DIIN and BERT, and also for the BERT masked language model. For Quora, the step number limit $S$ is set to 5; the number of candidate substitution words generated using the language model, $K$, and the beam size $B$ are both set to 25. $S$, $K$ and $B$ are doubled for MRPC, where sentences are generally longer. The length difference between unmatched sentence pairs is limited to be no more than 3. Experiments ::: Main Results We train each target model on the original training data, and then generate adversarial examples for the target models. For each dataset, we sample 1,000 original examples with balanced labels from the corresponding test set, and adversarially modify them for each target model. We evaluate the accuracies of the target models on the corresponding adversarial examples, compared with their accuracies on the original examples.
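As a small illustration of this evaluation protocol (with a hypothetical `predict_label` interface standing in for a trained target model):

```python
def accuracy(predict_label, examples):
    """Accuracy of a target model over (P, Q, gold_label) examples.

    `predict_label` is a hypothetical callable returning the model's predicted
    label for a sentence pair.
    """
    correct = sum(1 for p, q, gold in examples if predict_label(p, q) == gold)
    return correct / len(examples)

# Hypothetical usage: the 1,000 sampled originals and their adversarially modified versions.
# acc_original = accuracy(target_model_predict, original_examples)
# acc_adversarial = accuracy(target_model_predict, adversarial_examples)
```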
Let $s$ be the success rate of generating adversarial examples on which the target model fails; the accuracy of the target model on the returned adversarial examples is then $1-s$. Table TABREF18 presents the results. The target models have high overall accuracies on the original examples, especially on the sampled ones, since we form an unmatched original example with independently sampled sentences. The models have relatively lower accuracies on the unmatched examples in the full original test set of MRPC, because MRPC is relatively small while the two labels are imbalanced in the original data (3,900 matched examples and 1,901 unmatched examples). Therefore, we generate adversarial examples with balanced labels instead of following the original distribution. After adversarial modifications, the performance of the original target models (those without the “-adv” suffix) drops dramatically (e.g. the overall accuracy of BERT on Quora drops from 94.6% to 24.1%), revealing that the target models are vulnerable to our adversarial examples. In particular, even though our generation is constrained by a BERT language model, BERT is still vulnerable to our adversarial examples. These results demonstrate the effectiveness of our algorithm for generating adversarial examples and for revealing the corresponding robustness issues. Moreover, we present some generated adversarial examples in the appendix. We notice that the original models are more vulnerable to unmatched adversarial examples, because there are generally more replaceable position choices during the generation. Nevertheless, the results for the matched case are also sufficiently strong to reveal the robustness issues. We do not quantitatively compare the performance drop of the target models on our adversarial examples with previous work, because we generate a new type of adversarial examples that previous methods are not capable of producing. We have different experimental settings, including original example sampling and constraints on adversarial modifications, which are tailored to the robustness issues we study. Performance drops on different kinds of adversarial examples with little overlap are not comparable, and thus surpassing other adversarial examples in terms of model performance drop is unnecessary and irrelevant to supporting our contributions. Therefore, such comparisons are not included in this paper. Experiments ::: Manual Evaluation To verify the validity of our generated adversarial examples, we further perform a manual evaluation. For each dataset, using BERT as the target model, we randomly sample 100 successful adversarial examples on which the target model fails, with balanced labels. We blend these adversarial examples with the corresponding original examples, and present each example to three workers on Amazon Mechanical Turk. We ask the workers to label the examples and also rate the grammaticality of the sentences on a scale of 1/2/3 (3 for no grammar errors, 2 for minor errors, and 1 for severe errors). We integrate annotations from different workers with majority voting for labels and averaging for grammaticality. Table TABREF35 shows the results. Unlike the target models, whose performance drops dramatically on adversarial examples, human annotators retain high accuracies with a much smaller drop, while the accuracies of the target models are 0 on these adversarial examples. This demonstrates that the labels of most adversarial examples are successfully preserved to be consistent with the original examples.
The results also show that the grammaticality difference between the original examples and the adversarial examples is small, suggesting that most adversarial examples retain good grammaticality. This verifies the validity of our adversarial examples. Experiments ::: Adversarial Training Adversarial training can often improve model robustness BIBREF25, BIBREF27. We also fine-tune the target models using adversarial training. At each training step, we train the model with a batch of original examples along with adversarial examples with balanced labels. The adversarial examples account for around 10% of a batch. During training, we generate adversarial examples with the current model as the target and update the model parameters with the hybrid batch iteratively. The beam size for generation is set to 1 to reduce the computation cost, since the generation success rate is of minor importance during adversarial training. We evaluate the adversarially trained models, as shown in Table TABREF18. After adversarial training, the performance of all the target models on adversarial examples rises significantly, while their performance on the original examples remains comparable. Note that since the focus of this paper is on model robustness, which can hardly be reflected in original data, we do not expect performance improvement on original data. The results demonstrate that adversarial training with our adversarial examples can significantly improve the robustness we focus on without noticeably hurting the performance on original data. Moreover, although the adversarial example generation is constrained by a BERT language model, BiMPM and DIIN, which do not use the BERT language model, can also significantly benefit from the adversarial examples, further demonstrating the effectiveness of our method. Experiments ::: Sentence Pair BOW Similarity To quantitatively demonstrate the difference between the adversarial examples we generate and those of previous work BIBREF26, BIBREF27, we compute the average BOW cosine similarity between the generated pairs of sentences. We only compare with previous methods that also aim to generate labeled adversarial examples that are not limited to being semantically equivalent to the original sentences. Results are shown in Table TABREF38. Each pair of adversarial sentences by BIBREF26 differs by only one word, and in BIBREF27, sentence pairs generated with word swapping have exactly the same BOW. These two approaches both yield high BOW similarities. By contrast, our method generates sentence pairs with much lower BOW similarities. This demonstrates a significant difference between our examples and the others. Unlike previous methods, we generate adversarial examples that focus on robustness issues regarding how the modified words, which are identical in the two sentences, distract the model from matching the unmodified parts, which are diverse across the two sentences. Experiments ::: Effectiveness of Paired Common Words We further analyse the necessity and effectiveness of modifying sentences with paired common words. We consider another version that replaces one single word independently at each step without using paired common words, namely the unpaired version. Firstly, for matched adversarial examples that can be semantically different from the original sentences, the unpaired version is inapplicable, because the matched label can easily be broken if common words from the two sentences are changed into other words independently. For the unmatched case, we show that the unpaired version is much less effective.
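For reference, the BOW cosine similarity used in the comparison above can be computed as in the following small sketch (our own illustration; lowercasing and tokenization choices are assumptions):

```python
import math
from collections import Counter

def bow_cosine(tokens_a, tokens_b):
    """Cosine similarity between the bag-of-words count vectors of two tokenized sentences."""
    a, b = Counter(w.lower() for w in tokens_a), Counter(w.lower() for w in tokens_b)
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Average over a set of generated pairs:
# avg_sim = sum(bow_cosine(p, q) for p, q in generated_pairs) / len(generated_pairs)
```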
For a fairer comparison, we double the step number limit for the unpaired version. As shown in Table TABREF41, the accuracies of the target models on unmatched examples generated by the unpaired version, particularly those of BERT, are mostly much higher than on those generated by our full algorithm, except for BiMPM on MRPC, whose accuracies have almost reached 0 in both cases (0.0% for unpaired and 0.2% for paired). This demonstrates that our algorithm using paired common words is more effective in generating adversarial examples, on which the performance of the target model is generally much lower. An advantage of using difficult common words for unmatched examples is that such words tend to make target models over-confident about the common words and distract the models from recognizing the semantic difference in the unmodified part. Our algorithm explicitly utilizes this property and thus can well reveal such a robustness issue. Moreover, although there is no such property for the matched case, replacing existing common words with more difficult ones can still distract the target model from judging the semantic similarity in the unmodified part, due to the bias between different words learned by the model; thus our algorithm for generating adversarial examples with difficult common words works for both the matched and unmatched cases. Conclusion In this paper, we propose a novel algorithm to generate new adversarial examples for paraphrase identification, by adversarially modifying original examples with difficult common words. We generate labeled adversarial examples that can be semantically different from the original sentences and whose pairwise BOW similarity is generally low. Such examples reveal robustness issues that previous methods are not able to reveal. The accuracies of the target models drop dramatically on our adversarial examples, while human annotators are much less affected and the modified sentences retain good grammaticality. We also show that model robustness can be improved using adversarial training with our adversarial examples. Moreover, our adversarial examples can foster future research on further improving model robustness.
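As a companion to the method description above, here is a compact sketch of the overall generation loop; the helper interfaces (`model_scores`, `position_pairs`, `common_word_candidates`) are hypothetical placeholders along the lines sketched earlier, so this is an illustration of the procedure rather than the authors' released implementation.

```python
def generate_adversarial_example(P, Q, gold, wrong, model_scores,
                                 position_pairs, common_word_candidates,
                                 steps=5, beam=25):
    """Multi-step, two-stage beam search replacing word pairs with difficult common words.

    model_scores(p, q) -> dict mapping each label to its predicted probability;
    position_pairs(p, q) -> list of (i, j) replaceable positions;
    common_word_candidates(p, i, q, j) -> candidate common words for those positions.
    """
    beams = [(list(P), list(Q))]
    for _ in range(steps):
        # Stage 1: pick position pairs by scoring [PAD]-masked variants with the target model.
        padded = []
        for p, q in beams:
            for i, j in position_pairs(p, q):
                p2, q2 = list(p), list(q)
                p2[i], q2[j] = "[PAD]", "[PAD]"
                padded.append((p2, q2, i, j))
        if not padded:
            break
        padded.sort(key=lambda e: model_scores(e[0], e[1])[wrong], reverse=True)
        padded = padded[:beam]

        # Stage 2: fill the [PAD] slots with candidate common words from the masked LM.
        filled = []
        for p2, q2, i, j in padded:
            for w in common_word_candidates(p2, i, q2, j):
                p3, q3 = list(p2), list(q2)
                p3[i], q3[j] = w, w
                filled.append((p3, q3))
        if not filled:
            break
        filled.sort(key=lambda e: model_scores(e[0], e[1])[wrong], reverse=True)
        beams = filled[:beam]

        # Terminate as soon as the target model prefers the wrong label.
        for p3, q3 in beams:
            scores = model_scores(p3, q3)
            if scores[wrong] > scores[gold]:
                return p3, q3, True
    p_best, q_best = beams[0]
    return p_best, q_best, False
```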
Quora, MRPC
20eb673b01d202b731e7ba4f84efc10a18616dd3
20eb673b01d202b731e7ba4f84efc10a18616dd3_0
Q: What representations are presented by this paper? Text: Mahault Garnerin, Solange Rossato, Laurent Besacier LIG, Univ. Grenoble Alpes, CNRS, Grenoble INP, FR-38000 Grenoble, France [email protected] With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non-elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. speech resources, gender, metadata, open speech language resources (OpenSLR) Introduction The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X with an output Y, many paired examples are needed to learn this mapping process. This need for data, coupled with the improvement in computing power and algorithm efficiency, has led to the era of big data. But data is not only needed in mass; it is also needed with a certain level of quality. In this paper we argue that one of the main qualities of data is its transparency. In recent years, concerns have been raised about the biases existing in such systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics, which showed that data are socially constructed and hence encapsulate social representations and power structures, such as gender stereotypes. Gender bias has also been found in machine translation BIBREF0 as well as in facial recognition BIBREF1, and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system against female speakers BIBREF2. In this paper, we survey gender representation within an open platform gathering speech and language resources for developing speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly, we propose a reflection on general practices when releasing resources, building on recommendations from previous work. Contributions.
The contributions of our work are the following: (i) an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that this is not the case for non-elicited speech, nor for the speech time allocated to each gender category; (ii) an assessment of the global lack of metadata within free open source corpora, alongside recommendations and guidelines for resource descriptions, based on previous work. OpenSLR Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions, but also of software as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time distributed as a paid service. It is therefore hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR, but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the latest version (e.g. TED-LIUM BIBREF3), and we also removed subsets of bigger corpora (e.g. the LibriTTS corpus BIBREF4). We make a distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accents/dialects of the same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, a corpus is monolingual and monodialectal, so resources containing different dialects or languages are considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phone calls, audiobooks, etc.), which is not surprising given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice, as it does not expect a defined format nor does it have explicit requirements about data structures, hence attesting to what metadata resource creators consider important to share when releasing resources for free on the Web. Methodology In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we explore the corpora by looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After downloading the data, we manually extracted information about gender representation in each corpus.
Methodology ::: Speaker Information and Lack of Meta-Data The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, gender demographics are most of the time not made available by the resource creators. So, on top of the general corpus characteristics described below (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender information was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are: paper, if a paper was explicitly cited along with the resource; metadata, if a metadata file was included; indexed, if gender was explicitly indexed within the data or if the data was structured in terms of gender; and manually, if the gender information results from a manual search by ourselves, either trying to find a paper describing the resource, or relying on regularities that look like speaker IDs and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of; but considering the global lack of data, we used it when corpora were small enough, in order to increase our sample size. Methodology ::: Speech Time Information and Data Consistency The second difficulty regards the fact that speech time information is not standardised, making it impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours (e.g. panayotov2015librispeech, hernandez2018ted), others the number of utterances (e.g. BIBREF5) or sentences (e.g. googleuken2019), these two terms never being clearly defined. We gathered all the information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, ruling out the possibility of approximating one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resource size. This method however has drawbacks, as not all corpora use the same file format or the same sampling rate. The sampling rate is provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB, and large if above. Methodology ::: Corpora Characteristics The final result consists of a table reporting all the characteristics of the corpora.
The chosen features are the following:
- the resource identifier (id) as defined on OpenSLR
- the language (lang)
- the dialect or accent if specified (dial)
- the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f)
- the total number of utterances as well as the number of utterances for male and female speakers (#utt, #utt_m, #utt_f)
- the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f)
- the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value among “big”, “medium”, “small”)
- the sampling rate (sampling)
- the speech task targeted for the resource (task)
- whether the speech is elicited or not: we define as non-elicited speech data which would have existed without the creation of the resource (e.g. TEDTalks, audiobooks, etc.); other speech data are considered as elicited
- the language status (lang_status): a language is considered either high- or low-resourced; the language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?) and is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided)
- the year of the release (year)
- the authors of the resource (producer)
Analysis ::: Gender Information Availability Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speakers' gender information was provided and, if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. Finally, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata” or “indexed” modalities. When gender information was given, it was most of the time in terms of the number of speakers in each gender category, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but relied on utterance counts when available. It is worth noticing, however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. Analysis ::: Gender Distribution Among Speakers ::: Elicited vs Non-Elicited Data Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved.
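To illustrate how such aggregate figures can be derived from the characteristics table, here is a small sketch over a toy version of the table; the field names follow the description above, while the identifiers and numbers are invented for illustration.

```python
# Toy rows mimicking the characteristics table (identifiers and values are invented).
corpora = [
    {"id": "corpus_a", "elicited": False, "#spk_m": 1283, "#spk_f": 1201, "sizeGB": 57.0},
    {"id": "corpus_b", "elicited": True,  "#spk_m": 186,  "#spk_f": 169,  "sizeGB": 8.2},
    {"id": "corpus_c", "elicited": True,  "#spk_m": 42,   "#spk_f": 58,   "sizeGB": 2.3},
]

def size_label(size_gb):
    """Qualitative size category: small under 5GB, medium between 5 and 50GB, large above."""
    if size_gb < 5:
        return "small"
    return "medium" if size_gb <= 50 else "large"

def gender_share(rows):
    """Share of female and male speakers over a subset of corpora."""
    f = sum(r["#spk_f"] for r in rows)
    m = sum(r["#spk_m"] for r in rows)
    total = f + m
    return {"female": f / total, "male": m / total, "total_speakers": total}

print(gender_share(corpora))                                      # overall balance
print(gender_share([r for r in corpora if not r["elicited"]]))    # non-elicited subset only
print({r["id"]: size_label(r["sizeGB"]) for r in corpora})        # qualitative size labels
```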
We then look at whether the data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation, such as TEDTalks, interviews, radio broadcasts and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the Spanish TEDTalks resource, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (elicited and non-elicited speech respectively), the gender difference is relatively small (5.6 percentage points and 5.8 points respectively), far from the 30 percentage point difference observed in BIBREF2. A possible explanation is that, elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female-to-male ratio, consistent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in the smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such a gender gap is prevented by careful control during the data set creation process. Analysis ::: Gender Distribution Among Speakers ::: High-resource vs Low-resource Languages Among the elicited corpora made available on OpenSLR, some are of low-resource languages, others of high-resource languages (mostly regional variations of high-resource languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. Analysis ::: Gender Distribution Among Speakers ::: “How Can I Help?": Spoken Language Tasks Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that while gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report recommending gender-equal digital education, which states that nowadays most vocal assistants are given female voices, raising educational and societal problems BIBREF10. This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." As TTS systems are often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users.
This claim can however be nuanced by nass2005wired, who showed that other factors, such as social identification and cultural gender stereotypes, might be worth taking into account when designing gendered voices. Analysis ::: Speech Time and Gender Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of the number of speakers, at the utterance level male speech is more represented. But this disparity is only the effect of three corpora containing 51,463 and 26,567 korvas2014 and 8,376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is 1,942 for male speakers and 1,983 for female speakers. Removing these three outliers, we observe that the utterance counts are balanced between gender categories. It is worth noticing that the high number of utterances in these outliers is surprising, considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem with the notion of utterance, which is never explicitly defined. Such differences in granularity prevent comparison between corpora. Analysis ::: Evolution over Time When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, for example the work of bender2018data for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. Recommendations The social impact of big data and the ethical problems raised by NLP systems have already been discussed in previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics: Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter to help resource creators describe data from a legal and ethical point of view. hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge the demographic bias that may exist within our corpora.
As pointed out by hovy2016social, language is always situated, and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to the exclusion of certain groups from the use of a technology, due to the fact that this technology fails to take them into account during its development process. This directly relates to our work on ASR performance on women's speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statements as a professional and research practice. We hope the present study will encourage researchers and resource creators to describe their data sets exhaustively, following the guidelines proposed by these authors. Recommendations ::: On the Importance of Meta-Data The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of metadata is a problem in itself, as it prevents guaranteeing the generalisability of systems or linguistic findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one, as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that, on average, researchers are careful to balance the speakers in terms of gender when crafting data. But this cannot be said of corpora containing non-elicited speech: apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevents us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category, and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TED-LIUM, as it is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. One piece of information we can add to their recommendations is the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. This would allow a quick check of the gender balance in terms of the quantity of data available for each category, without relying on the unreliable notion of utterance. This descriptive work is important for future corpora, but should also be done for data sets already released, as they are likely to be used again by the community.
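To make these recommendations more actionable, a possible machine-readable record covering the fields discussed above might look as follows; the schema, field names and values are purely illustrative suggestions on our part, not an established standard.

```python
# Illustrative metadata record for a released speech corpus (all values are invented).
corpus_statement = {
    "curation_rationale": "Read speech collected to train an ASR system for language X.",
    "language_variety": {"language": "xx", "dialect_or_accent": "regional variety Y"},
    "speaker_demographics": {
        "n_speakers": {"female": 52, "male": 48, "non_binary": 0, "unknown": 0},
        "speech_duration_hours": {"female": 10.4, "male": 9.8},  # per-gender duration, as recommended above
        "age_range": "18-65",
    },
    "annotator_demographics": {"n_annotators": 4},
    "speech_situation": "scripted sentences, quiet room, close-talking microphone",
    "text_characteristics": "read sentences sampled from newspaper text",
    "recording_quality": {"sampling_rate_hz": 16000, "encoding": "16-bit PCM WAV"},
    "release": {"year": 2020, "producer": "corpus authors", "license": "CC BY 4.0"},
}
```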
Recommendations ::: Transparency in Evaluation Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But while such an evaluation allows for an easy comparison of systems, it fails to account for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the papers reported ASR results, none of them reported gendered evaluation, even when gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step in this direction, but for an open and fair science, the next step should be to also take such information into account in the evaluation process. Recent work in this direction was done by mitchell2019model, who proposed to describe model performance in model cards, thus encouraging transparent reporting of model results. Conclusion In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that identifying gender misrepresentation requires more than just a number of speakers. In non-elicited data (i.e. speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcasts), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voices with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender metadata in the last two years. As our sample contains only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources; however, it allows us to open a discussion about general corpus description practices, pointing out a lack of metadata, and to refresh the discourse around the social implications of NLP systems. We advocate for more open science and technology by following guidelines such as the FAIR Data Principles or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. Acknowledgements This work was partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003).
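In line with the evaluation recommendation above, a minimal sketch of reporting WER per gender category rather than a single pooled figure could look like this; per-utterance error counts are assumed to come from a standard alignment step, and the numbers are invented.

```python
from collections import defaultdict

def wer_by_group(utterances):
    """Overall and per-group WER from per-utterance statistics.

    `utterances` is an iterable of dicts with keys: "group" (e.g. speaker gender),
    "errors" (substitutions + deletions + insertions) and "n_words" (reference length).
    """
    errors, words = defaultdict(int), defaultdict(int)
    for utt in utterances:
        errors[utt["group"]] += utt["errors"]
        words[utt["group"]] += utt["n_words"]
    report = {g: errors[g] / words[g] for g in words if words[g] > 0}
    report["overall"] = sum(errors.values()) / sum(words.values())
    return report

print(wer_by_group([
    {"group": "female", "errors": 112, "n_words": 1000},
    {"group": "male",   "errors": 74,  "n_words": 1000},
]))
```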
the number of speakers of each gender category, their speech duration
e8e6719d531e7bef5d827ac92c7b1ab0b8ec3c8e
e8e6719d531e7bef5d827ac92c7b1ab0b8ec3c8e_0
Q: What corpus characteristics correlate with more equitable gender balance? Text: 1.1em ::: 1.1.1em ::: ::: 1.1.1.1em Mahault Garnerin, Solange Rossato, Laurent Besacier LIG, Univ. Grenoble Alpes, CNRS, Grenoble INP, FR-38000 Grenoble, France [email protected] With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. speech resources, gender, metadata, open speech language resources (OpenSLR) Introduction The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data coupled with the improvement in computing power and algorithm efficiency has led to the era of big data. But data is not only needed in mass, but also with a certain level of quality. In this paper we argue that one of the main quality of data is its transparency. In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2. In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work. Contributions. 
The contributions of our work are the following: an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work OpenSLR Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of softwares as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accent/dialect of a same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phonecalls, audiobooks, etc.), which is not suprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice as it does not expect a defined format nor does have explicit requirements about data structures, hence attesting of what metadata resources creators consider important to share when releasing resources for free on the Web. Methodology In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus. 
Methodology ::: Speaker Information and Lack of Meta-Data The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size. Methodology ::: Speech Time Information and Data Consistency The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. panayotov2015librispeech,hernandez2018ted, some the number of utterances (e.g BIBREF5) or sentences (e.g. googleuken2019), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. Methodology ::: Corpora Characteristics The final result consists of a table reporting all the characteristics of the corpora. 
The chosen features are the following: the resource identifier (id) as defined on OpenSLR the language (lang) the dialect or accent if specified (dial) the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f) the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f) the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f) the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small") the sampling rate (sampling) the speech task targeted for the resource (task) is it elicited speech or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g TedTalks, audiobooks, etc.), other speech data are considered as elicited the language status (lang_status): a language is considered either as high- or low-resourced. The language status is defined from a technological point of view (i.e. are there resources or NLP systems available for this language?). It is fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided). the year of the release (year) the authors of the resource (producer) Analysis ::: Gender Information Availability Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speaker's gender information was provided and if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities. When gender information was given it was most of the time in terms of number of speakers in each gender categories, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers, than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but we relied on utterance count when available. It is worth noticing however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. Analysis ::: Gender Distribution Among Speakers ::: Elicited vs Non-Elicited Data Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved. 
We then look at whether data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation such as TEDTalks, interviews, radio broadcast and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown, that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (respectively elicited and non-elicited speech), gender difference is relatively small (respectively 5.6 percentage points and 5.8 points), far from the 30 percentage points difference observed in BIBREF2. A possible explanation is that either elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female to male ratio, coherent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such gender gap is prevented by a careful control during the data set creation process. Analysis ::: Gender Distribution Among Speakers ::: High-resource vs Low-resource Languages In the elicited corpora made available on OpenSLR, some are of low-resource languages other high-resource languages (mostly regional variation of high-resources languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. Analysis ::: Gender Distribution Among Speakers ::: “How Can I Help?": Spoken Language Tasks Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that if gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report of recommendation for gender-equal digital education stating that nowadays, most of the vocal assistants are given female voices which raises educational and societal problems BIBREF10. This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." TTS systems being often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users. 
This claim can, however, be nuanced by nass2005wired, who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes. Analysis ::: Speech Time and Gender Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of number of speakers, at the utterance level men's speech is more represented. But this disparity is only the effect of three outlier corpora, containing 51,463 and 26,567 korvas2014 and 8,376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is 1,942 for male speakers and 1,983 for female speakers. Removing these three outliers, we observe that utterance counts are balanced between gender categories. It is worth noticing that the high number of utterances in these outliers is surprising, considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem with the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora. Analysis ::: Evolution over Time When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, with for example the work of bender2018data for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. Recommendations The social impact of big data and the ethical problems raised by NLP systems have already been discussed by previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics that are Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter to help resource creators describe data from a legal and ethical point of view. hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora.
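The outlier effect described above can be reproduced schematically as follows; the per-corpus utterance counts are invented, with three corpora given very large male utterance counts to mimic the situation discussed.

```python
from statistics import mean

# Invented per-corpus utterance counts (male, female); the first three entries
# act as outliers with very large male utterance counts.
utterances = [(51463, 1900), (26567, 2100), (8376, 1800)] + [(2000, 2050)] * 29

def report(rows, label):
    m = mean(r[0] for r in rows)
    f = mean(r[1] for r in rows)
    print(f"{label}: mean male utt/corpus = {m:.0f}, mean female utt/corpus = {f:.0f}")

report(utterances, "with outliers   ")
report(utterances[3:], "without outliers")
```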
As pointed out by hovy2016social, language is always situated and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that will have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to the exclusion of certain groups from the use of a technology, due to the fact that this technology fails to take them into account during its development process. This directly relates to our work on ASR performance on women's speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statements as a professional and research practice. We hope the present study will encourage researchers and resource creators to describe their data sets exhaustively, following the guidelines proposed by these authors. Recommendations ::: On the Importance of Meta-Data The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself as it prevents guaranteeing the generalisability of systems or linguistic findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one, as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that, on average, researchers are careful to balance the speakers in terms of gender when crafting data. But this cannot be said about corpora containing non-elicited speech. And apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevents us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category, and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as this information is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. Information we can add to their recommendations relates to the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. This would make it possible to quickly check the gender balance in terms of the quantity of data available for each category, without relying on an unreliable notion of utterance. This descriptive work is of importance for future corpora, but should also be made for the data sets already released, as they are likely to be used again by the community.
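A minimal sketch of how duration per gender category could be computed from utterance-level metadata is given below; the field names and values are hypothetical, since, as noted above, most released corpora do not currently provide such information.

```python
from collections import defaultdict

# Hypothetical utterance-level metadata: (speaker_id, gender, duration_seconds).
utterance_meta = [
    ("spk01", "F", 4.2), ("spk01", "F", 3.7),
    ("spk02", "M", 5.1), ("spk03", "F", 2.9),
    ("spk04", "M", 6.0), ("spk04", "M", 4.4),
]

duration = defaultdict(float)
speakers = defaultdict(set)
for speaker, gender, seconds in utterance_meta:
    duration[gender] += seconds
    speakers[gender].add(speaker)

for gender in sorted(duration):
    print(f"{gender}: {len(speakers[gender])} speakers, {duration[gender] / 3600:.4f} hours")
```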
Recommendations ::: Transparency in Evaluation Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But while such an evaluation allows for an easy comparison of the systems, it fails to account for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the papers reported ASR results, none of them reported gendered evaluation, even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step in this direction, but for an open and fair science, the next step should be to also take such information into account in the evaluation process. Recent work in this direction has been done by mitchell2019model, who proposed to describe model performance in model cards, thus encouraging a transparent report of model results. Conclusion In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning the type of speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcasts), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voices with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. As our sample contains only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources; however, it allows us to open a discussion about general corpus description practices, pointing out a lack of meta-data, and to actualise the discourse around the social implications of NLP systems. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principles or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. Acknowledgements This work was partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). Copyrights The Language Resources and Evaluation Conference (LREC) proceedings are published by the European Language Resources Association (ELRA). They are available online from the conference website. ELRA's policy is to acquire copyright for all LREC contributions. In assigning your copyright, you are not forfeiting your right to use your contribution elsewhere. This you may do without seeking permission and is subject only to normal acknowledgement to the LREC proceedings. The LREC 2020 Proceedings are licensed under CC-BY-NC, the Creative Commons Attribution-Non-Commercial 4.0 International License.
Unanswerable
f6e5febf2ea53ec80135bbd532d6bb769d843dd8
f6e5febf2ea53ec80135bbd532d6bb769d843dd8_0
Q: What natural languages are represented in the speech resources studied? Text: 1.1em ::: 1.1.1em ::: ::: 1.1.1.1em Mahault Garnerin, Solange Rossato, Laurent Besacier LIG, Univ. Grenoble Alpes, CNRS, Grenoble INP, FR-38000 Grenoble, France [email protected] With the rise of artificial intelligence (AI) and the growing use of deep-learning architectures, the question of ethics, transparency and fairness of AI systems has become a central concern within the research community. We address transparency and fairness in spoken language systems by proposing a study about gender representation in speech resources available through the Open Speech and Language Resource platform. We show that finding gender information in open source corpora is not straightforward and that gender balance depends on other corpus characteristics (elicited/non elicited speech, low/high resource language, speech task targeted). The paper ends with recommendations about metadata and gender information for researchers in order to assure better transparency of the speech systems built using such corpora. speech resources, gender, metadata, open speech language resources (OpenSLR) Introduction The ever growing use of machine learning has put data at the center of the industrial and research spheres. Indeed, for a system to learn how to associate an input X to an output Y, many paired examples are needed to learn this mapping process. This need for data coupled with the improvement in computing power and algorithm efficiency has led to the era of big data. But data is not only needed in mass, but also with a certain level of quality. In this paper we argue that one of the main quality of data is its transparency. In recent years, concerns have been raised about the biases existing in the systems. A well-known case in Natural Language Processing (NLP) is the example of word embeddings, with the studies of bolukbasi2016man and caliskan2017semantics which showed that data are socially constructed and hence encapsulate a handful of social representations and power structures, such as gender stereotypes. Gender-bias has also been found in machine translation tasks BIBREF0, as well as facial recognition BIBREF1 and is now at the center of research debates. In previous work, we investigated the impact of gender imbalance in training data on the performance of an automatic speech recognition (ASR) system, showing that the under-representation of women led to a performance bias of the system for female speakers BIBREF2. In this paper, we survey the gender representation within an open platform gathering speech and language resources to develop speech processing tools. The aim of this survey is twofold: firstly, we investigate the gender balance within speech corpora in terms of speaker representation but also in terms of speech time available for each gender category. Secondly we propose a reflection about general practices when releasing resources, basing ourselves on some recommendations from previous work. Contributions. 
The contributions of our work are the following: an exploration of 66 different speech corpora in terms of gender, showing that gender balance is achieved in terms of speakers in elicited corpora, but that it is not the case for non-elicited speech, nor for the speech time allocated to each gender category an assessment of the global lack of meta-data within free open source corpora, alongside recommendations and guidelines for resources descriptions, based on previous work OpenSLR Open Speech Language Resources (OpenSLR) is a platform created by Daniel Povey. It provides a central hub to gather open speech and language resources, allowing them to be accessed and downloaded freely. OpenSLR currently hosts 83 resources. These resources consist of speech recordings with transcriptions but also of softwares as well as lexicons and textual data for language modeling. As resources are costly to produce, they are most of the time a paying service. Therefore it is hard to study gender representation at scale. We thus focus on the corpora available on OpenSLR due to their free access and to the fact that OpenSLR is explicitly made to help develop speech systems (mostly ASR but also text-to-speech (TTS) systems). In our work, we focus on speech data only. Out of the 83 resources gathered on the platform, we recorded 53 speech resources. We did not take into account multiple releases of the same corpora but only kept the last version (e.g. TED LIUM BIBREF3) and we also removed subsets of bigger corpora (e.g. LibriTTS corpus BIBREF4). We make the distinction between a resource and a corpus, as each resource can contain several languages (e.g. Vystadial korvas2014) or several accent/dialect of a same language (e.g. the crowdsourced high-quality UK and Ireland English Dialect speech data set googleuken2019). In our terminology, we define a corpus as monolingual and monodialectal, so resources containing different dialects or languages will be considered as containing different corpora. We ended up with 66 corpora, in 33 different languages with 51 dialect/accent variations. The variety is also great in terms of speech types (elicited and read speech, broadcast news, TEDTalks, meetings, phonecalls, audiobooks, etc.), which is not suprising, given the many different actors who contributed to this platform. We consider this sample to be of reasonable size to tackle the question of gender representation in speech corpora. OpenSLR also constitutes a good indicator of general practice as it does not expect a defined format nor does have explicit requirements about data structures, hence attesting of what metadata resources creators consider important to share when releasing resources for free on the Web. Methodology In order to study gender representation within speech resources, let us start by defining what gender is. In this work, we consider gender as a binary category (male and female speakers). Nevertheless, we are aware that gender as an identity also exists outside of these two categories, but we did not find any mention of non-binary speakers within the corpora surveyed in our study. Following work by doukhan2018open, we wanted to explore the corpora looking at the number of speakers of each gender category as well as their speech duration, considering both variables as good features to account for gender representation. After the download, we manually extracted information about gender representation in each corpus. 
Methodology ::: Speaker Information and Lack of Meta-Data The first difficulty we came across was the general absence of information. As gender in technology is a relatively recent research interest, most of the time gender demographics are not made available by the resources creators. So, on top of the further-mentioned general corpus characteristics (see Section SECREF11), we also report in our final table where the gender information was found and whether it was provided in the first place or not. The provided attribute corresponds to whether gender info was given somewhere, and the found_in attribute corresponds to where we extracted the gender demographics from. The different modalities are paper, if a paper was explicitly cited along the resource, metadata if a metadata file was included, indexed if the gender was explicitly indexed within data or if data was structured in terms of gender and manually if the gender information are the results of a manual research made by ourselves, trying to either find a paper describing the resources, or by relying on regularities that seems like speaker ID and listening to the recordings. We acknowledge that this last method has some methodological shortcomings: we relied on our perceptual stereotypes to distinguish male from female speakers, most of the time for languages we have no knowledge of, but considering the global lack of data, we used it when corpora were small enough in order to increase our sample size. Methodology ::: Speech Time Information and Data Consistency The second difficulty regards the fact that speech time information are not standardised, making impossible to obtain speech time for individual speakers or gender categories. When speech time information is provided, the statistics given do not all refer to the same measurements. Some authors report speech duration in hours e.g. panayotov2015librispeech,hernandez2018ted, some the number of utterances (e.g BIBREF5) or sentences (e.g. googleuken2019), the definition of these two terms never being clearly defined. We gathered all information available, meaning that our final table contains some empty cells, and we found that there was no consistency between speech duration and number of utterances, excluding the possibility to approximate one by the other. As a result, we decided to rely on the size of the corpora as a (rough) approximation of the amount of speech data available, the text files representing a small proportion of the resources size. This method however has drawbacks as not all corpora used the same file format, nor the same sampling rate. Sampling rate has been provided as well in the final table, but we decided to rely on qualitative categories, a corpus being considered small if its size is under 5GB, medium if it is between 5 and 50GB and large if above. Methodology ::: Corpora Characteristics The final result consists of a table reporting all the characteristics of the corpora. 
The chosen features are the following: the resource identifier (id) as defined on OpenSLR; the language (lang); the dialect or accent if specified (dial); the total number of speakers as well as the number of male and female speakers (#spk, #spk_m, #spk_f); the total number of utterances as well as the total number of utterances for male and female speakers (#utt, #utt_m, #utt_f); the total duration, or speech time, as well as the duration for male and female speakers (dur, dur_m, dur_f); the size of the resource in gigabytes (sizeGB) as well as a qualitative label (size, taking its value between “big", “medium", “small"); the sampling rate (sampling); the speech task targeted for the resource (task); whether the speech is elicited or not: we define as non-elicited speech data which would have existed without the creation of the resources (e.g. TEDTalks, audiobooks, etc.), other speech data being considered as elicited; the language status (lang_status): a language is considered either as high- or low-resourced, a status defined from a technological point of view (i.e. are there resources or NLP systems available for this language?) and fixed at the language granularity (hence the name), regardless of the dialect or accent (if provided); the year of the release (year); and the authors of the resource (producer). Analysis ::: Gender Information Availability Before diving into the gender analysis, we report the number of corpora for which gender information was provided. Indeed, 36.4% of the corpora do not give any gender information regarding the speakers. Moreover, almost 20% of the corpora do not provide any speaker information whatsoever. Table sums up the number of corpora for which speakers' gender information was provided and, if it was, where it was found. We first looked at the metadata file if available. If no metadata was provided, we searched whether gender was indexed within the data structure. At last, if we still could not find anything, we looked for a paper describing the data set. This search pipeline results in ordered levels for our found_in category, meaning papers might also be available for corpora with the “metadata" or “indexed" modalities. When gender information was given, it was most of the time in terms of the number of speakers in each gender category, as only five corpora provide speech time for each category. Table reports what type of information was provided in terms of gender, in the subset of the 42 corpora containing gender information. We observe that gender information is easier to find when it regards the number of speakers than when it accounts for the quantity of data available for each gender group. Due to this lack of data, we did not study the speech time per gender category as intended, but we relied on utterance count when available. It is worth noticing, however, that we did not find any consistency between speech time and number of utterances, so such results must be taken with caution. Out of the 42 corpora providing gender information, 41 reported speaker counts for each gender category. We manually gathered speaker gender information for 7 more corpora, as explained in the previous section, reaching a final sample size of 47 corpora. Analysis ::: Gender Distribution Among Speakers ::: Elicited vs Non-Elicited Data Generally, when gender demographics are provided, we observe the following distribution: out of the 6,072 speakers, 3,050 are women and 3,022 are men, so parity is almost achieved.
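As a rough illustration of how speech time can be approximated from the sizeGB and sampling features, the following sketch assumes uncompressed 16-bit mono PCM audio; as discussed in the methodology section, this is only a heuristic and breaks down for other encodings and containers.

```python
def approx_hours(size_gb: float, sampling_rate: int,
                 bit_depth: int = 16, channels: int = 1) -> float:
    """Rough speech-time estimate for uncompressed PCM audio.

    Upper-bound heuristic only: it ignores transcripts and other files in the
    release, and is unreliable for compressed formats.
    """
    bytes_total = size_gb * 1e9
    bytes_per_second = sampling_rate * (bit_depth // 8) * channels
    return bytes_total / bytes_per_second / 3600

# e.g. a 2.1 GB release sampled at 16 kHz, assuming 16-bit mono WAV files
print(f"{approx_hours(2.1, 16000):.1f} hours (approx.)")
```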
We then look at whether data was elicited or not, non-elicited speech being speech that would have existed without the corpus creation, such as TEDTalks, interviews, radio broadcasts and so on. We assume that if data was not elicited, gender imbalance might emerge. Indeed, non-elicited data often comes from the media, and it has been shown that women are under-represented in this type of data BIBREF6. This disparity of gender representation in French media BIBREF7, BIBREF8 precisely led us to the present survey. Our expectations are reinforced by examples such as the resource of Spanish TEDTalks, which states in its description regarding the speakers that “most of them are men" mena2019. We report results in Table . In both cases (respectively elicited and non-elicited speech), the gender difference is relatively small (respectively 5.6 percentage points and 5.8 points), far from the 30 percentage point difference observed in BIBREF2. A possible explanation is that, elicited or not, corpora are the result of a controlled process, so gender disparity will be reduced as much as possible by the corpus authors. However, we notice that, apart from Librispeech BIBREF9, all the non-elicited corpora are small corpora. When removing Librispeech from the analysis, we observe a 1/3-2/3 female to male ratio, coherent with our previous findings. This can be explained by the care put by the creators of the Librispeech data set to "[ensure] a gender balance at the speaker level and in terms of the amount of data available for each gender" BIBREF9, while general gender disparity is observed in smaller corpora. What emerges from these results is that when data sets are not elicited or carefully balanced, gender disparity creeps in. This gender imbalance is not observed at the scale of the entire OpenSLR platform, due to the fact that most of the corpora are elicited (89.1%). Hence, the existence of such a gender gap is prevented by careful control during the data set creation process. Analysis ::: Gender Distribution Among Speakers ::: High-resource vs Low-resource Languages In the elicited corpora made available on OpenSLR, some are of low-resource languages, others of high-resource languages (mostly regional variations of high-resource languages). When looking at gender in these elicited corpora, we do not observe a difference depending on the language status. However, we can notice that high-resource corpora contain twice as many speakers, all low-resource language corpora being small corpora. Analysis ::: Gender Distribution Among Speakers ::: “How Can I Help?": Spoken Language Tasks Speech corpora are built in order to train systems, most of the time ASR or TTS ones. We carry out our gender analysis taking into account the task addressed and obtain the results reported in Table . We observe that while gender representation is almost balanced within ASR corpora, women are better represented in TTS-oriented data sets. This can be related to the UN report of recommendations for gender-equal digital education, which states that nowadays most vocal assistants are given female voices, a choice that raises educational and societal problems BIBREF10. This gendered design of vocal assistants is sometimes justified by relying on gender stereotypes such as “female voices are perceived as more helpful, sympathetic or pleasant." TTS systems being often used to create such assistants, we can assume that using female voices has become general practice to ensure the adoption of the system by the users.
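The task-wise comparison can be expressed as a simple computation of female speaker share and percentage-point gap per task; the counts below are invented placeholders, not the values of the actual table.

```python
# Invented speaker counts per task; only the computation is of interest here.
task_counts = {
    "ASR": {"F": 2300, "M": 2400},
    "TTS": {"F": 450,  "M": 250},
}

for task, counts in task_counts.items():
    total = counts["F"] + counts["M"]
    f_share = counts["F"] / total
    gap_points = abs(counts["F"] - counts["M"]) / total * 100
    print(f"{task}: {f_share:.1%} female speakers, gap = {gap_points:.1f} points")
```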
This claim can, however, be nuanced by nass2005wired, who showed that other factors might be worth taking into account to design gendered voices, such as social identification and cultural gender stereotypes. Analysis ::: Speech Time and Gender Due to a global lack of speech time information, we did not analyse the amount of data available per speaker category. However, utterance counts were often reported, or easily found within the corpora. We gathered utterance counts for a total of 32 corpora. We observe that while gender balance is almost achieved in terms of number of speakers, at the utterance level men's speech is more represented. But this disparity is only the effect of three outlier corpora, containing 51,463 and 26,567 korvas2014 and 8,376 mena2019 utterances for male speakers, while the mean number of utterances per corpus is 1,942 for male speakers and 1,983 for female speakers. Removing these three outliers, we observe that utterance counts are balanced between gender categories. It is worth noticing that the high number of utterances in these outliers is surprising, considering that these three corpora are small (2.1GB, 2.8GB) and medium (5.2GB). This highlights the problem with the notion of utterance, which is never explicitly defined. Such differences in granularity thus prevent comparison between corpora. Analysis ::: Evolution over Time When collecting data, we noticed that the more recent the resources, the easier it was to find gender information, attesting to the emergence of gender in technology as a relevant topic. As pointed out by Kate crawford2017nips in her NeurIPS keynote talk, fairness in AI has recently become a huge part of the research effort in AI and machine learning. As a result, methodology papers have been published, with for example the work of bender2018data for NLP data and systems, encouraging the community towards rich and explicit data statements. Figure FIGREF34 shows the evolution of gender information availability in the last 10 years. We can see that this peak of interest is also present in our data, with more resources provided with gender information after 2017. Recommendations The social impact of big data and the ethical problems raised by NLP systems have already been discussed by previous work. wilkinson2016fair developed principles for scientific data management and stewardship, the FAIR Data Principles, based on four foundational data characteristics that are Findability, Accessibility, Interoperability and Reusability BIBREF11. In our case, findability and accessibility are taken into account by design, resources on OpenSLR being freely accessible. Interoperability and Reusability of data are however not yet achieved. Another attempt to integrate this discussion about data description within the NLP community has been made by COUILLAULT14.424, who proposed an Ethics and Big Data Charter to help resource creators describe data from a legal and ethical point of view. hovy2016social highlighted the different social implications of NLP systems, such as exclusion, overgeneralisation and exposure problems. More recently, work by bender2018data proposed the notion of data statement to ensure data transparency. The common point of all these studies is that information is key. The FAIR Principles are a baseline to guarantee the reproducibility of scientific findings. We need data to be described exhaustively in order to acknowledge demographic bias that may exist within our corpora.
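The trend over time can be summarised by counting, for each release year, how many corpora come with gender information; the (year, provided) pairs below are hypothetical and only illustrate the bookkeeping behind a figure of this kind.

```python
from collections import Counter

# Hypothetical (release_year, gender_info_provided) pairs for each corpus.
releases = [(2013, False), (2015, False), (2016, True), (2017, True),
            (2018, True), (2018, False), (2019, True), (2019, True)]

by_year = Counter(year for year, _ in releases)
with_info = Counter(year for year, provided in releases if provided)

for year in sorted(by_year):
    share = with_info[year] / by_year[year]
    print(f"{year}: {with_info[year]}/{by_year[year]} corpora with gender info ({share:.0%})")
```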
As pointed out by hovy2016social, language is always situated and so are language resources. This demographic bias in itself will always exist, but by not mentioning it in the data description we might create tools and systems that will have negative impacts on society. The authors presented the notion of exclusion as a demographic misrepresentation leading to the exclusion of certain groups from the use of a technology, due to the fact that this technology fails to take them into account during its development process. This directly relates to our work on ASR performance on women's speech, and we can assume that this can be extended to other speaker characteristics, such as accent or age. To prevent such collateral consequences of NLP systems, bender2018data advocated the use of data statements as a professional and research practice. We hope the present study will encourage researchers and resource creators to describe their data sets exhaustively, following the guidelines proposed by these authors. Recommendations ::: On the Importance of Meta-Data The first take-away of our survey is that obtaining an exhaustive description of the speakers within speech resources is not straightforward. This lack of meta-data is a problem in itself as it prevents guaranteeing the generalisability of systems or linguistic findings based on these corpora, as pointed out by bender2018data. As they rightly highlighted in their paper, the problem is also an ethical one, as we have no way of controlling the existence of representation disparity in data. And this disparity may lead to bias in our systems. We observed that most of the speech resources available contain elicited speech and that, on average, researchers are careful to balance the speakers in terms of gender when crafting data. But this cannot be said about corpora containing non-elicited speech. And apart from Librispeech, we observed a general gender imbalance, which can lead to a performance decrease on female speech BIBREF2. Speech time measurements are not consistent throughout our panel of resources and utterance counts are not reliable. We gathered the size of the corpora as well as the sampling rate in order to estimate the amount of speech time available, but variation in terms of precision, bit-rate, encoding and containers prevents us from reaching reliable results. Yet, speech time information enables us to know the quantity of data available for each category, and this directly impacts the systems. This information is now given in papers such as the one describing the latest version of TEDLIUM, as this information is paramount for speaker adaptation. bender2018data proposed to provide the following information alongside corpus releases: curation rationale, language variety, speaker demographic, annotator demographic, speech situation, text characteristics, recording quality and others. Information we can add to their recommendations relates to the duration of the data sets in hours or minutes, globally and per speaker and/or gender category. This would make it possible to quickly check the gender balance in terms of the quantity of data available for each category, without relying on an unreliable notion of utterance. This descriptive work is of importance for future corpora, but should also be made for the data sets already released, as they are likely to be used again by the community.
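One possible way to operationalise these recommendations is a small machine-readable skeleton accompanying a release; the structure below is a non-normative sketch that follows the fields listed above and adds the per-gender duration we recommend, with placeholder values throughout.

```python
# Non-normative data-statement skeleton; fields follow the categories listed
# above, with per-gender duration added. All values are placeholders.
data_statement = {
    "curation_rationale": "TODO",
    "language_variety": "TODO",          # e.g. a language tag plus prose description
    "speaker_demographic": {"n_speakers": 0, "n_female": 0, "n_male": 0},
    "annotator_demographic": "TODO",
    "speech_situation": "TODO",
    "text_characteristics": "TODO",
    "recording_quality": {"sampling_rate_hz": 16000, "bit_depth": 16},
    "duration_hours": {"total": 0.0, "female": 0.0, "male": 0.0},
}

missing = [key for key, value in data_statement.items() if value == "TODO"]
print("fields still to fill in:", missing)
```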
Recommendations ::: Transparency in Evaluation Word Error Rate (WER) is usually computed as the sum of the errors made on the test data set divided by the total number of words. But while such an evaluation allows for an easy comparison of the systems, it fails to account for their performance variations. In our survey, 13 of the 66 corpora had a paper describing the resources. When the papers reported ASR results, none of them reported gendered evaluation, even if gender information about the data was provided. Reporting results for different categories is the most straightforward way to check for performance bias or overfitting behaviours. Providing data statements is a first step in this direction, but for an open and fair science, the next step should be to also take such information into account in the evaluation process. Recent work in this direction has been done by mitchell2019model, who proposed to describe model performance in model cards, thus encouraging a transparent report of model results. Conclusion In our gender survey of the corpora available on the OpenSLR platform, we observe the following trends: parity is globally achieved on the whole, but interactions with other corpus characteristics reveal that gender misrepresentation needs more than just a number of speakers to be identified. In non-elicited data (meaning the type of speech that would have existed without the creation of the corpus, such as TEDTalks or radio broadcasts), we found that, except in Librispeech where gender balance is controlled, men are more represented than women. It also seems that most of the corpora aimed at developing TTS systems contain mostly female voices, maybe due to the stereotype associating female voices with caring activities. We also observe that gender description of data has been taken into account by the community, with an increased number of corpora provided with gender meta-data in the last two years. As our sample contains only 66 corpora, we acknowledge that our results cannot necessarily be extended to all language resources; however, it allows us to open a discussion about general corpus description practices, pointing out a lack of meta-data, and to actualise the discourse around the social implications of NLP systems. We advocate for a more open science and technology by following guidelines such as the FAIR Data Principles or providing data statements, in order to ensure scientific generalisation and interoperability while preventing social harm. Acknowledgements This work was partially supported by MIAI@Grenoble-Alpes (ANR-19-P3IA-0003). Copyrights The Language Resources and Evaluation Conference (LREC) proceedings are published by the European Language Resources Association (ELRA). They are available online from the conference website. ELRA's policy is to acquire copyright for all LREC contributions. In assigning your copyright, you are not forfeiting your right to use your contribution elsewhere. This you may do without seeking permission and is subject only to normal acknowledgement to the LREC proceedings. The LREC 2020 Proceedings are licensed under CC-BY-NC, the Creative Commons Attribution-Non-Commercial 4.0 International License.
Unanswerable
4059c6f395640a6acf20a0ed451d0ad8681bc59b
4059c6f395640a6acf20a0ed451d0ad8681bc59b_0
Q: How is the delta-softmax calculated? Text: Introduction The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. 
The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. 
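Although the exact formulation of the signaling-strength metric is introduced later in the paper, the masking idea sketched above can be illustrated as follows: the score of a token is the drop in the classifier's probability for the gold relation when that token is masked, so positive scores suggest signals and negative scores suggest `distractors'. The classifier below is a toy stand-in, not the actual trained model.

```python
from typing import Callable, List

def delta_scores(tokens: List[str], relation: str,
                 prob: Callable[[List[str], str], float],
                 mask: str = "<MASK>") -> List[float]:
    # Drop in P(gold relation) when one token is masked: positive values
    # suggest the token supports the relation, negative values work against it.
    base = prob(tokens, relation)
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - prob(masked, relation))
    return scores

# Toy stand-in for a trained relation classifier: 'because' strongly cues cause.
def toy_prob(tokens: List[str], relation: str) -> float:
    return 0.3 + (0.4 if relation == "cause" and "because" in tokens else 0.0)

sentence = "he stayed home because it was raining".split()
for tok, score in zip(sentence, delta_scores(sentence, "cause", toy_prob)):
    print(f"{tok:10s} {score:+.2f}")
```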
We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. Previous Work ::: Data-driven Approaches A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. 
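The association-measure variant of this frequentist approach can be sketched in a few lines, for instance with pointwise mutual information over (lemma, relation) co-occurrence counts; the counts below are invented, chosen so that a discourse marker receives a high PMI while a frequent but unspecific word scores near zero.

```python
import math
from collections import Counter

# Toy (lemma, relation) co-occurrences; in practice these would be extracted
# from discourse units annotated with relation labels.
pairs = ([("because", "cause")] * 30 + [("because", "contrast")] * 2
         + [("however", "contrast")] * 25
         + [("the", "cause")] * 40 + [("the", "contrast")] * 45)

pair_freq = Counter(pairs)
word_freq = Counter(w for w, _ in pairs)
rel_freq = Counter(r for _, r in pairs)
n = len(pairs)

def pmi(word: str, rel: str) -> float:
    p_joint = pair_freq[(word, rel)] / n
    if p_joint == 0:
        return float("-inf")
    return math.log2(p_joint / ((word_freq[word] / n) * (rel_freq[rel] / n)))

for w, r in [("because", "cause"), ("however", "contrast"), ("the", "cause")]:
    print(f"PMI({w}, {r}) = {pmi(w, r):+.2f}")
```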
This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally, we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. Previous Work ::: Discourse Relation Signal Annotations Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations, such as preparation and background, since these are high-level discourse relations that capture and correspond to genre features, such as the interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse-annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However, the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations.
BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. Data ::: Anchored Signals in the GUM Corpus In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. 
The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity; signal type, indicating the linguistic system to which it belongs; and specific signal, which gives the most fine-grained subtypes of signals within each type. It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species.
– joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. Data ::: A Taxonomy of Anchored Signals From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. 
However, several further distinctions may be drawn: (1) whether the signal appears before or after the relation in text order (since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token); (2) whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit (this distinction only matters for satellite or nucleus subtrees that consist of more than one unit); and (3) whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure. Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. Automatic Signal Extraction ::: A Contextless Frequentist Approach To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation.
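As a rough sketch of this frequentist baseline (Python; the function and input format are our own simplification, not the original extraction code), candidate tokens can be collected from the two head EDUs of each relation and ranked by how exclusive they are to that relation relative to their overall frequency, optionally with a minimum frequency cutoff of the kind discussed below to suppress rare, overfitted items:

from collections import Counter, defaultdict

def rank_candidate_signals(relations, min_freq=0):
    """relations: iterable of (relation_label, satellite_head_tokens, nucleus_head_tokens).
    Returns, per relation, candidate tokens ranked by the ratio of their
    frequency in that relation's head EDUs to their overall frequency."""
    rel_counts = defaultdict(Counter)   # token counts per relation (head EDUs only)
    total_counts = Counter()            # token counts over all head EDUs

    for rel, sat_tokens, nuc_tokens in relations:
        for tok in sat_tokens + nuc_tokens:
            tok = tok.lower()
            rel_counts[rel][tok] += 1
            total_counts[tok] += 1

    ranked = {}
    for rel, counts in rel_counts.items():
        scores = [(tok, counts[tok] / total_counts[tok])
                  for tok in counts
                  if total_counts[tok] > min_freq]          # e.g. min_freq=10 to curb overfitting
        ranked[rel] = sorted(scores, key=lambda x: x[1], reverse=True)
    return ranked

# Hypothetical usage: ranked = rank_candidate_signals(corpus_relations, min_freq=10)
# ranked["concession"][:5] then gives the most relation-specific head-EDU tokens.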
We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . 
[Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Task and Model Architecture Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. 
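Before turning to the encoding details, the following sketch (plain Python with hypothetical helper names; the actual classifier is built with FLAIR) illustrates how one relation instance could be serialized as a single token sequence in text order, using the satellite/nucleus and transition separators described in the next paragraph; the nucleus-first arrangement shown here is our assumption, since only a satellite-first pair is shown below:

def encode_edu_pair(sat_tokens, nuc_tokens, nucleus_first):
    """Serialize one relation instance as a tagged token sequence in text order.

    <s> and <n> mark the satellite and nucleus sides, <sep> the transition
    between them; the nucleus-first arrangement is our assumption."""
    if nucleus_first:
        return ["<n>"] + nuc_tokens + ["<sep>"] + sat_tokens + ["<s>"]
    return ["<s>"] + sat_tokens + ["<sep>"] + nuc_tokens + ["<n>"]

# Satellite-first example from the corpus, paired with its relation label:
tokens = encode_edu_pair(
    ["Sometimes", "this", "information", "is", "available", ","],
    ["but", "usually", "not", "."],
    nucleus_first=False,
)
label = "concession"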
Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word $x_1, \ldots , x_T$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation $rel_i$, which is derived from the composed output of the function $h$ across time steps $0 \ldots t$, where $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction, and $\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29.
. <s> Sometimes this information is available , <sep> but usually not . <n> Label: concession
In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Relation Classification Performance Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although).
However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. 
how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and for many similar examples when the model is evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36.
. Original: <s> To provide information ... <sep> ... <n>
Masked1: <s> <X> provide information ... <sep> ... <n>
Masked2: <s> To <X> information ... <sep> ... <n>
Masked3: <s> To provide <X> ... <sep> ... <n>
Label: purpose
We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as ${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$, where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker.
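A minimal sketch of the masking procedure and of the 50/50 shading scheme just described is given below (Python; the classifier is treated as a black box through a hypothetical predict_proba function returning the softmax probability of a given relation for a possibly masked token sequence, and <X> is the mask symbol):

def delta_softmax(tokens, true_relation, predict_proba,
                  separators=("<s>", "<sep>", "<n>")):
    """Signaling strength of each token, estimated by masking.

    predict_proba(tokens, relation) is assumed to return the classifier's
    softmax probability for `relation` given a (possibly masked) sequence."""
    base = predict_proba(tokens, true_relation)            # unmasked probability
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in separators:                              # separators are never masked
            continue
        masked = tokens[:i] + ["<X>"] + tokens[i + 1:]     # mask exactly one token
        scores[i] = base - predict_proba(masked, true_relation)
    return scores                                          # negative values act as 'distractors'

def shade(scores, doc_max):
    """One way to realize the 50/50 shading: half relative to the strongest
    token in the current pair, half relative to the strongest in the document."""
    pair_max = max(scores.values()) if scores else 0.0
    return {i: 0.5 * (s / pair_max if pair_max > 0 else 0.0)
               + 0.5 * (s / doc_max if doc_max > 0 else 0.0)
            for i, s in scores.items()}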
Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. . [RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). 
In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. Evaluation and Error Analysis ::: Evaluation Metric To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. 
For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. 
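The recall@k computation described above can be sketched as follows (Python; input format and names are our own simplification): an EDU pair counts as a hit at rank k if any of its gold signal tokens appears among the k tokens with the highest ${\Delta }_s$ scores.

def recall_at_k(instances, k):
    """instances: iterable of (scores, gold) pairs, where scores maps token
    index -> delta-softmax and gold is the set of token indices annotated
    as signals; pairs without gold signals are skipped."""
    hits = total = 0
    for scores, gold in instances:
        if not gold:
            continue
        total += 1
        top_k = sorted(scores, key=scores.get, reverse=True)[:k]  # k best-scoring positions
        if any(pos in gold for pos in top_k):
            hits += 1
    return hits / total if total else 0.0

# e.g. recall_at_k(test_instances, 1), recall_at_k(test_instances, 3)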
The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. Evaluation and Error Analysis ::: Qualitative Analysis Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135: . [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111. . 
[RGB]195, 195, 195Even [RGB]230, 230, 230so [RGB]230, 230, 230, [RGB]230, 230, 230estimates [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230prevalence [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]230, 230, 230discrimination [RGB]219, 219, 219remains [RGB]230, 230, 230rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ [RGB]111, 111, 111At [RGB]63, 63, 63least [RGB]230, 230, 230one [RGB]230, 230, 230prior [RGB]230, 230, 230study [RGB]230, 230, 230by [RGB]230, 230, 230Kessler [RGB]225, 225, 225and [RGB]230, 230, 230colleagues [RGB]230, 230, 230[ [RGB]230, 230, 23015 [RGB]161, 161, 161] [RGB]200, 200, 200, [RGB]136, 136, 136however [RGB]222, 222, 222, [RGB]228, 228, 228using [RGB]230, 230, 230measures [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]224, 224, 224discrimination [RGB]217, 217, 217in [RGB]230, 230, 230a [RGB]230, 230, 230large [RGB]218, 218, 218American [RGB]230, 230, 230sample [RGB]230, 230, 230, [RGB]230, 230, 230reported [RGB]230, 230, 230that [RGB]230, 230, 230approximately [RGB]230, 230, 23033 [RGB]212, 212, 212% [RGB]230, 230, 230of [RGB]230, 230, 230respondents [RGB]156, 156, 156reported [RGB]169, 169, 169some [RGB]122, 122, 122form [RGB]168, 168, 168of [RGB]230, 230, 230discrimination Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . [RGB]216, 216, 216The [RGB]99, 99, 99agreement [RGB]89, 89, 89was [RGB]230, 230, 230that [RGB]131, 131, 131Gorbachev [RGB]102, 102, 102agreed [RGB]230, 230, 230to [RGB]230, 230, 230a [RGB]230, 230, 230quite [RGB]230, 230, 230remarkable [RGB]125, 125, 125concession [RGB]230, 230, 230: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]64, 64, 64he [RGB]81, 81, 81agreed [RGB]230, 230, 230to [RGB]230, 230, 230let [RGB]220, 220, 220a [RGB]143, 143, 143united [RGB]149, 149, 149Germany [RGB]230, 230, 230join [RGB]83, 83, 83the [RGB]230, 230, 230NATO [RGB]230, 230, 230military [RGB]230, 230, 230alliance [RGB]230, 230, 230. . 
[RGB]230, 230, 230The [RGB]220, 220, 220opening [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230joke [RGB]230, 230, 230— [RGB]230, 230, 230or [RGB]230, 230, 230setup [RGB]230, 230, 230— [RGB]230, 230, 230should [RGB]230, 230, 230have [RGB]230, 230, 230a [RGB]230, 230, 230basis [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230real [RGB]200, 200, 200world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ [RGB]7, 7, 7so [RGB]73, 73, 73your [RGB]230, 230, 230audience [RGB]230, 230, 230can [RGB]230, 230, 230relate [RGB]230, 230, 230to [RGB]230, 230, 230it [RGB]230, 230, 230, In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183. From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but were are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . [RGB]230, 230, 230How [RGB]230, 230, 230do [RGB]230, 230, 230they [RGB]201, 201, 201treat [RGB]167, 167, 167those [RGB]210, 210, 210not [RGB]190, 190, 190like [RGB]230, 230, 230themselves [RGB]100, 100, 100? 
$\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ [RGB]52, 52, 52then [RGB]230, 230, 230they [RGB]230, 230, 230're [RGB]230, 230, 230either [RGB]230, 230, 230over-zealous [RGB]230, 230, 230, [RGB]230, 230, 230ignorant [RGB]230, 230, 230of [RGB]230, 230, 230other [RGB]230, 230, 230people [RGB]230, 230, 230or [RGB]230, 230, 230what [RGB]230, 230, 230to [RGB]230, 230, 230avoid [RGB]230, 230, 230those [RGB]230, 230, 230that [RGB]230, 230, 230contradict [RGB]230, 230, 230their [RGB]230, 230, 230fantasy [RGB]230, 230, 230land [RGB]230, 230, 230that [RGB]220, 220, 220caters [RGB]230, 230, 230to [RGB]230, 230, 230them [RGB]230, 230, 230and [RGB]230, 230, 230them [RGB]230, 230, 230only [RGB]230, 230, 230. . [RGB]230, 230, 230God [RGB]230, 230, 230, [RGB]230, 230, 230I [RGB]230, 230, 230do [RGB]230, 230, 230n't [RGB]230, 230, 230know [RGB]51, 51, 51! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ [RGB]230, 230, 230but [RGB]230, 230, 230nobody [RGB]230, 230, 230will [RGB]230, 230, 230go [RGB]230, 230, 230to [RGB]230, 230, 230fight [RGB]230, 230, 230for [RGB]230, 230, 230noses [RGB]230, 230, 230any [RGB]219, 219, 219more [RGB]169, 169, 169. In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. Evaluation and Error Analysis ::: Performance on Signal Types To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. 
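The scoring procedure just described for Table TABREF45 can be sketched as follows (a simplified reimplementation in Python with our own names, not the original evaluation code): for each EDU pair the model is allowed as many guesses as there are gold signal tokens plus a tolerance of two further tokens, and each gold token counts separately for every signal subtype associated with it.

from collections import Counter

def per_type_recall(instances, tolerance=2):
    """instances: iterable of (scores, gold_tokens), where scores maps token
    index -> delta-softmax and gold_tokens maps token index -> list of signal
    subtypes for that token (one token may carry several subtypes)."""
    found, total = Counter(), Counter()
    for scores, gold_tokens in instances:
        if not gold_tokens:
            continue
        n_guesses = len(gold_tokens) + tolerance            # as many guesses as gold tokens, plus 2
        guessed = set(sorted(scores, key=scores.get, reverse=True)[:n_guesses])
        for pos, subtypes in gold_tokens.items():
            for subtype in subtypes:
                total[subtype] += 1
                if pos in guessed:
                    found[subtype] += 1
    return {t: found[t] / total[t] for t in total}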
Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words is actually noticed, which both belong to the same stem (decline/declining): . [RGB]230, 230, 230The [RGB]230, 230, 230report [RGB]209, 209, 209says [RGB]213, 213, 213the [RGB]172, 172, 172decline [RGB]220, 220, 220in [RGB]228, 228, 228iodine [RGB]230, 230, 230intake [RGB]215, 215, 215appears [RGB]230, 230, 230to [RGB]230, 230, 230be [RGB]230, 230, 230due [RGB]230, 230, 230to [RGB]230, 230, 230changes [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230dairy [RGB]230, 230, 230industry [RGB]230, 230, 230, [RGB]230, 230, 230where [RGB]230, 230, 230chlorine-containing [RGB]230, 230, 230sanitisers [RGB]226, 226, 226have [RGB]230, 230, 230replaced [RGB]230, 230, 230iodine-containing [RGB]230, 230, 230sanitisers [RGB]230, 230, 230. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ [RGB]193, 193, 193Iodine [RGB]230, 230, 230released [RGB]230, 230, 230from [RGB]230, 230, 230these [RGB]230, 230, 230chemicals [RGB]230, 230, 230into [RGB]216, 216, 216milk [RGB]230, 230, 230has [RGB]230, 230, 230been [RGB]230, 230, 230the [RGB]230, 230, 230major [RGB]230, 230, 230source [RGB]230, 230, 230of [RGB]226, 226, 226dietary [RGB]206, 206, 206iodine [RGB]230, 230, 230in [RGB]230, 230, 230Australia [RGB]230, 230, 230for [RGB]230, 230, 230at [RGB]230, 230, 230least [RGB]230, 230, 230four [RGB]230, 230, 230decades [RGB]202, 202, 202, [RGB]153, 153, 153but [RGB]230, 230, 230is [RGB]230, 230, 230now [RGB]63, 63, 63declining [RGB]79, 79, 79. We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44. . 
[RGB]230, 230, 230On [RGB]230, 230, 230a [RGB]230, 230, 230new [RGB]230, 230, 230website [RGB]230, 230, 230, [RGB]230, 230, 230" [RGB]230, 230, 230The [RGB]230, 230, 230Internet [RGB]230, 230, 230Explorer [RGB]230, 230, 2306 [RGB]230, 230, 230Countdown [RGB]230, 230, 230" [RGB]230, 230, 230, [RGB]230, 230, 230Microsoft [RGB]230, 230, 230has [RGB]230, 230, 230launched [RGB]230, 230, 230an [RGB]230, 230, 230aggressive [RGB]230, 230, 230campaign [RGB]230, 230, 230to [RGB]230, 230, 230persuade [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230stop [RGB]171, 171, 171using [RGB]133, 133, 133IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ [RGB]56, 56, 56Its [RGB]197, 197, 197goal [RGB]167, 167, 167is [RGB]230, 230, 230to [RGB]230, 230, 230decrease [RGB]230, 230, 230IE6 [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230less [RGB]230, 230, 230than [RGB]230, 230, 230one [RGB]124, 124, 124percent [RGB]229, 229, 229. Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. . [RGB]230, 230, 230NASA [RGB]230, 230, 230celebrates [RGB]230, 230, 23030th [RGB]230, 230, 230anniversary [RGB]230, 230, 230of [RGB]230, 230, 230first [RGB]230, 230, 230shuttle [RGB]230, 230, 230launch [RGB]230, 230, 230; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ [RGB]11, 11, 11Wednesday [RGB]186, 186, 186, [RGB]115, 115, 115April [RGB]153, 153, 15313 [RGB]219, 219, 219, [RGB]230, 230, 2302011 Discussion This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. 
The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. 
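Collecting the multiple measurements underlying a plot like Figure FIGREF47 amounts to grouping per-instance ${\Delta }_s$ scores by word form; a minimal sketch (Python, assuming per-pair score dictionaries of the kind produced by the masking procedure above) is:

from collections import defaultdict

def scores_by_type(instances):
    """Group per-instance delta-softmax scores by word form (case-sensitive,
    so that e.g. 'if' and 'If' are kept apart).

    instances: iterable of (tokens, scores), with scores mapping token index
    -> delta-softmax for that instance."""
    by_type = defaultdict(list)
    for tokens, scores in instances:
        for i, s in scores.items():
            by_type[tokens[i]].append(s)
    return by_type

# by_type["but"] then holds the full distribution of signaling strengths for 'but',
# including any negative (distractor) values; quartiles can be compared across word types.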
As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. 
This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset.
Answer with content missing: (Formula) Formula is the answer.
99d7bef0ef395360b939a3f446eff67239551a9d
99d7bef0ef395360b939a3f446eff67239551a9d_0
Q: Are some models evaluated using this metric, what are the findings? Text: Introduction The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. 
The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structure Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various downstream NLP tasks, such as question answering and document summarization. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, outperforming state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. 
We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. Previous Work ::: Data-driven Approaches A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. 
This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally, we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. Previous Work ::: Discourse Relation Signal Annotations Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most signals are anchorable, since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that several relations, such as preparation and background, are often signaled but unanchored: these are high-level discourse relations that capture genre features, such as the question-answer layout of interviews, and are thus rarely anchored to tokens. The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However, the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. 
BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. Data ::: Anchored Signals in the GUM Corpus In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. 
The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: (1) signal class, denoting the signal's degree of complexity; (2) signal type, indicating the linguistic system to which it belongs; and (3) specific signal, which gives the most fine-grained subtypes of signals within each type. It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic, and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. 
– joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. Data ::: A Taxonomy of Anchored Signals From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. 
However, several further distinctions may be drawn: (1) whether the signal appears before or after the relation in text order (since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token); (2) whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit (this distinction only matters for satellite or nucleus subtrees that consist of more than one unit); and (3) whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure. Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. Automatic Signal Extraction ::: A Contextless Frequentist Approach To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. 
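Since the Strong Nuclearity assumption is applied throughout the rest of the paper, a minimal sketch of the head-EDU reduction may be useful. The code below is our own illustration under assumed data structures (the Node fields are invented and do not reflect the format of the GUM data): it follows nucleus children down to a leaf to find the head EDU of any subtree, and pairs the head of each satellite with the head of its nucleus.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """Simplified RST node: a leaf EDU or an internal span with children (assumed format)."""
    edu_id: Optional[int] = None           # set for leaf EDUs only
    relation: Optional[str] = None         # relation of this node to its nucleus/parent
    nuclearity: str = "nucleus"            # "nucleus" or "satellite"
    children: List["Node"] = field(default_factory=list)

def head_edu(node: Node) -> int:
    """Follow nucleus children down to a leaf (Marcu's Compositionality Criterion)."""
    if not node.children:
        return node.edu_id
    nuclei = [c for c in node.children if c.nuclearity == "nucleus"]
    return head_edu(nuclei[0])             # leftmost nucleus heads multinuclear spans

def satellite_head_pairs(node: Node):
    """Yield (satellite_head, nucleus_head, relation) for every satellite attachment."""
    for child in node.children:
        if child.nuclearity == "satellite" and child.relation:
            yield head_edu(child), head_edu(node), child.relation
        yield from satellite_head_pairs(child)
```

Multinuclear relations such as joint would need analogous handling (pairing the heads of sister nuclei); the sketch only covers satellite-nucleus attachments.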
We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . 
[Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Task and Model Architecture Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. 
Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation:

$$P(rel_i \mid x_1, \ldots , x_T) = \operatorname{softmax}_i \left( W \, [h^{f}_{t} ; h^{b}_{t}] + b \right), \qquad h^{\delta }_{t} = \mathrm {LSTM}^{\delta }\!\left(x_t, h^{\delta }_{t \mp 1}, c^{\delta }_{t \mp 1}\right)$$

where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = \lbrace W,b\rbrace $ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . <s> Sometimes this information is available , <sep> but usually not . <n> Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Relation Classification Performance Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). 
However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . (most strongly shaded token: provide; the initial To is left almost unshaded) Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. 
how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights, because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36.

. Original: <s> To provide information ... <sep> ... <n>
Masked1: <s> <X> provide information ... <sep> ... <n>
Masked2: <s> To <X> information ... <sep> ... <n>
Masked3: <s> To provide <X> ... <sep> ... <n>
Label: purpose

We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use the reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as:

$${\Delta }_s(t_i) = P(rel \mid X_{mask=\phi }) - P(rel \mid X_{mask=i})$$

where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. 
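Before turning to visualized examples, the masking procedure itself can be summarized in a few lines. The sketch below is a minimal illustration under our own assumptions, not the paper's code: prob_of stands for a hypothetical interface returning the classifier's softmax probability of a given relation, and clipping negative scores in the shading helper is our assumption rather than something the text specifies.

```python
from typing import Callable, List

MASK = "<X>"
SEPARATORS = {"<s>", "<sep>", "<n>"}

def delta_s_scores(tokens: List[str], rel: str,
                   prob_of: Callable[[List[str], str], float]) -> List[float]:
    """Delta-softmax: drop in P(correct relation) when each token is masked in turn.
    `prob_of(tokens, rel)` is a hypothetical interface assumed to return the
    classifier's softmax probability of relation `rel`; separators are never masked."""
    base = prob_of(tokens, rel)                      # P(rel | X_mask = empty set)
    scores = []
    for i, tok in enumerate(tokens):
        if tok in SEPARATORS:
            scores.append(0.0)
            continue
        masked = tokens[:i] + [MASK] + tokens[i + 1:]
        scores.append(base - prob_of(masked, rel))   # P(rel | X) - P(rel | X_mask = i)
    return scores

def shading(scores: List[float], doc_max: float) -> List[float]:
    """Blend pair-relative and document-relative importance into a 0..1 shade.
    Negative (distractor) scores are clipped to zero for display (our assumption)."""
    pair_max = max(max(scores), 1e-9)
    doc_max = max(doc_max, 1e-9)
    return [0.5 * max(s, 0.0) / pair_max + 0.5 * max(s, 0.0) / doc_max for s in scores]

# Toy usage with a dummy probability function that pretends "To" carries the relation:
dummy = lambda toks, rel: 0.8 - 0.6 * ("To" not in toks)
toks = ["<s>", "To", "provide", "information", "<sep>", "variables", "are", "included", "<n>"]
print(delta_s_scores(toks, "purpose", dummy))
```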
Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . To provide information on the analytical sample as a whole , $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ two additional demographic variables are included . (most strongly shaded: To, followed by are and provide; information is much lighter) . Telling good jokes is an art that comes naturally to some people , $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ but for others it takes practice and hard work . (most strongly shaded by far: but; the comma after people and hard are moderately shaded) . It is possible that these two children understood the task and really did believe that the puppet did not produce any poor descriptions , and in this regard , are not yet adult-like in their SI interpretations . $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ This is unlikely (only unlikely is strongly shaded) The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). 
In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. Evaluation and Error Analysis ::: Evaluation Metric To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. 
For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. 
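A sketch of how recall@k can be computed in this setting is given below: for each EDU pair containing at least one gold signal token, tokens are ranked by their ${\Delta }_s$ score and a hit is counted if any gold signal token appears among the top k ranks. The parallel-list data layout is an assumption made for illustration.

```python
from typing import List, Sequence

def recall_at_k(
    token_scores: List[Sequence[float]],   # delta_s per token, one list per EDU pair
    gold_signal_idx: List[Sequence[int]],  # indices of gold signal tokens per EDU pair
    k: int,
) -> float:
    """Fraction of EDU pairs for which at least one gold signal token
    is ranked among the k highest-scoring tokens."""
    hits, total = 0, 0
    for scores, gold in zip(token_scores, gold_signal_idx):
        if not gold:
            continue  # pairs without relevant signals are skipped
        total += 1
        top_k = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        if any(i in top_k for i in gold):
            hits += 1
    return hits / total if total else 0.0

# Example: one pair where the top-scoring token is a gold signal,
# and one where the gold signal is only ranked second.
scores = [[0.05, 0.66, 0.01], [0.30, 0.10, 0.02]]
gold = [[1], [1]]
print(recall_at_k(scores, gold, 1))  # 0.5
print(recall_at_k(scores, gold, 2))  # 1.0
```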
The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses.

Evaluation and Error Analysis ::: Qualitative Analysis Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluative terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact still correlates with human judgments even when the correct relation is only the second or third best class in the system's ranking.

. For the present analysis, these responses were recoded into nine mutually exclusive categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ capturing the following options: (highest scoring tokens: the gerund capturing, followed by the colon)

. Professor Eastman said he is alarmed by what they found. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ "Pregnant women in Australia are getting about half as much as what they require on a daily basis. (highest scoring token: alarmed)
. Even so, estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [ 15 ], however, using measures of perceived discrimination in a large American sample, reported that approximately 33 % of respondents reported some form of discrimination (highest scoring tokens: least and At, followed by form, however and some)

Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators.

. The agreement was that Gorbachev agreed to a quite remarkable concession: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance. (highest scoring tokens: he, agreed, was and agreement, while remarkable receives almost no weight)
. The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it, (highest scoring tokens: so, followed by your)

In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead:

. Which previous Virginia Governor(s) do you most admire and why? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson. (highest scoring token: the question mark)

From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence-final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence-final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are noted alongside each example).

. How do they treat those not like themselves? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they're either over-zealous, ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only. (highest scoring tokens: then in the second unit and the question mark in the first; the distractors here are How, do and only)

. God, I don't know! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more. (highest scoring token: the exclamation mark; the DM but acts as a distractor)

In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness of the model, namely its inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another.

Evaluation and Error Analysis ::: Performance on Signal Types To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type.
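The following sketch shows one way this per-type tolerance-based score can be computed; the data layout, in which each EDU pair maps gold signal token indices to their annotated subtypes, is an assumption made for illustration.

```python
from collections import defaultdict
from typing import Dict, List, Sequence, Tuple

def per_type_recall(
    pairs: List[Tuple[Sequence[float], Dict[int, List[str]]]],
    tolerance: int = 2,
) -> Dict[str, float]:
    """For each signal subtype, the proportion of gold signal tokens that fall
    within the top (n_gold_tokens + tolerance) tokens ranked by delta_s.

    Each pair is (token_scores, gold), where gold maps a token index to the
    list of signal subtypes annotated on that token; a token carrying several
    types is counted once per type.
    """
    found, total = defaultdict(int), defaultdict(int)
    for scores, gold in pairs:
        if not gold:
            continue
        k = len(gold) + tolerance
        top = set(sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k])
        for idx, types in gold.items():
            for t in types:
                total[t] += 1
                if idx in top:
                    found[t] += 1
    return {t: found[t] / total[t] for t in total}

# Toy example: the DM at index 0 is found, the pronoun signal at index 2 is not.
pairs = [([0.7, 0.02, 0.01, 0.3, 0.25, 0.28], {0: ["dm"], 2: ["personal_reference"]})]
print(per_type_recall(pairs))  # {'dm': 1.0, 'personal_reference': 0.0}
```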
Three of the four categories for which the model performs best are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only `relational' category, i.e. one requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of the two items annotated as synonymous, and rests on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words are actually noticed, both belonging to the same stem (decline/declining):

. The report says the decline in iodine intake appears to be due to changes in the dairy industry, where chlorine-containing sanitisers have replaced iodine-containing sanitisers. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades, but is now declining. (highest scoring tokens: declining and the adjacent final period, followed by but and decline)

We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. the past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference), are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.
. On a new website, "The Internet Explorer 6 Countdown", Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent. (highest scoring tokens: Its, followed by percent, IE6 and the copula is)

Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the relatively high score of the following is), is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date.

. NASA celebrates 30th anniversary of first shuttle launch; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday, April 13, 2011 (highest scoring tokens: Wednesday, followed by April and 13; the commas and 2011 score low)

Discussion This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for context-dependent and gradient discourse signal detection in individual texts.
The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. 
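The distributions summarized in Figure FIGREF47 can be obtained by simply pooling the individual ${\Delta }_s$ measurements of every instance of a given token string across the corpus and computing quartiles per type; a minimal sketch with an assumed data layout is given below.

```python
from collections import defaultdict
from statistics import quantiles
from typing import Dict, Iterable, List, Tuple

def pool_scores(measurements: Iterable[Tuple[str, float]]) -> Dict[str, List[float]]:
    """Group individual delta_s measurements by token string."""
    pooled = defaultdict(list)
    for token, delta in measurements:
        pooled[token].append(delta)
    return pooled

def summarize(pooled: Dict[str, List[float]]) -> None:
    """Print interquartile summaries per token type (boxplot-style statistics)."""
    for token, vals in pooled.items():
        if len(vals) < 4:
            continue  # quartiles need a handful of observations
        q1, q2, q3 = quantiles(vals, n=4)
        print(f"{token}\tn={len(vals)}\tQ1={q1:+.3f}\tmedian={q2:+.3f}\tQ3={q3:+.3f}")

# Toy data: 'and' is usually weak, 'but' is usually a stronger signal.
data = [("and", d) for d in (0.01, 0.00, -0.02, 0.02, 0.24)] + \
       [("but", d) for d in (0.31, 0.45, 0.12, -0.05, 0.66)]
summarize(pool_scores(data))
```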
As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their box plots indicates that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. "they wanted to see if..."). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal "a chance to go", which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to their overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals, which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, is still unavailable to the classifier – if it were being used, then ${\Delta }_s$ would reflect the effects of its removal, but this is largely not the case.
This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset.
Yes
a1097ce59270d6f521d92df8d2e3a279abee3e67
a1097ce59270d6f521d92df8d2e3a279abee3e67_0
Q: Where does the proposed metric differ from human judgement? Text: Introduction The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices.
The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structural Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. 
We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. Previous Work ::: Data-driven Approaches A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. 
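A minimal sketch of this family of type-level association measures is given below: lexical types are cross-tabulated with the relations of the EDUs in which they occur and PMI is computed, with a frequency threshold available to suppress rare, overfitted items. The input format is an assumption made for illustration.

```python
import math
from collections import Counter
from typing import Dict, List, Tuple

def pmi_by_relation(
    edus: List[Tuple[List[str], str]],  # (tokens of an EDU, relation label)
    min_count: int = 10,
) -> Dict[Tuple[str, str], float]:
    """Pointwise mutual information between lexical types and relation labels,
    ignoring types below a frequency threshold to reduce overfitting."""
    word_freq, rel_freq, pair_freq = Counter(), Counter(), Counter()
    n = 0
    for tokens, rel in edus:
        for tok in tokens:
            tok = tok.lower()
            word_freq[tok] += 1
            rel_freq[rel] += 1     # relation mass counted per token occurrence
            pair_freq[(tok, rel)] += 1
            n += 1
    pmi = {}
    for (tok, rel), f in pair_freq.items():
        if word_freq[tok] < min_count:
            continue
        p_joint = f / n
        p_word = word_freq[tok] / n
        p_rel = rel_freq[rel] / n
        pmi[(tok, rel)] = math.log2(p_joint / (p_word * p_rel))
    return pmi

# Usage (corpus_edus is a hypothetical list of tokenized EDUs with labels):
# scores = pmi_by_relation(corpus_edus)
# top = sorted((k for k in scores if k[1] == "concession"),
#              key=lambda k: scores[k], reverse=True)[:20]
```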
This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. Previous Work ::: Discourse Relation Signal Annotations Discourse relation signals are broadly classified into two categorizes: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. 
BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. Data ::: Anchored Signals in the GUM Corpus In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. 
The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity signal type, indicating the linguistic system to which it belongs specific signal, which gives the most fine-grained subtypes of signals within each type It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. 
– joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. Data ::: A Taxonomy of Anchored Signals From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. 
However, several further distinctions may be drawn: Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. Automatic Signal Extraction ::: A Contextless Frequentist Approach To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. 
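One simple way of operationalizing this reduction is sketched below: given a toy constituent representation of an RST subtree, the head EDU of each block is found by following nucleus children down to a leaf, and the resulting head pair is the only region searched for signal candidates. The data structure, the generic EDU ids and the handling of multinuclear nodes are simplifying assumptions, not the corpus's actual data model.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    """Toy RST constituent: either a leaf EDU or a span with children."""
    edu_id: Optional[int] = None          # set for leaves only
    children: List["Node"] = field(default_factory=list)
    nuclearity: str = "N"                 # "N"ucleus or "S"atellite w.r.t. parent

def head_edu(node: Node) -> int:
    """Follow nucleus children down to a leaf (Strong Nuclearity / CCDT)."""
    if node.edu_id is not None:
        return node.edu_id
    nuclei = [c for c in node.children if c.nuclearity == "N"]
    # In multinuclear relations we arbitrarily take the leftmost nucleus.
    return head_edu(nuclei[0] if nuclei else node.children[0])

def candidate_signal_units(satellite: Node, nucleus: Node) -> Tuple[int, int]:
    """Head EDU pair in which endocentric signals are searched for."""
    return head_edu(satellite), head_edu(nucleus)

# Toy relation: a single-EDU satellite [1] pointing to a nucleus block [2-4]
# whose head is its first (nucleus) child [2].
nucleus = Node(children=[Node(edu_id=2, nuclearity="N"),
                         Node(edu_id=3, nuclearity="S"),
                         Node(edu_id=4, nuclearity="S")])
satellite = Node(edu_id=1, nuclearity="S")
print(candidate_signal_units(satellite, nucleus))  # (1, 2)
```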
We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . 
[Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Task and Model Architecture Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. 
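Since Figure FIGREF30 is not reproduced here, the following PyTorch sketch serves as a rough stand-in for the FLAIR configuration just described (whose exact API varies across versions): pre-computed, concatenated token embeddings feed a bidirectional LSTM whose final states are projected to a softmax over relation labels. The class name, dimensions and hyperparameters are illustrative assumptions, not the system's actual settings.

```python
# Schematic stand-in for the FLAIR sentence classifier: concatenated, pre-computed
# word vectors (GloVe + FLAIR + AllenNLP character embeddings in the real system)
# feed a biLSTM, whose final states are projected to a softmax over relations.
# All dimensions are placeholders.
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, embedding_dim=300 + 2 * 2048 + 1024, hidden=256, n_relations=20):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_relations)

    def forward(self, token_vectors):            # (batch, seq_len, embedding_dim)
        states, _ = self.lstm(token_vectors)
        # concatenate the last forward state and the first backward state
        final = torch.cat([states[:, -1, :states.size(2) // 2],
                           states[:, 0, states.size(2) // 2:]], dim=-1)
        return torch.softmax(self.out(final), dim=-1)

# One dummy EDU pair of 12 tokens with random "embeddings":
probs = RelationClassifier()(torch.randn(1, 12, 300 + 2 * 2048 + 1024))
print(probs.shape)  # torch.Size([1, 20]) – one probability per relation label
```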
Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1, \ldots , x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation, where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = {W,b}$ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . <s> Sometimes this information is available , <sep> but usually not . <n> Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels.
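As a concrete illustration of this input format, the sketch below renders a head EDU pair in text order with the separator symbols described above; the field names and the rendering of nucleus-first pairs are our own assumptions.

```python
# Render one head-EDU pair as a classifier input string with separator symbols.
# The satellite-first format mirrors the example above; the nucleus-first
# variant and the field names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RelationInstance:
    satellite: str         # text of the satellite block's head EDU
    nucleus: str           # text of the nucleus block's head EDU
    satellite_first: bool  # True if the satellite precedes the nucleus in the text
    relation: str          # gold RST relation label used as distant supervision

def render(inst: RelationInstance) -> str:
    if inst.satellite_first:
        return f"<s> {inst.satellite} <sep> {inst.nucleus} <n>"
    return f"<n> {inst.nucleus} <sep> {inst.satellite} <s>"

pair = RelationInstance("Sometimes this information is available ,",
                        "but usually not .", True, "concession")
print(render(pair))  # <s> Sometimes this information is available , <sep> but usually not . <n>
```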
However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e.
how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples when the model is evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. .
Original: <s> To provide information ... <sep> ... <n>
Masked1: <s> <X> provide information ... <sep> ... <n>
Masked2: <s> To <X> information ... <sep> ... <n>
Masked3: <s> To provide <X> ... <sep> ... <n>
Label: purpose
We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as:
$${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$$
where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker.
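The masking procedure is simple to sketch. The snippet below assumes a hypothetical predict_proba(tokens) wrapper that returns the trained classifier's softmax distribution over relation labels for a rendered EDU pair, and computes ${\Delta }_s$ for each maskable position as the drop in the correct relation's probability.

```python
# Delta-softmax: reduction in the correct relation's softmax probability when a
# single token is masked. predict_proba() is a hypothetical wrapper around the
# trained classifier; separator symbols are never masked.
SEPARATORS = {"<s>", "<sep>", "<n>"}

def delta_softmax(tokens, gold_relation, predict_proba, mask_token="<X>"):
    base = predict_proba(tokens)[gold_relation]           # X with mask = empty set
    scores = {}
    for i, tok in enumerate(tokens):
        if tok in SEPARATORS:
            continue
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores[i] = base - predict_proba(masked)[gold_relation]
    return scores  # positive values behave like signals, negative ones like distractors
```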
Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. . [RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). 
In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. Evaluation and Error Analysis ::: Evaluation Metric To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. 
For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. 
The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. Evaluation and Error Analysis ::: Qualitative Analysis Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135: . [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111. . 
[RGB]195, 195, 195Even [RGB]230, 230, 230so [RGB]230, 230, 230, [RGB]230, 230, 230estimates [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230prevalence [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]230, 230, 230discrimination [RGB]219, 219, 219remains [RGB]230, 230, 230rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ [RGB]111, 111, 111At [RGB]63, 63, 63least [RGB]230, 230, 230one [RGB]230, 230, 230prior [RGB]230, 230, 230study [RGB]230, 230, 230by [RGB]230, 230, 230Kessler [RGB]225, 225, 225and [RGB]230, 230, 230colleagues [RGB]230, 230, 230[ [RGB]230, 230, 23015 [RGB]161, 161, 161] [RGB]200, 200, 200, [RGB]136, 136, 136however [RGB]222, 222, 222, [RGB]228, 228, 228using [RGB]230, 230, 230measures [RGB]230, 230, 230of [RGB]230, 230, 230perceived [RGB]224, 224, 224discrimination [RGB]217, 217, 217in [RGB]230, 230, 230a [RGB]230, 230, 230large [RGB]218, 218, 218American [RGB]230, 230, 230sample [RGB]230, 230, 230, [RGB]230, 230, 230reported [RGB]230, 230, 230that [RGB]230, 230, 230approximately [RGB]230, 230, 23033 [RGB]212, 212, 212% [RGB]230, 230, 230of [RGB]230, 230, 230respondents [RGB]156, 156, 156reported [RGB]169, 169, 169some [RGB]122, 122, 122form [RGB]168, 168, 168of [RGB]230, 230, 230discrimination Unsurprisingly, the model sometimes make sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . [RGB]216, 216, 216The [RGB]99, 99, 99agreement [RGB]89, 89, 89was [RGB]230, 230, 230that [RGB]131, 131, 131Gorbachev [RGB]102, 102, 102agreed [RGB]230, 230, 230to [RGB]230, 230, 230a [RGB]230, 230, 230quite [RGB]230, 230, 230remarkable [RGB]125, 125, 125concession [RGB]230, 230, 230: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]64, 64, 64he [RGB]81, 81, 81agreed [RGB]230, 230, 230to [RGB]230, 230, 230let [RGB]220, 220, 220a [RGB]143, 143, 143united [RGB]149, 149, 149Germany [RGB]230, 230, 230join [RGB]83, 83, 83the [RGB]230, 230, 230NATO [RGB]230, 230, 230military [RGB]230, 230, 230alliance [RGB]230, 230, 230. . 
[RGB]230, 230, 230The [RGB]220, 220, 220opening [RGB]230, 230, 230of [RGB]230, 230, 230the [RGB]230, 230, 230joke [RGB]230, 230, 230— [RGB]230, 230, 230or [RGB]230, 230, 230setup [RGB]230, 230, 230— [RGB]230, 230, 230should [RGB]230, 230, 230have [RGB]230, 230, 230a [RGB]230, 230, 230basis [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230real [RGB]200, 200, 200world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ [RGB]7, 7, 7so [RGB]73, 73, 73your [RGB]230, 230, 230audience [RGB]230, 230, 230can [RGB]230, 230, 230relate [RGB]230, 230, 230to [RGB]230, 230, 230it [RGB]230, 230, 230, In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . [RGB]230, 230, 230Which [RGB]230, 230, 230previous [RGB]230, 230, 230Virginia [RGB]230, 230, 230Governor(s) [RGB]230, 230, 230do [RGB]230, 230, 230you [RGB]230, 230, 230most [RGB]230, 230, 230admire [RGB]230, 230, 230and [RGB]230, 230, 230why [RGB]12, 12, 12? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ [RGB]230, 230, 230Thomas [RGB]230, 230, 230Jefferson [RGB]183, 183, 183. From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but were are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . [RGB]230, 230, 230How [RGB]230, 230, 230do [RGB]230, 230, 230they [RGB]201, 201, 201treat [RGB]167, 167, 167those [RGB]210, 210, 210not [RGB]190, 190, 190like [RGB]230, 230, 230themselves [RGB]100, 100, 100? 
$\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ [RGB]52, 52, 52then [RGB]230, 230, 230they [RGB]230, 230, 230're [RGB]230, 230, 230either [RGB]230, 230, 230over-zealous [RGB]230, 230, 230, [RGB]230, 230, 230ignorant [RGB]230, 230, 230of [RGB]230, 230, 230other [RGB]230, 230, 230people [RGB]230, 230, 230or [RGB]230, 230, 230what [RGB]230, 230, 230to [RGB]230, 230, 230avoid [RGB]230, 230, 230those [RGB]230, 230, 230that [RGB]230, 230, 230contradict [RGB]230, 230, 230their [RGB]230, 230, 230fantasy [RGB]230, 230, 230land [RGB]230, 230, 230that [RGB]220, 220, 220caters [RGB]230, 230, 230to [RGB]230, 230, 230them [RGB]230, 230, 230and [RGB]230, 230, 230them [RGB]230, 230, 230only [RGB]230, 230, 230. . [RGB]230, 230, 230God [RGB]230, 230, 230, [RGB]230, 230, 230I [RGB]230, 230, 230do [RGB]230, 230, 230n't [RGB]230, 230, 230know [RGB]51, 51, 51! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ [RGB]230, 230, 230but [RGB]230, 230, 230nobody [RGB]230, 230, 230will [RGB]230, 230, 230go [RGB]230, 230, 230to [RGB]230, 230, 230fight [RGB]230, 230, 230for [RGB]230, 230, 230noses [RGB]230, 230, 230any [RGB]219, 219, 219more [RGB]169, 169, 169. In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. Evaluation and Error Analysis ::: Performance on Signal Types To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type. 
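The scoring rule just described can be made explicit as follows; this is a hedged reconstruction of the procedure behind Table TABREF45 (as many guesses as there are gold signal tokens in the pair, plus a tolerance of two), and the input data structures are our own assumptions.

```python
# Per-subtype hit rate with tolerance: for each EDU pair the model may 'guess'
# as many tokens as there are gold signal tokens, plus two extra (cf. recall@3).
# `instances` holds (delta_scores, gold) pairs, where delta_scores maps token
# index -> Delta_s and gold maps token index -> set of annotated signal subtypes.
from collections import Counter

def subtype_hit_rates(instances):
    hits, totals = Counter(), Counter()
    for delta_scores, gold in instances:
        k = len(gold) + 2
        guesses = set(sorted(delta_scores, key=delta_scores.get, reverse=True)[:k])
        for idx, subtypes in gold.items():
            for subtype in subtypes:
                totals[subtype] += 1
                hits[subtype] += idx in guesses
    return {t: hits[t] / totals[t] for t in totals}
```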
Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words is actually noticed, which both belong to the same stem (decline/declining): . [RGB]230, 230, 230The [RGB]230, 230, 230report [RGB]209, 209, 209says [RGB]213, 213, 213the [RGB]172, 172, 172decline [RGB]220, 220, 220in [RGB]228, 228, 228iodine [RGB]230, 230, 230intake [RGB]215, 215, 215appears [RGB]230, 230, 230to [RGB]230, 230, 230be [RGB]230, 230, 230due [RGB]230, 230, 230to [RGB]230, 230, 230changes [RGB]230, 230, 230in [RGB]230, 230, 230the [RGB]230, 230, 230dairy [RGB]230, 230, 230industry [RGB]230, 230, 230, [RGB]230, 230, 230where [RGB]230, 230, 230chlorine-containing [RGB]230, 230, 230sanitisers [RGB]226, 226, 226have [RGB]230, 230, 230replaced [RGB]230, 230, 230iodine-containing [RGB]230, 230, 230sanitisers [RGB]230, 230, 230. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ [RGB]193, 193, 193Iodine [RGB]230, 230, 230released [RGB]230, 230, 230from [RGB]230, 230, 230these [RGB]230, 230, 230chemicals [RGB]230, 230, 230into [RGB]216, 216, 216milk [RGB]230, 230, 230has [RGB]230, 230, 230been [RGB]230, 230, 230the [RGB]230, 230, 230major [RGB]230, 230, 230source [RGB]230, 230, 230of [RGB]226, 226, 226dietary [RGB]206, 206, 206iodine [RGB]230, 230, 230in [RGB]230, 230, 230Australia [RGB]230, 230, 230for [RGB]230, 230, 230at [RGB]230, 230, 230least [RGB]230, 230, 230four [RGB]230, 230, 230decades [RGB]202, 202, 202, [RGB]153, 153, 153but [RGB]230, 230, 230is [RGB]230, 230, 230now [RGB]63, 63, 63declining [RGB]79, 79, 79. We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44. . 
[RGB]230, 230, 230On [RGB]230, 230, 230a [RGB]230, 230, 230new [RGB]230, 230, 230website [RGB]230, 230, 230, [RGB]230, 230, 230" [RGB]230, 230, 230The [RGB]230, 230, 230Internet [RGB]230, 230, 230Explorer [RGB]230, 230, 2306 [RGB]230, 230, 230Countdown [RGB]230, 230, 230" [RGB]230, 230, 230, [RGB]230, 230, 230Microsoft [RGB]230, 230, 230has [RGB]230, 230, 230launched [RGB]230, 230, 230an [RGB]230, 230, 230aggressive [RGB]230, 230, 230campaign [RGB]230, 230, 230to [RGB]230, 230, 230persuade [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230stop [RGB]171, 171, 171using [RGB]133, 133, 133IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ [RGB]56, 56, 56Its [RGB]197, 197, 197goal [RGB]167, 167, 167is [RGB]230, 230, 230to [RGB]230, 230, 230decrease [RGB]230, 230, 230IE6 [RGB]230, 230, 230users [RGB]230, 230, 230to [RGB]230, 230, 230less [RGB]230, 230, 230than [RGB]230, 230, 230one [RGB]124, 124, 124percent [RGB]229, 229, 229. Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. . [RGB]230, 230, 230NASA [RGB]230, 230, 230celebrates [RGB]230, 230, 23030th [RGB]230, 230, 230anniversary [RGB]230, 230, 230of [RGB]230, 230, 230first [RGB]230, 230, 230shuttle [RGB]230, 230, 230launch [RGB]230, 230, 230; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ [RGB]11, 11, 11Wednesday [RGB]186, 186, 186, [RGB]115, 115, 115April [RGB]153, 153, 15313 [RGB]219, 219, 219, [RGB]230, 230, 2302011 Discussion This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts. 
The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. 
As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their boxplots indicates that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. "they wanted to see if..."). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal "a chance to go", which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to their overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, is still unavailable to the classifier – if such information were being used, then ${\Delta }_s$ would reflect the effects of its removal, but this is largely not the case.
This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset.
model points out plausible signals which were passed over by an annotator, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action
56e58bdf0df76ad1599021801f6d4c7b77953e29
56e58bdf0df76ad1599021801f6d4c7b77953e29_0
Q: Where does the proposed metric overlap with human judgement? Text: Introduction The development of formal frameworks for the analysis of discourse relations has long gone hand in hand with work on signaling devices. The analysis of discourse relations is also closely tied to what a discourse structure should look like and what discourse goals should be fulfilled in relation to the interpretation of discourse relations BIBREF0. Earlier work on the establishment of inventories of discourse relations and their formalization (BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5, BIBREF6) relied on the existence of `discourse markers' (DMs) or `connectives', including conjunctions such as because or if, adverbials such as however or as a result, and coordinations such as but, to identify and distinguish relations such as condition in SECREF1, concession in SECREF1, cause in SECREF1, or contrast, result etc., depending on the postulated inventory of relations (signals for these relations as identified by human analysts are given in bold; examples come from the GUM corpus BIBREF7, presented in Section SECREF3). . [If you work for a company,]$_{\textsc {condition}}$ [they pay you that money.] . [Albeit limited,]$_{\textsc {concession}}$ [these results provide valuable insight into SI interpretation by Chitonga-speaking children.] . [not all would have been interviewed at Wave 3] [due to differential patterns of temporary attrition]$_{\textsc {cause}}$ The same reasoning of identifying relations based on overt signals has been applied to the comparison of discourse relations across languages, by comparing inventories of similar function words cross-linguistically (BIBREF8, BIBREF9); and the annotation guidelines of prominent contemporary corpora rely on such markers as well: for instance, the Penn Discourse Treebank (see BIBREF10) explicitly refers to either the presence of DMs or the possibility of their insertion in cases of implicit discourse relations, and DM analysis in Rhetorical Structure Theory BIBREF11 has also shown the important role of DMs as signals of discourse relations at all hierarchical levels of discourse analysis BIBREF12. At the same time, research over the past two decades analyzing the full range of possible cues that humans use to identify the presence of discourse relations has suggested that classic DMs such as conjunctions and adverbials are only a part of the network of signals that writers or speakers can harness for discourse structuring, which also includes entity-based cohesion devices (e.g. certain uses of anaphora, see BIBREF13), alternative lexicalizations using content words, as well as syntactic constructions (see BIBREF14 and the addition of alternative lexicalization constructions, AltLexC, in the latest version of PDTB, BIBREF15). In previous work, two main approaches to extracting the inventory of discourse signal types in an open-ended framework can be identified: data-driven approaches, which attempt to extract relevant words from distributional properties of the data, using frequencies or association measures capturing their co-occurrences with certain relation types (e.g. BIBREF16, BIBREF17); and manual annotation efforts (e.g. BIBREF10, BIBREF18), which develop categorization schemes and guidelines for human evaluation of signaling devices. 
The former family of methods benefits from an unbiased openness to any and every type of word which may reliably co-occur with some relation types, whether or not a human might notice it while annotating, as well as the naturally graded and comparable nature of the resulting quantitative scores, but, as we will show, falls short in identifying specific cases of a word being a signal (or not) in context. By contrast, the latter approach allows for the identification of individual instances of signaling devices, but relies on less open-ended guidelines and is categorical in nature: a word either is or isn't a signal in context, providing less access to concepts such as signaling strength. The goal of this paper is to develop and evaluate a model of discourse signal identification that is built bottom up from the data, but retains sensitivity to context in the evaluation of each individual example. In addition, even though this work is conducted within Rhetorical Structure Theory, we hope that it can shed light on signal identification of discourse relations across genres and provide empirical evidence to motivate research on theory-neutral and genre-diverse discourse processing, which would be beneficial for pushing forward theories of discourse across frameworks or formalisms. Furthermore, employing a computational approach to studying discourse relations has a promising impact on various NLP downstream tasks such as question answering and document summarization etc. For example, BIBREF20 incorporated discourse information into the task of automated text comprehension and benefited from such information without relying on explicit annotations of discourse structure during training, which outperformed state-of-the-art text comprehension systems at the time. Towards this goal, we begin by reviewing some previous work in the traditions sketched out above in the next section, and point out some open questions which we would like to address. In Section SECREF3 we present the discourse annotated data that we will be using, which covers a number of English text types from the Web annotated for 20 discourse relations in the framework of Rhetorical Structure Theory, and is enriched with human annotations of discourse relation signaling devices for a subset of the data. Moreover, we also propose a taxonomy of anchored signals based on the discourse annotated data used in this paper, illustrating the properties and the distribution of the anchorable signals. In Section SECREF4 we then train a distantly supervised neural network model which is made aware of the relations present in the data, but attempts to learn which words signal those relations without any exposure to explicit signal annotations. We evaluate the accuracy of our model using state-of-the-art pretrained and contextualized character and word embeddings, and develop a metric for signaling strength based on a masking concept similar to permutation importance, which naturally lends itself to the definition of both positive and negative or `anti-signals', which we will refer to as `distractors'. In Section SECREF5, we combine the anchoring annotation data from Section SECREF3 with the model's predictions to evaluate how `human-like' its performance is, using an information retrieval approach measuring recall@k and assessing the stability of different signal types based on how the model scores them. 
We develop a visualization for tokenwise signaling strength and perform error analysis for some signals found by the model which were not flagged by humans and vice versa, and point out the strengths and weaknesses of the architecture. Section SECREF6 offers further discussion of what we can learn from the model, what kinds of additional features it might benefit from given the error analysis, and what the distributions of scores for individual signals can teach us about the ambiguity and reliability of different signal types, opening up avenues for further research. Previous Work ::: Data-driven Approaches A straightforward approach to identifying discourse relation signals in corpora with discourse parses is to extract frequency counts for all lexical types or lemmas and cross-tabulate them with discourse relations, such as sentences annotated as cause, elaboration, etc. (e.g. BIBREF21, BIBREF22, BIBREF17). Table TABREF4, reproduced from BIBREF17, illustrates this approach for the Russian RST Treebank. This approach quickly reveals the core inventory of cue words in the language, and in particular the class of low-ambiguity discourse markers (DMs), such as odnako `however' signaling contrast (see Fraser 1999 on delimiting the class of explicit DMs) or relative pronouns signaling elaboration. As such, it can be very helpful for corpus-based lexicography of discourse markers (cf. BIBREF23). The approach can potentially include multiword expressions, if applied equally to multi-token spans (e.g. as a result), and because it is language-independent, it also allows for a straightforward comparison of connectives or other DMs across languages. Results may also converge across frameworks, as the frequency analysis may reveal the same items in different corpora annotated using different frameworks. For instance, the inventory of connectives found in work on the Penn Discourse Treebank (PDTB, see BIBREF10) largely converges with findings on connectives using RST (see BIBREF24, BIBREF18): conjunctions such as but can mark different kinds of contrastive relations at a high level, and adverbs such as meanwhile can convey contemporaneousness, among other things, even when more fine-grained analyses are applied. However, a purely frequentist approach runs into problems on multiple levels, as we will show in Section SECREF4: high frequency and specificity to a small number of relations characterize only the most common and unambiguous discourse markers, but not less common ones. Additionally, differentiating actual and potentially ambiguous usages of candidate words in context requires substantial qualitative analysis (see BIBREF25), which is not reflected in aggregated counts, and signals that belong to a class of relations (e.g. a variety of distinct contrastive relation types) may appear to be non-specific, when in fact they reliably mark a superset of relations. Other studies have used more sophisticated metrics, such as point-wise mutual information (PMI), to identify words associated with particular relations BIBREF16. Using the PDTB corpus, BIBREF16 extracted such scores and measured the contribution of different signal types based on the information gain which they deliver for the classification of discourse relations at various degrees of granularity, as expressed by the hierarchical labels of PDTB relation types. 
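As a rough illustration of this kind of data-driven association scoring, the following sketch cross-tabulates lexical types with relation labels and scores each (relation, word) pair by PMI, with a minimum-frequency cutoff against rare, overfitted types. The input format, function name and threshold value are illustrative assumptions, not taken from the cited studies.

```python
from collections import Counter
from math import log2

def relation_word_associations(edu_pairs, min_freq=10):
    """Score (relation, word) pairs by pointwise mutual information.

    `edu_pairs` is assumed to be an iterable of (relation_label, tokens)
    tuples, where `tokens` are the lowercased words of a head EDU."""
    word_freq = Counter()
    pair_freq = Counter()
    rel_freq = Counter()   # token mass observed under each relation
    total = 0
    for rel, tokens in edu_pairs:
        for tok in tokens:
            word_freq[tok] += 1
            pair_freq[(rel, tok)] += 1
            rel_freq[rel] += 1
            total += 1
    scores = {}
    for (rel, tok), joint in pair_freq.items():
        if word_freq[tok] < min_freq:
            continue  # suppress rare types that would overfit the corpus
        p_joint = joint / total
        p_word = word_freq[tok] / total
        p_rel = rel_freq[rel] / total
        scores[(rel, tok)] = log2(p_joint / (p_word * p_rel))
    return scores
```

Sorting the resulting scores per relation reproduces the kind of ranked cue-word lists discussed above, but, as noted below, still assigns one number per lexical type rather than per instance in context.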
This approach is most similar to the goal given to our own model in Section SECREF4, but is less detailed in that the aggregation process assigns a single number to each candidate lexical item, rather than assigning contextual scores to each instance. Finally, we note that for hierarchical discourse annotation schemes, the data-driven approaches described here become less feasible at higher levels of abstraction, as relations connecting entire paragraphs encompass large amounts of text, and it is therefore difficult to find words with high specificity to those relations. As a result, approaches using human annotation of discourse relation signals may ultimately be irreplaceable. Previous Work ::: Discourse Relation Signal Annotations Discourse relation signals are broadly classified into two categories: anchored signals and unanchored signals. By `anchoring' we refer to associating signals with concrete token spans in texts. Intuitively, most of the signals are anchorable since they correspond to certain token spans. However, it is also possible for a discourse relation to be signaled but remain unanchored. Results from BIBREF27 indicated that there are several signaled but unanchored relations such as preparation and background since they are high-level discourse relations that capture and correspond to genre features such as interview layout in interviews where the conversation is constructed as a question-answer scheme, and are thus rarely anchored to tokens. The Penn Discourse Treebank (PDTB V3, BIBREF15) is the largest discourse annotated corpus of English, and the largest resource annotated explicitly for discourse relation signals such as connectives, with similar corpora having been developed for a variety of languages (e.g. BIBREF28 for Turkish, BIBREF29 for Chinese). However, the annotation scheme used by PDTB is ahierarchical, annotating only pairs of textual argument spans connected by a discourse relation, and disregarding relations at higher levels, such as relations between paragraphs or other groups of discourse units. Additionally, the annotation scheme used for explicit signals is limited to specific sets of expressions and constructions, and does not include some types of potential signals, such as the graphical layout of a document, lexical chains of (non-coreferring) content words that are not seen as connectives, or genre conventions which may signal the discourse function for parts of a text. It is nevertheless a very useful resource for obtaining frequency lists of the most prevalent DMs in English, as well as data on a range of phenomena such as anaphoric relations signaled by entities, and some explicitly annotated syntactic constructions. Working in the hierarchical framework of Rhetorical Structure Theory BIBREF11, BIBREF18 re-annotated the existing RST Discourse Treebank BIBREF30, by taking the existing discourse relation annotations in the corpus as a ground truth and analyzing any possible information in the data, including content words, patterns of repetition or genre conventions, as a possibly present discourse relation signaling device. The resulting RST Signalling Corpus (RST-SC, BIBREF31) consists of 385 Wall Street Journal articles from the Penn Treebank BIBREF32, a smaller subset of the same corpus used in PDTB. It contains 20,123 instances of 78 relation types (e.g. attribution, circumstance, result etc.), which are enriched with 29,297 signal annotations. 
BIBREF12 showed that when all types of signals are considered, over 86% of discourse relations annotated in the corpus were signaled in some way, but among these, just under 20% of cases were marked by a DM. However, unlike PDTB, the RST Signalling Corpus does not provide a concrete span of tokens for the locus of each signal, indicating instead only the type of signaling device used. Although the signal annotations in RST-SC have a broader scope than those in PDTB and are made more complex by extending to hierarchical relations, BIBREF33 have shown that RST-SC's annotation scheme can be `anchored' by associating discourse signal categories from RST-SC with concrete token spans. BIBREF27 applied the same scheme to a data set described in Section SECREF3, which we will use to evaluate our model in Section SECREF5. Since that data set is based on the same annotation scheme of signal types as RST-SC, we will describe the data for the present study and RST-SC signal type annotation scheme next. Data ::: Anchored Signals in the GUM Corpus In order to study open-ended signals anchored to concrete tokens, we use the signal-annotated subset of the freely available Georgetown University Multilayer (GUM) Corpus BIBREF7 from BIBREF27. Our choice to use a multi-genre RST-annotated corpus rather than using PDTB, which also contains discourse relation signal annotation to a large extent is motivated by three reasons: The first reason is that we wish to explore the full range of potential signals, as laid out in the work on the Signalling Corpus BIBREF12, BIBREF34, whereas PDTB annotates only a subset of the possible cues identified by human annotators. Secondly, the use of RST as a framework allows us to examine discourse relations at all hierarchical levels, including long distance, high-level relations between structures as large as paragraphs or sections, which often have different types of signals allowing their identification. Finally, although the entire GUM corpus is only about half the size of RST-DT (109K tokens), using GUM offers the advantage of a more varied range of genres than PDTB and RST-SC, both of which annotate Wall Street Journal data. The signal annotated subset of GUM includes academic papers, how-to guides, interviews and news text, encompassing over 11,000 tokens. Although this data set may be too small to train a successful neural model for signal detection, we will not be using it for this purpose; instead, we will reserve it for use solely as a test set, and use the remainder of the data (about 98K tokens) to build our model (see Section SECREF28 for more details about the subsets and splits), including data from four further genres, for which the corpus also contains RST annotations but no signaling annotations: travel guides, biographies, fiction, and Reddit forum discussions. The GUM corpus is manually annotated with a large number of layers, including document layout (headings, paragraphs, figures, etc.); multiple POS tags (Penn tags, CLAWS5, Universal POS); lemmas; sentence types (e.g. imperative, wh-question etc., BIBREF35); Universal Dependencies BIBREF36; (non-)named entity types; coreference and bridging resolution; and discourse parses using Rhetorical Structure Theory BIBREF11. In particular, the RST annotations in the corpus use a set of 20 commonly used RST relation labels, which are given in Table TABREF10, along with their frequencies in the corpus. 
The relations cover asymmetrical prominence relations (satellite-nucleus) and symmetrical ones (multinuclear relations), with the restatement relation being realized in two versions, one for each type. The signaling annotation in the corpus follows the scheme developed by RST-SC, with some additions. Although RST-SC does not indicate token positions for signals, it provides a detailed taxonomy of signal types which is hierarchically structured into three levels: signal class, denoting the signal's degree of complexity signal type, indicating the linguistic system to which it belongs specific signal, which gives the most fine-grained subtypes of signals within each type It is assumed that any number of word tokens can be associated with any number of signals (including the same tokens participating in multiple signals), that signals can arise without corresponding to specific tokens (e.g. due to graphical layout of paragraphs), and that each relation can have an unbounded number of signals ($0-n$), each of which is characterized by all three levels. The signal class level is divided into single, combined (for complex signals), and unsure for unclear signals which cannot be identified conclusively, but are noted for further study. For each signal (regardless of its class), signal type and specific signal are identified. According to RST-SC's taxonomy, signal type includes 9 types such as DMs, genre, graphical, lexical, morphological, numerical, reference, semantic, and syntactic. Each type then has specific subcategories. For instance, the signal type semantic has 7 specific signal subtypes: synonymy, antonymy, meronymy, repetition, indicative word pair, lexical chain, and general word. We will describe some of these in more depth below. In addition to the 9 signal types, RST-SC has 6 combined signal types such as reference+syntactic, semantic+syntactic, and graphical+syntactic etc., and 15 specific signals are identified for the combined signals. Although the rich signaling annotations in RST-SC offer an excellent overview of the relative prevalence of different signal types in the Wall Street Journal corpus, it is difficult to apply the original scheme to the study of individual signal words, since actual signal positions are not identified. While recovering these positions may be possible for some categories using the original guidelines, most signaling annotations (e.g. lexical chains, repetition) cannot be automatically paired with actual tokens, meaning that, in order to use the original RST-SC for our study, we would need to re-annotate it for signal token positions. As this effort is beyond the scope of our study, we will use the smaller data set with anchored signaling annotations from BIBREF27: This data is annotated with the same signal categories as RST-SC, but also includes exact token positions for each signal, including possibly no tokens for unanchorable signals such as some types of genre conventions or graphical layout which are not expressible in terms of specific words. In order to get a better sense of how the annotations work, we consider example SECREF7. . [5] Sociologists have explored the adverse consequences of discrimination; [6] psychologists have examined the mental processes that underpin conscious and unconscious biases; [7] neuroscientists have examined the neurobiological underpinnings of discrimination; [8] and evolutionary theorists have explored the various ways that in-group/out-group biases emerged across the history of our species. 
– joint [GUM_academic_discrimination] In this example, there is a joint relation between four spans in a fragment from an RST discourse tree. The first tokens in each span form a parallel construction and include semantically related items such as explored and examined (signal class `combined', type `semantic+syntactic', specific subtype `parallel syntactic construction + lexical chain'). The words corresponding to this signal in each span are highlighted in Figure FIGREF15, and are considered to signal each instance of the joint relation. Additionally, the joint relation is also signaled by a number of further signals which are highlighted in the figure as well, such as the semicolons between spans, which correspond to a type `graphical', subtype `semicolon' in RST-SC. The data model of the corpus records which tokens are associated with which categorized signals, and allows for multiple membership of the same token in several signal annotations. In terms of annotation reliability, BIBREF12 reported a weighted kappa of 0.71 for signal subtypes in RST-SC without regard to the span of words corresponding to a signal, while a study by BIBREF37 suggests that signal anchoring, i.e. associating RST-SC signal categories with specific tokens achieves a 90.9% perfect agreement score on which tokens constitute signals, or a Cohen's Kappa value of 0.77. As anchored signal positions will be of the greatest interest to our study, we will consider how signal token positions are distributed in the corpus next, and develop an anchoring taxonomy which we will refer back to for the remainder of this paper. Data ::: A Taxonomy of Anchored Signals From a structural point of view, one of the most fundamental distinctions with regard to signal realization recognized in previous work is the classification of signaling tokens into satellite or nucleus-oriented positions, i.e. whether a signal for the relation appears within the modifier span or the span being modified BIBREF38. While some relation types exhibit a strong preference for signal position (e.g. using a discourse marker such as because in the satellite for cause, BIBREF39), others, such as concession are more balanced (almost evenly split signals between satellite and nucleus in BIBREF38). In this study we would like to further refine the taxonomy of signal positions, breaking it down into several features. At the highest level, we have the distinction between anchorable and non-anchorable signals, i.e. signals which correspond to no token in the text (e.g. genre conventions, graphical layout). Below this level, we follow BIBREF38 in classifying signals as satellite or nucleus-oriented, based on whether they appear in the more prominent Elementary Discourse Unit (EDU) of a relation or its dependent. 
However, several further distinctions may be drawn: Whether the signal appears before or after the relation in text order; since we consider the relation to be instantiated as soon as its second argument in the text appears, `before' is interpreted as any token before the second head unit in the discourse tree begins, and `after' is any subsequent token Whether the signal appears in the head unit of the satellite/nucleus, or in a dependent of that unit; this distinction only matters for satellite or nucleus subtrees that consist of more than one unit Whether the signal is anywhere within the structure dominated by the units participating in the relation, or completely outside of this structure Table TABREF20 gives an overview of the taxonomy proposed here, which includes the possible combinations of these properties and the distribution of the corresponding anchorable signals found in the signal-annotated subset of the GUM Corpus from BIBREF27. Individual feature combinations can be referred to either as acronyms, e.g. ABIHS for `Anchorable, Before the second EDU of the relation, Inside the relation's subtree, Head unit of the Satellite', or using the group IDs near the bottom of the table (in this case the category numbered Roman I). We will refer back to these categories in our comparison of manually annotated and automatically predicted signals. To illustrate how the taxonomy works in practice, we can consider the example in Figure FIGREF23, which shows a signal whose associated tokens instantiate categories I and IV in a discourse tree – the words demographic variables appear both within a preparation satellite (unit [50], category I), which precedes and points to its nucleus [51–54], and within a satellite inside that block (unit [52], a dependent inside the nucleus block, category IV). Based on the RST-SC annotation scheme, the signal class is Simple, with the type Semantic and specific sub-type Lexical chain. The numbers at the bottom of Table TABREF20 show the number of tokens signaling each relation at each position, as well as the number of relations which have signal tokens at the relevant positions. The hypothetical categories V and X, with signal tokens which are not within the subtree of satellite or nucleus descendants, are not attested in our data, as far as annotators were able to identify. Automatic Signal Extraction ::: A Contextless Frequentist Approach To motivate the need for a fine-grained and contextualized approach to describing discourse relation signals in our data, we begin by extracting some basic data-driven descriptions of our data along the lines presented in Section SECREF3. In order to constrain candidate words to just the most relevant ones for marking a specific signal, we first need a way to address a caveat of the frequentist approach: higher order relations which often connect entire paragraphs (notably background and elaboration) must be prevented from allowing most or even all words in the document to be considered as signaling them. A simple approach to achieving this is to assume `Strong Nuclearity', relying on Marcu's (BIBREF42) Compositionality Criterion for Discourse Trees (CCDT), which suggests that if a relation holds between two blocks of EDUs, then it also holds between their head EDUs. While this simplification may not be entirely accurate in all cases, Table TABREF20 suggests that it captures most signals, and allows us to reduce the space of candidate signal tokens to just the two head EDUs implicated in a relation. 
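A minimal sketch of how this head-EDU reduction could be implemented is given below, assuming a simple tree representation in which each non-terminal node records which of its children are nuclei; the Node class and its fields are hypothetical rather than the actual GUM/RST data model.

```python
class Node:
    """Hypothetical RST tree node: either an EDU leaf or a span with children."""
    def __init__(self, edu_id=None, children=None, nucleus_ids=None):
        self.edu_id = edu_id                  # set for leaves only
        self.children = children or []        # ordered child nodes
        self.nucleus_ids = nucleus_ids or []  # indices of nuclear children

    def is_leaf(self):
        return self.edu_id is not None

def head_edu(node):
    """Follow nuclear children down to a leaf, in the spirit of the
    Compositionality Criterion: a relation holding between two blocks is
    assumed to also hold between their head EDUs."""
    while not node.is_leaf():
        # for multinuclear nodes this arbitrarily takes the first nucleus
        node = node.children[node.nucleus_ids[0]]
    return node.edu_id

def head_pair(satellite_block, nucleus_block):
    """Reduce a relation between two blocks of EDUs to one head EDU pair."""
    return head_edu(satellite_block), head_edu(nucleus_block)
```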
We will refer to signals within the head units of a relation as `endocentric' and signals outside this region as `exocentric'. Figure FIGREF25 illustrates this, where units [64] and [65] are the respective heads of two blocks of EDUs, and unit [65] in fact contains a plausible endocentric signal for the result relation, the discourse marker thus. More problematic caveats for the frequentist approach are the potential for over/underfitting and ambiguity. The issue of overfitting is especially thorny in small datasets, in which certain content words appear coincidentally in discourse segments with a certain function. Table TABREF27 shows the most distinctive lexical types for several discourse relations in GUM based on pure ratio of occurrence in head EDUs marked for those relations. On the left, types are chosen which have a maximal frequency in the relevant relationship compared with their overall frequency in the corpus. This quickly overfits the contents of the corpus, selecting irrelevant words such as holiest and Slate for the circumstance relation, or hypnotizing and currency for concession. The same lack of filtering can, however, yield some potentially relevant lexical items, such as causing for result or even highly specific content words such as ammonium, which are certainly not discourse markers, but whose appearance in a sequence is not accidental: the word is in this case typical for sequences in how-to guides, where use of ingredients in a recipe is described in a sequence. Even if these kinds of items may be undesirable candidates for signal words in general, it seems likely that some rare content words may function as signals in context, such as evaluative adjectives (e.g. exquisite) enabling readers to recognize an evaluation. If we are willing to give up on the latter kind of rare items, the overfitting problem can be alleviated somewhat by setting a frequency threshold for each potential signal lexeme, thereby suppressing rare items. The items on the right of the table are limited to types occurring more than 10 times. Since the most distinctive items on the left are all comparatively rare (and therefore exclusive to their relations), they do not overlap with the items on the right. Looking at the items on the right, several signals make intuitive sense, especially for relations such as solutionhood (used for question-answer pairs) or concession, which show the expected WH words and auxiliary did, or discourse markers such as though, respectively. At the same time, some high frequency items may be spurious, such as NATO for justify, which could perhaps be filtered out based on low dispersion across documents, but also stuff for cause, which probably could not be. Another problem with the lists on the right is that some expected strong signals, such as the word and for sequence are absent from the table. This is not because and is not frequent in sequences, but rather because it is a ubiquitous word, and as a result, it is not very specific to the relation. However if we look at actual examples of and inside and outside of sequences, it is easy to notice that the kind of and that does signal a relation in context is often clause initial as in SECREF24 and very different from the adnominal coordinating ands in SECREF24, which do not signal the relation: . [she was made a Dame by Elizabeth II for services to architecture,] [and in 2015 she became the first and only woman to be awarded the Royal Gold Medal]$_{\textsc {sequence}}$ . 
[Gordon visited England and Scotland in 1686.] [In 1687 and 1689 he took part in expeditions against the Tatars in the Crimea]$_{\textsc {sequence}}$ These examples suggest that a data-driven approach to signal detection needs some way of taking context into account. In particular, we would like to be able to compare instances of signals and quantify how strong the signal is in each case. In the next section, we will attempt to apply a neural model with contextualized word embeddings BIBREF44 to this problem, which will be capable of learning contextualized representations of words within the discourse graph. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Task and Model Architecture Since we are interested in identifying unrestricted signaling devices, we deliberately avoid a supervised learning approach as used in automatic signal detection trained on resources such as PDTB. While recent work on PDTB connective detection (BIBREF26, BIBREF45) achieves good results (F-Scores of around 88-89 for English PDTB explicit connectives), the use of such supervised approaches would not tell us about new signaling devices, and especially about unrestricted lexical signals and other coherence devices not annotated in PDTB. Additionally, we would be restricted to the newspaper text types represented in the Wall Street Journal corpus, since no other large English corpus has been annotated for anchored signals. Instead, we will adopt a distantly supervised approach: we will task a model with supervised discourse relation classification on data that has not been annotated for signals, and infer the positions of signals in the text by analyzing the model's behavior. A key assumption, which we will motivate below, is that signals can have different levels of signaling strength, corresponding to their relative importance in identifying a relation. We would like to assume that different signal strength is in fact relevant to human analysts' decision making in relation identification, though in practice we will be focusing on model estimates of strength, the usefulness of which will become apparent below. As a framework, we use the sentence classifier configuration of FLAIR BIBREF46 with a biLSTM encoder/classifier architecture fed by character and word level representations composed of a concatenation of fixed 300 dimensional GloVe embeddings BIBREF47, pre-trained contextualized FLAIR word embeddings, and pre-trained contextualized character embeddings from AllenNLP BIBREF48 with FLAIR's default hyperparameters. The model's architecture is shown in Figure FIGREF30. Contextualized embeddings BIBREF44 have the advantage of giving distinct representations to different instances of the same word based on the surrounding words, meaning that an adnominal and connecting two NPs can be distinguished from one connecting two verbs based on its vector space representation in the model. Using character embeddings, which give vector space representations to substrings within each word, means that the model can learn the importance of morphological forms, such as the English gerund's -ing suffix, even for out-of-vocabulary items not seen during training. Formally, the input to our system is formed of EDU pairs which are the head units within the respective blocks of discourse units that they belong to, which are in turn connected by an instance of a discourse relation. This means that every discourse relation in the corpus is expressed as exactly one EDU pair. 
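Before the formal description that follows, the sketch below illustrates this setup in plain PyTorch: a bidirectional LSTM over pre-computed token embeddings for an EDU pair flanked by <s>, <sep> and <n> separators, followed by a softmax over relation labels. It is a simplified stand-in for the FLAIR sentence classifier configuration, not its actual implementation, and the dimensions and training details are assumptions.

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Bi-LSTM over an EDU pair presented in text order with <s>, <sep>, <n>
    separator symbols, with a softmax over discourse relation labels."""
    def __init__(self, emb_dim, hidden_dim, num_relations):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_relations)

    def forward(self, token_vectors):
        # token_vectors: (batch, seq_len, emb_dim) concatenated word and
        # character embeddings for "<s> ...satellite... <sep> ...nucleus... <n>"
        _, (h_n, _) = self.lstm(token_vectors)
        # h_n: (num_directions, batch, hidden) for a single layer; concatenate
        # the final hidden states of the forward and backward directions
        pair_repr = torch.cat([h_n[0], h_n[1]], dim=-1)
        # probabilities are returned for readability; training would normally
        # use the pre-softmax logits with a cross-entropy loss
        return torch.softmax(self.out(pair_repr), dim=-1)
```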
Each EDU is encoded as a (possibly padded) sequence of $n$-dimensional vector representations of each word ${x_1,..,x_T}$, with some added separators which are encoded in the same way and described below. The bidirectional LSTM composes representations and context for the input, and a fully connected softmax layer gives the probability of each relation: $P(rel_i \mid x_1, \ldots , x_T) = \operatorname{softmax}\left(W \left[ h^{f}_{t} ; h^{b}_{t} \right] + b \right)_i$, where the probability of each relation $rel_i$ is derived from the composed output of the function $h$ across time steps $0 \ldots t$, $\delta \in \lbrace b,f\rbrace $ is the direction of the respective LSTMs, $c_t^\delta $ is the recurrent context in each direction and $\theta = \lbrace W,b\rbrace $ gives the model weights and bias parameters (see BIBREF46 for details). Note that although the output of the system is ostensibly a probability distribution over relation types, we will not be directly interested in the most probable relation as outputted by the classifier, but rather in analyzing the model's behavior with respect to the input word representations as potential signals of each relation. In order to capitalize on the system's natural language modeling knowledge, EDU satellite-nucleus pairs are presented to the model in text order (i.e. either the nucleus or the satellite may come first). However, the model is given special separator symbols indicating the positions of the satellite and nucleus, which are essential for deciding the relation type (e.g. cause vs. result, which may have similar cue words but lead to opposite labels), and a separator symbol indicating the transition between satellite and nucleus. This setup is illustrated in SECREF29. . $<$s$>$ Sometimes this information is available , $<$sep$>$ but usually not . $<$n$>$ Label: concession In this example, the satellite precedes the nucleus and is therefore presented first. The model is made aware of the fact that the segment on the left is the satellite thanks to the tag <s>. Since the LSTM is bi-directional, it is aware of positions being within the nucleus or satellite, as well as their proximity to the separator, at every time step. We reserve the signal-annotated subset of 12 documents from GUM for testing, which contains 1,185 head EDU pairs (each representing one discourse relation), and a random selection of 12 further documents from the remaining RST-annotated GUM data (1,078 pairs) is taken as development data, leaving 102 documents (5,828 pairs) for training. The same EDUs appear in multiple pairs if a unit has multiple children with distinct relations, but no instances of EDUs are shared across partitions, since the splits are based on document boundaries. We note again that for the training and development data, we have no signaling annotations of any kind; this is possible since the network does not actually use the human signaling annotations we will be evaluating against: its distant supervision consists solely of the RST relation labels. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Relation Classification Performance Although only used as an auxiliary training task, we can look at the model's performance on predicting discourse relations, which is given in Table TABREF34. Unsurprisingly, the model performs best on the most frequent relations in the corpus, such as elaboration or joint, but also on rarer ones which tend to be signaled explicitly, such as condition (often signaled explicitly by if), solutionhood (used for question-answer pairs signaled by question marks and WH words), or concession (DMs such as although). 
However, the model also performs reasonably well for some trickier (i.e. less often introduced by unambiguous DMs) but frequent relations, such as preparation, circumstance, and sequence. Rare relations with complex contextual environments, such as result, justify or antithesis, unsurprisingly do not perform well, with the latter two showing an F-score of 0. The relation restatement, which also shows no correct classifications, reveals a weakness of the model: while it is capable of recognizing signals in context, it cannot learn that repetition in and of itself, regardless of specific areas in vector space, is important (see Section SECREF6 for more discussion of these and other classification weaknesses). Although this is not the actual task targeted by the current paper, we may note that the overall performance of the model, with an F-Score of 44.37, is not bad, though below the performance of state-of-the-art full discourse parsers (see BIBREF49) – this is to be expected, since the model is not aware of the entire RST tree, rather looking only at EDU pairs out of context, and given that standard scores on RST-DT come from a larger and more homogeneous corpus, with fewer relations and some easy cases that are absent from GUM. Given the model's performance on relation classification, which is far from perfect, one might wonder whether signal predictions made by our analysis should be trusted. This question can be answered in two ways: first, quantitatively, we will see in Section SECREF5 that model signal predictions overlap considerably with human judgments, even when the predicted relation is incorrect. Intuitively, for similar relations, such as concession or contrast, both of which are adversative, the model may notice a relevant cue (e.g. `but', or contrasting lexical items) despite choosing the wrong one. Second, as we will see below, we will be analyzing the model's behavior with respect to the probability of the correct relation, regardless of the label it ultimately chooses, meaning that the importance of predicting the correct label exactly will be diminished further. Automatic Signal Extraction ::: A Contextualized Neural Model ::: Signaling Metric The actual performance we are interested in evaluating is the model's ability to extract signals for given discourse relations, rather than its accuracy in predicting the relations. To do so, we must extract anchored signal predictions from the model, which is non-trivial. While earlier work on interpreting neural models has focused on token-wise softmax probability BIBREF50 or attention weights BIBREF51, using contextualized embeddings complicates the evaluation: since word representations are adjusted to reflect neighboring words, the model may assign higher importance to the word standing next to what a human annotator may interpret as a signal. Example SECREF36 illustrates the problem: . [RGB]230, 230, 230To [RGB]53, 53, 53provide [RGB]165, 165, 165information [RGB]179, 179, 179on [RGB]175, 175, 175the [RGB]160, 160, 160analytical [RGB]157, 157, 157sample [RGB]187, 187, 187as [RGB]170, 170, 170a [RGB]168, 168, 168whole [RGB]207, 207, 207, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]168, 168, 168two [RGB]170, 170, 170additional [RGB]164, 164, 164demographic [RGB]175, 175, 175variables [RGB]182, 182, 182are [RGB]165, 165, 165included [RGB]230, 230, 230. Each word in SECREF36 is shaded based on the softmax probability assigned to the correct relation of the satellite, i.e. 
how `convincing' the model found the word in terms of local probability. In addition, the top-scoring word in each sentence is rendered in boldface for emphasis. The gold label for the relation is placed above the arrow, which indicates the direction of the relation (satellite to nucleus), and the model's predicted label is shown under the arrow. Intuitively, the strongest signal of the purpose relation in SECREF36 is the initial infinitive marker To – however, the model ranks the adjacent provide higher and almost ignores To. We suspect that the reason for this, and many similar examples in the model evaluated based on relation probabilities, is that contextual embeddings allow for a special representation of the word provide next to To, making it difficult to tease apart the locus of the most meaningful signal. To overcome this complication, we use the logic of permutation importance, treating the neural model as a black box and manipulating the input to discover relevant features in the data (cf. BIBREF52). We reason that this type of evaluation is more robust than, for example, examining model internal attention weights because such weights are not designed or trained with a reward ensuring they are informative – they are simply trained on the same classification error loss as the rest of the model. Instead, we can withhold potentially relevant information from the model directly: After training is complete, we feed the test data to the model in two forms – as-is, and with each word masked, as shown in SECREF36. . Original: $<$s$>$ To provide information ... $<$sep$>$ ... $<$n$>$ Masked1: $<$s$>$ $<$X$>$ provide information ... $<$sep$>$ ... $<$n$>$ Masked2: $<$s$>$ To $<$X$>$ information ... $<$sep$>$ ... $<$n$>$ Masked3: $<$s$>$ To provide $<$X$>$ ... $<$sep$>$ ... $<$n$>$ Label: purpose We reason that, if a token is important for predicting the correct label, masking it will degrade the model's classification accuracy, or at least reduce its reported classification certainty. In SECREF36, it seems reasonable to assume that masking the word `To' has a greater impact on predicting the label purpose than masking the word `provide', and even less so, the following noun `information'. We therefore use reduction in softmax probability of the correct relation as our signaling strength metric for the model. We call this metric ${\Delta }_s$ (for delta-softmax), which can be written as: ${\Delta }_s(t_i) = p(rel \mid X_{mask=\phi }) - p(rel \mid X_{mask=i})$, where $rel$ is the true relation of the EDU pair, $t_i$ represents the token at index $i$ of $N$ tokens, and $X_{mask=i}$ represents the input sequence with the masked position $i$ (for $i \in 1 \ldots N$ ignoring separators, or $\phi $, the empty set). To visualize the model's predictions, we compare ${\Delta }_s$ for a particular token to two numbers: the maximum ${\Delta }_s$ achieved by any token in the current pair (a measure of relative importance for the current classification) and the maximum ${\Delta }_s$ achieved by any token in the current document (a measure of how strongly the current relation is signaled compared to other relations in the text). We then shade each token 50% based on the first number and 50% based on the second. As a result, the most valid cues in an EDU pair are darker than their neighbors, but EDU pairs with no good cues are overall very light, whereas pairs with many good signals are darker. 
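A sketch of the masking procedure behind ${\Delta }_s$ is given below, assuming a trained classifier exposing a (hypothetical) predict_proba method that maps a token sequence to a mapping from relation labels to probabilities.

```python
def delta_softmax(model, tokens, gold_relation, mask_token="<X>"):
    """Signaling strength of each token: drop in the gold relation's softmax
    probability when that token is masked, relative to the unmasked input.

    `model.predict_proba` is an assumed interface, not part of any library."""
    base = model.predict_proba(tokens)[gold_relation]
    scores = []
    for i, tok in enumerate(tokens):
        if tok in ("<s>", "<sep>", "<n>"):
            scores.append(0.0)  # separator symbols are never masked
            continue
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        scores.append(base - model.predict_proba(masked)[gold_relation])
    return scores  # negative values correspond to 'distractors'
```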
Some examples of this visualization are given in SECREF36-SECREF36 (human annotated endocentric signal tokens are marked by double underlines). . [RGB]61, 61, 61To [RGB]112, 112, 112provide [RGB]205, 205, 205information [RGB]230, 230, 230on [RGB]230, 230, 230the [RGB]230, 230, 230analytical [RGB]230, 230, 230sample [RGB]230, 230, 230as [RGB]230, 230, 230a [RGB]230, 230, 230whole [RGB]230, 230, 230, $\xrightarrow[\text{pred:preparation}]{\text{gold:purpose}}$ [RGB]230, 230, 230two [RGB]183, 183, 183additional [RGB]230, 230, 230demographic [RGB]230, 230, 230variables [RGB]94, 94, 94are [RGB]194, 194, 194included [RGB]163, 163, 163. . [RGB]230, 230, 230Telling [RGB]230, 230, 230good [RGB]230, 230, 230jokes [RGB]230, 230, 230is [RGB]230, 230, 230an [RGB]230, 230, 230art [RGB]230, 230, 230that [RGB]230, 230, 230comes [RGB]230, 230, 230naturally [RGB]230, 230, 230to [RGB]230, 230, 230some [RGB]211, 211, 211people [RGB]135, 135, 135, $\xleftarrow[\text{pred:contrast}]{\text{gold:contrast}}$ [RGB]21, 21, 21but [RGB]209, 209, 209for [RGB]207, 207, 207others [RGB]230, 230, 230it [RGB]217, 217, 217takes [RGB]230, 230, 230practice [RGB]230, 230, 230and [RGB]189, 189, 189hard [RGB]230, 230, 230work [RGB]230, 230, 230. . [RGB]230, 230, 230It [RGB]230, 230, 230is [RGB]230, 230, 230possible [RGB]230, 230, 230that [RGB]230, 230, 230these [RGB]230, 230, 230two [RGB]230, 230, 230children [RGB]230, 230, 230understood [RGB]230, 230, 230the [RGB]230, 230, 230task [RGB]230, 230, 230and [RGB]230, 230, 230really [RGB]230, 230, 230did [RGB]230, 230, 230believe [RGB]230, 230, 230that [RGB]230, 230, 230the [RGB]230, 230, 230puppet [RGB]230, 230, 230did [RGB]230, 230, 230not [RGB]230, 230, 230produce [RGB]230, 230, 230any [RGB]230, 230, 230poor [RGB]230, 230, 230descriptions [RGB]230, 230, 230, [RGB]230, 230, 230and [RGB]230, 230, 230in [RGB]230, 230, 230this [RGB]230, 230, 230regard [RGB]230, 230, 230, [RGB]230, 230, 230are [RGB]230, 230, 230not [RGB]230, 230, 230yet [RGB]230, 230, 230adult-like [RGB]230, 230, 230in [RGB]230, 230, 230their [RGB]230, 230, 230SI [RGB]230, 230, 230interpretations [RGB]230, 230, 230. $\xleftarrow[\text{pred:evaluation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230This [RGB]230, 230, 230is [RGB]41, 41, 41unlikely The highlighting in SECREF36 illustrates the benefits of the masking based evaluation compared to SECREF36: the token To is now clearly the strongest signal, and the verb is taken to be less important, followed by the even less important object of the verb. This is because removing the initial To hinders classification much more than the removal of the verb or noun. We note also that although the model in fact misclassified this example as preparation, we can still use masking importance to identify To, since the score queried from the model corresponds to a relative decrease in the probability of the correct relation, purpose, even if this was not the highest scoring relation overall. In SECREF36 we see the model's ability to correctly predict contrast based on the DM but. Note that despite a rather long sentence, the model does not need any other word nearly as much for the classification. Although the model is not trained explicitly to detect discourse markers, the DM can be recognized due to the fact that masking it leads to a drop of 66% softmax probability (${\Delta }_s$=0.66) of this pair representing the contrast relation. We can also note that a somewhat lower scoring content word is also marked: hard (${\Delta }_s$=0.18). 
In our gold signaling annotations, this word was marked together with comes naturally as a signal, due to the contrast between the two concepts (additionally, some people is flagged as a signal along with others). The fact that the model finds hard helpful, but does not need the contextual near antonym naturally, suggests that it is merely learning that words in the semantic space near hard may indicate contrast, and not learning about the antonymous relationship – otherwise we would expect to see `naturally' have a stronger score (see also the discussion in Section SECREF6). Finally SECREF36 shows that, much like in the case of hard, the model is not biased towards traditional DMs, confirming that it is capable of learning about content words, or neighborhoods of content words in vector space. In a long EDU pair of 41 words, the model relies almost exclusively on the word unlikely (${\Delta }_s$=0.36) to correctly label the relation as evaluation. By contrast, the anaphoric demonstrative `This' flagged by the human annotator, which is a more common function word, is disregarded, perhaps because it can appear with several other relations, and is not particularly exclusive to evaluation. These results suggest that the model may be capable of recognizing signals through distant supervision, allowing it to validate human annotations, to potentially point out signals that may be missed by annotators, and most importantly, to quantify signaling strength on a sliding scale. At the same time, we need a way to evaluate the model's quality and assess the kinds of errors it makes, as well as what we can learn from them. We therefore move on to evaluating the model and its errors next. Evaluation and Error Analysis ::: Evaluation Metric To evaluate the neural model, we would like to know how well ${\Delta }_s$ corresponds to annotators' gold standard labels. This leads to two kinds of problems: the first is that the model is distantly supervised, and therefore does not know about signal types, subtypes, or any aspect of signaling annotation and its relational structure. The second problem is that signaling annotations are categorical, and do not correspond to the ratio-scaled predictions provided by ${\Delta }_s$ (this is in fact one of the motivations for desiring a model-based estimate of signaling strength). The first issue means that we can only examine the model's ability to locate signals – not to classify them. Although there may be some conceivable ways of analyzing model output to identify classes such as DMs (which are highly lexicalized, rather than representing broad regions of vector space, as words such as unlikely might), or more contextual relational signals, such as pronouns, this line of investigation is beyond the scope of the present paper. A naive solution to the second problem might be to identify a cutoff point, e.g. deciding that all and only words scoring ${\Delta }_s>$0.15 are predicted to be signals. The problem with the latter approach is that sentences can be very different in many ways, and specifically in both length and in levels of ambiguity. Sentences with multiple, mutually redundant cues, may produce lower ${\Delta }_s$ scores compared to shorter sentences with a subset of the same cues. Conversely, in very short sentences with low signal strength, the model may reasonably be expected to degrade very badly with the deletion of almost any word, as the context becomes increasingly incomprehensible. 
For these reasons, we choose to adopt an evaluation metric from the paradigm of information retrieval, and focus on recall@k (recall at rank k, for $k=1,2,3$...). The idea is to poll the model for each sentence in which some signals have been identified, and see whether the model is able to find them if we let it guess using the word with the maximal ${\Delta }_s$ score (recall@1), regardless of how high that score is, or alternatively relax the evaluation criteria and see whether the human annotator's signal tokens appear at rank 2 or 3. Figure FIGREF40 shows numbers for recall@k for the top 3 ranks outputted by the model, next to random guess baselines. The left, middle and right panels in Figure FIGREF40 correspond to measurements when all signals are included, only cases contained entirely in the head EDUs shown to the model, and only DMs, respectively. The scenario on the left is rather unreasonable and is included only for completeness: here the model is also penalized for not detecting signals such as lexical chains, part of which is outside the units that the model is being shown. An example of such a case can be seen in Figure FIGREF41. The phrase Respondents in unit [23] signals the relation elaboration, since it is coreferential with a previous mention of the respondents in [21]. However, because the model is only given heads of EDU blocks to classify, it does not have access to the first occurrence of respondents while predicting the elaboration relation – the first half of the signal token set is situated in a child of the nucleus EDU before the relation, i.e. it belongs to group IV in the taxonomy in Table TABREF20. Realistically, our model can only be expected to learn about signals from `directly participating' EDUs, i.e. groups I, II, VI and VII, the `endocentric' signal groups from Section SECREF16. Although most signals belong to endocentric categories (71.62% of signaled relations belong to these groups, cf. Table TABREF20), exocentric cases form a substantial portion of signals which we have little hope of capturing with the architecture used here. As a result, recall metrics in the `all signals' scenario are closest to the random baselines, though the signals detected in other instances still place the model well above the baseline. A more reasonable evaluation is the one in the middle panel of Figure FIGREF40, which includes only endocentric signals as defined in the taxonomy. EDUs with no endocentric signals are completely disregarded in this scenario, which substantially reduces the number of tokens considered to be signals, since, while many tokens are part of some meaningful lexical chain in the document, requiring signals to be contained only in the pair of head units eliminates a wide range of candidates. Although the random baseline is actually very slightly higher (perhaps because eliminated EDUs were often longer ones, sharing small amounts of material with larger parts of the text, and therefore prone to penalizing the baseline; many words mean more chances for a random guess to be wrong), model accuracy is substantially better in this scenario, reaching a 40% chance of hitting a signal with only one guess, exceeding 53% with two guesses, and capping at 64% for recall@3, over 20 points above baseline. Finally, the right panel in the figure shows recall when only DMs are considered. In this scenario, a random guess fares very poorly, since most words are not DMs. 
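The recall@k evaluation just described can be computed along the following lines, assuming per-instance lists of token-wise ${\Delta }_s$ scores and gold signal token positions; the data format is an illustrative assumption.

```python
def recall_at_k(instances, k=3):
    """Proportion of signal-bearing EDU pairs for which at least one gold
    signal token is among the k tokens with the highest delta-softmax score.

    `instances` is assumed to be a list of (scores, gold_indices) pairs,
    where `scores` are per-token delta-softmax values and `gold_indices`
    the token positions annotated as signals."""
    hits = 0
    evaluated = 0
    for scores, gold_indices in instances:
        if not gold_indices:
            continue  # EDU pairs without (endocentric) signals are skipped
        evaluated += 1
        top_k = sorted(range(len(scores)), key=lambda i: scores[i],
                       reverse=True)[:k]
        if any(i in gold_indices for i in top_k):
            hits += 1
    return hits / evaluated if evaluated else 0.0
```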
The model, by contrast, achieves the highest results in all metrics, since DMs have the highest cue validity for relation classification, and the model attends to them most strongly. With just one guess, recall is over 56%, and goes as high as 67% for recall@3. The baseline only goes as high as 16% for three guesses. Evaluation and Error Analysis ::: Qualitative Analysis Looking at the model's performance qualitatively, it is clear that it can detect not only DMs, but also morphological cues (e.g. gerunds as markers of elaboration, as in SECREF43), semantic classes and sentiment, such as positive and negative evaluatory terms in SECREF43, as well as multiple signals within the same EDU, as in SECREF43. In fact, only about 8.3% of the tokens correctly identified by the model in Table TABREF45 below are of the DM type, whereas about 7.2% of all tokens flagged by human annotators were DMs, meaning that the model frequently matches non-DM items to discourse relation signals (see Performance on Signal Types below). It should also be noted that signals can be recognized even when the model misclassifies relations, since ${\Delta }_s$ does not rely on correct classification: it merely quantifies the contribution of a word in context toward the correct label's score. If we examine the influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments based on what the system may tag as the second or third best class to choose. . [RGB]230, 230, 230For [RGB]230, 230, 230the [RGB]230, 230, 230present [RGB]230, 230, 230analysis [RGB]230, 230, 230, [RGB]230, 230, 230these [RGB]230, 230, 230responses [RGB]230, 230, 230were [RGB]230, 230, 230recoded [RGB]230, 230, 230into [RGB]230, 230, 230nine [RGB]230, 230, 230mutually [RGB]230, 230, 230exclusive [RGB]230, 230, 230categories $\xleftarrow[\text{pred:elaboration}]{\text{gold:result}}$ [RGB]63, 63, 63capturing [RGB]219, 219, 219the [RGB]230, 230, 230following [RGB]230, 230, 230options [RGB]135, 135, 135: . [RGB]185, 185, 185Professor [RGB]219, 219, 219Eastman [RGB]223, 223, 223said [RGB]207, 207, 207he [RGB]194, 194, 194is [RGB]64, 64, 64alarmed [RGB]230, 230, 230by [RGB]230, 230, 230what [RGB]230, 230, 230they [RGB]230, 230, 230found [RGB]230, 230, 230. $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ [RGB]230, 230, 230" [RGB]230, 230, 230Pregnant [RGB]229, 229, 229women [RGB]187, 187, 187in [RGB]230, 230, 230Australia [RGB]98, 98, 98are [RGB]213, 213, 213getting [RGB]230, 230, 230about [RGB]230, 230, 230half [RGB]171, 171, 171as [RGB]159, 159, 159much [RGB]230, 230, 230as [RGB]230, 230, 230what [RGB]155, 155, 155they [RGB]155, 155, 155require [RGB]223, 223, 223on [RGB]214, 214, 214a [RGB]109, 109, 109daily [RGB]176, 176, 176basis [RGB]111, 111, 111. . 
. Even so, estimates of the prevalence of perceived discrimination remains rare $\xleftarrow[\text{pred:evidence}]{\text{gold:concession}}$ At least one prior study by Kessler and colleagues [15], however, using measures of perceived discrimination in a large American sample, reported that approximately 33 % of respondents reported some form of discrimination [shading: darkest on “least”, “At”, “form” and “however”] Unsurprisingly, the model sometimes makes sporadic errors in signal detection for which good explanations are hard to find, especially when its predicted relation is incorrect, as in SECREF43. Here the evaluative adjective remarkable is missed in favor of neighboring words such as agreed and a subject pronoun, which are not indicative of the evaluation relation in this context but are part of several cohorts of high scoring words. However, the most interesting and interpretable errors arise when ${\Delta }_s$ scores are high compared to an entire document, and not just among words in one EDU pair, in which most or even all words may be relatively weak signals. As an example of such a false positive with high confidence, we can consider SECREF43. In this example, the model correctly assigns the highest score to the DM so marking a purpose relation. However, it also picks up on a recurring tendency in how-to guides in which the second person pronoun referring to the reader is often the benefactee of some action, which contributes to the purpose reading and helps to disambiguate so, despite not being considered a signal by annotators. . The agreement was that Gorbachev agreed to a quite remarkable concession: $\xrightarrow[\text{pred:preparation}]{\text{gold:evaluation}}$ he agreed to let a united Germany join the NATO military alliance. [shading: darkest on “he” and “agreed”; “the agreement was” is also strongly highlighted]
. The opening of the joke — or setup — should have a basis in the real world $\xleftarrow[\text{pred:purpose}]{\text{gold:purpose}}$ so your audience can relate to it, [shading: darkest on “so” and “your”] In other cases, the model points out plausible signals which were passed over by an annotator, and may be considered errors in the gold standard. For example, the model easily notices that question marks indicate the solutionhood relation, even where these were skipped by annotators in favor of marking WH words instead: . Which previous Virginia Governor(s) do you most admire and why? $\xrightarrow[\text{pred:solutionhood}]{\text{gold:solutionhood}}$ Thomas Jefferson. [shading: darkest on “?”] From the model's perspective, the question mark, which scores ${\Delta }_s$=0.79, is the single most important signal, and virtually sufficient for classifying the relation correctly, though it was left out of the gold annotations. The WH word Which and the sentence final why, by contrast, were noticed by annotators but are not as unambiguous (the former could be a determiner, and the latter in sentence final position could be part of an embedded clause). In the presence of the question mark, their individual removal has much less impact on the classification decision. Although the model's behavior is sensible and can reveal annotation errors, it also suggests that ${\Delta }_s$ will be blind to auxiliary signals in the presence of very strong, independently sufficient cues. Using the difference in likelihood of correct relation prediction as a metric also raises the possibility of an opposite concept to signals, which we will refer to as distractors. Since ${\Delta }_s$ is a signed measure of difference, it is in fact possible to obtain negative values whenever the removal or masking of a word results in an improvement in the model's ability to predict the relation. In such cases, and especially when the negative value is of a large magnitude, it seems like a reasonable interpretation to say that a word functions as a sort of anti-signal, preventing or complicating the recognition of what might otherwise be a more clear-cut case. Examples SECREF43–SECREF43 show some instances of distractors identified by the masking procedure (distractors with ${\Delta }_s<$-0.2 are underlined). . How do they treat those not like themselves?
$\xrightarrow[\text{pred:solutionhood}]{\text{gold:preparation}}$ then they're either over-zealous, ignorant of other people or what to avoid those that contradict their fantasy land that caters to them and them only. [shading: darkest on “then” and “?”; underlined distractors: “How”, “do”, “only”] . God, I don't know! $\xrightarrow[\text{pred:preparation}]{\text{gold:preparation}}$ but nobody will go to fight for noses any more. [shading: darkest on “!”] In SECREF43, a rhetorical question trips up the classifier, which predicts the question-answer relation solutionhood instead of preparation. Here the initial WH word How and the subsequent auxiliary do-support both distract (with ${\Delta }_s$=-0.23 and -0.25) from the preparation relation, which is however being signaled positively by the DM then in the nucleus unit. Later on, the adverb only is also disruptive (${\Delta }_s$=-0.31), perhaps due to a better association with adversative relations, such as contrast. In SECREF43, a preparatory “God, I don't know!” is followed up with a nucleus starting with but, which typically marks a concession or other adversative relation. In fact, the DM but is related to a concessive relation with another EDU (not shown), which the model is not aware of while making the classification for the preparation. Although this example reveals a weakness in the model's inability to consider broader context, it also reveals the difficulty of expecting DMs to fall in line with a strong nuclearity assumption: since units serve multiple functions as satellites and nuclei, signals which aid the recognition of one relation may hinder the recognition of another. Evaluation and Error Analysis ::: Performance on Signal Types To better understand the kinds of signals which the model captures better or worse, Table TABREF45 gives a breakdown of performance by signal type and specific signal categories, for categories attested over 20 times (note that the categories are human labels assigned to the corresponding positions – the system does not predict signal types). To evaluate performance for all types we cannot use recall@1–3, since some sentences contain more than 3 signal tokens, which would lead to recall errors even if the top 3 ranks are correctly identified signals. The scores in the table therefore express how many of the signal tokens belonging to each subtype in the gold annotations are recognized if we allow the system to make as many guesses as there are signal tokens in each EDU pair, plus a tolerance of a maximum of 2 additional tokens (similarly to recall@3). We also note that a single token may be associated with multiple signal types, in which case its identification or omission is counted separately for each type.
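To make these evaluation modes concrete, the following minimal Python sketch (not the authors' implementation; all function and variable names are invented for illustration) computes recall@k for a single EDU pair as well as the tolerance-based per-type recall just described, given a list of per-token ${\Delta }_s$ scores and the set of gold signal token indices.

```python
# Hedged sketch of the two evaluation modes described above, assuming that for
# each EDU pair we already have per-token Delta_s scores and the set of token
# indices that human annotators marked as signals.

def recall_at_k(delta_scores, gold_signal_idx, k):
    """Hit if any of the k highest-scoring tokens is an annotated signal token."""
    ranked = sorted(range(len(delta_scores)), key=lambda i: delta_scores[i], reverse=True)
    return int(any(i in gold_signal_idx for i in ranked[:k]))

def tolerant_signal_recall(delta_scores, gold_signal_idx, tolerance=2):
    """Per-type table evaluation: allow as many guesses as there are gold signal
    tokens, plus a small tolerance, and count how many gold tokens are recovered."""
    n_guesses = len(gold_signal_idx) + tolerance
    ranked = sorted(range(len(delta_scores)), key=lambda i: delta_scores[i], reverse=True)
    guessed = set(ranked[:n_guesses])
    hits = len(guessed & set(gold_signal_idx))
    return hits, len(gold_signal_idx)

# Hypothetical numbers: 6 tokens, tokens 2 and 5 annotated as signals.
scores = [0.01, -0.05, 0.40, 0.02, 0.10, 0.22]
gold = {2, 5}
print(recall_at_k(scores, gold, k=1))        # 1: the top-scoring token is a signal
print(tolerant_signal_recall(scores, gold))  # (2, 2): both signal tokens recovered
```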
Three of the top four categories which the model performs best for are, perhaps unsurprisingly, the most lexical ones: alternate expression captures non-DM phrases such as I mean (for elaboration), or the problem is (for concession), and indicative word includes lexical items such as imperative see (consistently marking evidence in references within academic articles) or evaluative adjectives such as interesting for evaluation. The good performance of the category colon captures the model's recognition of colons as important punctuation, primarily predicting preparation. The only case of a `relational' category, requiring attention to two separate positions in the input, which also fares well is synonymy, though this is often based on flagging only one of two items annotated as synonymous, and is based on rather few examples. We can find only one example, SECREF44, where both sides of a pair of similar words are actually noticed, both belonging to the same stem (decline/declining): . The report says the decline in iodine intake appears to be due to changes in the dairy industry, where chlorine-containing sanitisers have replaced iodine-containing sanitisers. $\xleftarrow[\text{pred:background}]{\text{gold:justify}}$ Iodine released from these chemicals into milk has been the major source of dietary iodine in Australia for at least four decades, but is now declining. [shading: darkest on “declining”, “but” and “decline”] We note that our evaluation is actually rather harsh towards the model, since in multiword expressions, often only one central word is flagged by ${\Delta }_s$ (e.g. problem in “the problem is”), while the model is penalized in Table TABREF45 for each token that is not recognized (i.e. the and is, which were all flagged by a human annotator as signals in the data). Interestingly, the model fares rather well in identifying morphological tense cues, even though these are marked by both inflected lexical verbs and semantically poor auxiliaries (e.g. past perfect auxiliary had marking background); but modality cues (especially can or could for evaluation) are less successfully identified, suggesting they are either more ambiguous, or mainly relevant in the presence of evaluative content words which out-score them. Other relational categories from the middle of the table which ostensibly require matching pairs of words, such as repetition, meronymy, or personal reference (coreference) are mainly captured by the model when a single item is a sufficiently powerful cue, often ignoring the other half of the signal, as shown in SECREF44.
. On a new website, " The Internet Explorer 6 Countdown ", Microsoft has launched an aggressive campaign to persuade users to stop using IE6 $\xleftarrow[\text{pred:elaboration}]{\text{gold:elaboration}}$ Its goal is to decrease IE6 users to less than one percent. [shading: darkest on “Its”, “percent”, “IE6” and “is”] Here the model has learned that an initial possessive pronoun, perhaps in the context of a subject NP in a copula sentence (note the shading of the following is) is an indicator of an elaboration relation, even though there is no indication that the model has noticed which word is the antecedent. Similarly for the count category, the model only learns to notice the possible importance of some numbers, but is not actually aware of whether they are identical (e.g. for restatement) or different (e.g. in contrast). Finally, some categories are actually recognized fairly reliably, but are penalized by the same partial substring issue identified above: Date expressions are consistently flagged as indicators of circumstance, but often a single word, such as a weekday in SECREF44, is dominant, while the model is penalized for not scoring other words as highly (including commas within dates, which are marked as part of the signal token span in the gold standard, but whose removal does not degrade prediction accuracy). In this case it seems fair to say that the model has successfully recognized the date signal of `Wednesday April 13', yet it loses points for missing two instances of `,', and the `2011', which is no longer necessary for recognizing that this is a date. . NASA celebrates 30th anniversary of first shuttle launch; $\xleftarrow[\text{pred:circumstance}]{\text{gold:circumstance}}$ Wednesday, April 13, 2011 [shading: darkest on “Wednesday”, “April” and “13”] Discussion This paper has used a corpus annotated for discourse relation signals within the framework of the RST Signalling Corpus (BIBREF12) and extended with anchored signal annotations (BIBREF27) to develop a taxonomy of unrestricted and hierarchically aware discourse signal positions, as well as a data-driven neural network model to explore distantly supervised signal word extraction. The results shed light on the distribution of signal categories from the RST-SC taxonomy in terms of associated word forms, and show the promise of neural models with contextual embeddings for the extraction of context dependent and gradient discourse signal detection in individual texts.
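The core computation behind this kind of signal extraction is straightforward to reproduce. The sketch below is a simplified illustration rather than the paper's implementation: it assumes a relation classifier that maps a sequence of token ids to label logits, and treats ${\Delta }_s$ as the drop in the gold label's softmax probability when a single token is replaced by a mask or unknown id; the exact formulation used in the paper may differ in detail.

```python
import torch

def delta_s_scores(model, tokens, gold_label_id, mask_id):
    """Hedged sketch: Delta_s for each token, computed as the drop in the
    classifier's softmax probability for the gold relation label when that
    token is masked. `model` is assumed to map a batch of token-id tensors
    of shape [1, seq_len] to relation logits of shape [1, num_labels]."""
    with torch.no_grad():
        base = torch.softmax(model(tokens.unsqueeze(0)), dim=-1)[0, gold_label_id]
        deltas = []
        for i in range(tokens.size(0)):
            masked = tokens.clone()
            masked[i] = mask_id  # replace token i with a mask/unknown id
            p = torch.softmax(model(masked.unsqueeze(0)), dim=-1)[0, gold_label_id]
            deltas.append((base - p).item())  # > 0: signal-like, < 0: distractor
    return deltas
```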
The metric developed for the evaluation, $\Delta _s$, allows us to assess the relative importance of signal words for automatic relation classification, and reveal observations for further study, as well as shortcomings which point to the need to develop richer feature representations and system architectures in future work. The model presented in the previous sections is clearly incomplete in both its classification accuracy and its ability to recognize the same signals that humans do. However, given the fact that it is trained entirely without access to discourse signal annotations and is unaware of any of the guidelines used to create the gold standard that it is evaluated on, its performance may be considered surprisingly good. As an approach to extracting discourse signals in a data-driven way, similar to frequentist methods or association measures used in previous work, we suggest that this model forms a more fine grained tool, capable of taking context into consideration and delivering scores for each instance of a signal candidate, rather than resulting in a table of undifferentiated signal word types. Additionally, although we consider human signal annotations to be the gold standard in identifying the presence of relevant cues, the ${\Delta }_s$ metric gives new insights into signaling which cannot be approached using manual signaling annotations. Firstly, the quantitative nature of the metric allows us to rank signaling strength in a way that humans have not to date been able to apply: using ${\Delta }_s$, we can say which instances of which signals are evaluated as stronger, by how much, and which words within a multi-word signal instance are the most important (e.g. weekdays in dates are important, the commas are not). Secondly, the potential for negative values of the metric opens the door to the study of negative signals, or `distractors', which we have only touched upon briefly in this paper. And finally, we consider the availability of multiple measurements for a single DM or other discourse signal to be a potentially very interesting window into the relative ambiguity of different signaling devices (cf. BIBREF16) and for research on the contexts in which such ambiguity results. To see how ambiguity is reflected in multiple measurements of ${\Delta }_s$, we can consider Figure FIGREF47. The figure shows boxplots for multiple instances of the same signal tokens. We can see that words like and are usually not strong signals, with the entire interquartile range scoring less than 0.02, i.e. aiding relation classification by less than 2%, with some values dipping into the negative region (i.e. cases functioning as distractors). However, some outliers are also present, reaching almost as high as 0.25 – these are likely to be coordinating predicates, which may signal relations such as sequence or joint. A word such as but is more important overall, with the box far above and, but still covering a wide range of values: these can correspond to more or less ambiguous cases of but, but also to cases in which the word is more or less irreplaceable as a signal. In the presence of multiple signals for the same relation, the presence of but should be less important. We can also see that but can be a distractor with negative values, as we saw in example SECREF43 above. 
As far as we are aware, this is the first empirical corpus-based evidence giving a quantitative confirmation to the intuition that `but' in context is significantly less ambiguous as a discourse marker than `and'; the overlap in their bar plots indicate that they can be similarly ambiguous or even distracting in some cases, but the difference in interquartile ranges makes it clear that these are exceptions. For less ambiguous DMs, such as if, we can also see a contrast between lower and upper case instances: upper case If is almost always a marker of condition, but the lower case if is sometimes part of an embedded object clause, which is not segmented in the corpus and does not mark a conditional relation (e.g. “they wanted to see if...”). For the word to, the figure suggests a strongly bimodal distribution, with a core population of (primarily prepositional) discourse-irrelevant to, and a substantial number of outliers above a large gap, representing to in infinitival purpose clauses (though not all to infinitives mark such clauses, as in adnominal “a chance to go”, which the model is usually able to distinguish in context). In other words, our model can not only disambiguate ambiguous strings into grammatical categories, but also rank members of the same category by importance in context, as evidenced by its ability to correctly classify high frequency items like `to' or `and' as true positives. A frequentist approach would not only lack this ability – it would miss such items altogether, due to its overall high string frequency and low specificity. Beyond what the results can tell us about discourse signals in this particular corpus, the fact that the neural model is sensitive to mutual redundancy of signals raises interesting theoretical questions about what human annotators are doing when they characterize multiple features of a discourse unit as signals. If it is already evident from the presence of a conventional DM that some relation applies, are other, less explicit signals which might be relied on in the absence of the DM, equally `there'? Do we need a concept of primary and auxiliary signals, or graded signaling strength, in the way that a metric such as ${\Delta }_s$ suggests? Another open question relates to the postulation of distractors as an opposite concept to discourse relation signals. While we have not tested this so far, it is interesting to ask to what extent human analysts are aware of distractors, whether we could form annotation guidelines to recognize them, and how humans weigh the value of signals and potential distractors in extrapolating intended discourse relations. It seems likely that distractors affecting humans may be found in cases of misunderstanding or ambiguity of discourse relations (see also BIBREF25). Finally, the error analysis for signal detection complements the otherwise opaque relation classification results in Table TABREF34 in showing some of the missing sources of information that our model would need in order to work better. We have seen that relational information, such as identifying not just the presence of a pronoun but also its antecedent, or both sides of lexical semantic relations such as synonymy, meronymy or antonymy, as well as comparing count information, are still unavailable to the classifier – if they were being used, then ${\Delta }_s$ would reflect the effects of their removal, but this is largely not the case. 
This suggests that, in the absence of vastly larger discourse annotated corpora, discourse relation recognition may require the construction of either features, architectures, or both, which can harness abstract relational information of this nature beyond the memorization of specific pairs of words (or regions of vector space with similar words) that are already attested in the limited training data. In this vein, BIBREF54 conducted a series of experiments on automatic sense prediction for four top-level implicit discourse relations within the PDTB framework, which also suggested benefits for using linguistically-informed features such as verb information, polarity tags, context, lexical items (e.g. first and last words of the arguments; first three words in the sentence) etc. The model architecture and input data are also in need of improvements, as the current architecture can only be expected to identify endocentric signals. The substantial amount of exocentric signaling cases is in itself an interesting finding, as it suggests that relation classification from head EDU pairs may ultimately have a natural ceiling that is considerably below what could be inferred from looking at larger contexts. We predict that as we add more features to the model and improve its architecture in ways that allow it to recognize the kinds of signals that humans do, classification accuracy will increase; and conversely, as classification accuracy rises, measurements based on ${\Delta }_s$ will overlap increasingly with human annotations of anchored signals. In sum, we believe that there is room for much research on what relation classification models should look like, and how they can represent the kinds of information found in non-trivial signals. The results of this line of work can therefore benefit NLP systems targeting discourse relations by suggesting locations within the text which systems should attend to in one way or another. Moreover, we think that using distant-supervised techniques for learning discourse relations (e.g. BIBREF55) is promising in the development of discourse models using the proposed dataset. We hope to see further analyses benefit from this work and the application of metrics such as ${\Delta }_s$ to other datasets, within more complex models, and using additional features to capture such information. We also hope to see applications of discourse relations such as machine comprehension BIBREF20 and sentiment analysis BIBREF55 etc. benefit from the proposed model architecture as well as the dataset.
influence of each word on the score of the correct relation, that impact should and does still correlate with human judgments
e74ba39c35af53d3960be5a6c86eddd62cef859f
e74ba39c35af53d3960be5a6c86eddd62cef859f_0
Q: Which baseline performs best? Text: Introduction The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. 
Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. Task Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let $p(t \mid S^{\prime }, C, \theta )$ be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we conduct an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold out 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. Models We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data.
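As a purely illustrative picture of this setup, the sketch below assembles supervised (context, target) pairs from such a corpus: for every citing sentence $t$ in a source document whose cited paper is also in the corpus, some source text and some cited text are concatenated around a separator to form the conditioning context, and $t$ is the generation target. All field names and the separator token are hypothetical; the actual S2-GORC schema and the context-selection strategies are described below.

```python
# Hypothetical sketch of assembling (context, target) pairs for citation text
# generation. Field names ("citing_sentences", "abstract", "cited_id", "text")
# are invented for illustration and do not reflect the real S2-GORC schema.

SEP = "<|cite|>"  # assumed separator token added to the tokenizer's vocabulary

def build_examples(corpus):
    """corpus: dict mapping paper_id -> paper record."""
    examples = []
    for source in corpus.values():
        for cite in source["citing_sentences"]:        # each in-text citation t
            cited = corpus.get(cite["cited_id"])
            if cited is None:                          # cited paper outside the corpus
                continue
            context = source["abstract"] + " " + SEP + " " + cited["abstract"]
            examples.append({"context": context, "target": cite["text"]})
    return examples
```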
Models ::: Neural Text Generation Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT2 model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $, i.e. $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$, for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used to find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. Models ::: Neural Text Generation ::: Context The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. Models ::: Retrieval with Approximate Nearest Neighbors While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training data which is closest to $(S,C)$.
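A rough sketch of this search loop is given below; it is not the authors' code, and it uses an exact sort as a stand-in for the approximate nearest neighbor step. The embed and distance callables stand in for the SciBERT-based abstract embeddings and the weighted cosine distance defined in the next paragraph (here the two pairwise distances are simply summed, corresponding to $\alpha =\beta =1$), and citations is a hypothetical map from each cited training paper to the (citing paper, citing sentence) pairs that reference it.

```python
# Illustrative sketch of the retrieval baseline's search procedure described above.

def retrieve_citing_sentence(S, C, citations, embed, distance, n_neighbors=10):
    # Step 1: nearest neighbors N_C of the cited document C among cited training papers.
    ranked = sorted(citations, key=lambda n_c: distance(embed(C), embed(n_c)))
    best_sentence, best_dist = None, float("inf")
    for n_c in ranked[:n_neighbors]:
        # Step 2: among papers N_S citing N_C, find the pair (N_S, N_C) closest to (S, C).
        for n_s, sentence in citations[n_c]:
            d = distance(embed(S), embed(n_s)) + distance(embed(C), embed(n_c))
            if d < best_dist:
                best_sentence, best_dist = sentence, d
    # Step 3: return the citing sentence t' from the closest training pair.
    return best_sentence
```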
We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as: where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. Models ::: Language Model Pretraining GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. Evaluation We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data are shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. 
We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. Analysis In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. Analysis ::: Errors In order to better understand the performance of the models, we undertake a quantitative analysis of its output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyze the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings makes sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. 
We find that in the cases where the model generated outputs are unconvincing they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. Analysis ::: Examples Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. Analysis ::: Future Work The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improved modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. Related Work The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. 
BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before finetuning a GPT2 model on the citation text generation task. Conclusion We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. Acknowledgements This research was supported by the Office of Naval Research under the MURI grant N00014-18-1-2670.
IR methods perform better than the best neural models
458f3963387de57fdc182875c9ca3798b612b633
458f3963387de57fdc182875c9ca3798b612b633_0
Q: Which baselines are explored? Text: Introduction The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. 
Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. Task Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. Models We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. 
Models ::: Neural Text Generation Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. Models ::: Neural Text Generation ::: Context The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. Models ::: Retrieval with Approximate Nearest Neighbors While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training which is closest to $(S,C)$. 
We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as $D\big ((S,C),(N_S,N_C)\big ) = \alpha \, d_{\cos }(S, N_S) + \beta \, d_{\cos }(C, N_C)$, where $d_{\cos }$ denotes cosine distance and $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set.

Models ::: Language Model Pretraining GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both the SciGPT2 and SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained.

Evaluation We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BERTScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BERTScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data is shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However, we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models.
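The weighted retrieval distance defined above, whose $\alpha $ and $\beta $ parameters are also what the evaluation tunes, can be sketched directly. The code below is an illustrative assumption about how candidate $(N_S, N_C)$ pairs might be ranked; the embedding step (e.g., averaged SciBERT vectors) is assumed to have happened upstream and vectors are assumed to be L2-normalized.

```python
# Hedged sketch of weighted nearest-neighbor retrieval of citing sentences.
import numpy as np

def cosine_distance(u: np.ndarray, v: np.ndarray) -> float:
    return 1.0 - float(np.dot(u, v))        # vectors assumed L2-normalized

def pair_distance(src, cited, cand_src, cand_cited, alpha=1.0, beta=1.0) -> float:
    """Distance between (S, C) and a candidate (N_S, N_C)."""
    return alpha * cosine_distance(src, cand_src) + beta * cosine_distance(cited, cand_cited)

def retrieve_citing_sentence(src, cited, candidates, alpha=1.0, beta=1.0) -> str:
    """candidates: iterable of (cand_src_vec, cand_cited_vec, citing_sentence)."""
    best = min(candidates, key=lambda c: pair_distance(src, cited, c[0], c[1], alpha, beta))
    return best[2]
```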
We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used.

Analysis In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples.

Analysis ::: Errors In order to better understand the performance of the models, we undertake a quantitative analysis of their output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers on the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyzed the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how the believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but this is unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis are presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings make sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs, while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15.
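The bootstrap comparison behind these significance statements (1000 resampled examples in each of 100 iterations) can be sketched as follows. The exact resampling and decision rule used by the authors are not fully specified, so the function below is an illustrative assumption operating on per-example metric scores of two systems.

```python
# Hedged sketch of a paired bootstrap comparison between two systems.
import random

def bootstrap_win_rate(scores_a, scores_b, n_iterations=100, n_samples=1000) -> float:
    """Fraction of bootstrap iterations in which system A's mean score beats B's."""
    assert len(scores_a) == len(scores_b)
    idx = list(range(len(scores_a)))
    wins = 0
    for _ in range(n_iterations):
        sample = random.choices(idx, k=n_samples)       # resample with replacement
        mean_a = sum(scores_a[i] for i in sample) / n_samples
        mean_b = sum(scores_b[i] for i in sample) / n_samples
        wins += mean_a > mean_b
    return wins / n_iterations
```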
We find that in the cases where the model-generated outputs are unconvincing, they are still on topic. All 10 cases in the Source, One Visible condition and 4 of the cases in the Cited, One Visible condition that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus.

Analysis ::: Examples Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such as “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty.

Analysis ::: Future Work The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improving the modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques.

Related Work The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others.
BIBREF3 condense this ontology to just three: “background”, “method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an existing citation link, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history of research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first work to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-specific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before fine-tuning a GPT2 model on the citation text generation task.

Conclusion We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text.

Acknowledgements This research was supported by the Office of Naval Research under the MURI grant N00014-18-1-2670.
GPT2, SciBERT model of BIBREF11
69a88b6be3b34acc95c5e36acbe069c0a0bc67d6
69a88b6be3b34acc95c5e36acbe069c0a0bc67d6_0
Q: What is the size of the corpus?
8.1 million scientific documents, 154K computer science articles, 622K citing sentences
7befb7a8354fca9d2a94e3fd4364625c98067ebb
7befb7a8354fca9d2a94e3fd4364625c98067ebb_0
Q: How was the evaluation corpus collected? Text: Introduction The output of the world's scientists doubles roughly every nine years BIBREF0, and their pace is quickening. As a result, scientists and other experts must devote significant time to the difficult task of literature review, or coming to understand the context in which they work. Might artificial intelligence help to reduce that time? Several lines of research seek to do so. Citation recommendations systems BIBREF1, BIBREF2, BIBREF3 suggest references to relevant published work for a given document such as a current draft. Summarization systems BIBREF4, BIBREF5 condense the information in one or more documents, allowing researchers to more quickly understand the basic ideas in a piece of research. We introduce a complementary—but so far unaddressed—problem, citation text generation, where the relationship between a document and one or several others is expressed in natural language text. This differs from traditional summarization in that the primary focus is explaining the relationship between the two documents rather than their content. Automatically describing inter-document relationships could dramatically decrease the time researchers devote to literature review. For instance, a new paper could be explained in terms of its relationships to relevant works that a particular reader is most familiar with, rather than just those which the authors elected to cite (personalization). Further, such technology could be incorporated into writing assistance systems to help less experienced or non-native writers better articulate the connection between their work and prior art. Additionally, users of citation recommendation systems can benefit from natural language explanations of recommendation system choices. Beyond the immediate utility of citation text generation systems, the task offers significant challenges for language understanding and generation research. A major challenge is how to represent the information in one or more scientific texts. These documents are longer than those in most other domains typically studied in NLP, and make use of a long-tailed, open-domain technical vocabulary. Often an important phrase in the citing sentence output occurs only in a specific cited document and not elsewhere in the corpus. This requires a model that can learn phrase meanings from very few exposures, an important but unsolved problem for text generation systems. Possibly more challenging is understanding and expressing the various and nuanced relationships between related scientific works. In this work, we introduce the task of citation text generation. Leveraging the full texts of English-language scientific articles, we construct a dataset of citation sentences in the computer science domain for training and evaluating citation text generation models. We investigate strong retrieval and neural baseline models against which future work can compare. For use cases where large models can be trained, we extend the successful GPT2 architecture BIBREF6 to the scientific domain with additional pre-training and subsequent fine-tuning on the citation generation task. We experiment with different kinds of document context in the fine-tuning and inference stages. We also explore retrieval-based techniques which may more easily generalize to lower-resource settings. These models retrieve citation sentences from training documents which are most similar to test inputs. 
Our evaluations show that these techniques often produce plausible citation sentences, but indicate clear directions for improvement. Code and artifacts are provided for future research. Task Given the important research challenges posed by the citation text generation task, along with the potential social benefits of its solutions, let us continue with a formalization of the problem. Citation text generation is the task of generating a natural language citing sentence which explains the relationship between two documents. Examples of such citing sentences can be found in scientific documents as in-text citations to a previous work. Thus, we will formally distinguish one document as the source document, from which we will draw citing sentences which reference the cited document. If we want to leverage powerful modern neural text generation systems, we are faced with the problem of how to represent the documents in a way that these models can consume. In particular, language models like GPT2 are trained to predict next token probabilities given long stretches of contiguous text from a single document. It is not clear how to mix information from more than one document when providing context to these models. An additional difficulty of the citation text generation task is the vocabulary. In this domain, low-frequency, highly meaningful terms regularly appear in output texts. These terms may be completely novel to a single or small collection of papers (consider the phrase “citation text generation”, for instance), yet they are necessary for explaining the paper. This framing suggests a supervised learning setup. Let $t$ denote a citing sentence drawn from $S$, and $S^{\prime }$ denote $S$ without $t$. Then let be the probability of $t$ given $S^{\prime }$, cited document $C$, and model parameters $\theta $. The goal of learning a citation text generation model would be to maximize this probability across a large number of $t,S,C$ triples, so long as the parameters also generalize to unseen instances. At inference time, the goal is to generate a sentence $t^\ast $ which accurately describes the relationship between $S$ and $C$. The most appropriate evaluation metric for most text generation tasks is human judgment by potential users of the system. Evaluating citation text requires human judges with scientific expertise. For exploratory purposes, we use the standard automatic metrics for text generation tasks described in Section SECREF4, and we an expert error analysis in Section SECREF14. For source and cited documents, we use English-language computer science articles and annotation from the S2-GORC dataset BIBREF7. S2-GORC is a large citation graph dataset which includes full texts of 8.1 million scientific documents. We select a subset of 154K computer science articles as our corpus. From these, we extract 622K citing sentences that link back to other documents in our corpus. We hold 2500 examples for each of the validation and test sets. Detailed statistics can be found in Table TABREF4. Models We explore two basic styles of model for citation text generation. Following current work in neural text generation, we fine-tune the predictions of a large pre-trained language model to the citation text generation task. Additionally, we investigate approximate nearest neighbor methods to retrieve plausible human-authored citation sentences from the training data. 
Models ::: Neural Text Generation Recent work has shown that adapting large pre-trained language models to text generation tasks yields strong results BIBREF8. Due to its widespread use in text generation, we investigate the GPT model of BIBREF6 for the citation text generation task. GPT2 is a transformer model trained on 40 gigabytes of internet text with a language modeling objective BIBREF9. The adaptation process, called fine-tuning, involves continued training of the model on the target objective, in our case citation text generation. To fine-tune GPT2 for text generation, it is typical to concatenate the conditioning context $X = x_1 \ldots x_n$ and citing sentence $Y = y_1 \ldots y_m$ with a special separator token $\mho $. The model learns to approximate next token probabilities for each index after $\mho $: for $0<i<m$ and model parameters $\theta $. Cross-entropy loss is calculated for each $y_i$ and backpropagation is used find parameters $\theta $ which maximize $p(y_{i+1} \mid X,\mho ,y_1,\ldots ,y_i)$. To adapt Equation DISPLAY_FORM6 to the citation text generation task, we construct the conditioning context $X$ from the source and cited documents. We take $j$ tokens from source document $s_1,\ldots ,s_j$ along with $k$ tokens from the cited document $c_1,\ldots ,c_k$. (Which tokens are drawn from the two documents is an independent variable that we explore experimentally.) We then condition the generation of citing sentence $Y$ on $X = s_1,\ldots ,s_j,\mho ,c_1,\ldots ,c_k$. This model is trained to predict each token of $Y$ as described above. Models ::: Neural Text Generation ::: Context The primary question we investigate with this model is what kind of input is best for generating accurate and informative citation sentences. Prior works in citation recommendation have made use of abstracts, which perhaps act as sufficient summaries of document content for this task. Additionally, we explore variants of extended context, such as the introduction or first section after the abstract. Since scientific texts are too long to fit into the context window of our generation model, we also investigate a “sampling” approach which samples sentences from throughout the document until the context window is full. In this work, we combine either the abstract or introduction of the source document with each of the abstract, introduction, or sampled sentences from the cited document. Models ::: Retrieval with Approximate Nearest Neighbors While neural text generation techniques have advanced significantly in recent years, they are still inferior to human authored texts. For some tasks, it is better to retrieve a relevant human-authored text rather than generating novel text automatically BIBREF10. Is this also the case for citation text generation? To answer this question, we adapt an approximate nearest neighbor search algorithm to find similar pairs of documents. The basic search procedure is as follows: Given a test instance input $(S,C)$ for source $S$ and cited document $C$, we find the set $\bf {N}_C$, the nearest neighbors to $C$ in the training data. For each document $N_C$ from $\bf {N}_C$, let $\bf {N}_S$ be the set of documents that cite $N_C$. This means that each $N_S \in {\bf N}_S$ contains at least one citing sentence $t^{\prime }$ which cites $N_C$. We return the $t^{\prime }$ associated with the $(N_S,N_C)$ pair from the training which is closest to $(S,C)$. 
We measure the closeness of two pairs of documents by measuring cosine distances between vector representations of their content. The abstract of each document is embedded into a single dense vector by averaging the contextualized embeddings provided by the SciBERT model of BIBREF11 and normalizing. The distance between $(S,C)$ and candidate $(N_S,N_C)$ is computed as: where $\alpha $ and $\beta $ control the relative contribution of the two document similarities. We explore setting both $\alpha $ and $\beta $ to 1, or tuning them to optimize either BLEU or BERTScore on the validation set. Models ::: Language Model Pretraining GPT2-based models have demonstrated an ability to capture long distance dependencies over hundreds of tokens, which we hypothesize will allow them to synthesize information in both the source and cited documents. But citation text generation models must also handle the challenging technical vocabulary of the scientific domain. Prior work has shown that pretraining on in-domain data improves the performance of large language models on domain-specific tasks BIBREF11. Inspired by this, we experiment with additional pretraining of GPT2 in the science domain. This model, SciGPT2, is trained for an additional 3 epochs over the full text of the documents in our corpus using a language modeling objective. We note that both SciGPT2 and the SciBERT language models used here have been exposed to citing sentences from the test and validation sets as in-line citations during their pre-training phases, which may improve their performance versus models without this exposure. Such exposure is typical when using pretrained language models, as text from test data cannot be guaranteed to be absent from the large task-independent corpora upon which these models are trained. Evaluation We compare the different baseline systems using BLEU BIBREF12, ROUGE (specifically ROUGE 1, 2, and L; BIBREF13), and the recently introduced BertScore BIBREF14, a similarity metric based on BERT embeddings which has been shown to correlate well with human judgements on other tasks. To adapt the BertScore metric to the scientific text domain, we use SciBERT embeddings. Table TABREF7 (above the double line) shows the performance of the SciGPT2 model on the test set when provided with the different input context combinations outlined in Section SECREF5. We find that context does make a difference for this category of model, and that models which have access to the intro of the documents outperform those which use abstracts or sampling. Automatic evaluation of the retrieval-based methods on the test data are shown below the double line in Table TABREF7. This table shows that the retrieval methods perform well on this task. However we will show the limitations of these automatic metrics in Section SECREF14. We also observe that tuning the $\alpha $ and $\beta $ parameters on the validation set results in overfitting for this method. Outputs are largely unchanged by this tuning; fewer than 400 test datapoints differ from the untuned outputs. A larger validation split may alleviate this problem. Statistical significance is assessed for select results using bootstrapping with 1000 samples in each of 100 iterations. This test shows that conditioning on the introduction of the source document improves performance compared to conditioning on the abstract when using the SciGPT2 model. However, we see that IR methods perform better than the best neural models. 
We do not find enough evidence to reject the null hypothesis regarding what context from the cited document should be used. Analysis In this section we take a closer look at the details of the SciGPT2 and IR system outputs on a collection of validation datapoints. We provide a quantitative error analysis as well as qualitative analysis and examples. Analysis ::: Errors In order to better understand the performance of the models, we undertake a quantitative analysis of its output. One author randomly selected 200 datapoints from the validation set and their associated model outputs. Source and cited papers in the topic of NLP were used so as to facilitate expert judgement. For tractability, we limited the context presented to the annotator to the document abstracts and analyze the outputs of the abs $\times $ abs and IR systems. In this analysis, we ask whether the models are producing believable citing sentences given their input. In particular, we are interested in the relative believability of the SciGPT2 and IR systems, as well as how believability of a citing sentence changes when a reader can see the abstract of one document or both. We use 100 datapoints with outputs from the SciGPT2 system and 100 with outputs from the IR system. For 50 datapoints from each system, the cited document's abstract is initially masked such that only the source context is visible (Source, One Visible). Based only on the source context, the annotator judged whether the model output (1) could have convincingly been a citation in the source document based solely on the abstract (believable), (2) could have been a citation in the source document, but unclear from the abstract alone and depends on the rest of the paper's content (content-dependent), or (3) is unlikely to appear in this document (not believable). After making this judgment, the annotator was then shown the abstract of the cited document and asked to make the 3-way believability judgment based on both source and cited abstracts (Source, Both Visible). This process is repeated with the remaining 50 datapoints, but with the cited context masked initially (Cited, One Visible and Cited, Both Visible). The results of our analysis presented in Table TABREF13. We find that believability in the Cited, One Visible condition correlates well with the Cited, Both Visible condition. In the Source conditions, we see a greater difference in believability between One Visible and Both Visible. These findings makes sense: in-line citations often summarize a prior study rather than highlight the paper's own contributions. Together, these results indicate that the believability of citing sentences is more related to the cited document than to the source. Another interesting feature of this analysis is the difference between SciGPT2 and IR in terms of context-dependent citing sentences. We observe fewer such judgements in the IR outputs. This is probably due to the fact that neural text generation systems such as SciGPT2 will sometimes produce generic, uninformative outputs while the IR system outputs are usually specific enough that a stronger believability judgement can be made. We also observe an overall higher instance of not believable judgements of the IR model outputs. This implies that automatic metrics such as BLEU, where the IR system scored higher than SciGPT2, do not correlate with factual accuracy in citation text generation. Example citations and annotations are shown in Table TABREF15. 
We find that in the cases where the model generated outputs are unconvincing they are still on topic. All 10 cases in the Source, One Visible and 4 of the cases in Cited, One Visible that were no longer believable in the Both Visible conditions exhibit this quality. A common example (4 cases) of this phenomenon occurs when the model output references a dataset. While the dataset would be potentially relevant to both papers, the cited papers focus on modeling contributions and do not introduce a novel corpus. Analysis ::: Examples Example system outputs for randomly selected validation instances are shown in Table TABREF18. We see that both the SciGPT2 and IR model outputs regularly hit on the correct broad topic of the cited text (such “literary analysis” or “image captioning evaluation metrics”). It is notable that the SciGPT2 model outputs syntactically correct and coherent citation sentences, even given the difficulty of the vocabulary in this domain. This is a testament to the power of the domain-specific language model training. We also observe that the outputs of the SciGPT2 model are often shorter than the desired citing sentence. Brevity is a known issue for neural text generation and may be alleviated by penalizing brevity in the inference procedure. More problematic are the factual errors in the generated text. In the last example, for instance, we see that SciGPT2 fails to cite the specific image captioning dataset described in the cited paper (Pascal1K) and instead focuses on the more general evaluation metric for the image captioning task (CIDEr). This is typical of neural text generation systems, which often assign high probability to generic or frequent phrases and revert to these in the face of uncertainty. Analysis ::: Future Work The fluency and topical relevance of the baseline models show the plausibility of the citation text generation task as well as the utility of including pretrained scientific language models in future models. But based on the kinds of errors we have seen, future work should focus on two complementary goals: ensuring the factual accuracy of the generated text and improved modeling of the cited document. Factual accuracy is difficult to enforce in statistical text generation systems, especially where inference includes sampling procedures. Grounding to knowledge bases could help. For this task, knowledge extracted from candidate generations could be compared with knowledge from the full source and cited documents to prune false or irrelevant statements. Further, modeling input documents as knowledge graphs of their contents may help these algorithms better understand the cited document, resulting in better outputs. However, such a model will have to address the open problem of combining pretrained language models with graph encoding techniques. Related Work The current work builds on recent research in scientific document understanding, including citation recommendation and categorization, as well as scientific document summarization. Citation recommendation, or the task of selecting works related to a source document which would be suitable for citing, is a longstanding goal of AI research BIBREF15, BIBREF2, BIBREF16. Recently, researchers have sought to categorize citations using various ontologies of citation intents. BIBREF1 sought to discern “highly influential” citations from others. BIBREF17 uses six categories including “motivation”, “uses”, and “future work” among others. 
BIBREF3 condense this ontology to just three: “background”,“method”, and “result comparison”. We view the citation text generation task as an extension of these classification approaches with distinct advantages. While classification requires an extant citation link to exist, our generation task can describe possible relationships between works which do not cite each other, such as contemporaneous works. Additionally, because gold citation texts are readily available in scientific documents, the citation text generation task requires no task-specific annotated training data. In practice, citation classification is used to assist in suggesting relevant works to researchers; citation text generation complements this goal by providing rationales for the recommendation and furthering progress toward explainable AI. Generating a citation is also connected to summarizing scientific documents. There is a long history research on summarizing scientific documents BIBREF18, BIBREF19. More recently, researchers have included citing sentences as part of the input for summarization, hoping to capture the contribution of a work along with its content BIBREF20, BIBREF21, BIBREF5. Ours is the first to focus on the specific relationship between two documents when generating such sentences. Because of the emphasis on relational document understanding in our task, citation generation models can be used to assist with drafting papers as well, reducing researcher workload and providing non-native writers with a helpful first draft. Our work builds on recent advances in transfer learning in NLP. In particular, large pretrained models such as BERT BIBREF22 and GPT2 BIBREF6 have made strong advances on a number of tasks BIBREF23. It has also been shown that pretraining these models on domain-specific data further improves results on domain-speicific tasks BIBREF11, BIBREF24. In this work, we apply that methodology by adding an additional pretraining phase on in-domain data before finetuning a GPT2 model on the citation text generation task. Conclusion We have introduced the challenging but useful task of citation text generation. This task requires reasoning about the relationships between documents and expressing these relationships in natural language text. We have established a dataset for this task and studied the performance of contemporary neural text generation and information retrieval models. Our analysis shows that while these models produce fluent and topical outputs, more research is needed to ensure factual accuracy and specificity in the generated text. Acknowledgements This research was supported by the Office of Naval Research under the MURI grant N00014-18-1-2670.
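For readers who want to reproduce the general recipe referred to above (an additional in-domain pretraining phase followed by fine-tuning GPT-2 on context-to-citation sequences), a minimal Python sketch is given below. It is not the authors' implementation: the HuggingFace classes are real, but the batching, the separator-token scheme, and the corpus variables are placeholders of ours.

import torch
from torch.optim import AdamW
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                      # GPT-2 has no pad token by default
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = AdamW(model.parameters(), lr=5e-5)

def lm_step(texts):
    # One causal language-modelling update on a batch of raw strings.
    batch = tok(texts, return_tensors="pt", padding=True, truncation=True, max_length=512)
    labels = batch["input_ids"].clone()
    labels[batch["attention_mask"] == 0] = -100    # ignore padding positions in the loss
    loss = model(input_ids=batch["input_ids"],
                 attention_mask=batch["attention_mask"],
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Phase 1 (domain pretraining): iterate lm_step over batches of raw scientific text.
# Phase 2 (task fine-tuning): iterate lm_step over sequences that concatenate the
# source context, the cited context, and the gold citing sentence, e.g.
#   "<source> ... <cited> ... <cite> the citing sentence <|endoftext|>"
# where <source>/<cited>/<cite> stand for whatever separator scheme one chooses.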
Unanswerable
da1994421934082439e8fe5071a01d3d17b56601
da1994421934082439e8fe5071a01d3d17b56601_0
Q: Are any machine translation sysems tried with these embeddings, what is the performance? Text: Introduction Methods for machine translations have been studied for years, and at the same time algorithms to generate word embeddings are becoming more and more accurate. Still, there is a lot of research aiming at unifying word embeddings across multiple languages. In this experience we try a technique for machine translation that relates word embeddings between two different languages. Based on the literature we found that it is possible to infer missing dictionary entries using distributed representations of words and phrases. One way of doing it is to create a linear mapping between the two vector spaces of two different languages. In order to achieve this, we first built two dictionaries of the two different languages. Next, we learned a function that projects the first vector space to the second one. In this way, we are able to translate every word belonging to the first language into the second one. Once we obtain the translated word embedding, we output the most similar word vector as the translation. The word embeddings were learnt using the Skip Gram method proposed by (Mikolov et al., 2013a). An example of how the method would work is reported in figure 1 and figure 2. After creating the word embeddings from the two dictionaries, we plotted the numbers in the two graphs using PCA. Figure 3 reports the results after creating a linear mapping between the embeddings from the two languages. You can see how similar words are closer together. Related Work In recent years, various models for learning cross-lingual representations have been proposed. Two main broad categories with some related papers are identified here: Monolingual mapping: In this approach, models are trained using word embeddings from a monolingual corpora. Then, an objective function is used to minimize a linear mapping that enable them to map unknown words from the source language to the target language. Pseudo-cross-lingual: In this case we create a pseudo-cross-lingual corpus by mixing contexts of different languages. We then train an off-the-shelf word embedding model on the created corpus. Ideally the cross-lingual contexts allow the learned representations to capture cross-lingual relations. Dataset In the literature, two main types of datasets are used for machine translation: Word-aligned data and Sentence-aligned data. The first one is basically a dictionary between the two languages, where there is a direct relation between same words in different languages. The second one has the relation between corresponding sentences in the two languages. We decided to start with the sentence aligned corpus, since it was more interesting to infer dependency from contexts among words. For our experiment we decided to use the Europarl dataset, using the data from the WMT11 .The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. For this experience, we used the English-French parallel corpus, which contains 2,007,723 sentences and the English-Italian corpus, that contains 1,909,115 sentences. Linear Mapping This represents the linear function that maps word embeddings from one language to another. 
Given a word x in one language, and the respective word in the other language z, the following equation reports the objective function we want to minimize: Were W represents the final matrix that will contain the values for the mapping. Normalization Since the word embeddings in a single language are based on the cosine similarity, we realized that through the objective function reported in equation 1 we were losing the property of this similarity. As a result we wanted the dot products to be preserved after the mapping. In order to do that we normalized the vectors $x_i$ and $z_i$.The new objective function looks like: In order to preserve the dot products after the linear mapping we also had to constrain W to be an orthogonal matrix. In order to orthogonalize the matrix, it's required to solve the following optimization problem: One can show that this problem can be solved by taking the singular value decomposition (SVD) of W and replacing the singular values to ones. This approximation would only work when the dimensions of the source vector and the target vector are the same, which is the set up we are working with. So finally, the method that we tried for this experience uses normalized vectors, and is reduced by using the cosine similarity function. Objective Function Taking equation DISPLAY_FORM9 we expanded it getting the following equation: Which is really easy to show that through some simplifications will result into: Where cos represents the cosine similarity between the two embeddings. Setup Description For this experience we tried monolingual mapping using the Europarl Dataset and the μtopia parallel corpus . Concerning the preprocessing, we tokenized the text into single words, and every number was substituted with a NUM symbol. In addition, all the special characters were removed. To obtain the dictionaries, we used the words from the English corpora and translated them into the target languages using the Google translate API. In this way we built two dictionaries with corresponding words in the two languages, extracted from the same parallel corpora. Methods ::: Skipgram It was recently shown that the distributed representations of words capture surprisingly many linguistic regularities, and that there are many types of similarities among words that can be expressed as linear translations (Mikolov et al., 2013c). In the Skip-gram model, the training objective is to learn word vector representations that are good at predicting its context in the same sentence (Mikolov et al., 2013a). The objective function that skip gram tries to minimize is the following: Where N represents the total number of words in a sentence, P the probability of a word at the position i+j to belong to the sentence, with respect to the word i. Ideally, by using this approach we will be able to provide non-trivial translations that will be related to the context of a word. Methods ::: Minimizing the loss function In order to minimize the loss function we decided to setup a neural network and reduce the objective by using Stochastic Gradient Descent. Figure 4 represents the architecture of our base model. We repeated the same process both for objective functions expressed in equation DISPLAY_FORM8 and DISPLAY_FORM12. We started by getting an English word from the English corpus and we got the corresponding target word by using Google Translate. If this word was contained in the target corpus, this pair of words was used to train the model; otherwise, it was ignored. 
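To make the mapping step concrete, the following numpy sketch (ours, not the authors' released code) learns W by stochastic gradient descent on the squared objective and then orthogonalizes it by taking the SVD and replacing the singular values with ones, as described above. It assumes the dictionary-aligned embedding matrices X (source) and Z (target) are already loaded with length-normalized rows; under that normalization and an orthogonal W, minimizing the squared loss is equivalent to maximizing the cosine-similarity objective.

import numpy as np

def learn_mapping(X, Z, epochs=50, lr=0.1, batch_size=256, seed=0):
    # X, Z: (n_pairs, dim) arrays of length-normalized embeddings, row-aligned so
    # that Z[i] is the embedding of the translation of the word embedded in X[i].
    rng = np.random.default_rng(seed)
    n, dim = X.shape
    W = rng.normal(scale=0.01, size=(dim, dim))
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            x, z = X[idx], Z[idx]
            diff = x @ W.T - z                       # residuals W x_i - z_i
            grad = 2.0 * diff.T @ x / len(idx)       # gradient of the mean squared loss
            W -= lr * grad
    return W

def orthogonalize(W):
    # SVD of W with all singular values replaced by one: the closest orthogonal
    # matrix, which preserves dot products (and hence cosine similarities).
    U, _, Vt = np.linalg.svd(W)
    return U @ Vt

# Usage sketch (embedding and dictionary loading omitted):
# W = orthogonalize(learn_mapping(X_en, Z_fr))
# translated_vec = W @ x_en     # mapped embedding of an English word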
We then get the two embeddings and get the result by multiplying the first one by the matrix W and then substracting the second embedding. For this experiment, we used embeddings with a dimension of 100. For this reason W is a matrix of size 100*100. Results ::: Accuracy The process for checking the accuracy of the model consisted of first taking a random subset of the corpus that was not used for training. This subset consists of slightly more common words because the more common words should have more accurate word embeddings. Infrequent words (like websites and serial numbers) would not necessarily have accurate word embeddings that were generated through the word2vec model. The English word embedding was multiplied by the transformation matrix to create the predicted translated word embedding. The cosine similarity was then generated with each translated word embedding stored in the corpus. The 20 words with the highest cosine similarity were then outputted. Then the English word was translated through Google Translate and compared to the 20 outputted translated words. It was considered a match if the translated word was found in this list of 20 similar words. The reason why the translated word was not just compared to the most similar embedding was because each word could have multiple semantic meanings or have different synonyms in the other language. After running this accuracy formulation, results for different languages and objective functions are reported in Table TABREF18. In the table you can see the different results we obtained across different languages. You can notice that we increased the number of languages from the first review of our paper. In addition, the table shows the comparison between our baseline method (Least Squares) and our final one (Normalized vectors, with cosine similarity), which shows a slight improvement with respect to the first one. Error Analysis ::: Chinese Translations It's easy to notice how the performance with the Chinese translations was much lower with respect to the other languages. One of the reason is that even the tokenization of the chinese language is not trivial. While using a standard library, we noticed that some words were tokenized ambiguously. Another problem was getting the specific translation from Google translate. A lot of words might have very similar meanings, and Google Translate is not as accurate as it is with the other languages we have worked with. One reason is that Chinese is structured in a very different way compared to the other western languages, which share common roots. For example, a simple word like "Yes", does not have a direct translation in Chinese. Error Analysis ::: Google Translate We noticed that the model was performing worse than expected. For this reason we studied what was the main source of error. It was interesting that the model was able to always predict very close French words in French that had the same meaning as the English ones. The problem was that they did not completely match the ones obtained from Google translate. In fact, it resulted that often, the Google translated word was not the most accurate one. Table 2 reports two of these examples. The first one is represented by the word help. All the three translations are pretty accurate, especially the first one, which is the right translation for the verb "to help". The problem is the translation provided by Google, which literally means "Help Me". The second example reports the word "fire". 
In French there are different words to express the concept of a fire, the concept of an apartment "on fire" or the verb "fire a gun". All the translations provided by the model represent the different meanings that the English word has. This problem was alleviated by translating the 20 outputted French words back into English and comparing those with the original English word. By doing this process, the accuracy of the system increased to 47%. We expected the results of the cosine similarity objective function to produce a higher accuracy, but in reality, we achieved slightly worse results. The Code The code can be found at https://github.com/MarcoBerlot/Languages_for_Machine_Translation. The Predictive model file contains all the implementations, from the feature engineering to the training of the model. References Linear projection (Mikolov et al., 2013) Lexicon Projection via CCA (Faruqui and Dyer, 2014) Normalisation and orthogonal transformation (Xing et al., 2015) Alignment-based projection (Guo et al., 2015)
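Putting the evaluation protocol described above into code, the sketch below ranks the target vocabulary by cosine similarity to the mapped embedding, keeps the 20 best candidates, and optionally applies the back-translation check that raised accuracy to 47%. The loading of the target vocabulary and the back_translate callable (for example, a wrapper around a translation API) are our own assumptions, not part of the paper.

import numpy as np

def top_k_candidates(x_src, W, target_matrix, target_words, k=20):
    # Rank target-language words by cosine similarity to W @ x_src.
    pred = W @ x_src
    pred = pred / np.linalg.norm(pred)
    tm = target_matrix / np.linalg.norm(target_matrix, axis=1, keepdims=True)
    sims = tm @ pred
    best = np.argsort(-sims)[:k]
    return [target_words[i] for i in best]

def is_correct(word_src, candidates, reference_translation, back_translate=None):
    # A prediction counts as correct if the reference translation appears among
    # the top-k candidates; when back_translate is supplied, a candidate that
    # translates back to the original source word also counts as correct.
    if reference_translation in candidates:
        return True
    if back_translate is not None:
        return any(back_translate(c) == word_src for c in candidates)
    return False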
No
30c6d34b878630736f819fd898319ac4e71ee50b
30c6d34b878630736f819fd898319ac4e71ee50b_0
Q: Are any experiments performed to try this approach to word embeddings? Text: Introduction Methods for machine translations have been studied for years, and at the same time algorithms to generate word embeddings are becoming more and more accurate. Still, there is a lot of research aiming at unifying word embeddings across multiple languages. In this experience we try a technique for machine translation that relates word embeddings between two different languages. Based on the literature we found that it is possible to infer missing dictionary entries using distributed representations of words and phrases. One way of doing it is to create a linear mapping between the two vector spaces of two different languages. In order to achieve this, we first built two dictionaries of the two different languages. Next, we learned a function that projects the first vector space to the second one. In this way, we are able to translate every word belonging to the first language into the second one. Once we obtain the translated word embedding, we output the most similar word vector as the translation. The word embeddings were learnt using the Skip Gram method proposed by (Mikolov et al., 2013a). An example of how the method would work is reported in figure 1 and figure 2. After creating the word embeddings from the two dictionaries, we plotted the numbers in the two graphs using PCA. Figure 3 reports the results after creating a linear mapping between the embeddings from the two languages. You can see how similar words are closer together. Related Work In recent years, various models for learning cross-lingual representations have been proposed. Two main broad categories with some related papers are identified here: Monolingual mapping: In this approach, models are trained using word embeddings from a monolingual corpora. Then, an objective function is used to minimize a linear mapping that enable them to map unknown words from the source language to the target language. Pseudo-cross-lingual: In this case we create a pseudo-cross-lingual corpus by mixing contexts of different languages. We then train an off-the-shelf word embedding model on the created corpus. Ideally the cross-lingual contexts allow the learned representations to capture cross-lingual relations. Dataset In the literature, two main types of datasets are used for machine translation: Word-aligned data and Sentence-aligned data. The first one is basically a dictionary between the two languages, where there is a direct relation between same words in different languages. The second one has the relation between corresponding sentences in the two languages. We decided to start with the sentence aligned corpus, since it was more interesting to infer dependency from contexts among words. For our experiment we decided to use the Europarl dataset, using the data from the WMT11 .The Europarl parallel corpus is extracted from the proceedings of the European Parliament. It includes versions in 21 European languages: Romanic (French, Italian, Spanish, Portuguese, Romanian), Germanic (English, Dutch, German, Danish, Swedish), Slavik (Bulgarian, Czech, Polish, Slovak, Slovene), Finni-Ugric (Finnish, Hungarian, Estonian), Baltic (Latvian, Lithuanian), and Greek. For this experience, we used the English-French parallel corpus, which contains 2,007,723 sentences and the English-Italian corpus, that contains 1,909,115 sentences. Linear Mapping This represents the linear function that maps word embeddings from one language to another. 
Given a word x in one language, and the respective word in the other language z, the following equation reports the objective function we want to minimize: Were W represents the final matrix that will contain the values for the mapping. Normalization Since the word embeddings in a single language are based on the cosine similarity, we realized that through the objective function reported in equation 1 we were losing the property of this similarity. As a result we wanted the dot products to be preserved after the mapping. In order to do that we normalized the vectors $x_i$ and $z_i$.The new objective function looks like: In order to preserve the dot products after the linear mapping we also had to constrain W to be an orthogonal matrix. In order to orthogonalize the matrix, it's required to solve the following optimization problem: One can show that this problem can be solved by taking the singular value decomposition (SVD) of W and replacing the singular values to ones. This approximation would only work when the dimensions of the source vector and the target vector are the same, which is the set up we are working with. So finally, the method that we tried for this experience uses normalized vectors, and is reduced by using the cosine similarity function. Objective Function Taking equation DISPLAY_FORM9 we expanded it getting the following equation: Which is really easy to show that through some simplifications will result into: Where cos represents the cosine similarity between the two embeddings. Setup Description For this experience we tried monolingual mapping using the Europarl Dataset and the μtopia parallel corpus . Concerning the preprocessing, we tokenized the text into single words, and every number was substituted with a NUM symbol. In addition, all the special characters were removed. To obtain the dictionaries, we used the words from the English corpora and translated them into the target languages using the Google translate API. In this way we built two dictionaries with corresponding words in the two languages, extracted from the same parallel corpora. Methods ::: Skipgram It was recently shown that the distributed representations of words capture surprisingly many linguistic regularities, and that there are many types of similarities among words that can be expressed as linear translations (Mikolov et al., 2013c). In the Skip-gram model, the training objective is to learn word vector representations that are good at predicting its context in the same sentence (Mikolov et al., 2013a). The objective function that skip gram tries to minimize is the following: Where N represents the total number of words in a sentence, P the probability of a word at the position i+j to belong to the sentence, with respect to the word i. Ideally, by using this approach we will be able to provide non-trivial translations that will be related to the context of a word. Methods ::: Minimizing the loss function In order to minimize the loss function we decided to setup a neural network and reduce the objective by using Stochastic Gradient Descent. Figure 4 represents the architecture of our base model. We repeated the same process both for objective functions expressed in equation DISPLAY_FORM8 and DISPLAY_FORM12. We started by getting an English word from the English corpus and we got the corresponding target word by using Google Translate. If this word was contained in the target corpus, this pair of words was used to train the model; otherwise, it was ignored. 
We then get the two embeddings and get the result by multiplying the first one by the matrix W and then substracting the second embedding. For this experiment, we used embeddings with a dimension of 100. For this reason W is a matrix of size 100*100. Results ::: Accuracy The process for checking the accuracy of the model consisted of first taking a random subset of the corpus that was not used for training. This subset consists of slightly more common words because the more common words should have more accurate word embeddings. Infrequent words (like websites and serial numbers) would not necessarily have accurate word embeddings that were generated through the word2vec model. The English word embedding was multiplied by the transformation matrix to create the predicted translated word embedding. The cosine similarity was then generated with each translated word embedding stored in the corpus. The 20 words with the highest cosine similarity were then outputted. Then the English word was translated through Google Translate and compared to the 20 outputted translated words. It was considered a match if the translated word was found in this list of 20 similar words. The reason why the translated word was not just compared to the most similar embedding was because each word could have multiple semantic meanings or have different synonyms in the other language. After running this accuracy formulation, results for different languages and objective functions are reported in Table TABREF18. In the table you can see the different results we obtained across different languages. You can notice that we increased the number of languages from the first review of our paper. In addition, the table shows the comparison between our baseline method (Least Squares) and our final one (Normalized vectors, with cosine similarity), which shows a slight improvement with respect to the first one. Error Analysis ::: Chinese Translations It's easy to notice how the performance with the Chinese translations was much lower with respect to the other languages. One of the reason is that even the tokenization of the chinese language is not trivial. While using a standard library, we noticed that some words were tokenized ambiguously. Another problem was getting the specific translation from Google translate. A lot of words might have very similar meanings, and Google Translate is not as accurate as it is with the other languages we have worked with. One reason is that Chinese is structured in a very different way compared to the other western languages, which share common roots. For example, a simple word like "Yes", does not have a direct translation in Chinese. Error Analysis ::: Google Translate We noticed that the model was performing worse than expected. For this reason we studied what was the main source of error. It was interesting that the model was able to always predict very close French words in French that had the same meaning as the English ones. The problem was that they did not completely match the ones obtained from Google translate. In fact, it resulted that often, the Google translated word was not the most accurate one. Table 2 reports two of these examples. The first one is represented by the word help. All the three translations are pretty accurate, especially the first one, which is the right translation for the verb "to help". The problem is the translation provided by Google, which literally means "Help Me". The second example reports the word "fire". 
In French there are different words to express the concept of a fire, the concept of an apartment "on fire" or the verb "fire a gun". All the translations provided by the model represent the different meanings that the English word has. This problem was alleviated by translating the 20 outputted French words back into English and comparing those with the original English word. By doing this process, the accuracy of the system increased to 47%. We expected the results of the cosine similarity objective function to produce a higher accuracy, but in reality, we achieved slightly worse results. The Code The code can be found at https://github.com/MarcoBerlot/Languages_for_Machine_Translation. The Predictive model file contains all the implementations, from the feature engineering to the training of the model. References Linear projection (Mikolov et al., 2013) Lexicon Projection via CCA (Faruqui and Dyer, 2014) Normalisation and orthogonal transformation (Xing et al., 2015) Alignment-based projection (Guo et al., 2015)
Yes
a4ff1b91643e0c8a0d4cc1502d25ca85995cf428
a4ff1b91643e0c8a0d4cc1502d25ca85995cf428_0
Q: Which two datasets does the resource come from? Text: Introduction Language grounding, i.e., understanding how words and expressions are anchored in data, is one of the initial tasks that are essential for the conception of a data-to-text (D2T) system BIBREF0 , BIBREF1 . This can be achieved through different means, such as using heuristics or machine learning algorithms on an available parallel corpora of text and data BIBREF2 to obtain a mapping between the expressions of interest and the underlying data BIBREF3 , getting experts to provide these mappings, or running surveys on writers or readers that provide enough data for the application of mapping algorithms BIBREF4 . Performing language grounding allows ensuring that generated texts include words whose meaning is aligned with what writers understand or what readers would expect BIBREF0 , given the variation that is known to exist among writers and readers BIBREF5 . Moreover, when contradictory data appears in corpora or any other resource that is used to create the data-to-words mapping, creating models that remove inconsistencies can also be a challenging part of language grounding which can influence the development of a successful system BIBREF3 . This paper presents a resource for language grounding of geographical descriptors. The original purpose of this data collection is the creation of models of geographical descriptors whose meaning is modeled as graded or fuzzy BIBREF6 , BIBREF7 , to be used for research on generation of geographical referring expressions, e.g., BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , BIBREF4 . However, we believe it can be useful for other related research purposes as well. The resource and its interest The resource is composed of data from two different surveys. In both surveys subjects were asked to draw on a map (displayed under a Mercator projection) a polygon representing a given geographical descriptor, in the context of the geography of Galicia in Northwestern Spain (see Fig. FIGREF1 ). However, the surveys were run with different purposes, and the subject groups that participated in each survey and the list of descriptors provided were accordingly different. The first survey was run in order to obtain a high number of responses to be used as an evaluation testbed for modeling algorithms. It was answered by 15/16 year old students in a high school in Pontevedra (located in Western Galicia). 99 students provided answers for a list of 7 descriptors (including cardinal points, coast, inland, and a proper name). Figure FIGREF2 shows a representation of the answers given by the students for “Northern Galicia” and a contour map that illustrates the percentages of overlapping answers. The second survey was addressed to meteorologists in the Galician Weather Agency BIBREF12 . Its purpose was to gather data to create fuzzy models that will be used in a future NLG system in the weather domain. Eight meteorologists completed the survey, which included a list of 24 descriptors. For instance, Figure FIGREF3 shows a representation of the answers given by the meteorologists for “Eastern Galicia” and a contour map that illustrates the percentage of overlapping answers. Table TABREF4 includes the complete list of descriptors for both groups of subjects. 20 out of the 24 descriptors are commonly used in the writing of weather forecasts by experts and include cardinal directions, proper names, and other kinds of references such as mountainous areas, parts of provinces, etc. 
The remaining four were added to study intersecting combinations of cardinal directions (e.g. exploring ways of combining “north” and “west” for obtaining a model that is similar to “northwest”). The data for the descriptors from the surveys is focused on a very specific geographical context. However, the conjunction of both data sets provides a very interesting resource for performing a variety of more general language grounding-oriented and natural language generation research tasks, such as: Qualitative analysis of the data sets The two data sets were gathered for different purposes and only coincide in a few descriptors, so providing a direct comparison is not feasible. However, we can discuss general qualitative insights and a more detailed analysis of the descriptors that both surveys share in common. At a general level, we had hypothesized that experts would be much more consistent than students, given their professional training and the reduced number of meteorologists participating in the survey. Comparing the visualizations of both data sets we have observed that this is clearly the case; the polygons drawn by the experts are more concentrated and therefore there is a higher agreement among them. On top of these differences, some students provided unexpected drawings in terms of shape, size, or location of the polygon for several descriptors. If we focus on single descriptors, one interesting outcome is that some of the answers for “Northern Galicia” and “Southern Galicia” overlap for both subject groups. Thus, although `north' and `south' are natural antonyms, if we take into account the opinion of each group as a whole, there exists a small area where points can be considered as belonging to both descriptors at the same time (see Fig. FIGREF9 ). In the case of “west” and “east”, the drawings made by the experts were almost divergent and showed no overlapping between those two descriptors. Regarding “Inland Galicia”, the unions of the answers for each group occupy approximately the same area with a similar shape, but there is a very high overlapping among the answers of the meteorologists. A similar situation is found for the remaining descriptor “Rías Baixas”, where both groups encompass a similar area. In this case, the students' answers cover a more extensive region and the experts coincide within a more restricted area. A further analysis: apparent issues As in any survey that involves a task-based collection of data, some of the answers provided by the subjects for the described data sets can be considered erroneous or misleading due to several reasons. Here we describe for each subject group some of the most relevant issues that any user of this resource should take into account. In the case of the students, we have identified minor drawing errors appearing in most of the descriptors, which in general shouldn't have a negative impact in the long term thanks to the high number of participants in the original survey. For some descriptors, however, there exist polygons drawn by subjects that clearly deviate from what could be considered a proper answer. The clearest example of this problem involves the `west' and `east' descriptors, which were confused by some of the students who drew them inversely (see Fig. FIGREF11 , around 10-15% of the answers). In our case, given their background, some of the students may have actually confused the meaning of + “west” and “east”. 
However, the most plausible explanation is that, unlike in English and other languages, in Spanish both descriptors are phonetically similar (“este” and “oeste”) and can be easily mistaken for one another if read without attention. As for the expert group, a similar case is found for “Northeastern Galicia” (see Fig. FIGREF12 ), where some of the given answers (3/8) clearly correspond to “Northwestern Galicia”. However, unlike the issue related to “west” and “east” found for the student group, this problem is not found reciprocally for the “northwestern” answers. Resource materials The resource is available at BIBREF13 under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Both data sets are provided as SQLite databases which share the same table structure, and also in a compact JSON format. Polygon data is encoded in GeoJSON format BIBREF14 . The data sets are well-documented in the repository's README, and several Python scripts are provided for data loading, using Shapely BIBREF15 ; and for visualization purposes, using Cartopy BIBREF16 . Concluding remarks The data sets presented provide a means to perform different research tasks that can be useful from a natural language generation point of view. Among them, we can highlight the creation of models of geographical descriptors, comparing models between both subject groups, studying combinations of models of cardinal directions, and researching on geographical referring expression generation. Furthermore, insights about the semantics of geographical concepts could be inferred under a more thorough analysis. One of the inconveniences that our data sets present is the appearance of the issues described in Sec. SECREF10 . It could be necessary to filter some of the answers according to different criteria (e.g., deviation of the centroid location, deviation of size, etc.). For more applied cases, manually filtering can also be an option, but this would require a certain knowledge of the geography of Galicia. In any case, the squared-like shape of this region may allow researchers to become rapidly familiar with many of the descriptors listed in Table TABREF4 . As future work, we believe it would be invaluable to perform similar data gathering tasks for other regions from different parts of the world. These should provide a variety of different shapes (both regular and irregular), so that it can be feasible to generalize (e.g., through data-driven approaches) the semantics of some of the more common descriptors, such as cardinal points, coastal areas, etc. The proposal of a shared task could help achieve this objective. Acknowledgments This research was supported by the Spanish Ministry of Economy and Competitiveness (grants TIN2014-56633-C3-1-R and TIN2017-84796-C2-1-R) and the Galician Ministry of Education (grants GRC2014/030 and "accreditation 2016-2019, ED431G/08"). All grants were co-funded by the European Regional Development Fund (ERDF/FEDER program). A. Ramos-Soto is funded by the “Consellería de Cultura, Educación e Ordenación Universitaria” (under the Postdoctoral Fellowship accreditation ED481B 2017/030). J.M. Alonso is supported by RYC-2016-19802 (Ramón y Cajal contract). The authors would also like to thank Juan Taboada for providing the list of most frequently used geographical expressions by MeteoGalicia, and José Manuel Ramos for organizing the survey at the high school IES Xunqueira I in Pontevedra, Spain.
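As a small illustration of how the released polygons can be processed with Shapely, the sketch below loads the annotators' polygons for one descriptor and computes the kind of per-point agreement that the overlap contour maps visualize, plus a simple pairwise intersection-over-union. The file name and feature layout here are hypothetical; the actual organization of the SQLite/JSON releases is documented in the repository's README.

import json
from shapely.geometry import shape, Point

# Hypothetical input: a GeoJSON FeatureCollection with one polygon per annotator
# for a single descriptor such as "Northern Galicia".
with open("northern_galicia.geojson") as f:
    polygons = [shape(feat["geometry"]) for feat in json.load(f)["features"]]

def agreement(lon, lat):
    # Fraction of annotators whose polygon contains the point (lon, lat).
    p = Point(lon, lat)
    return sum(poly.contains(p) for poly in polygons) / len(polygons)

def pairwise_iou(a, b):
    # Intersection-over-union between two drawn polygons (areas in squared
    # degrees, which is adequate for a relative agreement measure).
    inter = a.intersection(b).area
    return inter / (a.area + b.area - inter)

print(agreement(-7.9, 43.3))                      # a sample point in northern Galicia
print(pairwise_iou(polygons[0], polygons[1]))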
Two surveys answered by two groups, school students and meteorologists, who were asked to draw on a map a polygon representing a given geographical descriptor
544e29937e0c972abcdd27c953dc494b2376dd76
544e29937e0c972abcdd27c953dc494b2376dd76_0
Q: What model was used by the top team? Text: Introduction Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports. Datasets The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table . We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. 
Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs. In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”. The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table . We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\kappa $-statistic is $0.345$. Sample excerpts from the two datasets, with their annotations and labels, are given in Table . Datasets ::: Augmentation NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation. Challenge Details A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets. The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation. 
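The labelling and filtering rules just described can be transcribed almost literally into code; the sketch below (function names ours) maps a raw vote string such as "2000030" to its gold label and decides whether the utterance enters the evaluation.

EMOTIONS = ["neutral", "joy", "sadness", "fear", "anger", "surprise", "disgust"]
TARGET_LABELS = {"neutral", "joy", "sadness", "anger"}   # the four evaluated labels

def label_from_annotation(annotation):
    # Votes appear in the order neutral, joy, sadness, fear, anger, surprise,
    # disgust; an emotion needs an absolute majority (3 of 5 votes), otherwise
    # the utterance is labelled "non-neutral".
    votes = [int(c) for c in annotation]
    for emotion, count in zip(EMOTIONS, votes):
        if count >= 3:
            return emotion
    return "non-neutral"

def keep_for_evaluation(annotation):
    # Utterances whose label is outside the four target emotions are discarded.
    return label_from_annotation(annotation) in TARGET_LABELS

assert label_from_annotation("2000030") == "surprise"
assert not keep_for_evaluation("2000030")
assert label_from_annotation("0300200") == "joy" and keep_for_evaluation("0300200")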
Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric. Submissions A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports. Submissions ::: IDEA BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function. Submissions ::: KU BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function. Submissions ::: HSU BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification. Submissions ::: Podlab BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words. Submissions ::: AlexU BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function. Submissions ::: Antenna BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15. Submissions ::: CYUT BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. 
The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance. Results The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively. Evaluation & Discussion An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU. Evaluation & Discussion ::: Deep Learning Models. Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models. Evaluation & Discussion ::: Unbalanced Labels. Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness. Evaluation & Discussion ::: Emotional Model and Annotation Challenges. The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”. To further investigate, we calculated the per-emotion $\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. 
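The paper does not spell out how the per-emotion κ values were computed; one common variant, sketched below, treats each utterance as a binary item (votes for the emotion versus votes for anything else) and applies Fleiss' kappa to the resulting count table. The statsmodels function is real; the vote-parsing convention follows the annotation strings described earlier, and the binarisation is our assumption.

import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

def per_emotion_kappa(vote_rows, emotion_index, n_raters=5):
    # vote_rows: iterable of 7-integer vote vectors, one per utterance, in the
    # order neutral, joy, sadness, fear, anger, surprise, disgust.
    table = []
    for votes in vote_rows:
        k = votes[emotion_index]
        table.append([k, n_raters - k])      # [votes for the emotion, votes against]
    return fleiss_kappa(np.asarray(table, dtype=float))

# Example: kappa for "surprise" over all annotated utterances
# (votes parsed from strings such as "2000030" -> [2, 0, 0, 0, 0, 3, 0]).
# print(per_emotion_kappa(all_votes, emotion_index=5))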
We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\kappa $ scores, which calls for further investigation. Evaluation & Discussion ::: Data Sources. One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains. Acknowledgment This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-.
Two different BERT models were developed
b8fdc600f9e930133bb3ec8fbcc9c600d60d24b0
b8fdc600f9e930133bb3ec8fbcc9c600d60d24b0_0
Q: What was the baseline? Text: Introduction Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports. Datasets The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table . We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. 
Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs. In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”. The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table . We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\kappa $-statistic is $0.345$. Sample excerpts from the two datasets, with their annotations and labels, are given in Table . Datasets ::: Augmentation NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation. Challenge Details A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets. The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation. 
Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric. Submissions A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports. Submissions ::: IDEA BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function. Submissions ::: KU BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function. Submissions ::: HSU BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification. Submissions ::: Podlab BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words. Submissions ::: AlexU BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function. Submissions ::: Antenna BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15. Submissions ::: CYUT BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. 
The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance. Results The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively. Evaluation & Discussion An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU. Evaluation & Discussion ::: Deep Learning Models. Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models. Evaluation & Discussion ::: Unbalanced Labels. Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness. Evaluation & Discussion ::: Emotional Model and Annotation Challenges. The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”. To further investigate, we calculated the per-emotion $\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. 
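One way to reproduce per-emotion agreement figures like those discussed above is to collapse each utterance's vote counts into "this emotion" versus "any other emotion" and apply Fleiss' kappa to the resulting two-column table. The sketch below uses statsmodels; the collapsing scheme is my assumption rather than the authors' documented procedure, and the annotation strings are invented for illustration:

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

EMOTIONS = ["neutral", "joy", "sadness", "fear", "anger", "surprise", "disgust"]

def per_emotion_kappa(annotations):
    """annotations: list of seven-digit vote strings, one per utterance."""
    table = np.array([[int(c) for c in a] for a in annotations])  # utterances x 7
    raters = table.sum(axis=1, keepdims=True)                     # five votes per utterance
    scores = {}
    for j, emotion in enumerate(EMOTIONS):
        binary = np.hstack([table[:, j:j + 1], raters - table[:, j:j + 1]])
        scores[emotion] = fleiss_kappa(binary)                    # agreement on this emotion only
    return scores

# Illustrative vote strings, not taken from the released data.
print(per_emotion_kappa(["2000030", "0310001", "1101200", "0020021", "3000110"]))
```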
We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\kappa $ scores, which calls for further investigation. Evaluation & Discussion ::: Data Sources. One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains. Acknowledgment This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-.
Unanswerable
bdc93ac1b8643617c966e91d09c01766f7503872
bdc93ac1b8643617c966e91d09c01766f7503872_0
Q: What is the size of the second dataset? Text: Introduction Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports. Datasets The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table . We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. 
Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs. In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”. The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table . We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\kappa $-statistic is $0.345$. Sample excerpts from the two datasets, with their annotations and labels, are given in Table . Datasets ::: Augmentation NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation. Challenge Details A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets. The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation. 
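The augmentation procedure described above is straightforward to reproduce. The sketch below shows the back-translation loop; `translate()` is a placeholder for whichever translation client is available (the organizers used the Google Translate API), so its signature is an assumption supplied by the caller:

```python
from typing import Callable, List

PIVOT_LANGUAGES = ["de", "fr", "it"]   # German, French, Italian, as in the paper

def augment_utterance(utterance: str,
                      translate: Callable[[str, str, str], str]) -> List[str]:
    """Back-translate one English utterance through each pivot language.

    `translate(text, source, target)` is a stand-in for a real translation
    API call and must be provided by the caller.
    """
    variants = []
    for lang in PIVOT_LANGUAGES:
        pivoted = translate(utterance, "en", lang)   # English -> pivot language
        back = translate(pivoted, lang, "en")        # pivot language -> English
        variants.append(back)                        # drop or keep identical round-trips as desired
    return variants
```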
Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric. Submissions A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports. Submissions ::: IDEA BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function. Submissions ::: KU BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function. Submissions ::: HSU BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification. Submissions ::: Podlab BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words. Submissions ::: AlexU BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function. Submissions ::: Antenna BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15. Submissions ::: CYUT BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. 
The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance. Results The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively. Evaluation & Discussion An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU. Evaluation & Discussion ::: Deep Learning Models. Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models. Evaluation & Discussion ::: Unbalanced Labels. Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness. Evaluation & Discussion ::: Emotional Model and Annotation Challenges. The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”. To further investigate, we calculated the per-emotion $\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. 
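As noted above, several teams countered the skew toward "neutral" with a class-balanced loss. A minimal PyTorch sketch of one common variant, cross-entropy with inverse-frequency class weights, follows; the per-class counts are illustrative, not the actual dataset statistics:

```python
import torch
import torch.nn as nn

# Illustrative per-class counts for (neutral, joy, sadness, anger).
label_counts = torch.tensor([9000.0, 1700.0, 500.0, 450.0])
weights = label_counts.sum() / (len(label_counts) * label_counts)  # rarer class -> larger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)               # a batch of 8 utterances, 4 emotion classes
targets = torch.randint(0, 4, (8,))      # gold labels
print(criterion(logits, targets).item())
```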
We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\kappa $ scores, which calls for further investigation. Evaluation & Discussion ::: Data Sources. One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains. Acknowledgment This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-.
1,000 labeled dialogues for training and 240 unlabeled dialogues for evaluation
4ca0d52f655bb9b4bc25310f3a76c5d744830043
4ca0d52f655bb9b4bc25310f3a76c5d744830043_0
Q: How large is the first dataset? Text: Introduction Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports. Datasets The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table . We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. 
Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs. In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”. The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table . We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\kappa $-statistic is $0.345$. Sample excerpts from the two datasets, with their annotations and labels, are given in Table . Datasets ::: Augmentation NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation. Challenge Details A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets. The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation. 
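Since only four labels were evaluated, participants typically filtered the training files accordingly. Below is a sketch of that preprocessing step; the JSON layout and field names ("speaker", "utterance", "emotion") are assumed from the description above rather than taken from the released files, and the file name is hypothetical:

```python
import json

EVAL_LABELS = {"neutral", "joy", "sadness", "anger"}

def load_filtered(path):
    """Load dialogues and keep only utterances carrying an evaluated label."""
    with open(path, encoding="utf-8") as f:
        dialogues = json.load(f)          # assumed: a list of dialogues, each a list of lines
    kept = []
    for dialogue in dialogues:
        for line in dialogue:
            if line["emotion"] in EVAL_LABELS:
                kept.append((line["speaker"], line["utterance"], line["emotion"]))
    return kept

# examples = load_filtered("friends.train.json")   # hypothetical file name
```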
Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric. Submissions A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports. Submissions ::: IDEA BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function. Submissions ::: KU BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function. Submissions ::: HSU BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification. Submissions ::: Podlab BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words. Submissions ::: AlexU BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function. Submissions ::: Antenna BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15. Submissions ::: CYUT BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. 
The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance. Results The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively. Evaluation & Discussion An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU. Evaluation & Discussion ::: Deep Learning Models. Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models. Evaluation & Discussion ::: Unbalanced Labels. Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness. Evaluation & Discussion ::: Emotional Model and Annotation Challenges. The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”. To further investigate, we calculated the per-emotion $\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. 
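Most of the BERT-based submissions discussed above follow the same basic recipe: a pre-trained encoder with a classification head fine-tuned on the four evaluated emotions. A generic sketch of that pattern using the Hugging Face transformers library is given below; it is not any particular team's configuration, and the example utterances and labels are invented:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["neutral", "joy", "sadness", "anger"]
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

utterances = ["OK!", "I can't believe you did that."]   # invented examples
targets = torch.tensor([0, 3])                          # invented gold labels
batch = tokenizer(utterances, padding=True, truncation=True, return_tensors="pt")

outputs = model(**batch, labels=targets)   # returns cross-entropy loss and logits
outputs.loss.backward()                    # one gradient step (optimizer omitted)
print([LABELS[i] for i in outputs.logits.argmax(dim=-1)])
```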
We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\kappa $ scores, which calls for further investigation. Evaluation & Discussion ::: Data Sources. One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains. Acknowledgment This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-.
1,000 labeled dialogues for training and 240 unlabeled dialogues for evaluation
d2fbf34cf4b5b1fd82394124728b03003884409c
d2fbf34cf4b5b1fd82394124728b03003884409c_0
Q: Who was the top-scoring team? Text: Introduction Emotions are a central component of our existence as human beings, and are manifested by physiological and psychological changes that often affect behavior and action. Emotions involve a complicated interplay of mind, body, language, and culture BIBREF0. Detecting and recognizing emotions is a difficult task for machines. Nevertheless, following the successful use of computational linguistics to analyze sentiment in texts, there is growing interest in the more difficult task of the automatic detection and classification of emotions in texts. The detection of emotions in text is a complicated challenge for multiple reasons: first, emotions are complex entities, and no universally-agreed upon psychological model of emotions exists. Second, isolated texts convey less information compared to a complete human interaction in which emotions can be detected from the other person's facial expressions, listening to their tone of voice, etc. However, due to important applications in fields such as psychology, marketing, and political science, research in this topic is now expanding rapidly BIBREF1. In particular, dialogue systems such as those available on social media or instant messaging services are rich sources of textual data and have become the focus of much attention. Emotions of utterances within dialogues can be detected more precisely due to the presence of more context. For example, a single utterance (“OK!”) might convey different emotions (happiness, anger, surprise), depending on its context. Taking all this into consideration, in 2018 the EmotionX Challenge asked participants to detect emotions in complete dialogues BIBREF2. Participants were challenged to classify utterances using Ekman's well-known theory of six basic emotions (sadness, happiness, anger, fear, disgust, and surprise) BIBREF3. For the 2019 challenge, we built and expanded upon the 2018 challenge. We provided an additional 20% of data for training, as well as augmenting the dataset using two-way translation. The metric used was micro-F1 score, and we also report the macro-F1 score. A total of thirty-six teams registered to participate in the challenge. Eleven of the teams successfully submitted their data for performance evaluation, and seven of them submitted technical papers for the workshop. Approaches used by the teams included deep neural networks and SVM classifiers. In the following sections we expand on the challenge and the data. We then briefly describe the various approaches used by the teams, and conclude with a summary and some notes. Detailed descriptions of the various submissions are available in the teams' technical reports. Datasets The two datasets used for the challenge are Friends and EmotionPush, part of the EmotionLines corpus BIBREF4. The datasets contain English-language dialogues of varying lengths. For the competition, we provided 1,000 labeled dialogues from each dataset for training, and 240 unlabeled dialogues from each dataset for evaluation. The Friends dialogues are scripts taken from the American TV sitcom (1994-2004). The EmotionPush dialogues are from Facebook Messenger chats by real users which have been anonymized to ensure user privacy. For both datasets, dialogue lengths range from 5 to 24 lines each. A breakdown of the lengths of the dialogues is shown in Table . We employed workers using Amazon Mechanical Turk (aka AMT or MTurk) to annotate the dialogues BIBREF5. 
Each complete dialogue was offered as a single MTurk Human Intelligence Task (HIT), within which each utterance was read and annotated for emotions by the worker. Each HIT was assigned to five workers. To ensure workers were qualified for the annotation task, we set up a number of requirements: workers had to be from an English-speaking country (Australia, Canada, Great Britain, Ireland, New Zealand, or the US), have a high HIT approval rate (at least 98%), and have already performed a minimum of 2,000 HITs. In the datasets, each utterance is accompanied by an annotation and emotion. The annotation contains the raw count of votes for each emotion by the five annotators, with the order of the emotions being Neutral, Joy, Sadness, Fear, Anger, Surprise, Disgust. For example, an annotation of “2000030” denotes that two annotators voted for “neutral”, and three voted for “surprise”. The labeled emotion is calculated using the absolute majority of votes. Thus, if a specific emotion received three or more votes, then that utterance is labeled with that emotion. If there is no majority vote, the utterance is labeled with “non-neutral” label. In addition to the utterance, annotation, and label, each line in each dialogue includes the speaker's name (in the case of EmotionPush, a speaker ID was used). The emotion distribution for Friends and EmotionPush, for both training and evaluation data, is shown in Table . We used Fleiss' kappa measure to assess the reliability of agreement between the annotators BIBREF6. The value for $\kappa $-statistic is $0.326$ and $0.342$ for Friends and EmotionPush, respectively. For the combined datasets the value of the $\kappa $-statistic is $0.345$. Sample excerpts from the two datasets, with their annotations and labels, are given in Table . Datasets ::: Augmentation NLP tasks require plenty of data. Due to the relatively small number of samples in our datasets, we added more labeled data using a technique developed in BIBREF7 that was used by the winning team in Kaggle's Toxic Comment Classification Challenge BIBREF8. The augmented datasets are similar to the original data files, but include additional machine-computed utterances for each original utterance. We created the additional utterances using the Google Translate API. Each original utterance was first translated from English into three target languages (German, French, and Italian), and then translated back into English. The resulting utterances were included together in the same object with the original utterance. These “duplex translations” can sometimes result in the original sentence, but many times variations are generated that convey the same emotions. Table shows an example utterance (labeled with “Joy”) after augmentation. Challenge Details A dedicated website for the competition was set up. The website included instructions, the registration form, schedule, and other relevant details. Following registration, participants were able to download the training datasets. The label distribution of emotions in our data are highly unbalanced, as can be seen in Figure FIGREF6. Due to the small number of three of the labels, participants were instructed to use only four emotions for labels: joy, sadness, anger, and neutral. Evaluation of submissions was done using only utterances with these four labels. Utterances with labels other than the above four (i.e., surprise, disgust, fear or non-neutral) were discarded and not used in the evaluation. 
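The imbalance mentioned above is easy to quantify by simply counting labels. A small sketch follows; the label list here is an invented stand-in for the actual training data:

```python
from collections import Counter

# In practice these come from the training files; here they are invented.
labels = ["neutral", "neutral", "joy", "neutral", "sadness",
          "neutral", "anger", "joy", "neutral", "neutral"]

counts = Counter(labels)
total = sum(counts.values())
for emotion, n in counts.most_common():
    print(f"{emotion:10s} {n:4d}  ({n / total:.1%})")
```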
Scripts for verifying and evaluating the submissions were made available online. We used micro-F1 as the comparison metric. Submissions A total of eleven teams submitted their evaluations, and are presented in the online leaderboard. Seven of the teams also submitted technical reports, the highlights of which are summarized below. More details are available in the relevant reports. Submissions ::: IDEA BIBREF9 Two different BERT models were developed. For Friends, pre-training was done using a sliding window of two utterances to provide dialogue context. Both Next Sentence Prediction (NSP) phase on the complete unlabeled scripts from all 10 seasons of Friends, which are available for download. In addition, the model learned the emotional disposition of each of six main six main characters in Friends (Rachel, Monica, Phoebe, Joey, Chandler and Ross) by adding a special token to represent the speaker. For EmotionPush, pre-training was performed on Twitter data, as it is similar in nature to chat based dialogues. In both cases, special attention was given to the class imbalance issue by applying “weighted balanced warming” on the loss function. Submissions ::: KU BIBREF10 BERT is post-trained via Masked Language Model (MLM) and Next Sentence Prediction (NSP) on a corpus consisting of the complete and augmented dialogues of Friends, and the EmotionPush training data. The resulting token embeddings are max-pooled and fed into a dense network for classification. A $K$-fold cross-validation ensemble with majority voting was used for prediction. To deal with the class imbalance problem, weighted cross entropy was used as a training loss function. Submissions ::: HSU BIBREF11 A pre-trained BERT is fine-tuned using filtered training data which only included the desired labels. Additional augmented data with joy, sadness, and anger labels are also used. BERT is then fed into a standard feed-forward-network with a softmax layer used for classification. Submissions ::: Podlab BIBREF12 A support vector machine (SVM) was used for classification. Words are ranked using a per-emotion TF-IDF score. Experiments were performed to verify whether the previous utterance would improve classification performance. Input to the Linear SVM was done using one-hot-encoding of top ranking words. Submissions ::: AlexU BIBREF13 The classifier uses a pre-trained BERT model followed by a feed-forward neural network with a softmax output. Due to the overwhelming presence of the neutral label, a classifying cascade is employed, where the majority classifier is first used to decide whether the utterance should be classified with “neutral” or not. A second classifier is used to focus on the other emotions (joy, sadness, and anger). Dealing with the imbalanced classes is done through the use of a weighted loss function. Submissions ::: Antenna BIBREF14 BERT is first used to generate word and sentence embeddings for all utterances. The resulting calculated word embeddings are fed into a Convolutional Neural Network (CNN), and its output is then concatenated with the BERT-generated sentence embeddings. The concatenated vectors are then used to train a bi-directional GRU with a residual connection followed by a fully-connected layer, and finally a softmax layer produces predictions. Class imbalance is tackled using focal loss BIBREF15. Submissions ::: CYUT BIBREF16 A word embedding layer followed by a bi-directional GRU-based RNN. Output from the RNN was fed into a single-node classifier. 
The augmented dataset was used for training the model, but “neutral”-labeled utterances were filtered to deal with class imbalance. Results The submissions and the final results are summarized in Tables and . Two of the submissions did not follow up with technical papers and thus they do not appear in this summary. We note that the top-performing models used BERT, reflecting the recent state-of-the-art performance of this model in many NLP tasks. For Friends and EmotionPush the top micro-F1 scores were 81.5% and 88.5% respectively. Evaluation & Discussion An evaluation summary of the submissions is available in Tables and . We only present the teams that submitted technical reports. A full leaderboard that includes all the teams is available on the challenge website. This section highlights some observations related to the challenge. Identical utterances can convey different emotions in different contexts. A few of the models incorporated the dialogue context into the model, such as the models proposed by teams IDEA and KU. Evaluation & Discussion ::: Deep Learning Models. Most of the submissions used deep learning models. Five of the models were based on the BERT architecture, with some using pre-trained BERT. Some of the submissions enhanced the model by adding context and speaker related encoding to improve performance. We also received submissions using more traditional networks such as CNN, as well as machine learning classics such as SVM. The results demonstrate that domain knowledge, feature engineering, and careful application of existing methodologies is still paramount for building successful machine learning models. Evaluation & Discussion ::: Unbalanced Labels. Emotion detection in text often suffers from a data imbalance problem, our datasets included. The teams used two approaches to deal with this issue. Some used a class-balanced loss functions while others under-sampled classes with majority label “neutral”. Classification performance of underrepresented emotions, especially sadness and anger, is low compared to the others. This is still a challenge, especially as some real-world applications are dependent on detection of specific emotions such as anger and sadness. Evaluation & Discussion ::: Emotional Model and Annotation Challenges. The discrete 6-emotion model and similar models are often used in emotion detection tasks. However, such 1-out-of-n models are limited in a few ways: first, expressed emotions are often not discrete but mixed (for example, surprise and joy or surprise and anger are often manifested in the same utterance). This leads to more inter-annotator disagreement, as annotators can only select one emotion. Second, there are additional emotional states that are not covered by the basic six emotions but are often conveyed in speech and physical expressions, such as desire, embarrassment, relief, and sympathy. This is reflected in feedback we received from one of the AMT workers: “I am doing my best on your HITs. However, the emotions given (7 of them) are a lot of times not the emotion I'm reading (such as questioning, happy, excited, etc). Your emotions do not fit them all...”. To further investigate, we calculated the per-emotion $\kappa $-statistic for our datasets in Table . We see that for some emotions, such as disgust and fear (and anger for EmotionPush), the $\kappa $-statistic is poor, indicating ambiguity in annotation and thus an opportunity for future improvement. 
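The headline numbers above can be reproduced from a submission with a few lines of scikit-learn. This is a sketch of the metric, not the organizers' released evaluation script, and the example label lists are invented:

```python
from sklearn.metrics import f1_score

EVAL_LABELS = ["neutral", "joy", "sadness", "anger"]

def score_submission(gold, predicted):
    """gold, predicted: parallel lists of labels for the evaluated utterances."""
    micro = f1_score(gold, predicted, average="micro", labels=EVAL_LABELS)
    macro = f1_score(gold, predicted, average="macro", labels=EVAL_LABELS)
    return micro, macro

print(score_submission(["joy", "anger", "neutral", "neutral"],
                       ["joy", "neutral", "neutral", "anger"]))
```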
We also note that there is an interplay between the emotion label distribution, per-emotion classification performance, and their corresponding $\kappa $ scores, which calls for further investigation. Evaluation & Discussion ::: Data Sources. One of the main requirements of successful training of deep learning models is the availability of high-quality labeled data. Using AMT to label data has proved to be useful. However, current data is limited in quantity. In addition, more work needs to be done in order to measure, evaluate, and guarantee annotation quality. In addition, the Friends data is based on an American TV series which emphasizes certain emotions, and it remains to be seen how to transfer learning of emotions to other domains. Acknowledgment This research is partially supported by Ministry of Science and Technology, Taiwan, under Grant no. MOST108-2634-F-001-004- and MOST107-2218-E-002-009-.
IDEA
4c71ed7d30ee44cf85ffbd7756b985e32e8e07da
4c71ed7d30ee44cf85ffbd7756b985e32e8e07da_0
Q: What supervised learning tasks are attempted with these representations? Text: Introduction Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations. Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important. Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts. In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data. Background In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings. Later authors have noted that these word embeddings are useful for semantic representations of words, independently of whether a full joint probabilistic language model is learned, and that alternative training schemes can be beneficial for learning the embeddings. In particular, BIBREF0 , BIBREF1 proposed the skip-gram model, which inverts the language model prediction task and aims to predict the context given an input word. The skip-gram model is a log-bilinear discriminative probabilistic classifier parameterized by “input” word embedding vectors INLINEFORM0 for the input words INLINEFORM1 , and “output” word embedding vectors INLINEFORM2 for context words INLINEFORM3 , as shown in Table TABREF2 , top-left. 
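To make the parameterization concrete, here is a small NumPy sketch of the skip-gram's log-bilinear classifier: the probability of a context word given an input word is a softmax, over the vocabulary, of the inner products between the input word's vector and the candidate output vectors. The vocabulary size and dimensionality are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 1000, 128                                # vocabulary size, embedding dimension
U_in = rng.normal(scale=0.1, size=(V, D))       # "input" word vectors
U_out = rng.normal(scale=0.1, size=(V, D))      # "output" (context) word vectors

def p_context_given_word(w: int) -> np.ndarray:
    """Skip-gram distribution over all context words for input word index w."""
    scores = U_out @ U_in[w]                    # one inner product per candidate context word
    scores -= scores.max()                      # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

probs = p_context_given_word(42)
print(probs.shape, probs.sum())                 # (1000,) 1.0
```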
Topic models such as latent Dirichlet allocation (LDA) BIBREF7 are another class of probabilistic language models that have been used for semantic representation BIBREF6 . A straightforward way to model text corpora is via unsupervised multinomial naive Bayes, in which a latent cluster assignment for each document selects a multinomial distribution over words, referred to as a topic, with which the documents' words are assumed to be generated. LDA topic models improve over naive Bayes by using a mixed membership model, in which the assumption that all words in a document INLINEFORM0 belong to the same topic is relaxed, and replaced with a distribution over topics INLINEFORM1 . In the model's assumed generative process, for each word INLINEFORM2 in document INLINEFORM3 , a topic assignment INLINEFORM4 is drawn via INLINEFORM5 , then the word is drawn from the chosen topic INLINEFORM6 . The mixed membership formalism provides a useful compromise between model flexibility and statistical efficiency: the INLINEFORM7 topics INLINEFORM8 are shared across all documents, thereby sharing statistical strength, but each document is free to use the topics to its own unique degree. Bayesian inference further aids data efficiency, as uncertainty over INLINEFORM9 can be managed for shorter documents. Some recent papers have aimed to combine topic models and word embeddings BIBREF21 , BIBREF22 , but they do not aim to address the small data problem for computational social science, which I focus on here. I provide a more detailed discussion of related work in the supplementary. The Mixed Membership Skip-Gram To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram. As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . 
Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0 We can expect that the resulting mixed membership word embeddings are beneficial in the small-to-medium data regime for the following reasons: Of course, the model also requires some new parameters to be learned, namely the mixed membership proportions INLINEFORM0 . Based on topic modeling, I hypothesized that with care, these added parameters need not adversely affect performance in the small-medium data regime, for two reasons: 1) we can use a Bayesian approach to effectively manage uncertainty in them, and to marginalize them out, which prevents them being a bottleneck during training; and 2) at test time, using the posterior for INLINEFORM1 given the context, instead of the “prior” INLINEFORM2 , mitigates the impact of uncertainty in INLINEFORM3 due to limited training data: DISPLAYFORM0 To obtain a vector for a word type INLINEFORM0 , we can use the prior mean, INLINEFORM1 . For a word token INLINEFORM2 , we can leverage its context via the posterior mean, INLINEFORM3 . These embeddings are convex combinations of topic vectors (see Figure FIGREF23 for an example). With fewer vectors than words, some model capacity is lost, but the flexibility of the mixed membership representation allows the model to compensate. When the number of shared vectors equals the number of words, the mixed membership skip-gram is strictly more representationally powerful than the skip-gram. With more vectors than words, we can expect that the increased representational power would be beneficial in the big data regime. As this is not my goal, I leave this for future work. Experimental Results The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research. Quantitative Experiments I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits. The results are shown in Table TABREF25 . I compared to a word frequency baseline, the skip-gram (SG), and Tomas Mikolov/Google's vectors trained on Google News, INLINEFORM0 billion, via CBOW. Simulated annealing was performed for 1,000 iterations, NCE was performed for 1 million minibatches of size 128, and 128-dimensional embeddings were used (300 for Google). 
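The display equations referenced above (the context likelihood, DISPLAYFORM0, and the test-time posterior, Equation EQREF22) were reduced to placeholders by extraction. A reconstruction in standard notation — topic vectors $\bar{v}_k$, output-word vectors $v_c$, and membership proportions $\theta_w$ over $K$ topics, with the caveat that the paper's own symbols are not recoverable — reads:

```latex
% Generative step for the context of an occurrence of input word w_i:
\[
z_i \sim \mathrm{Categorical}(\theta_{w_i}), \qquad
p(c \mid z_i = k) \;=\; \frac{\exp\!\left(\bar{v}_k^{\top} v_c\right)}
                             {\sum_{c'} \exp\!\left(\bar{v}_k^{\top} v_{c'}\right)}.
\]

% Test-time posterior over the topic, given the input word and its context c_{1:M}:
\[
p(z_i = k \mid w_i, c_{1:M}) \;\propto\; \theta_{w_i k} \prod_{m=1}^{M} p(c_m \mid z_i = k).
\]

% Embedding of a word type w (prior mean) and of a token in context (posterior mean):
\[
\bar{u}_w \;=\; \sum_{k=1}^{K} \theta_{w k}\, \bar{v}_k, \qquad
\bar{u}_{w,\mathrm{ctx}} \;=\; \sum_{k=1}^{K} p(k \mid w, c_{1:M})\, \bar{v}_k .
\]
```

These prior- and posterior-mean vectors are the convex combinations of topic vectors referred to in the text (Figure FIGREF23).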
I used INLINEFORM1 for NIPS, INLINEFORM2 for state of the Union, and INLINEFORM3 for the two smaller datasets. Methods were able to leverage the remainder of the context, either by adding the context's vectors, or via the posterior (Equation EQREF22 ), which helped for all methods except the naive skip-gram. We can identify several noteworthy findings. First, the generic big data vectors (Google+context) were outperformed by the skip-gram on 3 out of 4 datasets (and by the skip-gram topic model on the other), by a large margin, indicating that domain-specific embeddings are often important. Second, the mixed membership models, using posterior inference, beat or matched their naive Bayes counterparts, for both the word embedding models and the topic models. As hypothesized, posterior inference on INLINEFORM4 at test time was important for good performance. Finally, the topic models beat their corresponding word embedding models at prediction. I therefore recommend the use of our MMSG topic model variant for predictive language modeling in the small data regime. I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method. Across the three datasets, several clear trends emerged (Table TABREF26 ). First, the generic Google vectors were consistently and substantially outperformed in classification performance by the skipgram (SG) and MMSG vectors, highlighting the importance of corpus-specific embeddings. Second, despite the MMSG's superior performance at language modeling on small datasets, the SG features outperformed the MMSG's at the document categorization task. By encoding vectors at the topic level instead of the word level, the MMSG loses word level resolution in the embeddings, which turned out to be valuable for these particular classification tasks. We are not, however, restricted to use only one type of embedding to construct features for classification. Interestingly, when the SG and MMSG features were concatenated (SG+MMSG), this improved classification performance over these vectors individually. This suggests that the topic-level MMSG vectors and word-level SG vectors encode complementary information, and both are beneficial for performance. Finally, further concatenating the generic Google vectors' features (SG+MMSG+Google) improved performance again, despite the fact that these vectors performed poorly on their own. It should be noted that tf-idf, which is notoriously effective for document categorization, outperformed the embedding methods on these datasets. 
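A minimal sketch of the document-categorization feature pipeline described above, under the assumption that per-token vectors from the SG and MMSG models are available as simple lookups; the dictionaries and toy labels below are placeholders, not the author's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(tokens, vecs, dim):
    """Sum the vectors of a document's tokens and normalize to unit length."""
    v = np.zeros(dim)
    for t in tokens:
        if t in vecs:
            v += vecs[t]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def sg_plus_mmsg_features(tokens, sg_vecs, mmsg_vecs, d_sg, d_mmsg):
    """Concatenate word-level (SG) and topic-level (MMSG) document vectors (the SG+MMSG setting)."""
    return np.concatenate([doc_vector(tokens, sg_vecs, d_sg),
                           doc_vector(tokens, mmsg_vecs, d_mmsg)])

# Toy stand-ins for the two embedding tables and a two-document training set.
rng = np.random.default_rng(0)
sg_vecs = {w: rng.normal(size=8) for w in ["bayesian", "network", "stock", "market"]}
mmsg_vecs = {w: rng.normal(size=4) for w in sg_vecs}

docs = [["bayesian", "network"], ["stock", "market"]]
labels = [0, 1]
X = np.stack([sg_plus_mmsg_features(d, sg_vecs, mmsg_vecs, 8, 4) for d in docs])

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```

The same construction extends to the SG+MMSG+Google configuration by concatenating a third document vector built from the generic pre-trained embeddings.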
I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task. Computational Social Science Case Studies: State of the Union and NIPS I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”). On the NIPS corpus, for the input word “Bayesian” (Table ), the naive Bayes and skip-gram models learned a topic with words that refer to Bayesian networks, probabilistic models, and neural networks. The mixed membership models are able to separate this into more coherent and specific topics including Bayesian modeling, Bayesian training of neural networks (for which Sir David MacKay was a strong proponent, and Andreas Weigend wrote an influential early paper), and Monte Carlo methods. By performing the additive composition of word vectors, which we obtain by finding the prior mean vector for each word type INLINEFORM0 , INLINEFORM1 (and then normalizing), we obtain relevant topics INLINEFORM2 as nearest neighbors (Figure FIGREF28 ). Similarly, we find that the additive composition of topic and word vectors works correctly: INLINEFORM3 , and INLINEFORM4 . The INLINEFORM0 -SNE visualization of NIPS documents (Figure FIGREF28 ) shows some temporal clustering patterns (blue documents are more recent, red documents are older, and gray points are topics). I provide a more detailed case study on NIPS in the supplementary material. Conclusion I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues. Acknowledgements I thank Eric Nalisnick and Padhraic Smyth for many helpful discussions. 
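The leave-one-out lasso evaluation used for the year-prediction regression described in the experiments can be sketched as follows; synthetic features stand in for the document embeddings, and the regularization strength is an illustrative choice rather than the value used in the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import mean_squared_error

# X: one feature row per address (in the paper, summed and normalized embeddings);
# y: the year of each address. Synthetic stand-ins keep the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 16))
y = np.linspace(1790, 2015, 40)

model = Lasso(alpha=0.1, max_iter=10000)
preds = cross_val_predict(model, X, y, cv=LeaveOneOut())  # leave-one-out cross-validation
rmse = float(np.sqrt(mean_squared_error(y, preds)))
print(f"LOO-CV RMSE: {rmse:.1f} years")
```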
Supplementary Material ] Related Work In this supplementary document, we discuss related work in the literature and its relation to our proposed methods, provide a case study on NIPS articles, and derive the collapsed Gibbs sampling update for the MMSGTM, which we leverage when training the MMSG. Topic Modeling and Word Embeddings The Gaussian LDA model of BIBREF21 improves the performance of topic modeling by leveraging the semantic information encoded in word embeddings. Gaussian LDA modifies the generative process of LDA such that each topic is assumed to generate the vectors via its own Gaussian distribution. Similarly to our MMSG model, in Gaussian LDA each topic is encoded with a vector, in this case the mean of the Gaussian. It takes pre-trained word embeddings as input, rather than learning the embeddings from data within the same model, and does not aim to perform word embedding. The topical word embedding (TWE) models of BIBREF22 reverse this, as they take LDA topic assignments of words as input, and aim to use them to improve the resultant word embeddings. The authors propose three variants, each of which modifies the skip-gram training objective to use LDA topic assignments together with words. In the best performing variant, called TWE-1, a standard skip-gram word embedding model is trained independently with another skip-gram variant, which tries to predict context words given the input word's topic assignment. The skip-gram embedding and the topic embeddings are concatenated to form the final embedding. At test time, a distribution over topics for the word given the context, INLINEFORM0 is estimated according to the topic counts over the other context words. Using this as a prior, a posterior over topics given both the input word and the context is calculated, and similarities between pairs of words (with their contexts) are averaged over this posterior, in a procedure inspired by those used by BIBREF43 , BIBREF36 . The primary similarity to our MMSG approach is the use of a training algorithm involving the prediction of context words, given a topic. Our method does this as part of an overall model-based inference procedure, and we learn mixed membership proportions INLINEFORM1 rather than using empirical counts as the prior over topics for a word token. In accordance with the skip-gram's prediction model, we are thus able to model the context words in the data likelihood term when computing the posterior probability of the topic assignment. TWE-1 requires that topic assignments are available at test time. It provides a mechanism to predict contextual similarity, but not to predict held-out context words, so we are unable to compare to it in our experiments. Other neurally-inspired topic models include replicated softmax BIBREF34 , and its successor, DocNADE BIBREF37 . Replicated softmax extends the restricted Boltzmann machine to handle multinomial counts for document modeling. DocNADE builds on the ideas of replicated softmax, but uses the NADE architecture, where observations (i.e. words) are modeled sequentially given the previous observations. Multi-Prototype Embedding Models Multi-prototype embeddings models are another relevant line of work. These models address lexical ambiguity by assigning multiple vectors to each word type, each corresponding to a different meaning of that word. BIBREF43 propose to cluster the occurrences of each word type, based on features extracted from its context. Embeddings are then learned for each cluster. 
BIBREF36 apply a similar approach, but they use initial single-prototype word embeddings to provide the features used for clustering. These clustering methods have some resemblance to our topic model pre-clustering step, although their clustering is applied within instances of a given word type, rather than globally across all word types, as in our methods. This results in models with more vectors than words, while we aim to find fewer vectors than words, to reduce the model's complexity for small datasets. Rather than employing an off-the-shelf clustering algorithm and then applying an unrelated embedding model to its output, our approach aims to perform model-based clustering within an overall joint model of topic/cluster assignments and word vectors. Perhaps the most similar model to ours in the literature is the probabilistic multi-prototype embedding model of BIBREF45 , who treat the prototype assignment of a word as a latent variable, assumed drawn from a mixture over prototypes for each word. The embeddings are then trained using EM. Our MMSG model can be understood as the mixed membership version of this model, in which the prototypes (vectors) are shared across all word types, and each word type has its own mixed membership proportions across the shared prototypes. While a similar EM algorithm can be applied to the MMSG, the E-step is much more expensive, as we typically desire many more shared vectors (often in the thousands) than we would prototypes per a single word type (Tian et al. use ten in their experiments). We use the Metropolis-Hastings-Walker algorithm with the topic model reparameterization of our model in order to address this by efficiently pre-solving the E-step. Mixed Membership Modeling Mixed membership modeling is a flexible alternative to traditional clustering, in which each data point is assigned to a single cluster. Instead, mixed membership models posit that individual entities are associated with multiple underlying clusters, to differing degrees, as encoded by a mixed membership vector that sums to one across the clusters BIBREF28 , BIBREF26 . These mixed membership proportions are generally used to model lower-level grouped data, such as the words inside a document. Each lower-level data point inside a group is assumed to be assigned to one of the shared, global clusters according to the group-level membership proportions. Thus, a mixed membership model consists of a mixture model for each group, which share common mixture component parameters, but with differing mixture proportions. This formalism has lead to probabilistic models for a variety of applications, including medical diagnosis BIBREF39 , population genetics BIBREF42 , survey analysis BIBREF29 , computer vision BIBREF27 , BIBREF30 , text documents BIBREF35 , BIBREF7 , and social network analysis BIBREF25 . Nonparametric Bayesian extensions, in which the number of underlying clusters is learned from data via Bayesian inference, have also been proposed BIBREF44 . In this work, dictionary words are assigned a mixed membership distribution over a set of shared latent vector space embeddings. Each instantiation of a dictionary word (an “input” word) is assigned to one of the shared embeddings based on its dictionary word's membership vector. The words in its context (“output” words) are assumed to be drawn based on the chosen embedding. Case Study on NIPS In Figure FIGREF33 , we show a zoomed in INLINEFORM0 -SNE visualization of NIPS document embeddings. 
We can see regions of the space corresponding to learning algorithms (bottom), data space and latent space (center), training neural networks (top), and nearest neighbors (bottom-left). We also visualized the authors' embeddings via INLINEFORM1 -SNE (Figure FIGREF34 ). We find regions of latent space for reinforcement learning authors (left: “state, action,...,” Singh, Barto,Sutton), probabilistic methods (right: “mixture, model,” “monte, carlo,” Bishop, Williams, Barber, Opper, Jordan, Ghahramani, Tresp, Smyth), and evaluation (top-right: “results, performance, experiments,...”). Derivation of the Collapsed Gibbs Update Let INLINEFORM0 be the number of output words in the INLINEFORM1 th context, let INLINEFORM2 be those output words, and let INLINEFORM3 be the input words other that INLINEFORM4 (similarly, topic assignments INLINEFORM5 and output words INLINEFORM6 ). Then the collapsed Gibbs update samples from the conditional distribution INLINEFORM7 We recognize the first integral as the mean of a Dirichlet distribution which we obtain via conjugacy: INLINEFORM0 The above can also be understood as the probability of the next ball drawn from a multivariate Polya urn model, also known as the Dirichlet-compound multinomial distribution, arising from the posterior predictive distribution of a discrete likelihood with a Dirichlet prior. We will need the full form of such a distribution to analyze the second integral. Once again leveraging conjugacy, we have: INLINEFORM0 INLINEFORM0 where INLINEFORM0 is the number of times that output word INLINEFORM1 occurs in the INLINEFORM2 th context, since the final integral is over the full support of a Dirichlet distribution, which integrates to one. Eliminating terms that aren't affected by the INLINEFORM3 assignment, the above is INLINEFORM4 where we have used the fact that INLINEFORM0 for any INLINEFORM1 , and integer INLINEFORM2 . We can interpret this as the probability of drawing the context words under the multivariate Polya urn model, in which the number of “colored balls” (word counts plus prior counts) is increased by one each time a certain color (word) is selected. In other words, in each step, corresponding to the selection of each context word, we draw a ball from the urn, then put it back, along with another ball of the same color. The INLINEFORM3 and INLINEFORM4 terms reflect that the counts have been changed by adding these extra balls into the urn in each step. The second to last equation shows that this process is exchangeable: it does not matter which order the balls were drawn in when determining the probability of the sequence. Multiplying this with the term from the first integral, calculated earlier, gives us the final form of the update equation, INLINEFORM5
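The equations in this derivation were reduced to placeholders by the extraction pipeline. Assembling the pieces the prose describes — a Dirichlet posterior mean for the input word's topic proportions, multiplied by a Dirichlet-compound-multinomial (Polya urn) likelihood for the context words — the update has the following standard form; the notation is a reconstruction and may differ from the paper's.

```latex
\[
p\!\left(z_i = k \mid \boldsymbol{z}^{\neg i}, w_i, \boldsymbol{c}_i\right)
\;\propto\;
\underbrace{\frac{n^{\neg i}_{w_i k} + \alpha}
                 {\sum_{k'} \left(n^{\neg i}_{w_i k'} + \alpha\right)}}_{\text{Dirichlet mean for } \theta_{w_i}}
\;\times\;
\underbrace{\frac{\prod_{v} \prod_{j=0}^{N_{iv}-1} \left(n^{\neg i}_{k v} + \beta + j\right)}
                 {\prod_{j=0}^{N_{i}-1} \left(n^{\neg i}_{k \cdot} + V\beta + j\right)}}_{\text{Polya urn over the context words}},
\]
```

where $n^{\neg i}_{w_i k}$ counts the other contexts of input word $w_i$ assigned to topic $k$, $n^{\neg i}_{k v}$ counts how often output word $v$ is generated by topic $k$ elsewhere, $N_{iv}$ is the number of times $v$ occurs in the $i$th context, $N_i = \sum_v N_{iv}$, $V$ is the vocabulary size, and $\alpha, \beta$ are symmetric Dirichlet hyperparameters. The $+j$ terms are the "extra balls" returned to the urn after each draw, and the exchangeability of the product is what makes the expression independent of the order in which the context words are drawn.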
document categorization, regression tasks
1949d84653562fa9e83413796ae55980ab7318f2
1949d84653562fa9e83413796ae55980ab7318f2_0
Q: What is MRR? Text: Introduction Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations. Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important. Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts. In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data. Background In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings. Later authors have noted that these word embeddings are useful for semantic representations of words, independently of whether a full joint probabilistic language model is learned, and that alternative training schemes can be beneficial for learning the embeddings. In particular, BIBREF0 , BIBREF1 proposed the skip-gram model, which inverts the language model prediction task and aims to predict the context given an input word. The skip-gram model is a log-bilinear discriminative probabilistic classifier parameterized by “input” word embedding vectors INLINEFORM0 for the input words INLINEFORM1 , and “output” word embedding vectors INLINEFORM2 for context words INLINEFORM3 , as shown in Table TABREF2 , top-left. 
Topic models such as latent Dirichlet allocation (LDA) BIBREF7 are another class of probabilistic language models that have been used for semantic representation BIBREF6 . A straightforward way to model text corpora is via unsupervised multinomial naive Bayes, in which a latent cluster assignment for each document selects a multinomial distribution over words, referred to as a topic, with which the documents' words are assumed to be generated. LDA topic models improve over naive Bayes by using a mixed membership model, in which the assumption that all words in a document INLINEFORM0 belong to the same topic is relaxed, and replaced with a distribution over topics INLINEFORM1 . In the model's assumed generative process, for each word INLINEFORM2 in document INLINEFORM3 , a topic assignment INLINEFORM4 is drawn via INLINEFORM5 , then the word is drawn from the chosen topic INLINEFORM6 . The mixed membership formalism provides a useful compromise between model flexibility and statistical efficiency: the INLINEFORM7 topics INLINEFORM8 are shared across all documents, thereby sharing statistical strength, but each document is free to use the topics to its own unique degree. Bayesian inference further aids data efficiency, as uncertainty over INLINEFORM9 can be managed for shorter documents. Some recent papers have aimed to combine topic models and word embeddings BIBREF21 , BIBREF22 , but they do not aim to address the small data problem for computational social science, which I focus on here. I provide a more detailed discussion of related work in the supplementary. The Mixed Membership Skip-Gram To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram. As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . 
Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0 We can expect that the resulting mixed membership word embeddings are beneficial in the small-to-medium data regime for the following reasons: Of course, the model also requires some new parameters to be learned, namely the mixed membership proportions INLINEFORM0 . Based on topic modeling, I hypothesized that with care, these added parameters need not adversely affect performance in the small-medium data regime, for two reasons: 1) we can use a Bayesian approach to effectively manage uncertainty in them, and to marginalize them out, which prevents them being a bottleneck during training; and 2) at test time, using the posterior for INLINEFORM1 given the context, instead of the “prior” INLINEFORM2 , mitigates the impact of uncertainty in INLINEFORM3 due to limited training data: DISPLAYFORM0 To obtain a vector for a word type INLINEFORM0 , we can use the prior mean, INLINEFORM1 . For a word token INLINEFORM2 , we can leverage its context via the posterior mean, INLINEFORM3 . These embeddings are convex combinations of topic vectors (see Figure FIGREF23 for an example). With fewer vectors than words, some model capacity is lost, but the flexibility of the mixed membership representation allows the model to compensate. When the number of shared vectors equals the number of words, the mixed membership skip-gram is strictly more representationally powerful than the skip-gram. With more vectors than words, we can expect that the increased representational power would be beneficial in the big data regime. As this is not my goal, I leave this for future work. Experimental Results The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research. Quantitative Experiments I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits. The results are shown in Table TABREF25 . I compared to a word frequency baseline, the skip-gram (SG), and Tomas Mikolov/Google's vectors trained on Google News, INLINEFORM0 billion, via CBOW. Simulated annealing was performed for 1,000 iterations, NCE was performed for 1 million minibatches of size 128, and 128-dimensional embeddings were used (300 for Google). 
I used INLINEFORM1 for NIPS, INLINEFORM2 for state of the Union, and INLINEFORM3 for the two smaller datasets. Methods were able to leverage the remainder of the context, either by adding the context's vectors, or via the posterior (Equation EQREF22 ), which helped for all methods except the naive skip-gram. We can identify several noteworthy findings. First, the generic big data vectors (Google+context) were outperformed by the skip-gram on 3 out of 4 datasets (and by the skip-gram topic model on the other), by a large margin, indicating that domain-specific embeddings are often important. Second, the mixed membership models, using posterior inference, beat or matched their naive Bayes counterparts, for both the word embedding models and the topic models. As hypothesized, posterior inference on INLINEFORM4 at test time was important for good performance. Finally, the topic models beat their corresponding word embedding models at prediction. I therefore recommend the use of our MMSG topic model variant for predictive language modeling in the small data regime. I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method. Across the three datasets, several clear trends emerged (Table TABREF26 ). First, the generic Google vectors were consistently and substantially outperformed in classification performance by the skipgram (SG) and MMSG vectors, highlighting the importance of corpus-specific embeddings. Second, despite the MMSG's superior performance at language modeling on small datasets, the SG features outperformed the MMSG's at the document categorization task. By encoding vectors at the topic level instead of the word level, the MMSG loses word level resolution in the embeddings, which turned out to be valuable for these particular classification tasks. We are not, however, restricted to use only one type of embedding to construct features for classification. Interestingly, when the SG and MMSG features were concatenated (SG+MMSG), this improved classification performance over these vectors individually. This suggests that the topic-level MMSG vectors and word-level SG vectors encode complementary information, and both are beneficial for performance. Finally, further concatenating the generic Google vectors' features (SG+MMSG+Google) improved performance again, despite the fact that these vectors performed poorly on their own. It should be noted that tf-idf, which is notoriously effective for document categorization, outperformed the embedding methods on these datasets. 
I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task. Computational Social Science Case Studies: State of the Union and NIPS I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”). On the NIPS corpus, for the input word “Bayesian” (Table ), the naive Bayes and skip-gram models learned a topic with words that refer to Bayesian networks, probabilistic models, and neural networks. The mixed membership models are able to separate this into more coherent and specific topics including Bayesian modeling, Bayesian training of neural networks (for which Sir David MacKay was a strong proponent, and Andreas Weigend wrote an influential early paper), and Monte Carlo methods. By performing the additive composition of word vectors, which we obtain by finding the prior mean vector for each word type INLINEFORM0 , INLINEFORM1 (and then normalizing), we obtain relevant topics INLINEFORM2 as nearest neighbors (Figure FIGREF28 ). Similarly, we find that the additive composition of topic and word vectors works correctly: INLINEFORM3 , and INLINEFORM4 . The INLINEFORM0 -SNE visualization of NIPS documents (Figure FIGREF28 ) shows some temporal clustering patterns (blue documents are more recent, red documents are older, and gray points are topics). I provide a more detailed case study on NIPS in the supplementary material. Conclusion I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues. Acknowledgements I thank Eric Nalisnick and Padhraic Smyth for many helpful discussions. 
Supplementary Material ] Related Work In this supplementary document, we discuss related work in the literature and its relation to our proposed methods, provide a case study on NIPS articles, and derive the collapsed Gibbs sampling update for the MMSGTM, which we leverage when training the MMSG. Topic Modeling and Word Embeddings The Gaussian LDA model of BIBREF21 improves the performance of topic modeling by leveraging the semantic information encoded in word embeddings. Gaussian LDA modifies the generative process of LDA such that each topic is assumed to generate the vectors via its own Gaussian distribution. Similarly to our MMSG model, in Gaussian LDA each topic is encoded with a vector, in this case the mean of the Gaussian. It takes pre-trained word embeddings as input, rather than learning the embeddings from data within the same model, and does not aim to perform word embedding. The topical word embedding (TWE) models of BIBREF22 reverse this, as they take LDA topic assignments of words as input, and aim to use them to improve the resultant word embeddings. The authors propose three variants, each of which modifies the skip-gram training objective to use LDA topic assignments together with words. In the best performing variant, called TWE-1, a standard skip-gram word embedding model is trained independently with another skip-gram variant, which tries to predict context words given the input word's topic assignment. The skip-gram embedding and the topic embeddings are concatenated to form the final embedding. At test time, a distribution over topics for the word given the context, INLINEFORM0 is estimated according to the topic counts over the other context words. Using this as a prior, a posterior over topics given both the input word and the context is calculated, and similarities between pairs of words (with their contexts) are averaged over this posterior, in a procedure inspired by those used by BIBREF43 , BIBREF36 . The primary similarity to our MMSG approach is the use of a training algorithm involving the prediction of context words, given a topic. Our method does this as part of an overall model-based inference procedure, and we learn mixed membership proportions INLINEFORM1 rather than using empirical counts as the prior over topics for a word token. In accordance with the skip-gram's prediction model, we are thus able to model the context words in the data likelihood term when computing the posterior probability of the topic assignment. TWE-1 requires that topic assignments are available at test time. It provides a mechanism to predict contextual similarity, but not to predict held-out context words, so we are unable to compare to it in our experiments. Other neurally-inspired topic models include replicated softmax BIBREF34 , and its successor, DocNADE BIBREF37 . Replicated softmax extends the restricted Boltzmann machine to handle multinomial counts for document modeling. DocNADE builds on the ideas of replicated softmax, but uses the NADE architecture, where observations (i.e. words) are modeled sequentially given the previous observations. Multi-Prototype Embedding Models Multi-prototype embeddings models are another relevant line of work. These models address lexical ambiguity by assigning multiple vectors to each word type, each corresponding to a different meaning of that word. BIBREF43 propose to cluster the occurrences of each word type, based on features extracted from its context. Embeddings are then learned for each cluster. 
BIBREF36 apply a similar approach, but they use initial single-prototype word embeddings to provide the features used for clustering. These clustering methods have some resemblance to our topic model pre-clustering step, although their clustering is applied within instances of a given word type, rather than globally across all word types, as in our methods. This results in models with more vectors than words, while we aim to find fewer vectors than words, to reduce the model's complexity for small datasets. Rather than employing an off-the-shelf clustering algorithm and then applying an unrelated embedding model to its output, our approach aims to perform model-based clustering within an overall joint model of topic/cluster assignments and word vectors. Perhaps the most similar model to ours in the literature is the probabilistic multi-prototype embedding model of BIBREF45 , who treat the prototype assignment of a word as a latent variable, assumed drawn from a mixture over prototypes for each word. The embeddings are then trained using EM. Our MMSG model can be understood as the mixed membership version of this model, in which the prototypes (vectors) are shared across all word types, and each word type has its own mixed membership proportions across the shared prototypes. While a similar EM algorithm can be applied to the MMSG, the E-step is much more expensive, as we typically desire many more shared vectors (often in the thousands) than we would prototypes per a single word type (Tian et al. use ten in their experiments). We use the Metropolis-Hastings-Walker algorithm with the topic model reparameterization of our model in order to address this by efficiently pre-solving the E-step. Mixed Membership Modeling Mixed membership modeling is a flexible alternative to traditional clustering, in which each data point is assigned to a single cluster. Instead, mixed membership models posit that individual entities are associated with multiple underlying clusters, to differing degrees, as encoded by a mixed membership vector that sums to one across the clusters BIBREF28 , BIBREF26 . These mixed membership proportions are generally used to model lower-level grouped data, such as the words inside a document. Each lower-level data point inside a group is assumed to be assigned to one of the shared, global clusters according to the group-level membership proportions. Thus, a mixed membership model consists of a mixture model for each group, which share common mixture component parameters, but with differing mixture proportions. This formalism has lead to probabilistic models for a variety of applications, including medical diagnosis BIBREF39 , population genetics BIBREF42 , survey analysis BIBREF29 , computer vision BIBREF27 , BIBREF30 , text documents BIBREF35 , BIBREF7 , and social network analysis BIBREF25 . Nonparametric Bayesian extensions, in which the number of underlying clusters is learned from data via Bayesian inference, have also been proposed BIBREF44 . In this work, dictionary words are assigned a mixed membership distribution over a set of shared latent vector space embeddings. Each instantiation of a dictionary word (an “input” word) is assigned to one of the shared embeddings based on its dictionary word's membership vector. The words in its context (“output” words) are assumed to be drawn based on the chosen embedding. Case Study on NIPS In Figure FIGREF33 , we show a zoomed in INLINEFORM0 -SNE visualization of NIPS document embeddings. 
We can see regions of the space corresponding to learning algorithms (bottom), data space and latent space (center), training neural networks (top), and nearest neighbors (bottom-left). We also visualized the authors' embeddings via INLINEFORM1 -SNE (Figure FIGREF34 ). We find regions of latent space for reinforcement learning authors (left: “state, action,...,” Singh, Barto,Sutton), probabilistic methods (right: “mixture, model,” “monte, carlo,” Bishop, Williams, Barber, Opper, Jordan, Ghahramani, Tresp, Smyth), and evaluation (top-right: “results, performance, experiments,...”). Derivation of the Collapsed Gibbs Update Let INLINEFORM0 be the number of output words in the INLINEFORM1 th context, let INLINEFORM2 be those output words, and let INLINEFORM3 be the input words other that INLINEFORM4 (similarly, topic assignments INLINEFORM5 and output words INLINEFORM6 ). Then the collapsed Gibbs update samples from the conditional distribution INLINEFORM7 We recognize the first integral as the mean of a Dirichlet distribution which we obtain via conjugacy: INLINEFORM0 The above can also be understood as the probability of the next ball drawn from a multivariate Polya urn model, also known as the Dirichlet-compound multinomial distribution, arising from the posterior predictive distribution of a discrete likelihood with a Dirichlet prior. We will need the full form of such a distribution to analyze the second integral. Once again leveraging conjugacy, we have: INLINEFORM0 INLINEFORM0 where INLINEFORM0 is the number of times that output word INLINEFORM1 occurs in the INLINEFORM2 th context, since the final integral is over the full support of a Dirichlet distribution, which integrates to one. Eliminating terms that aren't affected by the INLINEFORM3 assignment, the above is INLINEFORM4 where we have used the fact that INLINEFORM0 for any INLINEFORM1 , and integer INLINEFORM2 . We can interpret this as the probability of drawing the context words under the multivariate Polya urn model, in which the number of “colored balls” (word counts plus prior counts) is increased by one each time a certain color (word) is selected. In other words, in each step, corresponding to the selection of each context word, we draw a ball from the urn, then put it back, along with another ball of the same color. The INLINEFORM3 and INLINEFORM4 terms reflect that the counts have been changed by adding these extra balls into the urn in each step. The second to last equation shows that this process is exchangeable: it does not matter which order the balls were drawn in when determining the probability of the sequence. Multiplying this with the term from the first integral, calculated earlier, gives us the final form of the update equation, INLINEFORM5
mean reciprocal rank
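For reference, the metric named in this answer is computed, in its standard form, as

```latex
\[
\mathrm{MRR} \;=\; \frac{1}{|Q|} \sum_{q=1}^{|Q|} \frac{1}{\mathrm{rank}_q},
\]
```

where, in the prediction experiments above, $\mathrm{rank}_q$ is the position of the held-out context word in the model's ranked list of candidate words for the $q$th held-out (input word, context) pair, and $|Q| = 10{,}000$ pairs per train/test split.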
7ee660927e2b202376849e489faa7341518adaf9
7ee660927e2b202376849e489faa7341518adaf9_0
Q: Which techniques for word embeddings and topic models are used? Text: Introduction Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations. Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important. Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts. In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data. Background In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings. Later authors have noted that these word embeddings are useful for semantic representations of words, independently of whether a full joint probabilistic language model is learned, and that alternative training schemes can be beneficial for learning the embeddings. In particular, BIBREF0 , BIBREF1 proposed the skip-gram model, which inverts the language model prediction task and aims to predict the context given an input word. The skip-gram model is a log-bilinear discriminative probabilistic classifier parameterized by “input” word embedding vectors INLINEFORM0 for the input words INLINEFORM1 , and “output” word embedding vectors INLINEFORM2 for context words INLINEFORM3 , as shown in Table TABREF2 , top-left. 
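To make the log-bilinear parameterization concrete, here is a minimal numpy sketch of how the skip-gram scores context words given an input word. The vocabulary size, dimensionality, and all vectors are arbitrary placeholders for illustration, not values or code from the paper.

```python
import numpy as np

# Toy vocabulary size and embedding dimension; all vectors are random placeholders.
rng = np.random.default_rng(0)
vocab_size, dim = 1000, 128
input_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))   # "input" word vectors
output_vecs = rng.normal(scale=0.1, size=(vocab_size, dim))  # "output" (context) word vectors

def context_distribution(w):
    """p(c | w): softmax over the vocabulary of dot products with the input word's vector."""
    scores = output_vecs @ input_vecs[w]
    scores -= scores.max()                 # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()

p = context_distribution(w=42)
print(p.shape, round(float(p.sum()), 6))   # (1000,) 1.0
```

In practice this softmax over the full vocabulary is what noise-contrastive estimation or negative sampling approximates during training; the sketch only shows the model's prediction rule.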
Topic models such as latent Dirichlet allocation (LDA) BIBREF7 are another class of probabilistic language models that have been used for semantic representation BIBREF6 . A straightforward way to model text corpora is via unsupervised multinomial naive Bayes, in which a latent cluster assignment for each document selects a multinomial distribution over words, referred to as a topic, with which the documents' words are assumed to be generated. LDA topic models improve over naive Bayes by using a mixed membership model, in which the assumption that all words in a document INLINEFORM0 belong to the same topic is relaxed, and replaced with a distribution over topics INLINEFORM1 . In the model's assumed generative process, for each word INLINEFORM2 in document INLINEFORM3 , a topic assignment INLINEFORM4 is drawn via INLINEFORM5 , then the word is drawn from the chosen topic INLINEFORM6 . The mixed membership formalism provides a useful compromise between model flexibility and statistical efficiency: the INLINEFORM7 topics INLINEFORM8 are shared across all documents, thereby sharing statistical strength, but each document is free to use the topics to its own unique degree. Bayesian inference further aids data efficiency, as uncertainty over INLINEFORM9 can be managed for shorter documents. Some recent papers have aimed to combine topic models and word embeddings BIBREF21 , BIBREF22 , but they do not aim to address the small data problem for computational social science, which I focus on here. I provide a more detailed discussion of related work in the supplementary. The Mixed Membership Skip-Gram To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram. As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . 
Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0 We can expect that the resulting mixed membership word embeddings are beneficial in the small-to-medium data regime for the following reasons: Of course, the model also requires some new parameters to be learned, namely the mixed membership proportions INLINEFORM0 . Based on topic modeling, I hypothesized that with care, these added parameters need not adversely affect performance in the small-medium data regime, for two reasons: 1) we can use a Bayesian approach to effectively manage uncertainty in them, and to marginalize them out, which prevents them being a bottleneck during training; and 2) at test time, using the posterior for INLINEFORM1 given the context, instead of the “prior” INLINEFORM2 , mitigates the impact of uncertainty in INLINEFORM3 due to limited training data: DISPLAYFORM0 To obtain a vector for a word type INLINEFORM0 , we can use the prior mean, INLINEFORM1 . For a word token INLINEFORM2 , we can leverage its context via the posterior mean, INLINEFORM3 . These embeddings are convex combinations of topic vectors (see Figure FIGREF23 for an example). With fewer vectors than words, some model capacity is lost, but the flexibility of the mixed membership representation allows the model to compensate. When the number of shared vectors equals the number of words, the mixed membership skip-gram is strictly more representationally powerful than the skip-gram. With more vectors than words, we can expect that the increased representational power would be beneficial in the big data regime. As this is not my goal, I leave this for future work. Experimental Results The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research. Quantitative Experiments I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits. The results are shown in Table TABREF25 . I compared to a word frequency baseline, the skip-gram (SG), and Tomas Mikolov/Google's vectors trained on Google News, INLINEFORM0 billion, via CBOW. Simulated annealing was performed for 1,000 iterations, NCE was performed for 1 million minibatches of size 128, and 128-dimensional embeddings were used (300 for Google). 
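The posterior-based prediction referred to around Equation EQREF22 can be sketched as follows: combine a word type's topic proportions with the log-bilinear likelihood of its observed context, and embed the token as the posterior-weighted convex combination of topic vectors. This is a toy illustration under assumed random parameters (theta, topic and output vectors), not the paper's implementation.

```python
import numpy as np

# Hypothetical parameters: K topic vectors, V output-word vectors, and
# per-word-type mixed membership proportions theta (all random here).
rng = np.random.default_rng(2)
K, V, dim = 20, 500, 64
topic_vecs = rng.normal(scale=0.1, size=(K, dim))   # one embedding per topic
output_vecs = rng.normal(scale=0.1, size=(V, dim))  # one vector per output word
theta = rng.dirichlet(np.ones(K), size=V)           # mixed membership per word type

def log_p_context_given_topic(k, context_words):
    """Log-likelihood of the observed context words under topic k's log-bilinear model."""
    s = output_vecs @ topic_vecs[k]
    log_softmax = s - (s.max() + np.log(np.exp(s - s.max()).sum()))
    return log_softmax[context_words].sum()

def token_embedding(input_word, context_words):
    """Posterior over topics given word + context, then a convex combination of topic vectors."""
    log_post = np.log(theta[input_word])
    for k in range(K):
        log_post[k] += log_p_context_given_topic(k, context_words)
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    vec = post @ topic_vecs
    return vec / np.linalg.norm(vec)

emb = token_embedding(input_word=7, context_words=[3, 10, 99])
```

Replacing the posterior weights with the prior proportions theta recovers the word-type (rather than token) embedding described above.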
I used INLINEFORM1 for NIPS, INLINEFORM2 for state of the Union, and INLINEFORM3 for the two smaller datasets. Methods were able to leverage the remainder of the context, either by adding the context's vectors, or via the posterior (Equation EQREF22 ), which helped for all methods except the naive skip-gram. We can identify several noteworthy findings. First, the generic big data vectors (Google+context) were outperformed by the skip-gram on 3 out of 4 datasets (and by the skip-gram topic model on the other), by a large margin, indicating that domain-specific embeddings are often important. Second, the mixed membership models, using posterior inference, beat or matched their naive Bayes counterparts, for both the word embedding models and the topic models. As hypothesized, posterior inference on INLINEFORM4 at test time was important for good performance. Finally, the topic models beat their corresponding word embedding models at prediction. I therefore recommend the use of our MMSG topic model variant for predictive language modeling in the small data regime. I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method. Across the three datasets, several clear trends emerged (Table TABREF26 ). First, the generic Google vectors were consistently and substantially outperformed in classification performance by the skipgram (SG) and MMSG vectors, highlighting the importance of corpus-specific embeddings. Second, despite the MMSG's superior performance at language modeling on small datasets, the SG features outperformed the MMSG's at the document categorization task. By encoding vectors at the topic level instead of the word level, the MMSG loses word level resolution in the embeddings, which turned out to be valuable for these particular classification tasks. We are not, however, restricted to use only one type of embedding to construct features for classification. Interestingly, when the SG and MMSG features were concatenated (SG+MMSG), this improved classification performance over these vectors individually. This suggests that the topic-level MMSG vectors and word-level SG vectors encode complementary information, and both are beneficial for performance. Finally, further concatenating the generic Google vectors' features (SG+MMSG+Google) improved performance again, despite the fact that these vectors performed poorly on their own. It should be noted that tf-idf, which is notoriously effective for document categorization, outperformed the embedding methods on these datasets. 
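As a rough illustration of the feature construction described above (sum a document's token vectors, normalize to unit length, optionally concatenate several models' features, then fit a logistic regression classifier), the sketch below uses randomly generated stand-ins for the token vectors and labels; it is not the exact experimental pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(token_vectors):
    """Sum a document's token vectors and normalize to unit length."""
    v = token_vectors.sum(axis=0)
    return v / np.linalg.norm(v)

def build_features(docs_per_model):
    """Concatenate per-document vectors from several embedding models (e.g. SG, MMSG, Google)."""
    per_model = [np.vstack([doc_vector(d) for d in docs]) for docs in docs_per_model]
    return np.hstack(per_model)

# toy usage: 40 documents represented by two hypothetical models (64- and 128-dim token vectors)
rng = np.random.default_rng(4)
lengths = rng.integers(5, 30, size=40)
docs_a = [rng.normal(size=(n, 64)) for n in lengths]
docs_b = [rng.normal(size=(n, 128)) for n in lengths]
X = build_features([docs_a, docs_b])
y = rng.integers(0, 2, size=40)            # placeholder labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
```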
I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task. Computational Social Science Case Studies: State of the Union and NIPS I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”). On the NIPS corpus, for the input word “Bayesian” (Table ), the naive Bayes and skip-gram models learned a topic with words that refer to Bayesian networks, probabilistic models, and neural networks. The mixed membership models are able to separate this into more coherent and specific topics including Bayesian modeling, Bayesian training of neural networks (for which Sir David MacKay was a strong proponent, and Andreas Weigend wrote an influential early paper), and Monte Carlo methods. By performing the additive composition of word vectors, which we obtain by finding the prior mean vector for each word type INLINEFORM0 , INLINEFORM1 (and then normalizing), we obtain relevant topics INLINEFORM2 as nearest neighbors (Figure FIGREF28 ). Similarly, we find that the additive composition of topic and word vectors works correctly: INLINEFORM3 , and INLINEFORM4 . The INLINEFORM0 -SNE visualization of NIPS documents (Figure FIGREF28 ) shows some temporal clustering patterns (blue documents are more recent, red documents are older, and gray points are topics). I provide a more detailed case study on NIPS in the supplementary material. Conclusion I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues. Acknowledgements I thank Eric Nalisnick and Padhraic Smyth for many helpful discussions. 
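The additive-composition queries used in the case study above (summing normalized vectors and retrieving the nearest topic embeddings by cosine similarity) can be sketched as follows, with random placeholder vectors standing in for the learned topic embeddings and word-type proportions.

```python
import numpy as np

# Hypothetical unit-norm topic embeddings and two word types' topic proportions.
rng = np.random.default_rng(5)
K, dim = 50, 64
topic_vecs = rng.normal(size=(K, dim))
topic_vecs /= np.linalg.norm(topic_vecs, axis=1, keepdims=True)

def word_type_vector(theta_w):
    v = theta_w @ topic_vecs              # prior-mean convex combination of topic vectors
    return v / np.linalg.norm(v)

def nearest_topics(query_vec, top_n=5):
    sims = topic_vecs @ query_vec         # cosine similarity, since all vectors are unit length
    return np.argsort(-sims)[:top_n]

theta_a, theta_b = rng.dirichlet(np.ones(K), size=2)
composed = word_type_vector(theta_a) + word_type_vector(theta_b)
composed /= np.linalg.norm(composed)
print(nearest_topics(composed))
```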
Supplementary Material Related Work In this supplementary document, we discuss related work in the literature and its relation to our proposed methods, provide a case study on NIPS articles, and derive the collapsed Gibbs sampling update for the MMSGTM, which we leverage when training the MMSG. Topic Modeling and Word Embeddings The Gaussian LDA model of BIBREF21 improves the performance of topic modeling by leveraging the semantic information encoded in word embeddings. Gaussian LDA modifies the generative process of LDA such that each topic is assumed to generate the vectors via its own Gaussian distribution. Similarly to our MMSG model, in Gaussian LDA each topic is encoded with a vector, in this case the mean of the Gaussian. It takes pre-trained word embeddings as input, rather than learning the embeddings from data within the same model, and does not aim to perform word embedding. The topical word embedding (TWE) models of BIBREF22 reverse this, as they take LDA topic assignments of words as input, and aim to use them to improve the resultant word embeddings. The authors propose three variants, each of which modifies the skip-gram training objective to use LDA topic assignments together with words. In the best performing variant, called TWE-1, a standard skip-gram word embedding model is trained independently with another skip-gram variant, which tries to predict context words given the input word's topic assignment. The skip-gram embedding and the topic embeddings are concatenated to form the final embedding. At test time, a distribution over topics for the word given the context, INLINEFORM0 , is estimated according to the topic counts over the other context words. Using this as a prior, a posterior over topics given both the input word and the context is calculated, and similarities between pairs of words (with their contexts) are averaged over this posterior, in a procedure inspired by those used by BIBREF43 , BIBREF36 . The primary similarity to our MMSG approach is the use of a training algorithm involving the prediction of context words, given a topic. Our method does this as part of an overall model-based inference procedure, and we learn mixed membership proportions INLINEFORM1 rather than using empirical counts as the prior over topics for a word token. In accordance with the skip-gram's prediction model, we are thus able to model the context words in the data likelihood term when computing the posterior probability of the topic assignment. TWE-1 requires that topic assignments are available at test time. It provides a mechanism to predict contextual similarity, but not to predict held-out context words, so we are unable to compare to it in our experiments. Other neurally-inspired topic models include replicated softmax BIBREF34 , and its successor, DocNADE BIBREF37 . Replicated softmax extends the restricted Boltzmann machine to handle multinomial counts for document modeling. DocNADE builds on the ideas of replicated softmax, but uses the NADE architecture, where observations (i.e. words) are modeled sequentially given the previous observations. Multi-Prototype Embedding Models Multi-prototype embedding models are another relevant line of work. These models address lexical ambiguity by assigning multiple vectors to each word type, each corresponding to a different meaning of that word. BIBREF43 propose to cluster the occurrences of each word type, based on features extracted from its context. Embeddings are then learned for each cluster.
BIBREF36 apply a similar approach, but they use initial single-prototype word embeddings to provide the features used for clustering. These clustering methods have some resemblance to our topic model pre-clustering step, although their clustering is applied within instances of a given word type, rather than globally across all word types, as in our methods. This results in models with more vectors than words, while we aim to find fewer vectors than words, to reduce the model's complexity for small datasets. Rather than employing an off-the-shelf clustering algorithm and then applying an unrelated embedding model to its output, our approach aims to perform model-based clustering within an overall joint model of topic/cluster assignments and word vectors. Perhaps the most similar model to ours in the literature is the probabilistic multi-prototype embedding model of BIBREF45 , who treat the prototype assignment of a word as a latent variable, assumed drawn from a mixture over prototypes for each word. The embeddings are then trained using EM. Our MMSG model can be understood as the mixed membership version of this model, in which the prototypes (vectors) are shared across all word types, and each word type has its own mixed membership proportions across the shared prototypes. While a similar EM algorithm can be applied to the MMSG, the E-step is much more expensive, as we typically desire many more shared vectors (often in the thousands) than we would desire prototypes for a single word type (Tian et al. use ten in their experiments). We use the Metropolis-Hastings-Walker algorithm with the topic model reparameterization of our model in order to address this by efficiently pre-solving the E-step. Mixed Membership Modeling Mixed membership modeling is a flexible alternative to traditional clustering, in which each data point is assigned to a single cluster. Instead, mixed membership models posit that individual entities are associated with multiple underlying clusters, to differing degrees, as encoded by a mixed membership vector that sums to one across the clusters BIBREF28 , BIBREF26 . These mixed membership proportions are generally used to model lower-level grouped data, such as the words inside a document. Each lower-level data point inside a group is assumed to be assigned to one of the shared, global clusters according to the group-level membership proportions. Thus, a mixed membership model consists of a mixture model for each group, which share common mixture component parameters, but with differing mixture proportions. This formalism has led to probabilistic models for a variety of applications, including medical diagnosis BIBREF39 , population genetics BIBREF42 , survey analysis BIBREF29 , computer vision BIBREF27 , BIBREF30 , text documents BIBREF35 , BIBREF7 , and social network analysis BIBREF25 . Nonparametric Bayesian extensions, in which the number of underlying clusters is learned from data via Bayesian inference, have also been proposed BIBREF44 . In this work, dictionary words are assigned a mixed membership distribution over a set of shared latent vector space embeddings. Each instantiation of a dictionary word (an “input” word) is assigned to one of the shared embeddings based on that dictionary word's membership vector. The words in its context (“output” words) are assumed to be drawn based on the chosen embedding. Case Study on NIPS In Figure FIGREF33 , we show a zoomed-in INLINEFORM0 -SNE visualization of NIPS document embeddings.
We can see regions of the space corresponding to learning algorithms (bottom), data space and latent space (center), training neural networks (top), and nearest neighbors (bottom-left). We also visualized the authors' embeddings via INLINEFORM1 -SNE (Figure FIGREF34 ). We find regions of latent space for reinforcement learning authors (left: “state, action,...,” Singh, Barto, Sutton), probabilistic methods (right: “mixture, model,” “monte, carlo,” Bishop, Williams, Barber, Opper, Jordan, Ghahramani, Tresp, Smyth), and evaluation (top-right: “results, performance, experiments,...”). Derivation of the Collapsed Gibbs Update Let INLINEFORM0 be the number of output words in the INLINEFORM1 th context, let INLINEFORM2 be those output words, and let INLINEFORM3 be the input words other than INLINEFORM4 (similarly, topic assignments INLINEFORM5 and output words INLINEFORM6 ). Then the collapsed Gibbs update samples from the conditional distribution INLINEFORM7 We recognize the first integral as the mean of a Dirichlet distribution which we obtain via conjugacy: INLINEFORM0 The above can also be understood as the probability of the next ball drawn from a multivariate Polya urn model, also known as the Dirichlet-compound multinomial distribution, arising from the posterior predictive distribution of a discrete likelihood with a Dirichlet prior. We will need the full form of such a distribution to analyze the second integral. Once again leveraging conjugacy, we have: INLINEFORM0 INLINEFORM0 where INLINEFORM0 is the number of times that output word INLINEFORM1 occurs in the INLINEFORM2 th context, since the final integral is over the full support of a Dirichlet distribution, which integrates to one. Eliminating terms that aren't affected by the INLINEFORM3 assignment, the above is INLINEFORM4 where we have used the fact that INLINEFORM0 for any INLINEFORM1 , and integer INLINEFORM2 . We can interpret this as the probability of drawing the context words under the multivariate Polya urn model, in which the number of “colored balls” (word counts plus prior counts) is increased by one each time a certain color (word) is selected. In other words, in each step, corresponding to the selection of each context word, we draw a ball from the urn, then put it back, along with another ball of the same color. The INLINEFORM3 and INLINEFORM4 terms reflect that the counts have been changed by adding these extra balls into the urn in each step. The second-to-last equation shows that this process is exchangeable: it does not matter which order the balls were drawn in when determining the probability of the sequence. Multiplying this with the term from the first integral, calculated earlier, gives us the final form of the update equation, INLINEFORM5
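A compact sketch of a collapsed Gibbs step consistent with this derivation: the first factor is the Dirichlet mean of the input word's topic counts, and the second is the Polya urn (Dirichlet-compound multinomial) probability of the context words under each topic. Count-array names, hyperparameters, and the toy setup are assumptions for illustration, not the paper's code.

```python
import numpy as np

def gibbs_topic_probs(input_word, context_words, word_topic, topic_word,
                      topic_totals, alpha, beta):
    """p(z_i = k | everything else) for one context, normalized over topics.

    word_topic[w, k] : # contexts of input word w currently assigned to topic k
    topic_word[k, c] : # output-word tokens of type c assigned to topic k
    topic_totals[k]  : total # output-word tokens assigned to topic k
    All counts are assumed to already exclude the context being resampled.
    """
    K, V = topic_word.shape
    log_p = np.log(word_topic[input_word] + alpha)        # Dirichlet-mean factor
    for k in range(K):
        seen = {}                                         # extra "balls" added so far, per word
        for j, c in enumerate(context_words):
            log_p[k] += np.log(topic_word[k, c] + beta + seen.get(c, 0))
            log_p[k] -= np.log(topic_totals[k] + V * beta + j)
            seen[c] = seen.get(c, 0) + 1
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# toy usage with random counts
rng = np.random.default_rng(6)
K, V = 5, 30
word_topic = rng.integers(0, 10, size=(V, K))
topic_word = rng.integers(0, 10, size=(K, V))
topic_totals = topic_word.sum(axis=1)
probs = gibbs_topic_probs(2, [1, 4, 4, 7], word_topic, topic_word,
                          topic_totals, alpha=0.1, beta=0.01)
new_topic = rng.choice(K, p=probs)
```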
skip-gram, LDA
f6380c60e2eb32cb3a9d3bca17cf4dc5ae584eca
f6380c60e2eb32cb3a9d3bca17cf4dc5ae584eca_0
Q: Why is big data not appropriate for this task?
Training embeddings from small corpora can increase the performance of some tasks

c7d99e66c4ab555fe3d616b15a5048f3fe1f3f0e
c7d99e66c4ab555fe3d616b15a5048f3fe1f3f0e_0
Q: What is an example of a computational social science NLP task? Text: Introduction Word embedding models, which learn to encode dictionary words with vector space representations, have been shown to be valuable for a variety of natural language processing (NLP) tasks such as statistical machine translation BIBREF2 , part-of-speech tagging, chunking, and named entity recogition BIBREF3 , as they provide a more nuanced representation of words than a simple indicator vector into a dictionary. These models follow a long line of research in data-driven semantic representations of text, including latent semantic analysis BIBREF4 and its probabilistic extensions BIBREF5 , BIBREF6 . In particular, topic models BIBREF7 have found broad applications in computational social science BIBREF8 , BIBREF9 and the digital humanities BIBREF10 , where interpretable representations reveal meaningful insights. Despite widespread success at NLP tasks, word embeddings have not yet supplanted topic models as the method of choice in computational social science applications. I speculate that this is due to two primary factors: 1) a perceived reliance on big data, and 2) a lack of interpretability. In this work, I develop new models to address both of these limitations. Word embeddings have risen in popularity for NLP applications due to the success of models designed specifically for the big data setting. In particular, BIBREF0 , BIBREF1 showed that very simple word embedding models with high-dimensional representations can scale up to massive datasets, allowing them to outperform more sophisticated neural network language models which can process fewer documents. In this work, I offer a somewhat contrarian perspective to the currently prevailing trend of big data optimism, as exemplified by the work of BIBREF0 , BIBREF1 , BIBREF3 , and others, who argue that massive datasets are sufficient to allow language models to automatically resolve many challenging NLP tasks. Note that “big” datasets are not always available, particularly in computational social science NLP applications, where the data of interest are often not obtained from large scale sources such as the internet and social media, but from sources such as press releases BIBREF11 , academic journals BIBREF10 , books BIBREF12 , and transcripts of recorded speech BIBREF13 , BIBREF14 , BIBREF15 . A standard practice in the literature is to train word embedding models on a generic large corpus such as Wikipedia, and use the embeddings for NLP tasks on the target dataset, cf. BIBREF3 , BIBREF0 , BIBREF16 , BIBREF17 . However, as we shall see here, this standard practice might not always be effective, as the size of a dataset does not correspond to its degree of relevance for a particular analysis. Even very large corpora have idiosyncrasies that can make their embeddings invalid for other domains. For instance, suppose we would like to use word embeddings to analyze scientific articles on machine learning. In Table TABREF1 , I report the most similar words to the word “learning” based on word embedding models trained on two corpora. For embeddings trained on articles from the NIPS conference, the most similar words are related to machine learning, as desired, while for embeddings trained on the massive, generic Google News corpus, the most similar words relate to learning and teaching in the classroom. Evidently, domain-specific data can be important. Even more concerningly, BIBREF18 show that word embeddings can encode implicit sexist assumptions. 
This suggests that when trained on large generic corpora they could also encode the hegemonic worldview, which is inappropriate for studying, e.g., black female hip-hop artists' lyrics, or poetry by Syrian refugees, and could potentially lead to systematic bias against minorities, women, and people of color in NLP applications with real-world consequences, such as automatic essay grading and college admissions. In order to proactively combat these kinds of biases in large generic datasets, and to address computational social science tasks, there is a need for effective word embeddings for small datasets, so that the most relevant datasets can be used for training, even when they are small. To make word embeddings a viable alternative to topic models for applications in the social sciences, we further desire that the embeddings are semantically meaningful to human analysts. In this paper, I introduce an interpretable word embedding model, and an associated topic model, which are designed to work well when trained on a small to medium-sized corpus of interest. The primary insight is to use a data-efficient parameter sharing scheme via mixed membership modeling, with inspiration from topic models. Mixed membership models provide a flexible yet efficient latent representation, in which entities are associated with shared, global representations, but to uniquely varying degrees. I identify the skip-gram word2vec model of BIBREF0 , BIBREF1 as corresponding to a certain naive Bayes topic model, which leads to mixed membership extensions, allowing the use of fewer vectors than words. I show that this leads to better modeling performance without big data, as measured by predictive performance (when the context is leveraged for prediction), as well as to interpretable latent representations that are highly valuable for computational social science applications. The interpretability of the representations arises from defining embeddings for words (and hence, documents) in terms of embeddings for topics. My experiments also shed light on the relative merits of training embeddings on generic big data corpora versus domain-specific data. Background In this section, I provide the necessary background on word embeddings, as well as on topic models and mixed membership models. Traditional language models aim to predict words given the contexts that they are found in, thereby forming a joint probabilistic model for sequences of words in a language. BIBREF19 developed improved language models by using distributed representations BIBREF20 , in which words are represented by neural network synapse weights, or equivalently, vector space embeddings. Later authors have noted that these word embeddings are useful for semantic representations of words, independently of whether a full joint probabilistic language model is learned, and that alternative training schemes can be beneficial for learning the embeddings. In particular, BIBREF0 , BIBREF1 proposed the skip-gram model, which inverts the language model prediction task and aims to predict the context given an input word. The skip-gram model is a log-bilinear discriminative probabilistic classifier parameterized by “input” word embedding vectors INLINEFORM0 for the input words INLINEFORM1 , and “output” word embedding vectors INLINEFORM2 for context words INLINEFORM3 , as shown in Table TABREF2 , top-left. 
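To make the log-bilinear parameterization described above concrete, the following is a minimal NumPy sketch of the skip-gram classifier with a full softmax (the cited implementations use negative sampling or hierarchical softmax for scale). The vocabulary size, embedding dimension, and learning rate are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 5000, 128                          # vocabulary size, embedding dimension (illustrative)
U = 0.01 * rng.standard_normal((V, D))    # "input" word vectors
W = 0.01 * rng.standard_normal((V, D))    # "output" (context) word vectors

def context_probs(U, W, w):
    """p(c | w) for every candidate context word c under the log-bilinear model."""
    scores = W @ U[w]                     # inner products with all output vectors
    scores -= scores.max()                # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def sgd_step(U, W, w, c, lr=0.1):
    """One gradient step on -log p(c | w) for an observed (input word, context word) pair."""
    p = context_probs(U, W, w)
    grad = p.copy()
    grad[c] -= 1.0                        # gradient of -log softmax w.r.t. the scores
    grad_Uw = W.T @ grad                  # save before W is modified
    W -= lr * np.outer(grad, U[w])        # in-place update of the output vectors
    U[w] -= lr * grad_Uw                  # in-place update of the input vector

sgd_step(U, W, w=10, c=42)                # toy usage on dummy indices
```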
Topic models such as latent Dirichlet allocation (LDA) BIBREF7 are another class of probabilistic language models that have been used for semantic representation BIBREF6 . A straightforward way to model text corpora is via unsupervised multinomial naive Bayes, in which a latent cluster assignment for each document selects a multinomial distribution over words, referred to as a topic, with which the documents' words are assumed to be generated. LDA topic models improve over naive Bayes by using a mixed membership model, in which the assumption that all words in a document INLINEFORM0 belong to the same topic is relaxed, and replaced with a distribution over topics INLINEFORM1 . In the model's assumed generative process, for each word INLINEFORM2 in document INLINEFORM3 , a topic assignment INLINEFORM4 is drawn via INLINEFORM5 , then the word is drawn from the chosen topic INLINEFORM6 . The mixed membership formalism provides a useful compromise between model flexibility and statistical efficiency: the INLINEFORM7 topics INLINEFORM8 are shared across all documents, thereby sharing statistical strength, but each document is free to use the topics to its own unique degree. Bayesian inference further aids data efficiency, as uncertainty over INLINEFORM9 can be managed for shorter documents. Some recent papers have aimed to combine topic models and word embeddings BIBREF21 , BIBREF22 , but they do not aim to address the small data problem for computational social science, which I focus on here. I provide a more detailed discussion of related work in the supplementary. The Mixed Membership Skip-Gram To design an interpretable word embedding model for small corpora, we identify novel connections between word embeddings and topic models, and adapt advances from topic modeling. Following the distributional hypothesis BIBREF23 , the skip-gram's word embeddings parameterize discrete probability distributions over words INLINEFORM0 which tend to co-occur, and tend to be semantically coherent – a property leveraged by the Gaussian LDA model of BIBREF21 . This suggests that these discrete distributions can be reinterpreted as topics INLINEFORM1 . We thus reinterpret the skip-gram as a parameterization of a certain supervised naive Bayes topic model (Table TABREF2 , top-right). In this topic model, input words INLINEFORM2 are fully observed “cluster assignments,” and the words in INLINEFORM3 's contexts are a “document.” The skip-gram differs from this supervised topic model only in the parameterization of the “topics” via word vectors which encode the distributions with a log-bilinear model. Note that although the skip-gram is discriminative, in the sense that it does not jointly model the input words INLINEFORM4 , we are here equivalently interpreting it as encoding a “conditionally generative” process for the context given the words, in order to develop probabilistic models that extend the skip-gram. As in LDA, this model can be improved by replacing the naive Bayes assumption with a mixed membership assumption. By applying the mixed membership representation to this topic model version of the skip-gram, we obtain the model in the bottom-right of Table TABREF2 . After once again parameterizing this model with word embeddings, we obtain our final model, the mixed membership skip-gram (MMSG) (Table TABREF2 , bottom-left). In the model, each input word has a distribution over topics INLINEFORM0 . 
Each topic has a vector-space embedding INLINEFORM1 and each output word has a vector INLINEFORM2 (a parameter, not an embedding for INLINEFORM3 ). A topic INLINEFORM4 is drawn for each context, and the words in the context are drawn from the log-bilinear model using INLINEFORM5 : DISPLAYFORM0 We can expect that the resulting mixed membership word embeddings are beneficial in the small-to-medium data regime for the following reasons: Of course, the model also requires some new parameters to be learned, namely the mixed membership proportions INLINEFORM0 . Based on topic modeling, I hypothesized that with care, these added parameters need not adversely affect performance in the small-medium data regime, for two reasons: 1) we can use a Bayesian approach to effectively manage uncertainty in them, and to marginalize them out, which prevents them being a bottleneck during training; and 2) at test time, using the posterior for INLINEFORM1 given the context, instead of the “prior” INLINEFORM2 , mitigates the impact of uncertainty in INLINEFORM3 due to limited training data: DISPLAYFORM0 To obtain a vector for a word type INLINEFORM0 , we can use the prior mean, INLINEFORM1 . For a word token INLINEFORM2 , we can leverage its context via the posterior mean, INLINEFORM3 . These embeddings are convex combinations of topic vectors (see Figure FIGREF23 for an example). With fewer vectors than words, some model capacity is lost, but the flexibility of the mixed membership representation allows the model to compensate. When the number of shared vectors equals the number of words, the mixed membership skip-gram is strictly more representationally powerful than the skip-gram. With more vectors than words, we can expect that the increased representational power would be beneficial in the big data regime. As this is not my goal, I leave this for future work. Experimental Results The goals of our experiments were to study the relative merits of big data and domain-specific small data, to validate the proposed methods, and to study their applicability for computational social science research. Quantitative Experiments I first measured the effectiveness of the embeddings at the skip-gram's training task, predicting context words INLINEFORM0 given input words INLINEFORM1 . This task measures the methods' performance for predictive language modeling. I used four datasets of sociopolitical, scientific, and literary interest: the corpus of NIPS articles from 1987 – 1999 ( INLINEFORM2 million), the U.S. presidential state of the Union addresses from 1790 – 2015 ( INLINEFORM3 ), the complete works of Shakespeare ( INLINEFORM4 ; this version did not contain the Sonnets), and the writings of black scholar and activist W.E.B. Du Bois, as digitized by Project Gutenberg ( INLINEFORM5 ). For each dataset, I held out 10,000 INLINEFORM6 pairs uniformly at random, where INLINEFORM7 , and aimed to predict INLINEFORM8 given INLINEFORM9 (and optionally, INLINEFORM10 ). Since there are a large number of classes, I treat this as a ranking problem, and report the mean reciprocal rank. The experiments were repeated and averaged over 5 train/test splits. The results are shown in Table TABREF25 . I compared to a word frequency baseline, the skip-gram (SG), and Tomas Mikolov/Google's vectors trained on Google News, INLINEFORM0 billion, via CBOW. Simulated annealing was performed for 1,000 iterations, NCE was performed for 1 million minibatches of size 128, and 128-dimensional embeddings were used (300 for Google). 
I used INLINEFORM1 for NIPS, INLINEFORM2 for state of the Union, and INLINEFORM3 for the two smaller datasets. Methods were able to leverage the remainder of the context, either by adding the context's vectors, or via the posterior (Equation EQREF22 ), which helped for all methods except the naive skip-gram. We can identify several noteworthy findings. First, the generic big data vectors (Google+context) were outperformed by the skip-gram on 3 out of 4 datasets (and by the skip-gram topic model on the other), by a large margin, indicating that domain-specific embeddings are often important. Second, the mixed membership models, using posterior inference, beat or matched their naive Bayes counterparts, for both the word embedding models and the topic models. As hypothesized, posterior inference on INLINEFORM4 at test time was important for good performance. Finally, the topic models beat their corresponding word embedding models at prediction. I therefore recommend the use of our MMSG topic model variant for predictive language modeling in the small data regime. I tested the performance of the representations as features for document categorization and regression tasks. The results are given in Table TABREF26 . For document categorization, I used three standard benchmark datasets: 20 Newsgroups (19,997 newsgroup posts), Reuters-150 newswire articles (15,500 articles and 150 classes), and Ohsumed medical abstracts on 23 cardiovascular diseases (20,000 articles). I held out 4,000 test documents for 20 Newsgroups, and used the standard train/test splits from the literature in the other corpora (e.g. for Ohsumed, 50% of documents were assigned to training and to test sets). I obtained document embeddings for the MMSG, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token. Vector addition was similarly used to construct document vectors for the other embedding models. All vectors were normalized to unit length. I also considered a tf-idf baseline. Logistic regression models were trained on the features extracted on the training set for each method. Across the three datasets, several clear trends emerged (Table TABREF26 ). First, the generic Google vectors were consistently and substantially outperformed in classification performance by the skipgram (SG) and MMSG vectors, highlighting the importance of corpus-specific embeddings. Second, despite the MMSG's superior performance at language modeling on small datasets, the SG features outperformed the MMSG's at the document categorization task. By encoding vectors at the topic level instead of the word level, the MMSG loses word level resolution in the embeddings, which turned out to be valuable for these particular classification tasks. We are not, however, restricted to use only one type of embedding to construct features for classification. Interestingly, when the SG and MMSG features were concatenated (SG+MMSG), this improved classification performance over these vectors individually. This suggests that the topic-level MMSG vectors and word-level SG vectors encode complementary information, and both are beneficial for performance. Finally, further concatenating the generic Google vectors' features (SG+MMSG+Google) improved performance again, despite the fact that these vectors performed poorly on their own. It should be noted that tf-idf, which is notoriously effective for document categorization, outperformed the embedding methods on these datasets. 
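For the document categorization setup just described, the feature construction is simple: sum the unit-normalized token vectors, renormalize, optionally concatenate the SG and MMSG representations, and fit a linear classifier. The sketch below uses scikit-learn's logistic regression; the variable names and the `max_iter` value are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def doc_vector(token_vecs):
    """Sum the unit-normalized per-token vectors and renormalize, as described above."""
    v = np.sum([t / np.linalg.norm(t) for t in token_vecs], axis=0)
    return v / np.linalg.norm(v)

def concat_features(sg_doc_vecs, mmsg_doc_vecs):
    """SG+MMSG features: concatenate the two document representations."""
    return np.hstack([sg_doc_vecs, mmsg_doc_vecs])

# Illustrative usage on precomputed per-document feature matrices:
# X_train = concat_features(sg_train, mmsg_train)
# clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# accuracy = clf.score(concat_features(sg_test, mmsg_test), y_test)
```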
I also analyzed the regression task of predicting the year of a state of the Union address based on its text information. I used lasso-regularized linear regression models, evaluated via a leave-one-out cross-validation experimental setup. Root-mean-square error (RMSE) results are reported in Table TABREF26 (bottom). Unlike for the other tasks, the Google big data vectors were the best individual features in this case, outperforming the domain-specific SG and MMSG embeddings individually. On the other hand, SG+MMSG+Google performed the best overall, showing that domain-specific embeddings can improve performance even when big data embeddings are successful. The tf-idf baseline was beaten by all of the embedding models on this task. Computational Social Science Case Studies: State of the Union and NIPS I also performed several case studies. I obtained document embeddings, in the same latent space as the topic embeddings, by summing the posterior mean vectors INLINEFORM0 for each token, and visualized them in two dimensions using INLINEFORM1 -SNE BIBREF24 (all vectors were normalized to unit length). The state of the Union addresses (Figure FIGREF27 ) are embedded almost linearly by year, with a major jump around the New Deal (1930s), and are well separated by party at any given time period. The embedded topics (gray) allow us to interpret the space. The George W. Bush addresses are embedded near a “war on terror” topic (“weapons, war...”), and the Barack Obama addresses are embedded near a “stimulus” topic (“people, work...”). On the NIPS corpus, for the input word “Bayesian” (Table ), the naive Bayes and skip-gram models learned a topic with words that refer to Bayesian networks, probabilistic models, and neural networks. The mixed membership models are able to separate this into more coherent and specific topics including Bayesian modeling, Bayesian training of neural networks (for which Sir David MacKay was a strong proponent, and Andreas Weigend wrote an influential early paper), and Monte Carlo methods. By performing the additive composition of word vectors, which we obtain by finding the prior mean vector for each word type INLINEFORM0 , INLINEFORM1 (and then normalizing), we obtain relevant topics INLINEFORM2 as nearest neighbors (Figure FIGREF28 ). Similarly, we find that the additive composition of topic and word vectors works correctly: INLINEFORM3 , and INLINEFORM4 . The INLINEFORM0 -SNE visualization of NIPS documents (Figure FIGREF28 ) shows some temporal clustering patterns (blue documents are more recent, red documents are older, and gray points are topics). I provide a more detailed case study on NIPS in the supplementary material. Conclusion I have proposed a model-based method for training interpretable corpus-specific word embeddings for computational social science, using mixed membership representations, Metropolis-Hastings-Walker sampling, and NCE. Experimental results for prediction, supervised learning, and case studies on state of the Union addresses and NIPS articles, indicate that high-quality embeddings and topics can be obtained using the method. The results highlight the fact that big data is not always best, as domain-specific data can be very valuable, even when it is small. I plan to use this approach for substantive social science applications, and to address algorithmic bias and fairness issues. Acknowledgements I thank Eric Nalisnick and Padhraic Smyth for many helpful discussions. 
Supplementary Material ] Related Work In this supplementary document, we discuss related work in the literature and its relation to our proposed methods, provide a case study on NIPS articles, and derive the collapsed Gibbs sampling update for the MMSGTM, which we leverage when training the MMSG. Topic Modeling and Word Embeddings The Gaussian LDA model of BIBREF21 improves the performance of topic modeling by leveraging the semantic information encoded in word embeddings. Gaussian LDA modifies the generative process of LDA such that each topic is assumed to generate the vectors via its own Gaussian distribution. Similarly to our MMSG model, in Gaussian LDA each topic is encoded with a vector, in this case the mean of the Gaussian. It takes pre-trained word embeddings as input, rather than learning the embeddings from data within the same model, and does not aim to perform word embedding. The topical word embedding (TWE) models of BIBREF22 reverse this, as they take LDA topic assignments of words as input, and aim to use them to improve the resultant word embeddings. The authors propose three variants, each of which modifies the skip-gram training objective to use LDA topic assignments together with words. In the best performing variant, called TWE-1, a standard skip-gram word embedding model is trained independently with another skip-gram variant, which tries to predict context words given the input word's topic assignment. The skip-gram embedding and the topic embeddings are concatenated to form the final embedding. At test time, a distribution over topics for the word given the context, INLINEFORM0 is estimated according to the topic counts over the other context words. Using this as a prior, a posterior over topics given both the input word and the context is calculated, and similarities between pairs of words (with their contexts) are averaged over this posterior, in a procedure inspired by those used by BIBREF43 , BIBREF36 . The primary similarity to our MMSG approach is the use of a training algorithm involving the prediction of context words, given a topic. Our method does this as part of an overall model-based inference procedure, and we learn mixed membership proportions INLINEFORM1 rather than using empirical counts as the prior over topics for a word token. In accordance with the skip-gram's prediction model, we are thus able to model the context words in the data likelihood term when computing the posterior probability of the topic assignment. TWE-1 requires that topic assignments are available at test time. It provides a mechanism to predict contextual similarity, but not to predict held-out context words, so we are unable to compare to it in our experiments. Other neurally-inspired topic models include replicated softmax BIBREF34 , and its successor, DocNADE BIBREF37 . Replicated softmax extends the restricted Boltzmann machine to handle multinomial counts for document modeling. DocNADE builds on the ideas of replicated softmax, but uses the NADE architecture, where observations (i.e. words) are modeled sequentially given the previous observations. Multi-Prototype Embedding Models Multi-prototype embeddings models are another relevant line of work. These models address lexical ambiguity by assigning multiple vectors to each word type, each corresponding to a different meaning of that word. BIBREF43 propose to cluster the occurrences of each word type, based on features extracted from its context. Embeddings are then learned for each cluster. 
BIBREF36 apply a similar approach, but they use initial single-prototype word embeddings to provide the features used for clustering. These clustering methods have some resemblance to our topic model pre-clustering step, although their clustering is applied within instances of a given word type, rather than globally across all word types, as in our methods. This results in models with more vectors than words, while we aim to find fewer vectors than words, to reduce the model's complexity for small datasets. Rather than employing an off-the-shelf clustering algorithm and then applying an unrelated embedding model to its output, our approach aims to perform model-based clustering within an overall joint model of topic/cluster assignments and word vectors. Perhaps the most similar model to ours in the literature is the probabilistic multi-prototype embedding model of BIBREF45 , who treat the prototype assignment of a word as a latent variable, assumed drawn from a mixture over prototypes for each word. The embeddings are then trained using EM. Our MMSG model can be understood as the mixed membership version of this model, in which the prototypes (vectors) are shared across all word types, and each word type has its own mixed membership proportions across the shared prototypes. While a similar EM algorithm can be applied to the MMSG, the E-step is much more expensive, as we typically desire many more shared vectors (often in the thousands) than we would prototypes per a single word type (Tian et al. use ten in their experiments). We use the Metropolis-Hastings-Walker algorithm with the topic model reparameterization of our model in order to address this by efficiently pre-solving the E-step. Mixed Membership Modeling Mixed membership modeling is a flexible alternative to traditional clustering, in which each data point is assigned to a single cluster. Instead, mixed membership models posit that individual entities are associated with multiple underlying clusters, to differing degrees, as encoded by a mixed membership vector that sums to one across the clusters BIBREF28 , BIBREF26 . These mixed membership proportions are generally used to model lower-level grouped data, such as the words inside a document. Each lower-level data point inside a group is assumed to be assigned to one of the shared, global clusters according to the group-level membership proportions. Thus, a mixed membership model consists of a mixture model for each group, which share common mixture component parameters, but with differing mixture proportions. This formalism has lead to probabilistic models for a variety of applications, including medical diagnosis BIBREF39 , population genetics BIBREF42 , survey analysis BIBREF29 , computer vision BIBREF27 , BIBREF30 , text documents BIBREF35 , BIBREF7 , and social network analysis BIBREF25 . Nonparametric Bayesian extensions, in which the number of underlying clusters is learned from data via Bayesian inference, have also been proposed BIBREF44 . In this work, dictionary words are assigned a mixed membership distribution over a set of shared latent vector space embeddings. Each instantiation of a dictionary word (an “input” word) is assigned to one of the shared embeddings based on its dictionary word's membership vector. The words in its context (“output” words) are assumed to be drawn based on the chosen embedding. Case Study on NIPS In Figure FIGREF33 , we show a zoomed in INLINEFORM0 -SNE visualization of NIPS document embeddings. 
We can see regions of the space corresponding to learning algorithms (bottom), data space and latent space (center), training neural networks (top), and nearest neighbors (bottom-left). We also visualized the authors' embeddings via INLINEFORM1 -SNE (Figure FIGREF34 ). We find regions of latent space for reinforcement learning authors (left: “state, action,...,” Singh, Barto,Sutton), probabilistic methods (right: “mixture, model,” “monte, carlo,” Bishop, Williams, Barber, Opper, Jordan, Ghahramani, Tresp, Smyth), and evaluation (top-right: “results, performance, experiments,...”). Derivation of the Collapsed Gibbs Update Let INLINEFORM0 be the number of output words in the INLINEFORM1 th context, let INLINEFORM2 be those output words, and let INLINEFORM3 be the input words other that INLINEFORM4 (similarly, topic assignments INLINEFORM5 and output words INLINEFORM6 ). Then the collapsed Gibbs update samples from the conditional distribution INLINEFORM7 We recognize the first integral as the mean of a Dirichlet distribution which we obtain via conjugacy: INLINEFORM0 The above can also be understood as the probability of the next ball drawn from a multivariate Polya urn model, also known as the Dirichlet-compound multinomial distribution, arising from the posterior predictive distribution of a discrete likelihood with a Dirichlet prior. We will need the full form of such a distribution to analyze the second integral. Once again leveraging conjugacy, we have: INLINEFORM0 INLINEFORM0 where INLINEFORM0 is the number of times that output word INLINEFORM1 occurs in the INLINEFORM2 th context, since the final integral is over the full support of a Dirichlet distribution, which integrates to one. Eliminating terms that aren't affected by the INLINEFORM3 assignment, the above is INLINEFORM4 where we have used the fact that INLINEFORM0 for any INLINEFORM1 , and integer INLINEFORM2 . We can interpret this as the probability of drawing the context words under the multivariate Polya urn model, in which the number of “colored balls” (word counts plus prior counts) is increased by one each time a certain color (word) is selected. In other words, in each step, corresponding to the selection of each context word, we draw a ball from the urn, then put it back, along with another ball of the same color. The INLINEFORM3 and INLINEFORM4 terms reflect that the counts have been changed by adding these extra balls into the urn in each step. The second to last equation shows that this process is exchangeable: it does not matter which order the balls were drawn in when determining the probability of the sequence. Multiplying this with the term from the first integral, calculated earlier, gives us the final form of the update equation, INLINEFORM5
Visualization of State of the Union addresses
400efd1bd8517cc51f217b34cbf19c75d94b1874
400efd1bd8517cc51f217b34cbf19c75d94b1874_0
Q: Do they report results only on English datasets? Text: Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension of VQA. Current dialog systems, as evaluated in BIBREF3 , show that AI-AI dialog systems improve when trained between bots, but this does not translate to actual improvement in Human-AI dialog. This is because the questions generated by bots are not natural (human-like) and therefore do not lead to improved human dialog. It is therefore imperative that improving the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, image captioning and object classification. An interesting line of work in this respect is that of BIBREF5 , where the authors proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event, whereas for a little girl, the questions could be related to age rather than the action. How can one provide widely varying context for generating questions? To solve this problem, we use the context obtained by considering exemplars; specifically, we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of location, caption, and part-of-speech tags. The human-annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe that this implicit differential context performs better than an explicit keyword-based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization, as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 , our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) a method to incorporate exemplars to learn differential embeddings that capture the subtle differences between supporting and contrasting examples and aid in generating natural questions; (2) multimodal differential embeddings, since image or text alone does not capture the whole context, and we show that these embeddings outperform the ablations which incorporate cues such as only the image, tags or place information; (3) a thorough comparison of the proposed network against state-of-the-art benchmarks, along with a user study and statistical significance test. Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like a chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that interact with visually impaired people. While the task of generating questions automatically is well studied in the NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have also been many deep learning based approaches for solving the text-based question generation task, such as BIBREF6 . Further, BIBREF7 proposed a method to generate a factoid-based question from a triplet set {subject, relation, object} to capture the structural representation of the text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work in the vision and language domain on image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 generated descriptive sentences from images with the help of deep networks. There have been many works on Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving the VQA task, including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been as well explored BIBREF5 . This is a novel vision-based task aimed at generating natural and engaging questions for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works most closely related to ours are those of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework, whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by sampling the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars in our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in Figure FIGREF3 . We used a pre-trained ResNet-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that a supporting exemplar could indeed identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve the use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar is closer to that of the target image-caption pair and vice-versa. 
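The exemplar-validity check described above (comparing class-probability scores of the target, supporting and contrasting images with a pre-trained ResNet-101) can be reproduced roughly as follows. This is a sketch only: the paper does not state how the probability scores were compared, so cosine similarity is an illustrative choice, and newer torchvision versions use the `weights=` argument instead of `pretrained=True`.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet preprocessing (the usual ImageNet mean/std statistics).
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

model = models.resnet101(pretrained=True).eval()   # pre-trained object classifier

def class_probs(path):
    """Softmax class probabilities for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.softmax(model(x), dim=1).squeeze(0)

def prob_similarity(p, q):
    """Cosine similarity between two class-probability vectors (illustrative comparison)."""
    return torch.dot(p, q) / (p.norm() * q.norm())

# prob_similarity(class_probs("target.jpg"), class_probs("supporting.jpg")) should be high,
# while the score against a contrasting exemplar should be much lower.
```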
We empirically evaluated whether the question generation is improved more by an explicit approach that uses the differential set of tags as a one-hot encoding, or by the implicit embedding obtained with the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both the target and supporting exemplars generate similar questions, whereas contrasting exemplars generate questions very different from the target question. The triplet network that enhances the joint embedding thus helps improve the generation of the target question. These are observed to be better than the explicitly obtained context tags, as can be seen in Figure FIGREF2 . We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of the generated question given the image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector of all the parameters of our model and INLINEFORM1 is the ground truth question. The log probability of the question is calculated as the joint probability over INLINEFORM2 with the help of the chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is the length of the sequence and INLINEFORM1 is the INLINEFORM2 word of the question. We have dropped INLINEFORM3 for simplicity. Our method is based on a sequence-to-sequence network BIBREF38 , BIBREF12 , BIBREF39 . A sequence-to-sequence network takes a text sequence as input and produces a text sequence as output; in our method, we take an image as input and generate a natural question as output. The architecture of our model is shown in Figure FIGREF4 . Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates the question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or the maximum question length is reached. We experimented with both sampling and argmax and found that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (a k-d tree with Euclidean metric) to obtain the exemplars. This is obtained through a coarse quantization of the nearest neighbors of the training examples into 50 clusters, selecting the nearest as the supporting exemplar and the farthest as the contrasting exemplar. We experimented with ITML-based metric learning BIBREF40 for image features; surprisingly, the KNN-based approach outperforms it. We also tried random exemplars and different numbers of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We referred to similar work in BIBREF34 when building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters.
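The KNN-based exemplar retrieval described above can be sketched with a k-d tree from SciPy. One reading of the description, assumed here, is that the 50 nearest training examples are retrieved and the closest is taken as the supporting exemplar while the farthest of that set is taken as the contrasting exemplar; the feature variables are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_exemplar_index(train_feats):
    """Index the training image features (e.g. CNN features) with a k-d tree."""
    return cKDTree(train_feats)

def get_exemplars(tree, train_feats, query_feat, k=50):
    """Return supporting and contrasting exemplar features for one target image.

    Retrieves the k nearest training examples under the Euclidean metric, then
    uses the nearest as the supporting exemplar and the farthest of the retrieved
    set as the contrasting exemplar. If the query is itself a training image, the
    first neighbour should be skipped.
    """
    dists, idx = tree.query(query_feat, k=k)
    supporting = idx[0]            # closest neighbour
    contrasting = idx[-1]          # farthest of the retrieved set
    return train_feats[supporting], train_feats[contrasting]
```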
Given an image INLINEFORM0 , we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 , where INLINEFORM3 are the weights of the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 , where INLINEFORM7 are the weights of the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly, we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings into a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion, viz. joint, element-wise addition, Hadamard and attention methods. Each of these variants receives the image features INLINEFORM0 and the caption embedding INLINEFORM1 , and outputs a fixed-dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed-length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of a pretrained VGG-19 net BIBREF43 , INLINEFORM1 are the weights and INLINEFORM2 is the bias for the different layers, and INLINEFORM3 is the concatenation operator. Similarly, we obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details of the other fusion methods are given in the supplementary. The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the Euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 , M is the training dataset that contains all possible triplets, and INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further away. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the Euclidean distances between the target and supporting sample, and between the target and opposing sample, respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained from validation data. Decoder: Question Generator The role of the decoder is to predict the probability of a question, given INLINEFORM0 . An RNN provides a convenient way to condition on previous state values using a fixed-length hidden vector. The conditional probability of a question token at a particular time step INLINEFORM1 is modeled using an LSTM, as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the distribution of the LSTM cell at step INLINEFORM7 is fed as input to the LSTM cell at step INLINEFORM8 , as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we feed the output of the mixture module to the LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we use INLINEFORM12 and INLINEFORM13 as the special START and STOP tokens, respectively. 
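The Joint fusion and the max-margin triplet loss described earlier in this section can be sketched in PyTorch as follows. The choice of tanh nonlinearity, the 512-dimensional joint space, and the margin value are assumptions for illustration; only the overall structure (concatenate-then-project, and a hinge on the two Euclidean distances) follows the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointFusion(nn.Module):
    """Concatenate the image feature and caption embedding and map them to a joint vector."""
    def __init__(self, img_dim=4096, cap_dim=512, joint_dim=512):
        super().__init__()
        self.fc = nn.Linear(img_dim + cap_dim, joint_dim)

    def forward(self, img_feat, cap_emb):
        return torch.tanh(self.fc(torch.cat([img_feat, cap_emb], dim=1)))

def triplet_loss(target, supporting, contrasting, margin=1.0):
    """Max-margin triplet loss on Euclidean distances between joint embeddings."""
    d_pos = F.pairwise_distance(target, supporting)   # target vs. supporting exemplar
    d_neg = F.pairwise_distance(target, contrasting)  # target vs. contrasting exemplar
    return F.relu(d_pos - d_neg + margin).mean()      # push contrasting beyond the margin
```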
The softmax probability for the predicted question token at different time steps is given by the following equations, where LSTM refers to the standard LSTM cell equations: INLINEFORM14 where INLINEFORM0 is the probability distribution over all question tokens and INLINEFORM1 is the cross-entropy loss. Cost function Our objective is to minimize the total loss, that is, the sum of the cross-entropy loss and the triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples and INLINEFORM1 is a constant that controls the trade-off between the two losses. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross-entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where INLINEFORM0 is the total number of question tokens and INLINEFORM1 is the ground truth label. The code for the MDN-VQG model is provided. Variations of Proposed Method While we advocate the use of the multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different scene categories. We then combine the activation map for the input image and the VGG-19 based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using the multimodal differential network for generating embeddings, we also evaluate a differential image network for the same purpose. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on the Visual Question Generation (VQG) dataset BIBREF5 , which contains human-annotated questions based on images from the MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use the VQG-COCO dataset for our experiments, which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset has five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used the questions from the VQA dataset BIBREF1 for training, whereas the work by BIBREF5 uses only the VQG-COCO dataset. The VQA-1.0 dataset is also built on images from MS-COCO. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing, and each image is associated with 3 questions. We used a pretrained caption generation model BIBREF13 to extract captions for the VQA dataset, as human-annotated captions are not available in that dataset. We also get good results on the VQA dataset (as shown in Table TABREF26 ), which shows that our method does not necessitate the presence of ground truth captions. 
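The cost function defined earlier in this section, a weighted sum of the decoder cross-entropy and the triplet loss, can be written compactly as below. The trade-off constant and padding index are placeholders; the paper's actual values are not restated here.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, targets, trip_loss, lam=0.1, pad_idx=0):
    """Sum of the decoder cross-entropy and the triplet loss, weighted by a constant.

    logits    : (batch, seq_len, vocab) decoder scores
    targets   : (batch, seq_len) ground-truth question token ids
    trip_loss : scalar triplet loss from the representation module
    lam       : trade-off constant between the two losses (0.1 is a placeholder)
    """
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1), ignore_index=pad_idx)
    return ce + lam * trip_loss
```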
We train our model separately for the VQG-COCO and VQA datasets. Inference We used the 1250 validation images to tune the hyperparameters and report results on the test set of the VQG-COCO dataset. During inference, we use the Representation module to find the embeddings for the image and the ground truth caption, without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with captions generated by an image-captioning network BIBREF13 for the VQG-COCO dataset; those results and the training details are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: first, we evaluate it against the other variants described in sections SECREF19 and SECREF10 . Second, we compare our network with state-of-the-art methods on the VQA 1.0 and VQG-COCO datasets. We perform a user study to gauge human opinion on the naturalness of the generated questions and analyze the word statistics in Figure FIGREF22 . This is an important test, as humans are the best judges of naturalness. We further consider the statistical significance of the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics such as BLEU BIBREF46 , METEOR BIBREF47 , ROUGE BIBREF48 and CIDEr BIBREF49 . Although these metrics have not been shown to correlate with the `naturalness' of the question, they still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores; the remaining BLEU-n metric scores are present in the supplementary. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance-specific differential information that helps guide the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered the different variations of our method mentioned in section SECREF19 and the various ways of obtaining the joint multimodal embedding described in section SECREF10 . The results for the VQG-COCO test set are given in Table TABREF24 . In this table, every block provides the results for one of the variations of obtaining the embeddings and the different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works best in all cases except the Tag embeddings. Among the ablations, the proposed MDN method works considerably better than the other variants in terms of the BLEU, METEOR and ROUGE metrics, achieving improvements of 6%, 12% and 18% in the respective scores over the best other variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in Table TABREF26 for VQA 1.0 and Table TABREF27 for the VQG-COCO dataset. The comparable baselines for our method are the image-based and caption-based models, in which we use either only the image or only the caption embedding to generate the question. In both tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR scores over the baselines, whereas for the VQG-COCO dataset this is 15% for both metrics. 
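The BLEU1 metric reported above can be computed at the sentence level with NLTK as in the sketch below. This is illustrative only; the paper may have used the standard corpus-level evaluation scripts, and the smoothing choice here is an assumption.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu1(references, hypothesis):
    """Sentence-level BLEU-1 between a generated question and its references.

    references : list of tokenized ground-truth questions for the image
    hypothesis : tokenized generated question
    """
    smooth = SmoothingFunction().method1
    return sentence_bleu(references, hypothesis,
                         weights=(1.0, 0, 0, 0),       # unigram precision only (BLEU1)
                         smoothing_function=smooth)

refs = [["is", "this", "a", "skateboard", "competition", "?"]]
hyp = ["is", "this", "a", "skate", "competition", "?"]
print(bleu1(refs, hyp))
```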
We improve over the previous state-of-the-art BIBREF35 on the VQA dataset by around 6% in BLEU score and 10% in METEOR score. On the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and over BIBREF36 by 3.5% in terms of METEOR scores. Statistical Significance Analysis We have analysed the statistical significance BIBREF50 of our MDN model for VQG for the different variations of the mixture module mentioned in section SECREF10 and also against the state-of-the-art methods. The Critical Difference (CD) for the Nemenyi BIBREF51 test depends upon the given INLINEFORM0 (confidence level, which is 0.05 in our case) for the average ranks and N (the number of tested datasets). If the difference in the ranks of two methods lies within the CD, then they are not significantly different, and vice-versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. Perceptual Realism A human is the best judge of the naturalness of any question. We evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 on 175 people. People were shown an image with 2 questions, just as in Figure FIGREF1 , and were asked to rate the naturalness of both questions on a scale of 1 to 5, where 1 means `Least Natural' and 5 is `Most Natural'. We provided the 175 people with 100 such images from the VQG-COCO validation set, which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (rated the generated question higher than or equal to the ground truth question). For the 100 images, on average 59.7% of the people were fooled in this experiment, which shows that our model is able to generate natural questions. Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from an image and its caption. We also provide an ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions and ensure that the results are statistically significant. In the future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 provides details about the training configuration for MDN, Section SECREF9 explains the various proposed methods, and we also provide a discussion of some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDEr metric scores for the VQG-COCO dataset. We present different experiments with the Tag Net in which we explore the performance of various tags (Noun, Verb, and Question tags) and different ways of combining them to obtain the context vectors. 
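For reference, the critical difference used in the Nemenyi post-hoc analysis described above is usually computed as CD = q_alpha * sqrt(k(k+1)/(6N)), with q_alpha taken from standard tables; the helper below is a sketch under that assumption, and the example q_alpha value is illustrative rather than taken from the paper.

```python
import math

def nemenyi_cd(q_alpha, k, n):
    """Critical difference for the Nemenyi post-hoc test.

    q_alpha : critical value of the studentized range statistic (divided by sqrt(2))
              for the chosen confidence level and k methods, from standard tables
    k       : number of compared methods
    n       : number of datasets (or evaluation splits) over which methods are ranked
    Two methods whose average ranks differ by less than the returned CD are not
    significantly different at that confidence level.
    """
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# e.g. nemenyi_cd(q_alpha=2.569, k=4, n=10)   # the q_alpha value here is illustrative
```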
Multimodal Differential Network (Algorithm 1):
MDN( INLINEFORM0 ):
1. Find exemplars: INLINEFORM1 , INLINEFORM2 .
2. Compute triplet embeddings: INLINEFORM3 , INLINEFORM4 .
3. Compute triplet fusion embeddings: INLINEFORM5 , INLINEFORM6 , INLINEFORM7 .
4. Compute triplet loss: INLINEFORM8 .
5. Decode question sentence: INLINEFORM9 , INLINEFORM10 .
Triplet Fusion( INLINEFORM11 , INLINEFORM12 ):
INLINEFORM13 : image feature, 14x14x512; INLINEFORM14 : caption feature, 1x512.
Match dimensions: INLINEFORM15 , 196x512; INLINEFORM16 , 196x512.
If flag == Joint Fusion: INLINEFORM17 , INLINEFORM18 , [ INLINEFORM19 (MDN-Mul), INLINEFORM20 (MDN-Add)].
If flag == Attention Fusion: INLINEFORM21 , Semb INLINEFORM22 .
Dataset and Training Details Dataset We conduct our experiments on two datasets: the VQA dataset BIBREF1 , which contains human-annotated questions based on images from MS-COCO, and the VQG-COCO dataset of natural questions BIBREF55 . VQA dataset The VQA dataset BIBREF1 is built on complex images from the MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions, and each question has 10 possible answers. So there are 248349 QA pairs for training, 121512 QA pairs for validation and 244302 QA pairs for testing. We used a pre-trained caption generation model BIBREF53 to extract captions for the VQA dataset. VQG dataset The VQG-COCO dataset BIBREF55 is developed for generating natural and engaging questions that are based on common sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset has 5 natural questions. Training Configuration We used the RMSPROP optimizer to update the model parameters and configured the hyper-parameter values as follows: INLINEFORM23 to train the classification network. To train the triplet model, we used RMSPROP to optimize the triplet model parameters and configured the hyper-parameter values to be: INLINEFORM24 . We also used learning rate decay to decrease the learning rate every epoch by a factor given by: INLINEFORM25 where the values a=1500 and b=1250 are set empirically. Ablation Analysis of Model While we advocate the use of the multimodal differential network (MDN) for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture, namely (a) Differential Image Network, (b) Tag net and (c) Place net. These are described in detail as follows: Differential Image Network For obtaining the exemplar image based context embedding, we propose a triplet network consisting of three networks: a target net, a supporting net and an opposing net. All three networks are convolutional neural networks and share the same parameters. The weights of this network are learnt through end-to-end training using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhance the difference from opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 where INLINEFORM29 are the weights of the CNN. This is illustrated in Figure FIGREF43 . Tag net The tag net consists of two parts, a Context Extractor and a Tag Embedding Net. This is illustrated in Figure FIGREF45 . 
Extract Context: The first step is to extract the caption of the image using NeuralTalk2 BIBREF53 model. We find the part-of-speech(POS) tag present in the caption. POS taggers have been developed for two well known corpuses, the Brown Corpus and the Penn Treebanks. For our work, we are using the Brown Corpus tags. The tags are clustered into three category namely Noun tag, Verb tag and Question tags (What, Where, ...). Noun tag consists of all the noun & pronouns present in the caption sentence and similarly, verb tag consists of verb & adverbs present in the caption sentence. The question tags consists of the 7-well know question words i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of vocabulary size. For generalization, we have considered 5 tokens from each category of the Tags. Tag Embedding Net: The embedding network consists of word embedding followed by temporal convolutions neural network followed by max-pooling network. In the first step, sparse high dimensional one-hot vector is transformed to dense low dimension vector using word embedding. After this, we apply temporal convolution on the word embedding vector. The uni-gram, bi-gram and tri-gram feature are computed by applying convolution filter of size 1, 2 and 3 respectability. Finally, we applied max-pooling on this to get a vector representation of the tags as shown figure FIGREF45 . We concatenated all the tag words followed by fully connected layer to get feature dimension of 512. We also explored joint networks based on concatenation of all the tags, on element-wise addition and element-wise multiplication of the tag vectors. However, we observed that convolution over max pooling and joint concatenation gives better performance based on CIDer score. INLINEFORM30 Where, T_CNN is Temporally Convolution Neural Network applied on word embedding vector with kernel size three. Place net Visual object and scene recognition plays a crucial role in the image. Here, places in the image are labeled with scene semantic categories BIBREF45 , comprise of large and diverse type of environment in the world, such as (amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc.). So we have used different type of scene semantic categories present in the image as a place based context to generate natural question. A place365 is a convolution neural network is modeled to classify 365 types of scene categories, which is trained on the place2 dataset consist of 1.8 million of scene images. We have used a pre-trained VGG16-places365 network to obtain place based context embedding feature for various type scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 Where, INLINEFORM33 is Place365_CNN. We have extracted INLINEFORM34 features of dimension 14x14x512 for attention model and FC8 features of dimension 365 for joint, addition and hadamard model of places365. Finally, we use a linear transformation to obtain a 512 dimensional vector. We explored using the CONV5 having feature dimension 14x14 512, FC7 having 4096 and FC8 having feature dimension of 365 of places365. Ablation Analysis Sampling Exemplar: KNN vs ITML Our method is aimed at using efficient exemplar-based retrieval techniques. We have experimented with various exemplar methods, such as ITML BIBREF40 based metric learning for image features and KNN based approaches. 
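A minimal PyTorch sketch of the tag embedding path just described: word embedding, temporal convolutions with kernel sizes 1, 2 and 3 for uni-, bi- and tri-gram features, max-pooling, and a fully connected layer to 512 dimensions. Sizes not stated in the text (the embedding dimension, vocabulary size) are assumptions.

```python
import torch
import torch.nn as nn

class TagEmbeddingNet(nn.Module):
    """Embeds a fixed number of tag tokens into a 512-d context vector."""

    def __init__(self, vocab_size, embed_dim=128, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Temporal convolutions for uni-, bi- and tri-gram features.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, embed_dim, kernel_size=k, padding=k - 1)
             for k in (1, 2, 3)])
        self.fc = nn.Linear(3 * embed_dim, out_dim)

    def forward(self, tag_ids):
        # tag_ids: (batch, n_tags), e.g. 5 tokens per tag category.
        x = self.embed(tag_ids).transpose(1, 2)      # (batch, embed_dim, n_tags)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))     # (batch, out_dim)

# Usage with a hypothetical vocabulary size and random noun-tag ids:
net = TagEmbeddingNet(vocab_size=10000)
noun_ids = torch.randint(0, 10000, (4, 5))           # 5 noun tokens per image
context = net(noun_ids)                               # (4, 512)
```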
We observed KNN based approach (K-D tree) with Euclidean metric is a efficient method for finding exemplars. Also we observed that ITML is computationally expensive and also depends on the training procedure. The table provides the experimental result for Differential Image Network variant with k (number of exemplars) = 2 and Hadamard method: Question Generation approaches: Sampling vs Argmax We obtained the decoding using standard practice followed in the literature BIBREF38 . This method selects the argmax sentence. Also, we evaluated our method by sampling from the probability distributions and provide the results for our proposed MDN-Joint method for VQG dataset as follows: How are exemplars improving Embedding In Multimodel differential network, we use exemplars and train them using a triplet loss. It is known that using a triplet network, we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42 , BIBREF41 . The Joint embedding is obtained between the image and language representations. Therefore the improved representation helps in obtaining an improved context vector. Further we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars for the nearest neighbor for MDN. (k=R in table TABREF35 ) In this case the method is similar to the baseline. This suggests that with random exemplar, the model learns to ignore the cue. Are captions necessary for our method? This is not actually necessary. In our method, we have used an existing image captioning method BIBREF13 to generate captions for images that did not have them. For VQG dataset, captions were available and we have used that, but, for VQA dataset captions were not available and we have generated captions while training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions. While the caption generates scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of main paper: First Image:- Caption- A young man skateboarding around little cones. Our Question- Is this a skateboard competition? Second Image:- Caption- A small child is standing on a pair of skis. Our Question:- How old is that little girl? Intuition behind Triplet Network: The intuition behind use of triplet networks is clear through this paper BIBREF41 that first advocated its use. The main idea is that when we learn distance functions that are “close” for similar and “far” from dissimilar representations, it is not clear that close and far are with respect to what measure. By incorporating a triplet we learn distance functions that learn that “A is more similar to B as compared to C”. Learning such measures allows us to bring target image-caption joint embeddings that are closer to supporting exemplars as compared to contrasting exemplars. Analysis of Network Analysis of Tag Context Tag is language based context. These tags are extracted from caption, except question-tags which is fixed as the 7 'Wh words' (What, Why, Where, Who, When, Which and How). We have experimented with Noun tag, Verb tag and 'Wh-word' tag as shown in tables. Also, we have experimented in each tag by varying the number of tags from 1 to 7. We combined different tags using 1D-convolution, concatenation, and addition of all the tags and observed that the concatenation mechanism gives better results. 
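Referring back to the exemplar-sampling comparison at the start of this block, the k-d tree retrieval can be sketched with scikit-learn's NearestNeighbors. Here the nearest retrieved neighbours serve as supporting exemplars and the farthest neighbours within the retrieved pool stand in for the contrasting ones; the pool size is illustrative and the coarse 50-cluster quantisation mentioned in the main paper is omitted for brevity.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def find_exemplars(features, target_idx, k=2, pool=50):
    """Return indices of k supporting and k contrasting exemplars.

    features   : (N, D) array of joint image/caption features.
    target_idx : index of the target image in `features`.
    pool       : how many neighbours to retrieve before picking the
                 nearest (supporting) and the farthest (contrasting).
    """
    knn = NearestNeighbors(n_neighbors=pool, algorithm='kd_tree',
                           metric='euclidean').fit(features)
    _, idxs = knn.kneighbors(features[target_idx:target_idx + 1])
    idxs = idxs[0][idxs[0] != target_idx]     # drop the target itself
    supporting = idxs[:k]                     # closest neighbours
    contrasting = idxs[-k:]                   # farthest within the pool
    return supporting, contrasting

# Usage with random placeholder features:
feats = np.random.rand(1000, 512).astype(np.float32)
sup, con = find_exemplars(feats, target_idx=0, k=2)
```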
As we can see in the table TABREF33 that taking Nouns, Verbs and Wh-Words as context, we achieve significant improvement in the BLEU, METEOR and CIDEr scores from the basic models which only takes the image and the caption respectively. Taking Nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in Bleu Score and 2% in METEOR and 34.4% in CIDEr Score from the basic Image model. Similarly taking Verbs as context gives us an increase of 1.3% in Bleu Score and 2.1% in METEOR and 33.5% in CIDEr Score from the basic Image model. And the best result comes when we take 3 Wh-Words as context and apply the Hadamard Model with concatenating the 3 WH-words. Also in Table TABREF34 we have shown the results when we take more than one words as context. Here we show that for 3 words i.e 3 nouns, 3 verbs and 3 Wh-words, the Concatenation model performs the best. In this table the conv model is using 1D convolution to combine the tags and the joint model combine all the tags. Analysis of Context: Exemplars In Multimodel Differential Network and Differential Image Network, we use exemplar images(target, supporting and opposing image) to obtain the differential context. We have performed the experiment based on the single exemplar(K=1), which is one supporting and one opposing image along with target image, based on two exemplar(K=2), i.e. two supporting and two opposing image along with single target image. similarly we have performed experiment for K=3 and K=4 as shown in table- TABREF35 . Mixture Module: Other Variations Hadamard method uses element-wise multiplication whereas Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between image feature and caption feature vector while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of attention probability vector INLINEFORM36 and convolutional features INLINEFORM37 . The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to work stack attention method BIBREF54 . The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolution feature map from the fifth convolution layer of VGG-19 Net of image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for different layers. We evaluate the different approaches and provide results for the same. Here INLINEFORM44 represents element-wise addition. Evaluation Metrics Our task is similar to encoder -decoder framework of machine translation. we have used same evaluation metric is used in machine translation. BLEU BIBREF46 is the first metric to find the correlation between generated question with ground truth question. BLEU score is used to measure the precision value, i.e That is how much words in the predicted question is appeared in reference question. BLEU-n score measures the n-gram precision for counting co-occurrence on reference sentences. we have evaluated BLEU score from n is 1 to 4. The mechanism of ROUGE-n BIBREF48 score is similar to BLEU-n,where as, it measures recall value instead of precision value in BLEU. 
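Stepping back to the attention variant of the mixture module described a few paragraphs above, it can be sketched as follows: the 14x14x512 convolutional map is flattened to 196 locations, each location is scored against the caption context vector, and the softmax weights produce the attended feature. The layer sizes follow the text; the specific projection layers and the tanh non-linearity are assumptions in the spirit of stacked attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMixture(nn.Module):
    """Caption-conditioned attention over 14x14 convolutional features."""

    def __init__(self, conv_dim=512, caption_dim=512, hidden=512):
        super().__init__()
        self.proj_img = nn.Linear(conv_dim, hidden)
        self.proj_cap = nn.Linear(caption_dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, conv_feat, caption_vec):
        # conv_feat: (B, 14, 14, 512) -> (B, 196, 512) region features
        B = conv_feat.size(0)
        regions = conv_feat.view(B, -1, conv_feat.size(-1))
        h = torch.tanh(self.proj_img(regions)
                       + self.proj_cap(caption_vec).unsqueeze(1))
        p = F.softmax(self.score(h).squeeze(-1), dim=1)    # (B, 196)
        attended = (p.unsqueeze(-1) * regions).sum(dim=1)  # (B, 512)
        return attended

# Usage with dummy tensors:
mix = AttentionMixture()
s = mix(torch.randn(2, 14, 14, 512), torch.randn(2, 512))  # (2, 512)
```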
That is, it measures how many words of the reference question appear in the predicted question. Another variant, ROUGE-L, measures the longest common sub-sequence between the generated and reference questions. METEOR BIBREF47 is a further evaluation metric that computes the similarity between the generated question and the reference while also accounting for synonyms, stemming and paraphrases; its output measures the word matches between the predicted and reference questions. In VQG, it computes the word-match score between the predicted question and the five reference questions. CIDEr BIBREF49 is a consensus-based evaluation metric. It measures human-likeness, i.e., whether a sentence reads as if it were written by a human. The consensus is measured by how often the n-grams of the predicted question appear in the reference questions, with n-grams that occur frequently across the references treated as less informative and weighted down, which lowers the CIDEr score. We provide our results using all these metrics and compare them with the existing baselines.
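For a quick sanity check of the n-gram precision metrics, one generated question can be scored against its references with NLTK's sentence-level BLEU, as below. This is only an illustration, not the exact evaluation pipeline used for the reported numbers, and the reference questions shown are hypothetical examples.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "is this a skateboard competition".split(),
    "how long has he been skating".split(),
]  # in VQG-COCO each image has five reference questions
hypothesis = "is this a skateboarding contest".split()

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)         # cumulative BLEU-n weights
    score = sentence_bleu(references, hypothesis,
                          weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```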
Unanswerable
4698298d506bef02f02c80465867f2cd12d29182
4698298d506bef02f02c80465867f2cd12d29182_0
Q: What were the previous state of the art benchmarks? Text: Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because, the questions generated by bots are not natural (human-like) and therefore does not translate to improved human dialog. Therefore it is imperative that improvement in the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification. An interesting line of work in this respect is the work of BIBREF5 . Here the authors have proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event. Whereas for a little girl, the questions could be related to age rather than the action. How can one have widely varying context provided for generating questions? To solve this problem, we use the context obtained by considering exemplars, specifically we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part of Speech tags. The human annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe this implicit differential context to perform better than an explicit keyword based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) A method to incorporate exemplars to learn differential embeddings that captures the subtle differences between supporting and contrasting examples and aid in generating natural questions. (2) We provide Multimodal differential embeddings, as image or text alone does not capture the whole context and we show that these embeddings outperform the ablations which incorporate cues such as only image, or tags or place information. (3) We provide a thorough comparison of the proposed network against state-of-the-art benchmarks along with a user study and statistical significance test. Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
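The exemplar sanity check described above, comparing the class-probability profiles of the target, supporting and contrasting images under a pre-trained classifier, can be approximated with torchvision as follows. The file names are placeholders, and the cosine-similarity comparison at the end is our own illustrative choice rather than the exact measure used in the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet101(pretrained=True).eval()
prep = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def class_probs(path):
    with torch.no_grad():
        x = prep(Image.open(path).convert("RGB")).unsqueeze(0)
        return F.softmax(resnet(x), dim=1).squeeze(0)

# Hypothetical file names for the three images.
p_target = class_probs("target.jpg")
p_support = class_probs("supporting.jpg")
p_contrast = class_probs("contrasting.jpg")

# Supporting exemplars should have probability profiles similar to the
# target; contrasting exemplars should not.
print(F.cosine_similarity(p_target, p_support, dim=0).item())
print(F.cosine_similarity(p_target, p_contrast, dim=0).item())
```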
We empirically evaluated whether an explicit approach that uses the differential set of tags as a one-hot encoding improves the question generation, or the implicit embedding obtained based on the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids to improve the generation of target question. These are observed to be better than the explicitly obtained context tags as can be seen in Figure FIGREF2 . We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 , for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of generated question given image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector for all possible parameters of our model. INLINEFORM1 is the ground truth question. The log probability for the question is calculated by using joint probability over INLINEFORM2 with the help of chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is length of the sequence, and INLINEFORM1 is the INLINEFORM2 word of the question. We have removed INLINEFORM3 for simplicity. Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or maximum length for the question is reached. We experimented with both sampling and argmax and found out that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (k-d tree) with Euclidean metric to obtain the exemplars. This is obtained through a coarse quantization of nearest neighbors of the training examples into 50 clusters, and selecting the nearest as supporting and farthest as the contrasting exemplars. We experimented with ITML based metric learning BIBREF40 for image features. Surprisingly, the KNN-based approach outperforms the latter one. We also tried random exemplars and different number of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We refereed a similar kind of work done in BIBREF34 for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. 
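A small sketch of the weight sharing just mentioned: the same image encoder and caption encoder (hypothetical module names) are applied to all three branches, so the triplet differs only in its inputs, not in its parameters. The next paragraph then describes the embeddings each branch produces.

```python
import torch.nn as nn

class TripletRepresentation(nn.Module):
    """Applies one shared image encoder and one shared caption encoder
    to the target, supporting and contrasting (image, caption) pairs."""

    def __init__(self, image_encoder, caption_encoder):
        super().__init__()
        # Registered once and reused for every branch: shared parameters.
        self.image_encoder = image_encoder
        self.caption_encoder = caption_encoder

    def encode(self, image, caption):
        return self.image_encoder(image), self.caption_encoder(caption)

    def forward(self, target, supporting, contrasting):
        return (self.encode(*target),
                self.encode(*supporting),
                self.encode(*contrasting))
```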
Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights for the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights for the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion viz., joint, element-wise addition, hadamard and attention method. Each of these variants receives image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of pretrained VGG-19 Net BIBREF43 . INLINEFORM1 are the weights and INLINEFORM2 is the bias for different layers. INLINEFORM3 is the concatenation operator. Similarly, We obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details for other fusion methods are present in supplementary.The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 . M is the training dataset that contains all set of possible triplets. INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the euclidean distance between the target and supporting sample, and target and opposing sample respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. Decoder: Question Generator The role of decoder is to predict the probability for a question, given INLINEFORM0 . RNN provides a nice way to perform conditioning on previous state value using a fixed length hidden vector. The conditional probability of a question token at particular time step INLINEFORM1 is modeled using an LSTM as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8 as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we are feeding the output of the mixture module to LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we are using INLINEFORM12 and INLINEFORM13 as the special token START and STOP respectively. 
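The argmax feeding just described (the mixture output at the first step, then the most probable word fed back until STOP or a length cap) can be sketched as below. Here lstm_cell, embed, out_proj and the special token ids are placeholders for whatever a trained model provides, and the mixture output is assumed to share the LSTM input dimensionality with the word embeddings.

```python
import torch

def greedy_decode(context, lstm_cell, embed, out_proj,
                  start_id, stop_id, max_len=20):
    """Argmax decoding of one question from the mixture-module output."""
    h = torch.zeros(1, lstm_cell.hidden_size)
    c = torch.zeros(1, lstm_cell.hidden_size)
    # t = 1: condition the LSTM on the joint embedding from the mixture module.
    h, c = lstm_cell(context, (h, c))
    token = torch.tensor([start_id])
    question = []
    for _ in range(max_len):
        h, c = lstm_cell(embed(token), (h, c))
        token = out_proj(h).argmax(dim=-1)   # most probable next word
        if token.item() == stop_id:
            break
        question.append(token.item())
    return question
```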
The softmax probability for the predicted question token at different time steps is given by the following equations where LSTM refers to the standard LSTM cell equations: INLINEFORM14 Where INLINEFORM0 is the probability distribution over all question tokens. INLINEFORM1 is cross entropy loss. Cost function Our objective is to minimize the total loss, that is the sum of cross entropy loss and triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples, INLINEFORM1 is a constant, which controls both the loss. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where, INLINEFORM0 is the total number of question tokens, INLINEFORM1 is the ground truth label. The code for MDN-VQG model is provided . Variations of Proposed Method While, we advocate the use of multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on the visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19 based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using multimodal differential network for generating embeddings, we also evaluate differential image network for the same. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on Visual Question Generation (VQG) dataset BIBREF5 , which contains human annotated questions based on images of MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use VQG-COCO dataset for our experiments which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset contains five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used the questions from VQA dataset BIBREF1 for training purpose, whereas the work by BIBREF5 uses only the VQG-COCO dataset. VQA-1.0 dataset is also built on images from MS-COCO dataset. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions. We used pretrained caption generation model BIBREF13 to extract captions for VQA dataset as the human annotated captions are not there in the dataset. We also get good results on the VQA dataset (as shown in Table TABREF26 ) which shows that our method doesn't necessitate the presence of ground truth captions. 
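The Place Net variant described above can be sketched as follows, assuming a Places365-pretrained VGG16 checkpoint is available locally; the checkpoint path and its key layout are assumptions, and the linear projection to 512 dimensions follows the description in the supplementary.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical: a VGG16 with a 365-way classifier, loaded from a locally
# available Places365 checkpoint (path and key layout are placeholders).
place_cnn = models.vgg16(num_classes=365)
state = torch.load("vgg16_places365.pth", map_location="cpu")
place_cnn.load_state_dict(state)
place_cnn.eval()

# Project the 365-d scene score vector to the 512-d context vector
# consumed by the mixture module.
to_context = nn.Linear(365, 512)

def place_context(image_batch):
    with torch.no_grad():
        scene_scores = place_cnn(image_batch)   # (B, 365) FC8-style scores
    return to_context(scene_scores)             # (B, 512) place-based context
```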
We train our model separately for VQG-COCO and VQA dataset. Inference We made use of the 1250 validation images to tune the hyperparameters and are providing the results on test set of VQG-COCO dataset. During inference, We use the Representation module to find the embeddings for the image and ground truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with the captions generated by an Image-Captioning network BIBREF13 for VQG-COCO dataset and the result for that and training details are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: First, we evaluate it against other variants described in section SECREF19 and SECREF10 . Second, we further compare our network with state-of-the-art methods for VQA 1.0 and VQG-COCO dataset. We perform a user study to gauge human opinion on naturalness of the generated question and analyze the word statistics in Figure FIGREF22 . This is an important test as humans are the best deciders of naturalness. We further consider the statistical significance for the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics like BLEU BIBREF46 , METEOR BIBREF47 , ROUGE BIBREF48 , CIDEr BIBREF49 . Although these metrics have not been shown to correlate with `naturalness' of the question these still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores, but the remaining BLEU-n metric scores are present in the supplementary. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance specific differential information that helps in guiding the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered different variations of our method mentioned in section SECREF19 and the various ways to obtain the joint multimodal embedding as described in section SECREF10 . The results for the VQG-COCO test set are given in table TABREF24 . In this table, every block provides the results for one of the variations of obtaining the embeddings and different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works the best in all cases except the Tag Embeddings. Among the ablations, the proposed MDN method works way better than the other variants in terms of BLEU, METEOR and ROUGE metrics by achieving an improvement of 6%, 12% and 18% in the scores respectively over the best other variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question. In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics. 
We improve over the previous state-of-the-art BIBREF35 for VQA dataset by around 6% in BLEU score and 10% in METEOR score. In the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and BIBREF36 by 3.5% in terms of METEOR scores. Statistical Significance Analysis We have analysed Statistical Significance BIBREF50 of our MDN model for VQG for different variations of the mixture module mentioned in section SECREF10 and also against the state-of-the-art methods. The Critical Difference (CD) for Nemenyi BIBREF51 test depends upon the given INLINEFORM0 (confidence level, which is 0.05 in our case) for average ranks and N (number of tested datasets). If the difference in the rank of the two methods lies within CD, then they are not significantly different and vice-versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. Perceptual Realism A human is the best judge of naturalness of any question, We evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 on 175 people. People were shown an image with 2 questions just as in figure FIGREF1 and were asked to rate the naturalness of both the questions on a scale of 1 to 5 where 1 means `Least Natural' and 5 is the `Most Natural'. We provided 175 people with 100 such images from the VQG-COCO validation dataset which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (rated the generated question more or equal to the ground truth question). For the 100 images, on an average 59.7% people were fooled in this experiment and this shows that our model is able to generate natural questions. Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from image and its caption. We also provide ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions and also ensure that the results are statistically significant. In future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 will provide details about training configuration for MDN, Section SECREF9 will explain the various Proposed Methods and we also provide a discussion in section regarding some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDER metric scores for VQG-COCO dataset. We present different experiments with Tag Net in which we explore the performance of various tags (Noun, Verb, and Question tags) and different ways of combining them to get the context vectors. 
Multimodal Differential Network [1] MDN INLINEFORM0 Finding Exemplars: INLINEFORM1 INLINEFORM2 Compute Triplet Embedding: INLINEFORM3 INLINEFORM4 Compute Triplet Fusion Embedding : INLINEFORM5 INLINEFORM6 INLINEFORM7 Compute Triplet Loss: INLINEFORM8 Compute Decode Question Sentence: INLINEFORM9 INLINEFORM10 —————————————————– Triplet Fusion INLINEFORM11 , INLINEFORM12 INLINEFORM13 :Image feature,14x14x512 INLINEFORM14 : Caption feature,1x512 Match Dimension: INLINEFORM15 ,196x512 INLINEFORM16 196x512 If flag==Joint Fusion: INLINEFORM17 INLINEFORM18 , [ INLINEFORM19 (MDN-Mul), INLINEFORM20 (MDN-Add)] If flag==Attention Fusion : INLINEFORM21 Semb INLINEFORM22 Dataset and Training Details Dataset We conduct our experiments on two types of dataset: VQA dataset BIBREF1 , which contains human annotated questions based on images on MS-COCO dataset. Second one is VQG-COCO dataset based on natural question BIBREF55 . VQA dataset VQA dataset BIBREF1 is built on complex images from MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions and each question has 10 possible answers. So there are 248349 QA pair for training, 121512 QA pairs for validating and 244302 QA pairs for testing. We used pre-trained caption generation model BIBREF53 to extract captions for VQA dataset. VQG dataset The VQG-COCO dataset BIBREF55 , is developed for generating natural and engaging questions that are based on common sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset contains 5 natural questions. Training Configuration We have used RMSPROP optimizer to update the model parameter and configured hyper-parameter values to be as follows: INLINEFORM23 to train the classification network . In order to train a triplet model, we have used RMSPROP to optimize the triplet model model parameter and configure hyper-parameter values to be: INLINEFORM24 . We also used learning rate decay to decrease the learning rate on every epoch by a factor given by: INLINEFORM25 where values of a=1500 and b=1250 are set empirically. Ablation Analysis of Model While, we advocate the use of multimodal differential network (MDN) for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture namely (a) Differential Image Network, (b) Tag net and (c) Place net. These are described in detail as follows: Differential Image Network For obtaining the exemplar image based context embedding, we propose a triplet network consist of three network, one is target net, supporting net and opposing net. All these three networks designed with convolution neural network and shared the same parameters. The weights of this network are learnt through end-to-end learning using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhances the difference between opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 where INLINEFORM29 are the weights of the CNN. This is illustrated in figure FIGREF43 . Tag net The tag net consists of two parts Context Extractor & Tag Embedding Net. This is illustrated in figure FIGREF45 . 
Extract Context: The first step is to extract the caption of the image using NeuralTalk2 BIBREF53 model. We find the part-of-speech(POS) tag present in the caption. POS taggers have been developed for two well known corpuses, the Brown Corpus and the Penn Treebanks. For our work, we are using the Brown Corpus tags. The tags are clustered into three category namely Noun tag, Verb tag and Question tags (What, Where, ...). Noun tag consists of all the noun & pronouns present in the caption sentence and similarly, verb tag consists of verb & adverbs present in the caption sentence. The question tags consists of the 7-well know question words i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of vocabulary size. For generalization, we have considered 5 tokens from each category of the Tags. Tag Embedding Net: The embedding network consists of word embedding followed by temporal convolutions neural network followed by max-pooling network. In the first step, sparse high dimensional one-hot vector is transformed to dense low dimension vector using word embedding. After this, we apply temporal convolution on the word embedding vector. The uni-gram, bi-gram and tri-gram feature are computed by applying convolution filter of size 1, 2 and 3 respectability. Finally, we applied max-pooling on this to get a vector representation of the tags as shown figure FIGREF45 . We concatenated all the tag words followed by fully connected layer to get feature dimension of 512. We also explored joint networks based on concatenation of all the tags, on element-wise addition and element-wise multiplication of the tag vectors. However, we observed that convolution over max pooling and joint concatenation gives better performance based on CIDer score. INLINEFORM30 Where, T_CNN is Temporally Convolution Neural Network applied on word embedding vector with kernel size three. Place net Visual object and scene recognition plays a crucial role in the image. Here, places in the image are labeled with scene semantic categories BIBREF45 , comprise of large and diverse type of environment in the world, such as (amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc.). So we have used different type of scene semantic categories present in the image as a place based context to generate natural question. A place365 is a convolution neural network is modeled to classify 365 types of scene categories, which is trained on the place2 dataset consist of 1.8 million of scene images. We have used a pre-trained VGG16-places365 network to obtain place based context embedding feature for various type scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 Where, INLINEFORM33 is Place365_CNN. We have extracted INLINEFORM34 features of dimension 14x14x512 for attention model and FC8 features of dimension 365 for joint, addition and hadamard model of places365. Finally, we use a linear transformation to obtain a 512 dimensional vector. We explored using the CONV5 having feature dimension 14x14 512, FC7 having 4096 and FC8 having feature dimension of 365 of places365. Ablation Analysis Sampling Exemplar: KNN vs ITML Our method is aimed at using efficient exemplar-based retrieval techniques. We have experimented with various exemplar methods, such as ITML BIBREF40 based metric learning for image features and KNN based approaches. 
We observed KNN based approach (K-D tree) with Euclidean metric is a efficient method for finding exemplars. Also we observed that ITML is computationally expensive and also depends on the training procedure. The table provides the experimental result for Differential Image Network variant with k (number of exemplars) = 2 and Hadamard method: Question Generation approaches: Sampling vs Argmax We obtained the decoding using standard practice followed in the literature BIBREF38 . This method selects the argmax sentence. Also, we evaluated our method by sampling from the probability distributions and provide the results for our proposed MDN-Joint method for VQG dataset as follows: How are exemplars improving Embedding In Multimodel differential network, we use exemplars and train them using a triplet loss. It is known that using a triplet network, we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42 , BIBREF41 . The Joint embedding is obtained between the image and language representations. Therefore the improved representation helps in obtaining an improved context vector. Further we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars for the nearest neighbor for MDN. (k=R in table TABREF35 ) In this case the method is similar to the baseline. This suggests that with random exemplar, the model learns to ignore the cue. Are captions necessary for our method? This is not actually necessary. In our method, we have used an existing image captioning method BIBREF13 to generate captions for images that did not have them. For VQG dataset, captions were available and we have used that, but, for VQA dataset captions were not available and we have generated captions while training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions. While the caption generates scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of main paper: First Image:- Caption- A young man skateboarding around little cones. Our Question- Is this a skateboard competition? Second Image:- Caption- A small child is standing on a pair of skis. Our Question:- How old is that little girl? Intuition behind Triplet Network: The intuition behind use of triplet networks is clear through this paper BIBREF41 that first advocated its use. The main idea is that when we learn distance functions that are “close” for similar and “far” from dissimilar representations, it is not clear that close and far are with respect to what measure. By incorporating a triplet we learn distance functions that learn that “A is more similar to B as compared to C”. Learning such measures allows us to bring target image-caption joint embeddings that are closer to supporting exemplars as compared to contrasting exemplars. Analysis of Network Analysis of Tag Context Tag is language based context. These tags are extracted from caption, except question-tags which is fixed as the 7 'Wh words' (What, Why, Where, Who, When, Which and How). We have experimented with Noun tag, Verb tag and 'Wh-word' tag as shown in tables. Also, we have experimented in each tag by varying the number of tags from 1 to 7. We combined different tags using 1D-convolution, concatenation, and addition of all the tags and observed that the concatenation mechanism gives better results. 
As we can see in the table TABREF33 that taking Nouns, Verbs and Wh-Words as context, we achieve significant improvement in the BLEU, METEOR and CIDEr scores from the basic models which only takes the image and the caption respectively. Taking Nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in Bleu Score and 2% in METEOR and 34.4% in CIDEr Score from the basic Image model. Similarly taking Verbs as context gives us an increase of 1.3% in Bleu Score and 2.1% in METEOR and 33.5% in CIDEr Score from the basic Image model. And the best result comes when we take 3 Wh-Words as context and apply the Hadamard Model with concatenating the 3 WH-words. Also in Table TABREF34 we have shown the results when we take more than one words as context. Here we show that for 3 words i.e 3 nouns, 3 verbs and 3 Wh-words, the Concatenation model performs the best. In this table the conv model is using 1D convolution to combine the tags and the joint model combine all the tags. Analysis of Context: Exemplars In Multimodel Differential Network and Differential Image Network, we use exemplar images(target, supporting and opposing image) to obtain the differential context. We have performed the experiment based on the single exemplar(K=1), which is one supporting and one opposing image along with target image, based on two exemplar(K=2), i.e. two supporting and two opposing image along with single target image. similarly we have performed experiment for K=3 and K=4 as shown in table- TABREF35 . Mixture Module: Other Variations Hadamard method uses element-wise multiplication whereas Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between image feature and caption feature vector while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of attention probability vector INLINEFORM36 and convolutional features INLINEFORM37 . The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to work stack attention method BIBREF54 . The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolution feature map from the fifth convolution layer of VGG-19 Net of image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for different layers. We evaluate the different approaches and provide results for the same. Here INLINEFORM44 represents element-wise addition. Evaluation Metrics Our task is similar to encoder -decoder framework of machine translation. we have used same evaluation metric is used in machine translation. BLEU BIBREF46 is the first metric to find the correlation between generated question with ground truth question. BLEU score is used to measure the precision value, i.e That is how much words in the predicted question is appeared in reference question. BLEU-n score measures the n-gram precision for counting co-occurrence on reference sentences. we have evaluated BLEU score from n is 1 to 4. The mechanism of ROUGE-n BIBREF48 score is similar to BLEU-n,where as, it measures recall value instead of precision value in BLEU. 
That is, it measures how many words of the reference question appear in the predicted question. Another variant, ROUGE-L, measures the longest common sub-sequence between the generated and reference questions. METEOR BIBREF47 is a further evaluation metric that computes the similarity between the generated question and the reference while also accounting for synonyms, stemming and paraphrases; its output measures the word matches between the predicted and reference questions. In VQG, it computes the word-match score between the predicted question and the five reference questions. CIDEr BIBREF49 is a consensus-based evaluation metric. It measures human-likeness, i.e., whether a sentence reads as if it were written by a human. The consensus is measured by how often the n-grams of the predicted question appear in the reference questions, with n-grams that occur frequently across the references treated as less informative and weighted down, which lowers the CIDEr score. We provide our results using all these metrics and compare them with the existing baselines.
BIBREF35 for VQA dataset, BIBREF5, BIBREF36
4e2cb1677df949ee3d1d3cd10962b951da907105
4e2cb1677df949ee3d1d3cd10962b951da907105_0
Q: How/where are the natural question generated? Text: Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because, the questions generated by bots are not natural (human-like) and therefore does not translate to improved human dialog. Therefore it is imperative that improvement in the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification. An interesting line of work in this respect is the work of BIBREF5 . Here the authors have proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event. Whereas for a little girl, the questions could be related to age rather than the action. How can one have widely varying context provided for generating questions? To solve this problem, we use the context obtained by considering exemplars, specifically we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part of Speech tags. The human annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe this implicit differential context to perform better than an explicit keyword based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) A method to incorporate exemplars to learn differential embeddings that captures the subtle differences between supporting and contrasting examples and aid in generating natural questions. (2) We provide Multimodal differential embeddings, as image or text alone does not capture the whole context and we show that these embeddings outperform the ablations which incorporate cues such as only image, or tags or place information. (3) We provide a thorough comparison of the proposed network against state-of-the-art benchmarks along with a user study and statistical significance test. Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
We empirically evaluated whether question generation is improved more by an explicit approach that uses the differential set of tags as a one-hot encoding, or by the implicit embedding obtained through the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions, whereas contrasting exemplars generate very different questions from the target question. The triplet network enhances the joint embedding and thus aids the generation of the target question. These embeddings are observed to be better than the explicitly obtained context tags, as can be seen in Figure FIGREF2. We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 for an image INLINEFORM1. We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3. We maximize the conditional probability of the generated question given the image and context as follows: DISPLAYFORM0 where INLINEFORM0 is the vector of all parameters of our model. INLINEFORM1 is the ground truth question. The log probability of the question is calculated by using the joint probability over INLINEFORM2 with the help of the chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is the length of the sequence, and INLINEFORM1 is the INLINEFORM2 word of the question. We have removed INLINEFORM3 for simplicity. Our method is based on a sequence to sequence network BIBREF38, BIBREF12, BIBREF39. The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4. Our model contains three main modules: (a) a Representation Module that extracts multimodal features, (b) a Mixture Module that fuses the multimodal representation, and (c) a Decoder that generates the question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or the maximum length for the question is reached. We experimented with both sampling and argmax and found that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (k-d tree) with a Euclidean metric to obtain the exemplars. This is obtained through a coarse quantization of the nearest neighbors of the training examples into 50 clusters, and selecting the nearest as supporting and the farthest as contrasting exemplars. We experimented with ITML based metric learning BIBREF40 for image features. Surprisingly, the KNN-based approach outperforms the latter. We also tried random exemplars and different numbers of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41, BIBREF42 in our representation module. We referred to similar work done in BIBREF34 when building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. 
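The KNN-based exemplar selection described above (a k-d tree with a Euclidean metric plus a coarse quantization of the training examples into 50 clusters) could be sketched roughly as below. This is only one plausible reading of the text: the exact feature space used for retrieval and the choice of picking the contrasting exemplar from the farthest cluster are our assumptions, not details confirmed by the paper.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans

def build_exemplar_index(features, n_clusters=50, seed=0):
    """Coarsely quantize training features into clusters and build a k-d tree."""
    kmeans = KMeans(n_clusters=n_clusters, random_state=seed).fit(features)
    return cKDTree(features), kmeans

def get_exemplars(target_feat, features, tree, kmeans, k=1):
    """Return indices of k supporting (nearest) and k contrasting (farthest) exemplars."""
    # Supporting exemplars: nearest neighbours in Euclidean space
    # (query k+1 and drop the first hit in case the target itself is indexed).
    _, nn_idx = tree.query(target_feat, k=k + 1)
    supporting = np.atleast_1d(nn_idx)[1:k + 1]

    # Contrasting exemplars: members of the farthest coarse cluster.
    centroid_dists = np.linalg.norm(kmeans.cluster_centers_ - target_feat, axis=1)
    far_cluster = np.argmax(centroid_dists)
    far_members = np.where(kmeans.labels_ == far_cluster)[0]
    member_dists = np.linalg.norm(features[far_members] - target_feat, axis=1)
    contrasting = far_members[np.argsort(member_dists)[::-1][:k]]
    return supporting, contrasting

# Usage with random 4096-d features standing in for real image-caption features.
feats = np.random.randn(1000, 4096).astype(np.float32)
tree, km = build_exemplar_index(feats)
sup, con = get_exemplars(feats[0], feats, tree, km, k=1)
```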
Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights of the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights of the LSTM. This is shown in part 1 of Figure FIGREF4. Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11. DISPLAYFORM0 The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the set of embeddings obtained from the representation module. We have evaluated four different approaches for fusion, viz., the joint, element-wise addition, Hadamard and attention methods. Each of these variants receives the image features INLINEFORM0 & the caption embedding INLINEFORM1, and outputs a fixed dimensional feature vector INLINEFORM2. The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of a pretrained VGG-19 Net BIBREF43. INLINEFORM1 are the weights and INLINEFORM2 is the bias for the different layers. INLINEFORM3 is the concatenation operator. Similarly, we obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details of the other fusion methods are given in the supplementary material. The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and push the contrasting embeddings away. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the Euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2. M is the training dataset that contains all possible triplets. INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the Euclidean distances between the target and supporting sample, and between the target and opposing sample, respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. Decoder: Question Generator The role of the decoder is to predict the probability of a question, given INLINEFORM0. An RNN provides a convenient way to condition on previous state values using a fixed-length hidden vector. The conditional probability of a question token at a particular time step INLINEFORM1 is modeled using an LSTM, as used in machine translation BIBREF38. At time step INLINEFORM2, the conditional probability is denoted by INLINEFORM3, where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5, which is conditioned on all the previously generated words INLINEFORM6. The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8, as shown in part 3 of Figure FIGREF4. At INLINEFORM9, we feed the output of the mixture module to the LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11. Here, we use INLINEFORM12 and INLINEFORM13 as the special tokens START and STOP respectively. 
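As a rough sketch of the joint fusion and the margin-based triplet objective described above, consider the snippet below. It concatenates a 4096-d FC7 image feature with a 512-d caption embedding and maps them to a fixed-size context vector, then applies a hinge-style triplet loss that pulls the supporting embedding towards the target and pushes the contrasting one away. The tanh nonlinearity, output size and margin value are assumptions for illustration, not details stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointFusion(nn.Module):
    """Concatenate image (FC7, 4096-d) and caption (512-d) embeddings into one context vector."""
    def __init__(self, img_dim=4096, cap_dim=512, out_dim=512):
        super().__init__()
        self.fc = nn.Linear(img_dim + cap_dim, out_dim)

    def forward(self, img_feat, cap_feat):
        return torch.tanh(self.fc(torch.cat([img_feat, cap_feat], dim=1)))

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Pull the supporting (positive) embedding closer to the target (anchor)
    and push the contrasting (negative) embedding away by at least `margin`."""
    d_pos = F.pairwise_distance(anchor, positive)   # target vs supporting
    d_neg = F.pairwise_distance(anchor, negative)   # target vs contrasting
    return F.relu(d_pos - d_neg + margin).mean()

# Toy batch: one fusion module shared by target, supporting and contrasting branches.
fuse = JointFusion()
img_t, img_s, img_c = (torch.randn(8, 4096) for _ in range(3))
cap_t, cap_s, cap_c = (torch.randn(8, 512) for _ in range(3))
g_t, g_s, g_c = fuse(img_t, cap_t), fuse(img_s, cap_s), fuse(img_c, cap_c)
loss = triplet_loss(g_t, g_s, g_c)
```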
The softmax probability for the predicted question token at different time steps is given by the following equations, where LSTM refers to the standard LSTM cell equations: INLINEFORM14 where INLINEFORM0 is the probability distribution over all question tokens and INLINEFORM1 is the cross entropy loss. Cost function Our objective is to minimize the total loss, that is, the sum of the cross entropy loss and the triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples and INLINEFORM1 is a constant that balances the two losses. INLINEFORM2 is the triplet loss function EQREF13. INLINEFORM3 is the cross entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where INLINEFORM0 is the total number of question tokens and INLINEFORM1 is the ground truth label. The code for the MDN-VQG model is provided. Variations of Proposed Method While we advocate the use of the multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary material. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19 based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using the multimodal differential network for generating embeddings, we also evaluate a differential image network for the same purpose. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on the Visual Question Generation (VQG) dataset BIBREF5, which contains human annotated questions based on images from the MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use the VQG-COCO dataset for our experiments, which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset contains five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used questions from the VQA dataset BIBREF1 for training purposes, whereas the work by BIBREF5 uses only the VQG-COCO dataset. The VQA-1.0 dataset is also built on images from the MS-COCO dataset. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions. We used a pretrained caption generation model BIBREF13 to extract captions for the VQA dataset, as human annotated captions are not available in that dataset. We also get good results on the VQA dataset (as shown in Table TABREF26), which shows that our method does not require ground truth captions. 
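To recap the cost function described above, the snippet below sketches the total objective as the decoder cross-entropy plus a weighted triplet term. The weight value and the masking of padded tokens are our additions for illustration, not values given in the paper.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, targets, trip_loss, lam=0.1, pad_idx=0):
    """Sum of the question cross-entropy loss and the weighted triplet loss.

    logits   : (batch, seq_len, vocab) decoder scores
    targets  : (batch, seq_len) ground-truth question token ids
    trip_loss: scalar triplet loss from the representation module
    lam      : constant balancing the two terms (a hyper-parameter)
    """
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         targets.reshape(-1),
                         ignore_index=pad_idx)  # ignore padding tokens
    return ce + lam * trip_loss

# Toy usage with random decoder outputs.
logits = torch.randn(8, 12, 10000)
targets = torch.randint(1, 10000, (8, 12))
loss = total_loss(logits, targets, trip_loss=torch.tensor(0.3))
```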
We train our model separately on the VQG-COCO and VQA datasets. Inference We made use of the 1250 validation images to tune the hyperparameters and report the results on the test set of the VQG-COCO dataset. During inference, we use the Representation module to find the embeddings for the image and the ground truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with captions generated by an image-captioning network BIBREF13 for the VQG-COCO dataset; the corresponding results and training details are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: First, we evaluate it against other variants described in sections SECREF19 and SECREF10. Second, we further compare our network with state-of-the-art methods on the VQA 1.0 and VQG-COCO datasets. We perform a user study to gauge human opinion on the naturalness of the generated questions and analyze the word statistics in Figure FIGREF22. This is an important test, as humans are the best judges of naturalness. We further consider the statistical significance for the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics like BLEU BIBREF46, METEOR BIBREF47, ROUGE BIBREF48 and CIDEr BIBREF49. Although these metrics have not been shown to correlate with the `naturalness' of a question, they still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores; the remaining BLEU-n metric scores are present in the supplementary material. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance-specific differential information that helps in guiding the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered the different variations of our method mentioned in section SECREF19 and the various ways to obtain the joint multimodal embedding as described in section SECREF10. The results for the VQG-COCO test set are given in table TABREF24. In this table, every block provides the results for one of the variations of obtaining the embeddings and the different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works best in all cases except the Tag Embeddings. Among the ablations, the proposed MDN method works considerably better than the other variants in terms of the BLEU, METEOR and ROUGE metrics, achieving improvements of 6%, 12% and 18% in the respective scores over the next best variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for the VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models, in which we use either only the image or only the caption embedding to generate the question. In both tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for the VQG-COCO dataset this is 15% for both metrics. 
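The inference procedure described above (the mixture-module output seeds the LSTM at t=0 and the argmax token is fed back until STOP or the maximum length) could look roughly like the sketch below. The vocabulary size, token ids and the embedding/projection layers are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

class GreedyQuestionDecoder(nn.Module):
    """LSTM decoder seeded with the joint (mixture) embedding at t=0,
    feeding back its own argmax prediction at every subsequent step."""
    def __init__(self, vocab_size, emb_dim=512, hid_dim=512,
                 start_idx=1, stop_idx=2, max_len=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTMCell(emb_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.start_idx, self.stop_idx, self.max_len = start_idx, stop_idx, max_len

    @torch.no_grad()
    def generate(self, joint_emb):
        h = torch.zeros(joint_emb.size(0), self.lstm.hidden_size)
        c = torch.zeros_like(h)
        # t = 0: the mixture-module output is the first LSTM input.
        h, c = self.lstm(joint_emb, (h, c))
        token = torch.full((joint_emb.size(0),), self.start_idx, dtype=torch.long)
        question = []
        for _ in range(self.max_len):
            h, c = self.lstm(self.embed(token), (h, c))
            token = self.out(h).argmax(dim=1)  # greedy / argmax decoding
            question.append(token)
            if (token == self.stop_idx).all():
                break
        return torch.stack(question, dim=1)

decoder = GreedyQuestionDecoder(vocab_size=10000)
tokens = decoder.generate(torch.randn(4, 512))  # joint embedding from the mixture module
```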
We improve over the previous state-of-the-art BIBREF35 on the VQA dataset by around 6% in BLEU score and 10% in METEOR score. On the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and over BIBREF36 by 3.5% in terms of METEOR scores. Statistical Significance Analysis We have analysed the statistical significance BIBREF50 of our MDN model for VQG across the different variations of the mixture module mentioned in section SECREF10 and also against the state-of-the-art methods. The critical difference (CD) for the Nemenyi BIBREF51 test depends on the given INLINEFORM0 (confidence level, which is 0.05 in our case) for the average ranks and on N (the number of tested datasets). If the difference in the average ranks of two methods lies within the CD, then they are not significantly different, and vice-versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. Perceptual Realism A human is the best judge of the naturalness of any question. We evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 with 175 people. People were shown an image with 2 questions, just as in figure FIGREF1, and were asked to rate the naturalness of both questions on a scale of 1 to 5, where 1 means `Least Natural' and 5 means `Most Natural'. We provided the 175 people with 100 such images from the VQG-COCO validation dataset, which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (rated the generated question higher than or equal to the ground truth question). For the 100 images, on average 59.7% of the people were fooled in this experiment, which shows that our model is able to generate natural questions. Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from an image and its caption. We also provide an ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions, and ensure that the results are statistically significant. In the future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 provides details about the training configuration for MDN, Section SECREF9 explains the various proposed methods, and we also provide a discussion regarding some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDEr metric scores for the VQG-COCO dataset. We present different experiments with the Tag Net in which we explore the performance of various tags (Noun, Verb, and Question tags) and different ways of combining them to get the context vectors. 
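For reference, the critical difference used in the Nemenyi post-hoc test above is typically computed as CD = q_alpha * sqrt(k(k+1)/(6N)). The small sketch below shows this computation; the q_alpha value has to be looked up in the studentized-range tables, and the example value shown (for 4 methods at alpha = 0.05) is an assumption taken from standard tables, not a number given in the paper.

```python
import math

def nemenyi_cd(k, n, q_alpha):
    """Critical difference for the Nemenyi post-hoc test.

    k       : number of compared methods
    n       : number of datasets / measurements over which methods are ranked
    q_alpha : critical value of the studentized range statistic divided by
              sqrt(2), taken from standard tables
    """
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# Example: 4 methods compared; q_alpha ~= 2.569 for alpha = 0.05 (table value).
print(nemenyi_cd(k=4, n=10, q_alpha=2.569))
```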
Algorithm 1: Multimodal Differential Network (MDN), input INLINEFORM0.
Step 1 - Finding exemplars: INLINEFORM1 INLINEFORM2.
Step 2 - Compute triplet embedding: INLINEFORM3 INLINEFORM4.
Step 3 - Compute triplet fusion embedding: INLINEFORM5 INLINEFORM6 INLINEFORM7.
Step 4 - Compute triplet loss: INLINEFORM8.
Step 5 - Decode question sentence: INLINEFORM9 INLINEFORM10.
Triplet Fusion subroutine, inputs INLINEFORM11, INLINEFORM12: INLINEFORM13 is the image feature (14x14x512) and INLINEFORM14 is the caption feature (1x512). Match dimensions: INLINEFORM15 (196x512), INLINEFORM16 (196x512). If flag == Joint Fusion: INLINEFORM17 INLINEFORM18, [INLINEFORM19 (MDN-Mul), INLINEFORM20 (MDN-Add)]. If flag == Attention Fusion: INLINEFORM21 Semb INLINEFORM22.
Dataset and Training Details Dataset We conduct our experiments on two datasets: the VQA dataset BIBREF1, which contains human annotated questions based on images from the MS-COCO dataset, and the VQG-COCO dataset of natural questions BIBREF55. VQA dataset The VQA dataset BIBREF1 is built on complex images from the MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions and each question has 10 possible answers. So there are 248349 QA pairs for training, 121512 QA pairs for validation and 244302 QA pairs for testing. We used a pre-trained caption generation model BIBREF53 to extract captions for the VQA dataset. VQG dataset The VQG-COCO dataset BIBREF55 is developed for generating natural and engaging questions that are based on common sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset contains 5 natural questions. Training Configuration We used the RMSPROP optimizer to update the model parameters and configured the hyper-parameter values as follows: INLINEFORM23 to train the classification network. To train the triplet model, we used RMSPROP to optimize the triplet model parameters and configured the hyper-parameter values to be: INLINEFORM24. We also used learning rate decay to decrease the learning rate on every epoch by a factor given by INLINEFORM25, where the values a=1500 and b=1250 are set empirically. Ablation Analysis of Model While we advocate the use of the multimodal differential network (MDN) for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture, namely (a) the Differential Image Network, (b) the Tag Net and (c) the Place Net. These are described in detail as follows: Differential Image Network For obtaining the exemplar image based context embedding, we propose a triplet network consisting of three networks: a target net, a supporting net and an opposing net. All three networks are designed as convolutional neural networks and share the same parameters. The weights of this network are learnt through end-to-end learning using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhance the difference from the opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 where INLINEFORM29 are the weights of the CNN. This is illustrated in figure FIGREF43. Tag net The tag net consists of two parts: a Context Extractor and a Tag Embedding Net. This is illustrated in figure FIGREF45. 
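To make the flow of Algorithm 1 concrete, the following is a rough training-step sketch that ties exemplar fusion, the triplet loss, question decoding and the RMSProp update together. It is deliberately simplified (the decoder here repeats the context vector instead of feeding back tokens with teacher forcing), and all module sizes, the vocabulary and the loss weight are our own placeholder choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal stand-ins for the stages of Algorithm 1 (illustrative only).
fuse = nn.Linear(4096 + 512, 512)                  # triplet fusion (joint variant)
decoder_rnn = nn.LSTM(512, 512, batch_first=True)  # question decoder core
proj = nn.Linear(512, 10000)                        # projection to a 10k vocabulary
params = list(fuse.parameters()) + list(decoder_rnn.parameters()) + list(proj.parameters())
opt = torch.optim.RMSprop(params, lr=1e-4)

def train_step(img_t, cap_t, img_s, cap_s, img_c, cap_c, q_tokens, lam=0.1, margin=0.5):
    """One MDN step: fuse target/supporting/contrasting pairs, compute the triplet
    loss, decode the question from the target context, and take an RMSProp step."""
    g_t = torch.tanh(fuse(torch.cat([img_t, cap_t], 1)))
    g_s = torch.tanh(fuse(torch.cat([img_s, cap_s], 1)))
    g_c = torch.tanh(fuse(torch.cat([img_c, cap_c], 1)))
    trip = F.relu(F.pairwise_distance(g_t, g_s)
                  - F.pairwise_distance(g_t, g_c) + margin).mean()
    # Toy decoder: the target context is repeated as the input at every step.
    steps = q_tokens.size(1)
    out, _ = decoder_rnn(g_t.unsqueeze(1).expand(-1, steps, -1))
    ce = F.cross_entropy(proj(out).reshape(-1, 10000), q_tokens.reshape(-1))
    loss = ce + lam * trip
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batch of target/supporting/contrasting features.
b = 4
feats = [torch.randn(b, 4096) for _ in range(3)]
caps = [torch.randn(b, 512) for _ in range(3)]
loss = train_step(feats[0], caps[0], feats[1], caps[1], feats[2], caps[2],
                  torch.randint(0, 10000, (b, 12)))
```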
Extract Context: The first step is to extract the caption of the image using the NeuralTalk2 BIBREF53 model. We find the part-of-speech (POS) tags present in the caption. POS taggers have been developed for two well-known corpora, the Brown Corpus and the Penn Treebank. For our work, we use the Brown Corpus tags. The tags are clustered into three categories, namely Noun tags, Verb tags and Question tags (What, Where, ...). The Noun tags consist of all nouns and pronouns present in the caption sentence; similarly, the Verb tags consist of the verbs and adverbs present in the caption sentence. The Question tags consist of the 7 well-known question words, i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of the vocabulary size. For generalization, we consider 5 tokens from each category of tags. Tag Embedding Net: The embedding network consists of a word embedding layer followed by a temporal convolutional neural network and a max-pooling layer. In the first step, the sparse high-dimensional one-hot vector is transformed into a dense low-dimensional vector using the word embedding. After this, we apply temporal convolution on the word embedding vector. The uni-gram, bi-gram and tri-gram features are computed by applying convolution filters of size 1, 2 and 3 respectively. Finally, we apply max-pooling on these to get a vector representation of the tags, as shown in figure FIGREF45. We concatenate all the tag words and apply a fully connected layer to get a feature dimension of 512. We also explored joint networks based on concatenation of all the tags, and on element-wise addition and element-wise multiplication of the tag vectors. However, we observed that convolution followed by max pooling and joint concatenation gives better performance based on the CIDEr score. INLINEFORM30 where T_CNN is a temporal convolutional neural network applied to the word embedding vector with kernel size three. Place net Visual object and scene recognition play a crucial role in understanding an image. Here, places in the image are labeled with scene semantic categories BIBREF45, which comprise the large and diverse types of environments in the world, such as amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc. So we use the different types of scene semantic categories present in the image as a place-based context to generate natural questions. Places365 is a convolutional neural network modeled to classify 365 types of scene categories, trained on the Places2 dataset consisting of 1.8 million scene images. We use a pre-trained VGG16-places365 network to obtain place-based context embedding features for the various types of scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 where INLINEFORM33 is the Places365_CNN. We extract INLINEFORM34 features of dimension 14x14x512 for the attention model and FC8 features of dimension 365 for the joint, addition and Hadamard models of Places365. Finally, we use a linear transformation to obtain a 512-dimensional vector. We explored using CONV5 (feature dimension 14x14x512), FC7 (4096) and FC8 (365) of Places365. Ablation Analysis Sampling Exemplar: KNN vs ITML Our method is aimed at using efficient exemplar-based retrieval techniques. We have experimented with various exemplar methods, such as ITML BIBREF40 based metric learning for image features and KNN-based approaches. 
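The Tag Embedding Net described above (word embedding, temporal convolutions with kernel sizes 1, 2 and 3, max-pooling, concatenation and a fully connected layer to 512 dimensions) could be sketched as below. The embedding size, number of filters and padding scheme are assumptions for illustration; only the kernel sizes, the 5 tag tokens and the 512-d output come from the text.

```python
import torch
import torch.nn as nn

class TagEmbeddingNet(nn.Module):
    """Word embedding -> temporal convolutions (kernel sizes 1, 2, 3)
    -> max pooling -> concatenation -> fully connected layer (512-d)."""
    def __init__(self, vocab_size, emb_dim=128, n_filters=128, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=k, padding=k - 1) for k in (1, 2, 3)]
        )
        self.fc = nn.Linear(3 * n_filters, out_dim)

    def forward(self, tag_tokens):
        # tag_tokens: (batch, n_tokens) integer ids of the selected tag words
        x = self.embed(tag_tokens).transpose(1, 2)                   # (batch, emb_dim, n_tokens)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]  # uni/bi/tri-gram features
        return self.fc(torch.cat(pooled, dim=1))                     # (batch, 512)

tag_net = TagEmbeddingNet(vocab_size=10000)
emb = tag_net(torch.randint(0, 10000, (4, 5)))  # 5 tag tokens per category, as in the text
```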
We observed that the KNN-based approach (k-d tree) with a Euclidean metric is an efficient method for finding exemplars. We also observed that ITML is computationally expensive and depends on the training procedure. The table provides the experimental results for the Differential Image Network variant with k (number of exemplars) = 2 and the Hadamard method: Question Generation approaches: Sampling vs Argmax We performed decoding using the standard practice followed in the literature BIBREF38, which greedily selects the argmax word at each step. We also evaluated our method by sampling from the probability distribution and provide the results for our proposed MDN-Joint method on the VQG dataset as follows: How are exemplars improving the embedding? In the multimodal differential network, we use exemplars and train them using a triplet loss. It is known that using a triplet network, we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42, BIBREF41. The joint embedding is obtained between the image and language representations. Therefore the improved representation helps in obtaining an improved context vector. Further, we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars instead of nearest neighbors for MDN (k=R in table TABREF35). In this case the method performs similarly to the baseline. This suggests that with random exemplars, the model learns to ignore the cue. Are captions necessary for our method? They are not actually necessary. In our method, we have used an existing image captioning method BIBREF13 to generate captions for images that did not have them. For the VQG dataset, captions were available and we used them, but for the VQA dataset captions were not available and we generated captions while training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions. While the caption gives a scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of the main paper: First image: Caption: A young man skateboarding around little cones. Our question: Is this a skateboard competition? Second image: Caption: A small child is standing on a pair of skis. Our question: How old is that little girl? Intuition behind Triplet Network: The intuition behind the use of triplet networks comes from BIBREF41, which first advocated their use. The main idea is that when we learn distance functions that are “close” for similar and “far” for dissimilar representations, it is not clear with respect to what measure close and far are defined. By incorporating a triplet, we learn distance functions that encode that “A is more similar to B than to C”. Learning such measures allows us to obtain target image-caption joint embeddings that are closer to supporting exemplars than to contrasting exemplars. Analysis of Network Analysis of Tag Context Tags are a language-based context. These tags are extracted from the caption, except for the question tags, which are fixed as the 7 'Wh words' (What, Why, Where, Who, When, Which and How). We have experimented with Noun tags, Verb tags and 'Wh-word' tags, as shown in the tables. We have also experimented within each tag category by varying the number of tags from 1 to 7. We combined the different tags using 1D-convolution, concatenation, and addition of all the tags and observed that the concatenation mechanism gives better results. 
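The sampling-vs-argmax comparison above boils down to how the next token is drawn from the decoder's distribution at each step. The snippet below sketches both strategies; the temperature parameter is an extra knob we add for illustration and is not mentioned in the paper.

```python
import torch
import torch.nn.functional as F

def next_token(logits, strategy="argmax", temperature=1.0):
    """Pick the next question token from decoder logits (batch, vocab)
    using either greedy argmax or multinomial sampling."""
    if strategy == "argmax":
        return logits.argmax(dim=-1)
    probs = F.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)

logits = torch.randn(4, 10000)
greedy = next_token(logits, "argmax")   # used in the final model (works better)
sampled = next_token(logits, "sample")  # the evaluated sampling alternative
```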
As we can see in table TABREF33, taking Nouns, Verbs and Wh-words as context, we achieve significant improvements in the BLEU, METEOR and CIDEr scores over the basic models that only take the image and the caption respectively. Taking Nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in BLEU score, 2% in METEOR and 34.4% in CIDEr score over the basic Image model. Similarly, taking Verbs as context gives us an increase of 1.3% in BLEU score, 2.1% in METEOR and 33.5% in CIDEr score over the basic Image model. The best result comes when we take 3 Wh-words as context and apply the Hadamard model with concatenation of the 3 Wh-words. In Table TABREF34 we also show the results when we take more than one word as context. Here we show that for 3 words, i.e., 3 nouns, 3 verbs and 3 Wh-words, the Concatenation model performs best. In this table, the conv model uses 1D convolution to combine the tags and the joint model combines all the tags. Analysis of Context: Exemplars In the Multimodal Differential Network and the Differential Image Network, we use exemplar images (target, supporting and opposing images) to obtain the differential context. We performed experiments with a single exemplar (K=1), i.e., one supporting and one opposing image along with the target image, and with two exemplars (K=2), i.e., two supporting and two opposing images along with a single target image. Similarly, we performed experiments for K=3 and K=4, as shown in table TABREF35. Mixture Module: Other Variations The Hadamard method uses element-wise multiplication, whereas the Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between the image feature and the caption feature vector, while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of the attention probability vector INLINEFORM36 and the convolutional features INLINEFORM37. The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to the stacked attention method BIBREF54. The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolutional feature map from the fifth convolution layer of the VGG-19 Net for image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for the different layers. We evaluate the different approaches and provide results for them. Here INLINEFORM44 represents element-wise addition. Evaluation Metrics Our task is similar to the encoder-decoder framework of machine translation, so we use the same evaluation metrics as used in machine translation. BLEU BIBREF46 is the first metric we use to find the correlation between the generated question and the ground truth question. The BLEU score measures precision, i.e., how many words in the predicted question appear in the reference question. The BLEU-n score measures n-gram precision by counting co-occurrences with the reference sentences. We evaluated BLEU scores for n from 1 to 4. The mechanism of the ROUGE-n BIBREF48 score is similar to BLEU-n, except that it measures recall instead of the precision measured by BLEU. 
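As one possible reading of the attention variant above (a 196-dimensional attention distribution over the 14x14x512 CONV5 grid, weighted by the caption context vector and then combined by element-wise addition), a minimal sketch is shown below. The hidden attention dimension, the tanh nonlinearity and the exact way the caption vector is re-added are assumptions on our part.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Weight the 14x14x512 CONV5 features by an attention distribution
    computed from the caption context vector (stacked-attention style)."""
    def __init__(self, feat_dim=512, cap_dim=512, att_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(feat_dim, att_dim)
        self.cap_proj = nn.Linear(cap_dim, att_dim)
        self.att_score = nn.Linear(att_dim, 1)

    def forward(self, conv_feats, cap_vec):
        # conv_feats: (batch, 196, 512) flattened 14x14 grid; cap_vec: (batch, 512)
        joint = torch.tanh(self.img_proj(conv_feats) + self.cap_proj(cap_vec).unsqueeze(1))
        alpha = F.softmax(self.att_score(joint).squeeze(-1), dim=1)  # (batch, 196)
        attended = (alpha.unsqueeze(-1) * conv_feats).sum(dim=1)     # weighted average
        return attended + cap_vec                                    # element-wise addition

fusion = AttentionFusion()
out = fusion(torch.randn(2, 196, 512), torch.randn(2, 512))
```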
That is, it measures how many words of the reference question appear in the predicted question. Another version of the ROUGE metric is ROUGE-L, which measures the longest common sub-sequence present in the generated question. The METEOR BIBREF47 score is another useful evaluation metric that calculates the similarity between the generated question and the reference one by considering synonyms, stemming and paraphrases. Its output measures the word matches between the predicted question and the reference question. In VQG, it computes the word match score between the predicted question and the five reference questions. The CIDEr BIBREF49 score is a consensus-based evaluation metric. It measures human-likeness, i.e., whether the sentence could have been written by a human. The consensus is measured by how often n-grams in the predicted question appear in the reference questions. If the n-grams in the predicted question appear frequently across the reference questions of the whole dataset, the question is considered less informative and receives a lower CIDEr score. We provide our results using all these metrics and compare them with existing baselines.
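For illustration, the BLEU-1 score reported above is essentially a modified unigram precision against the five reference questions. The sketch below computes that precision directly and, for simplicity, omits the brevity penalty that full BLEU applies; in practice the standard toolkits cited in the paper are used rather than hand-rolled code like this.

```python
from collections import Counter

def bleu1(candidate, references):
    """Modified unigram precision (BLEU-1 without brevity penalty) of a
    candidate question against several reference questions."""
    cand = candidate.lower().split()
    cand_counts = Counter(cand)
    # Clip each unigram count by its maximum count over the references.
    max_ref = Counter()
    for ref in references:
        for w, c in Counter(ref.lower().split()).items():
            max_ref[w] = max(max_ref[w], c)
    clipped = sum(min(c, max_ref[w]) for w, c in cand_counts.items())
    return clipped / max(len(cand), 1)

refs = ["how old is the girl ?", "what is the girl doing ?"]
print(bleu1("how old is that little girl ?", refs))
```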
Decoder that generates question using an LSTM-based language model
9cc0fd3721881bd8e246d20fff5d15bd32365655
9cc0fd3721881bd8e246d20fff5d15bd32365655_0
Q: What is the input to the differential network?
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
We empirically evaluated whether an explicit approach that uses the differential set of tags as a one-hot encoding improves the question generation, or the implicit embedding obtained based on the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids to improve the generation of target question. These are observed to be better than the explicitly obtained context tags as can be seen in Figure FIGREF2 . We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 , for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of generated question given image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector for all possible parameters of our model. INLINEFORM1 is the ground truth question. The log probability for the question is calculated by using joint probability over INLINEFORM2 with the help of chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is length of the sequence, and INLINEFORM1 is the INLINEFORM2 word of the question. We have removed INLINEFORM3 for simplicity. Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or maximum length for the question is reached. We experimented with both sampling and argmax and found out that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (k-d tree) with Euclidean metric to obtain the exemplars. This is obtained through a coarse quantization of nearest neighbors of the training examples into 50 clusters, and selecting the nearest as supporting and farthest as the contrasting exemplars. We experimented with ITML based metric learning BIBREF40 for image features. Surprisingly, the KNN-based approach outperforms the latter one. We also tried random exemplars and different number of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We refereed a similar kind of work done in BIBREF34 for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. 
Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights for the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights for the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion viz., joint, element-wise addition, hadamard and attention method. Each of these variants receives image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of pretrained VGG-19 Net BIBREF43 . INLINEFORM1 are the weights and INLINEFORM2 is the bias for different layers. INLINEFORM3 is the concatenation operator. Similarly, We obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details for other fusion methods are present in supplementary.The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 . M is the training dataset that contains all set of possible triplets. INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the euclidean distance between the target and supporting sample, and target and opposing sample respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. Decoder: Question Generator The role of decoder is to predict the probability for a question, given INLINEFORM0 . RNN provides a nice way to perform conditioning on previous state value using a fixed length hidden vector. The conditional probability of a question token at particular time step INLINEFORM1 is modeled using an LSTM as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8 as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we are feeding the output of the mixture module to LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we are using INLINEFORM12 and INLINEFORM13 as the special token START and STOP respectively. 
The softmax probability for the predicted question token at different time steps is given by the following equations where LSTM refers to the standard LSTM cell equations: INLINEFORM14 Where INLINEFORM0 is the probability distribution over all question tokens. INLINEFORM1 is cross entropy loss. Cost function Our objective is to minimize the total loss, that is the sum of cross entropy loss and triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples, INLINEFORM1 is a constant, which controls both the loss. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where, INLINEFORM0 is the total number of question tokens, INLINEFORM1 is the ground truth label. The code for MDN-VQG model is provided . Variations of Proposed Method While, we advocate the use of multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on the visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19 based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using multimodal differential network for generating embeddings, we also evaluate differential image network for the same. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on Visual Question Generation (VQG) dataset BIBREF5 , which contains human annotated questions based on images of MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use VQG-COCO dataset for our experiments which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset contains five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used the questions from VQA dataset BIBREF1 for training purpose, whereas the work by BIBREF5 uses only the VQG-COCO dataset. VQA-1.0 dataset is also built on images from MS-COCO dataset. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions. We used pretrained caption generation model BIBREF13 to extract captions for VQA dataset as the human annotated captions are not there in the dataset. We also get good results on the VQA dataset (as shown in Table TABREF26 ) which shows that our method doesn't necessitate the presence of ground truth captions. 
We train our model separately for VQG-COCO and VQA dataset. Inference We made use of the 1250 validation images to tune the hyperparameters and are providing the results on test set of VQG-COCO dataset. During inference, We use the Representation module to find the embeddings for the image and ground truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with the captions generated by an Image-Captioning network BIBREF13 for VQG-COCO dataset and the result for that and training details are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: First, we evaluate it against other variants described in section SECREF19 and SECREF10 . Second, we further compare our network with state-of-the-art methods for VQA 1.0 and VQG-COCO dataset. We perform a user study to gauge human opinion on naturalness of the generated question and analyze the word statistics in Figure FIGREF22 . This is an important test as humans are the best deciders of naturalness. We further consider the statistical significance for the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics like BLEU BIBREF46 , METEOR BIBREF47 , ROUGE BIBREF48 , CIDEr BIBREF49 . Although these metrics have not been shown to correlate with `naturalness' of the question these still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores, but the remaining BLEU-n metric scores are present in the supplementary. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance specific differential information that helps in guiding the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered different variations of our method mentioned in section SECREF19 and the various ways to obtain the joint multimodal embedding as described in section SECREF10 . The results for the VQG-COCO test set are given in table TABREF24 . In this table, every block provides the results for one of the variations of obtaining the embeddings and different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works the best in all cases except the Tag Embeddings. Among the ablations, the proposed MDN method works way better than the other variants in terms of BLEU, METEOR and ROUGE metrics by achieving an improvement of 6%, 12% and 18% in the scores respectively over the best other variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question. In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics. 
We improve over the previous state-of-the-art BIBREF35 for VQA dataset by around 6% in BLEU score and 10% in METEOR score. In the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and BIBREF36 by 3.5% in terms of METEOR scores. Statistical Significance Analysis We have analysed Statistical Significance BIBREF50 of our MDN model for VQG for different variations of the mixture module mentioned in section SECREF10 and also against the state-of-the-art methods. The Critical Difference (CD) for Nemenyi BIBREF51 test depends upon the given INLINEFORM0 (confidence level, which is 0.05 in our case) for average ranks and N (number of tested datasets). If the difference in the rank of the two methods lies within CD, then they are not significantly different and vice-versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. Perceptual Realism A human is the best judge of naturalness of any question, We evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 on 175 people. People were shown an image with 2 questions just as in figure FIGREF1 and were asked to rate the naturalness of both the questions on a scale of 1 to 5 where 1 means `Least Natural' and 5 is the `Most Natural'. We provided 175 people with 100 such images from the VQG-COCO validation dataset which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (rated the generated question more or equal to the ground truth question). For the 100 images, on an average 59.7% people were fooled in this experiment and this shows that our model is able to generate natural questions. Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from image and its caption. We also provide ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions and also ensure that the results are statistically significant. In future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 will provide details about training configuration for MDN, Section SECREF9 will explain the various Proposed Methods and we also provide a discussion in section regarding some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDER metric scores for VQG-COCO dataset. We present different experiments with Tag Net in which we explore the performance of various tags (Noun, Verb, and Question tags) and different ways of combining them to get the context vectors. 
Multimodal Differential Network [1] MDN INLINEFORM0 Finding Exemplars: INLINEFORM1 INLINEFORM2 Compute Triplet Embedding: INLINEFORM3 INLINEFORM4 Compute Triplet Fusion Embedding : INLINEFORM5 INLINEFORM6 INLINEFORM7 Compute Triplet Loss: INLINEFORM8 Compute Decode Question Sentence: INLINEFORM9 INLINEFORM10 —————————————————– Triplet Fusion INLINEFORM11 , INLINEFORM12 INLINEFORM13 :Image feature,14x14x512 INLINEFORM14 : Caption feature,1x512 Match Dimension: INLINEFORM15 ,196x512 INLINEFORM16 196x512 If flag==Joint Fusion: INLINEFORM17 INLINEFORM18 , [ INLINEFORM19 (MDN-Mul), INLINEFORM20 (MDN-Add)] If flag==Attention Fusion : INLINEFORM21 Semb INLINEFORM22 Dataset and Training Details Dataset We conduct our experiments on two types of dataset: VQA dataset BIBREF1 , which contains human annotated questions based on images on MS-COCO dataset. Second one is VQG-COCO dataset based on natural question BIBREF55 . VQA dataset VQA dataset BIBREF1 is built on complex images from MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions and each question has 10 possible answers. So there are 248349 QA pair for training, 121512 QA pairs for validating and 244302 QA pairs for testing. We used pre-trained caption generation model BIBREF53 to extract captions for VQA dataset. VQG dataset The VQG-COCO dataset BIBREF55 , is developed for generating natural and engaging questions that are based on common sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset contains 5 natural questions. Training Configuration We have used RMSPROP optimizer to update the model parameter and configured hyper-parameter values to be as follows: INLINEFORM23 to train the classification network . In order to train a triplet model, we have used RMSPROP to optimize the triplet model model parameter and configure hyper-parameter values to be: INLINEFORM24 . We also used learning rate decay to decrease the learning rate on every epoch by a factor given by: INLINEFORM25 where values of a=1500 and b=1250 are set empirically. Ablation Analysis of Model While, we advocate the use of multimodal differential network (MDN) for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture namely (a) Differential Image Network, (b) Tag net and (c) Place net. These are described in detail as follows: Differential Image Network For obtaining the exemplar image based context embedding, we propose a triplet network consist of three network, one is target net, supporting net and opposing net. All these three networks designed with convolution neural network and shared the same parameters. The weights of this network are learnt through end-to-end learning using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhances the difference between opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 where INLINEFORM29 are the weights of the CNN. This is illustrated in figure FIGREF43 . Tag net The tag net consists of two parts Context Extractor & Tag Embedding Net. This is illustrated in figure FIGREF45 . 
Tag net The tag net consists of two parts: a Context Extractor and a Tag Embedding Net. This is illustrated in figure FIGREF45. Extract Context: The first step is to extract the caption of the image using the NeuralTalk2 BIBREF53 model. We then find the part-of-speech (POS) tags present in the caption. POS taggers have been developed for two well-known corpora, the Brown Corpus and the Penn Treebank. For our work, we use the Brown Corpus tags. The tags are clustered into three categories, namely Noun tags, Verb tags and Question tags (What, Where, ...). The Noun tag consists of all the nouns and pronouns present in the caption sentence and, similarly, the Verb tag consists of the verbs and adverbs present in the caption sentence. The Question tags consist of the seven well-known question words, i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of the vocabulary size. For generalization, we consider 5 tokens from each tag category. Tag Embedding Net: The embedding network consists of a word embedding followed by a temporal convolutional neural network followed by max-pooling. In the first step, the sparse high-dimensional one-hot vector is transformed into a dense low-dimensional vector using a word embedding. After this, we apply temporal convolution on the word embedding vectors. Uni-gram, bi-gram and tri-gram features are computed by applying convolution filters of size 1, 2 and 3 respectively. Finally, we apply max-pooling on these to get a vector representation of the tags, as shown in figure FIGREF45. We concatenate all the tag words and pass them through a fully connected layer to get a feature dimension of 512. We also explored joint networks based on concatenation of all the tags, element-wise addition and element-wise multiplication of the tag vectors. However, we observed that convolution followed by max-pooling with joint concatenation gives better performance in terms of CIDEr score. INLINEFORM30 where T_CNN is the temporal convolutional neural network applied to the word embedding vector with kernel size three. Place net Visual object and scene recognition plays a crucial role in understanding an image. Here, places in the image are labeled with scene semantic categories BIBREF45, which comprise a large and diverse set of environments in the world, such as amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center and fish pond. So we use the scene semantic categories present in the image as place-based context to generate natural questions. Places365 is a convolutional neural network modeled to classify 365 types of scene categories and trained on the Places2 dataset, which consists of 1.8 million scene images. We use a pre-trained VGG16-Places365 network to obtain place-based context embedding features for the various scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 where INLINEFORM33 is Place365_CNN. We extract INLINEFORM34 features of dimension 14x14x512 for the attention model and FC8 features of dimension 365 for the joint, addition and hadamard models of Places365. Finally, we use a linear transformation to obtain a 512-dimensional vector. We explored using the CONV5 features of Places365 with dimension 14x14x512, the FC7 features with dimension 4096 and the FC8 features with dimension 365.
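Returning to the tag embedding net described above, a rough sketch is given below. Vocabulary size, embedding dimension and channel counts are assumptions, and the exact architecture in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TagEmbeddingNet(nn.Module):
    """Word embedding -> temporal convolutions with kernel sizes 1/2/3
    (uni/bi/tri-gram features) -> max-pooling over time -> linear layer
    producing a 512-d tag context vector."""
    def __init__(self, vocab_size=10000, emb_dim=300, channels=128, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, channels, kernel_size=k, padding=k - 1) for k in (1, 2, 3)]
        )
        self.fc = nn.Linear(3 * channels, out_dim)

    def forward(self, tag_tokens):                        # (B, n_tags) integer ids
        x = self.embed(tag_tokens).transpose(1, 2)        # (B, emb_dim, n_tags)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))          # (B, 512)
```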
Ablation Analysis Sampling Exemplar: KNN vs ITML Our method is aimed at using efficient exemplar-based retrieval techniques. We have experimented with various exemplar methods, such as ITML-based metric learning BIBREF40 on image features and KNN-based approaches. We observed that a KNN-based approach (k-d tree) with the Euclidean metric is an efficient method for finding exemplars. We also observed that ITML is computationally expensive and depends on the training procedure. The table provides the experimental results for the Differential Image Network variant with k (number of exemplars) = 2 and the Hadamard method. Question Generation approaches: Sampling vs Argmax We obtained the decoding using the standard practice followed in the literature BIBREF38, which selects the argmax sentence. We also evaluated our method by sampling from the probability distributions and provide the results for our proposed MDN-Joint method on the VQG dataset. How are exemplars improving the embedding? In the multimodal differential network, we use exemplars and train them using a triplet loss. It is known that with a triplet network we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42, BIBREF41. The joint embedding is obtained between the image and language representations, so the improved representation helps in obtaining an improved context vector. Further, we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars instead of the nearest neighbours for MDN (k=R in table TABREF35). In this case the method performs similarly to the baseline, which suggests that with random exemplars the model learns to ignore the cue. Are captions necessary for our method? They are not strictly necessary. In our method, we have used an existing image captioning method BIBREF13 to generate captions for images that did not have them. For the VQG dataset, captions were available and we used them; for the VQA dataset, captions were not available and we generated captions during training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions. While the caption gives a scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of the main paper: First image: Caption - A young man skateboarding around little cones. Our question - Is this a skateboard competition? Second image: Caption - A small child is standing on a pair of skis. Our question - How old is that little girl? Intuition behind the Triplet Network: The intuition behind the use of triplet networks is clear from the paper BIBREF41 that first advocated their use. The main idea is that when we learn distance functions for which similar representations are "close" and dissimilar ones "far", it is not clear with respect to what measure "close" and "far" are defined. By incorporating a triplet, we learn distance functions that capture "A is more similar to B than to C". Learning such measures allows us to bring target image-caption joint embeddings closer to supporting exemplars than to contrasting exemplars.
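The k-d tree retrieval discussed in this section can be sketched as below. This is illustrative only: it works on precomputed feature vectors, uses a brute-force farthest-point search for the contrasting exemplars, and omits the coarse 50-cluster quantization mentioned in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_exemplars(features, k=2):
    """For each item, return the k nearest neighbours (supporting exemplars)
    and the k farthest items (contrasting exemplars) under the Euclidean metric.
    The full pairwise-distance matrix is used only for clarity; it is O(n^2)
    in memory and would need an approximation at scale."""
    tree = cKDTree(features)
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    supporting, contrasting = [], []
    for i, f in enumerate(features):
        _, idx = tree.query(f, k=k + 1)                     # the point itself is returned too
        supporting.append([j for j in np.atleast_1d(idx) if j != i][:k])
        contrasting.append(np.argsort(dists[i])[-k:].tolist())
    return supporting, contrasting
```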
Analysis of Network Analysis of Tag Context Tags are language-based context. These tags are extracted from the caption, except for the question tags, which are fixed as the seven 'Wh-words' (What, Why, Where, Who, When, Which and How). We have experimented with the Noun tag, Verb tag and 'Wh-word' tag as shown in the tables. We have also experimented, for each tag category, with varying the number of tags from 1 to 7. We combined the different tags using 1D convolution, concatenation, and addition of all the tags, and observed that the concatenation mechanism gives better results. As we can see in table TABREF33, taking Nouns, Verbs and Wh-words as context, we achieve a significant improvement in the BLEU, METEOR and CIDEr scores over the basic models that take only the image and the caption respectively. Taking Nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in BLEU score, 2% in METEOR and 34.4% in CIDEr score over the basic Image model. Similarly, taking Verbs as context gives us an increase of 1.3% in BLEU score, 2.1% in METEOR and 33.5% in CIDEr score over the basic Image model. The best result comes when we take 3 Wh-words as context and apply the Hadamard model with concatenation of the 3 Wh-words. In Table TABREF34 we also show the results when we take more than one word as context. Here we show that for 3 words, i.e. 3 nouns, 3 verbs and 3 Wh-words, the Concatenation model performs best. In this table, the conv model uses 1D convolution to combine the tags and the joint model combines all the tags. Analysis of Context: Exemplars In the Multimodal Differential Network and the Differential Image Network, we use exemplar images (target, supporting and opposing images) to obtain the differential context. We have performed experiments with a single exemplar (K=1), i.e. one supporting and one opposing image along with the target image, and with two exemplars (K=2), i.e. two supporting and two opposing images along with a single target image. Similarly, we have performed experiments for K=3 and K=4, as shown in table TABREF35. Mixture Module: Other Variations The Hadamard method uses element-wise multiplication and the Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between the image feature and caption feature vectors, while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of the attention probability vector INLINEFORM36 and the convolutional features INLINEFORM37. The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to the stacked attention method BIBREF54. The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolution feature map from the fifth convolution layer of the VGG-19 net for image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for the different layers. We evaluate the different approaches and provide results for them. Here INLINEFORM44 represents element-wise addition.
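A rough sketch of this attention-based fusion is given below. The hidden size, the tanh scoring layer and the final element-wise addition of the attended image feature and caption vector are assumptions consistent with the description above, not the paper's exact equations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """A caption vector attends over the 196x512 convolutional map and the
    attended image feature is added element-wise to the caption feature."""
    def __init__(self, dim=512, hidden=512):
        super().__init__()
        self.img_proj = nn.Linear(dim, hidden)
        self.cap_proj = nn.Linear(dim, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, conv_feats, caption_vec):           # (B, 196, 512), (B, 512)
        h = torch.tanh(self.img_proj(conv_feats) + self.cap_proj(caption_vec).unsqueeze(1))
        alpha = F.softmax(self.score(h).squeeze(-1), dim=1)       # (B, 196) attention probabilities
        attended = (alpha.unsqueeze(-1) * conv_feats).sum(dim=1)  # (B, 512) weighted average
        return attended + caption_vec
```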
Evaluation Metrics Our task is similar to the encoder-decoder framework of machine translation, so we use the same evaluation metrics as used in machine translation. BLEU BIBREF46 is the first metric we use to measure the correlation between a generated question and the ground-truth question. The BLEU score measures precision, i.e., how many words in the predicted question appear in the reference question. The BLEU-n score measures n-gram precision by counting n-gram co-occurrences with the reference sentences; we evaluate BLEU scores for n from 1 to 4. The ROUGE-n BIBREF48 score works similarly to BLEU-n, except that it measures recall instead of precision, i.e., how many words of the reference question appear in the predicted question. Another version of the ROUGE metric is ROUGE-L, which measures the longest common subsequence between the generated and reference questions. The METEOR BIBREF47 score is another useful evaluation metric that calculates the similarity between the generated question and the reference by considering synonyms, stemming and paraphrases; it measures the word matches between the predicted question and the reference question. In VQG, it computes the word-match score between the predicted question and the five reference questions. The CIDEr BIBREF49 score is a consensus-based evaluation metric that measures human-likeness, i.e., whether the sentence could have been written by a human. The consensus is measured by how often the n-grams of the predicted question appear in the reference questions, with n-grams that appear frequently across all reference questions in the dataset treated as less informative and down-weighted. We provide our results using all these metrics and compare them with the existing baselines.
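As a concrete illustration, BLEU-1 to BLEU-4 against multiple reference questions can be computed with NLTK as below; the questions shown are made-up examples, not items from the dataset.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "how old is that little girl".split(),
    "what is the child doing on the skis".split(),
]
hypothesis = "how old is the little girl".split()

smooth = SmoothingFunction().method1          # avoids zero scores for short sentences
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)            # uniform weights over the 1..n-gram precisions
    score = sentence_bleu(references, hypothesis, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```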
Q: How do the authors define a differential network? Text: Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because, the questions generated by bots are not natural (human-like) and therefore does not translate to improved human dialog. Therefore it is imperative that improvement in the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification. An interesting line of work in this respect is the work of BIBREF5 . Here the authors have proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event. Whereas for a little girl, the questions could be related to age rather than the action. How can one have widely varying context provided for generating questions? To solve this problem, we use the context obtained by considering exemplars, specifically we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part of Speech tags. The human annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe this implicit differential context to perform better than an explicit keyword based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) A method to incorporate exemplars to learn differential embeddings that captures the subtle differences between supporting and contrasting examples and aid in generating natural questions. (2) We provide Multimodal differential embeddings, as image or text alone does not capture the whole context and we show that these embeddings outperform the ablations which incorporate cues such as only image, or tags or place information. (3) We provide a thorough comparison of the proposed network against state-of-the-art benchmarks along with a user study and statistical significance test. Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
We empirically evaluated whether an explicit approach that uses the differential set of tags as a one-hot encoding improves the question generation, or the implicit embedding obtained based on the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids to improve the generation of target question. These are observed to be better than the explicitly obtained context tags as can be seen in Figure FIGREF2 . We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 , for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of generated question given image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector for all possible parameters of our model. INLINEFORM1 is the ground truth question. The log probability for the question is calculated by using joint probability over INLINEFORM2 with the help of chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is length of the sequence, and INLINEFORM1 is the INLINEFORM2 word of the question. We have removed INLINEFORM3 for simplicity. Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or maximum length for the question is reached. We experimented with both sampling and argmax and found out that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (k-d tree) with Euclidean metric to obtain the exemplars. This is obtained through a coarse quantization of nearest neighbors of the training examples into 50 clusters, and selecting the nearest as supporting and farthest as the contrasting exemplars. We experimented with ITML based metric learning BIBREF40 for image features. Surprisingly, the KNN-based approach outperforms the latter one. We also tried random exemplars and different number of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We refereed a similar kind of work done in BIBREF34 for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. 
Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights for the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights for the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion viz., joint, element-wise addition, hadamard and attention method. Each of these variants receives image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of pretrained VGG-19 Net BIBREF43 . INLINEFORM1 are the weights and INLINEFORM2 is the bias for different layers. INLINEFORM3 is the concatenation operator. Similarly, We obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details for other fusion methods are present in supplementary.The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 . M is the training dataset that contains all set of possible triplets. INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the euclidean distance between the target and supporting sample, and target and opposing sample respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. Decoder: Question Generator The role of decoder is to predict the probability for a question, given INLINEFORM0 . RNN provides a nice way to perform conditioning on previous state value using a fixed length hidden vector. The conditional probability of a question token at particular time step INLINEFORM1 is modeled using an LSTM as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8 as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we are feeding the output of the mixture module to LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we are using INLINEFORM12 and INLINEFORM13 as the special token START and STOP respectively. 
The softmax probability for the predicted question token at different time steps is given by the following equations, where LSTM refers to the standard LSTM cell equations: INLINEFORM14 where INLINEFORM0 is the probability distribution over all question tokens and INLINEFORM1 is the cross-entropy loss. Cost function Our objective is to minimize the total loss, that is, the sum of the cross-entropy loss and the triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples and INLINEFORM1 is a constant that controls the contribution of the two losses. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross-entropy loss between the predicted and ground-truth questions and is given by: INLINEFORM4 where INLINEFORM0 is the total number of question tokens and INLINEFORM1 is the ground-truth label. The code for the MDN-VQG model is provided. Variations of Proposed Method While we advocate the use of the multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19-based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using the multimodal differential network for generating embeddings, we also evaluate the differential image network for the same purpose. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on the Visual Question Generation (VQG) dataset BIBREF5 , which contains human-annotated questions based on images of the MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use the VQG-COCO dataset for our experiments, which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset has five natural questions and five ground-truth captions. It is worth noting that the work of BIBREF36 also used the questions from the VQA dataset BIBREF1 for training purposes, whereas the work by BIBREF5 uses only the VQG-COCO dataset. The VQA-1.0 dataset is also built on images from the MS-COCO dataset. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions. We used a pretrained caption generation model BIBREF13 to extract captions for the VQA dataset as human-annotated captions are not available in that dataset. We also get good results on the VQA dataset (as shown in Table TABREF26 ), which shows that our method does not necessitate the presence of ground-truth captions. We train our model separately for the VQG-COCO and VQA datasets.
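Putting the decoder and the cost function described above together, a minimal teacher-forced training sketch could look as follows. The vocabulary size, hidden sizes, the exact handling of the START/STOP tokens and the weighting constant gamma are all assumptions.

```python
import torch
import torch.nn as nn

class QuestionDecoder(nn.Module):
    """LSTM language model: the fused mixture embedding is fed at the first
    time step, then the embedding of the previous word at each later step."""
    def __init__(self, vocab_size, emb_dim=512, hidden=512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, s_emb, question_tokens):            # (B, D), (B, T)
        w = self.word_embed(question_tokens[:, :-1])      # teacher forcing: shifted question
        inputs = torch.cat([s_emb.unsqueeze(1), w], dim=1)
        h, _ = self.lstm(inputs)
        return self.out(h)                                # logits, (B, T, vocab)

# Sketch of the total objective: cross-entropy on the decoded tokens plus a
# weighted triplet term (gamma is a placeholder value, not the paper's).
# ce = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), question_tokens.reshape(-1))
# loss = ce + gamma * triplet_loss_value
```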
Inference We made use of the 1250 validation images to tune the hyperparameters and provide the results on the test set of the VQG-COCO dataset. During inference, we use the Representation module to find the embeddings for the image and the ground-truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and the ground-truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with the captions generated by an image-captioning network BIBREF13 for the VQG-COCO dataset; the results for that, along with training details, are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: First, we evaluate it against the other variants described in sections SECREF19 and SECREF10 . Second, we further compare our network with state-of-the-art methods on the VQA 1.0 and VQG-COCO datasets. We perform a user study to gauge human opinion on the naturalness of the generated questions and analyze the word statistics in Figure FIGREF22 . This is an important test as humans are the best judges of naturalness. We further consider the statistical significance of the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics like BLEU BIBREF46 , METEOR BIBREF47 , ROUGE BIBREF48 and CIDEr BIBREF49 . Although these metrics have not been shown to correlate with the `naturalness' of the question, they still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores; the remaining BLEU-n metric scores are present in the supplementary. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance-specific differential information that helps in guiding the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered the different variations of our method mentioned in section SECREF19 and the various ways to obtain the joint multimodal embedding as described in section SECREF10 . The results for the VQG-COCO test set are given in table TABREF24 . In this table, every block provides the results for one of the variations of obtaining the embeddings and the different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works best in all cases except the Tag embeddings. Among the ablations, the proposed MDN method works considerably better than the other variants in terms of the BLEU, METEOR and ROUGE metrics, achieving an improvement of 6%, 12% and 18% in the respective scores over the best other variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for the VQG-COCO dataset. The comparable baselines for our method are the image-based and caption-based models, in which we use either only the image or only the caption embedding to generate the question. In both tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for the VQG-COCO dataset this is 15% for both metrics.
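The greedy (argmax) decoding used at inference can be sketched as follows, building on the decoder sketch given earlier; the STOP token id and maximum length are assumptions.

```python
import torch

@torch.no_grad()
def generate_question(decoder, s_emb, stop_id, max_len=20):
    """Feed the fused embedding first, then the embedding of the previously
    predicted word, picking the argmax token at each step until STOP."""
    hidden = None
    inp = s_emb.unsqueeze(1)                              # (1, 1, D)
    tokens = []
    for _ in range(max_len):
        h, hidden = decoder.lstm(inp, hidden)
        next_id = decoder.out(h[:, -1]).argmax(dim=-1)    # most probable word id
        if next_id.item() == stop_id:
            break
        tokens.append(next_id.item())
        inp = decoder.word_embed(next_id).unsqueeze(1)
    return tokens
```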
The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module.
Q: How do the authors define exemplars? Text: Introduction To understand the progress towards multimedia vision and language understanding, a visual Turing test was proposed by BIBREF0 that was aimed at visual question answering BIBREF1 . Visual Dialog BIBREF2 is a natural extension for VQA. Current dialog systems as evaluated in BIBREF3 show that when trained between bots, AI-AI dialog systems show improvement, but that does not translate to actual improvement for Human-AI dialog. This is because, the questions generated by bots are not natural (human-like) and therefore does not translate to improved human dialog. Therefore it is imperative that improvement in the quality of questions will enable dialog agents to perform well in human interactions. Further, BIBREF4 show that unanswered questions can be used for improving VQA, Image captioning and Object Classification. An interesting line of work in this respect is the work of BIBREF5 . Here the authors have proposed the challenging task of generating natural questions for an image. One aspect that is central to a question is the context that is relevant to generate it. However, this context changes for every image. As can be seen in Figure FIGREF1 , an image with a person on a skateboard would result in questions related to the event. Whereas for a little girl, the questions could be related to age rather than the action. How can one have widely varying context provided for generating questions? To solve this problem, we use the context obtained by considering exemplars, specifically we use the difference between relevant and irrelevant exemplars. We consider different contexts in the form of Location, Caption, and Part of Speech tags. The human annotated questions are (b) for the first image and (a) for the second image. Our method implicitly uses a differential context obtained through supporting and contrasting exemplars to obtain a differentiable embedding. This embedding is used by a question decoder to decode the appropriate question. As discussed further, we observe this implicit differential context to perform better than an explicit keyword based context. The difference between the two approaches is illustrated in Figure FIGREF2 . This also allows for better optimization as we can backpropagate through the whole network. We provide detailed empirical evidence to support our hypothesis. As seen in Figure FIGREF1 our method generates natural questions and improves over the state-of-the-art techniques for this problem. To summarize, we propose a multimodal differential network to solve the task of visual question generation. Our contributions are: (1) A method to incorporate exemplars to learn differential embeddings that captures the subtle differences between supporting and contrasting examples and aid in generating natural questions. (2) We provide Multimodal differential embeddings, as image or text alone does not capture the whole context and we show that these embeddings outperform the ablations which incorporate cues such as only image, or tags or place information. (3) We provide a thorough comparison of the proposed network against state-of-the-art benchmarks along with a user study and statistical significance test. Related Work Generating a natural and engaging question is an interesting and challenging task for a smart robot (like chat-bot). It is a step towards having a natural visual dialog instead of the widely prevalent visual question answering bots. 
Further, having the ability to ask natural questions based on different contexts is also useful for artificial agents that can interact with visually impaired people. While the task of generating question automatically is well studied in NLP community, it has been relatively less studied for image-related natural questions. This is still a difficult task BIBREF5 that has gained recent interest in the community. Recently there have been many deep learning based approaches as well for solving the text-based question generation task such as BIBREF6 . Further, BIBREF7 have proposed a method to generate a factoid based question based on triplet set {subject, relation and object} to capture the structural representation of text and the corresponding generated question. These methods, however, were limited to text-based question generation. There has been extensive work done in the Vision and Language domain for solving image captioning, paragraph generation, Visual Question Answering (VQA) and Visual Dialog. BIBREF8 , BIBREF9 , BIBREF10 proposed conventional machine learning methods for image description. BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 have generated descriptive sentences from images with the help of Deep Networks. There have been many works for solving Visual Dialog BIBREF19 , BIBREF20 , BIBREF2 , BIBREF21 , BIBREF22 . A variety of methods have been proposed by BIBREF23 , BIBREF24 , BIBREF1 , BIBREF25 , BIBREF26 , BIBREF27 for solving VQA task including attention-based methods BIBREF28 , BIBREF29 , BIBREF30 , BIBREF31 , BIBREF32 , BIBREF33 , BIBREF34 . However, Visual Question Generation (VQG) is a separate task which is of interest in its own right and has not been so well explored BIBREF5 . This is a vision based novel task aimed at generating natural and engaging question for an image. BIBREF35 proposed a method for continuously generating questions from an image and subsequently answering those questions. The works closely related to ours are that of BIBREF5 and BIBREF36 . In the former work, the authors used an encoder-decoder based framework whereas in the latter work, the authors extend it by using a variational autoencoder based sequential routine to obtain natural questions by performing sampling of the latent variable. Approach In this section, we clarify the basis for our approach of using exemplars for question generation. To use exemplars for our method, we need to ensure that our exemplars can provide context and that our method generates valid exemplars. We first analyze whether the exemplars are valid or not. We illustrate this in figure FIGREF3 . We used a pre-trained RESNET-101 BIBREF37 object classification network on the target, supporting and contrasting images. We observed that the supporting image and target image have quite similar probability scores. The contrasting exemplar image, on the other hand, has completely different probability scores. Exemplars aim to provide appropriate context. To better understand the context, we experimented by analysing the questions generated through an exemplar. We observed that indeed a supporting exemplar could identify relevant tags (cows in Figure FIGREF3 ) for generating questions. We improve use of exemplars by using a triplet network. This network ensures that the joint image-caption embedding for the supporting exemplar are closer to that of the target image-caption and vice-versa. 
We empirically evaluated whether an explicit approach that uses the differential set of tags as a one-hot encoding improves the question generation, or the implicit embedding obtained based on the triplet network. We observed that the implicit multimodal differential network empirically provided better context for generating questions. Our understanding of this phenomenon is that both target and supporting exemplars generate similar questions whereas contrasting exemplars generate very different questions from the target question. The triplet network that enhances the joint embedding thus aids to improve the generation of target question. These are observed to be better than the explicitly obtained context tags as can be seen in Figure FIGREF2 . We now explain our method in detail. Method The task in visual question generation (VQG) is to generate a natural language question INLINEFORM0 , for an image INLINEFORM1 . We consider a set of pre-generated context INLINEFORM2 from image INLINEFORM3 . We maximize the conditional probability of generated question given image and context as follows: DISPLAYFORM0 where INLINEFORM0 is a vector for all possible parameters of our model. INLINEFORM1 is the ground truth question. The log probability for the question is calculated by using joint probability over INLINEFORM2 with the help of chain rule. For a particular question, the above term is obtained as: INLINEFORM3 where INLINEFORM0 is length of the sequence, and INLINEFORM1 is the INLINEFORM2 word of the question. We have removed INLINEFORM3 for simplicity. Our method is based on a sequence to sequence network BIBREF38 , BIBREF12 , BIBREF39 . The sequence to sequence network has a text sequence as input and output. In our method, we take an image as input and generate a natural question as output. The architecture for our model is shown in Figure FIGREF4 . Our model contains three main modules, (a) Representation Module that extracts multimodal features (b) Mixture Module that fuses the multimodal representation and (c) Decoder that generates question using an LSTM-based language model. During inference, we sample a question word INLINEFORM0 from the softmax distribution and continue sampling until the end token or maximum length for the question is reached. We experimented with both sampling and argmax and found out that argmax works better. This result is provided in the supplementary material. Multimodal Differential Network The proposed Multimodal Differential Network (MDN) consists of a representation module and a joint mixture module. We used an efficient KNN-based approach (k-d tree) with Euclidean metric to obtain the exemplars. This is obtained through a coarse quantization of nearest neighbors of the training examples into 50 clusters, and selecting the nearest as supporting and farthest as the contrasting exemplars. We experimented with ITML based metric learning BIBREF40 for image features. Surprisingly, the KNN-based approach outperforms the latter one. We also tried random exemplars and different number of exemplars and found that INLINEFORM0 works best. We provide these results in the supplementary material. We use a triplet network BIBREF41 , BIBREF42 in our representation module. We refereed a similar kind of work done in BIBREF34 for building our triplet network. The triplet network consists of three sub-parts: target, supporting, and contrasting networks. All three networks share the same parameters. 
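Returning to the exemplar selection step described earlier in this section (a k-d tree with a Euclidean metric, taking the nearest neighbour as the supporting exemplar and the farthest retrieved point as the contrasting one), one possible sketch is shown below. The paper describes a coarse quantization of neighbours into 50 clusters; here a 50-point neighbourhood retrieval is used as a simple stand-in, and the feature matrix is a random placeholder for precomputed CNN (or joint image-caption) features.

```python
import numpy as np
from scipy.spatial import cKDTree

def find_exemplars(features, k=50):
    """For each training image, pick a supporting and a contrasting exemplar.

    features: (N, D) array of precomputed embeddings.
    k: size of the retrieved neighbourhood (an assumption standing in for the
       paper's 50-cluster quantization).
    Returns two index arrays: supporting[i], contrasting[i] for image i.
    """
    tree = cKDTree(features)                    # k-d tree, Euclidean metric
    dists, idx = tree.query(features, k=k + 1)  # +1: the query point is its
                                                # own nearest neighbour
    supporting = idx[:, 1]                      # nearest non-identical point
    contrasting = idx[:, -1]                    # farthest point in the
                                                # retrieved neighbourhood
    return supporting, contrasting

# Toy usage with random features standing in for real embeddings.
feats = np.random.rand(1000, 512).astype(np.float32)
sup, con = find_exemplars(feats, k=50)
print(sup[:5], con[:5])
```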
Given an image INLINEFORM0 we obtain an embedding INLINEFORM1 using a CNN parameterized by a function INLINEFORM2 where INLINEFORM3 are the weights for the CNN. The caption INLINEFORM4 results in a caption embedding INLINEFORM5 through an LSTM parameterized by a function INLINEFORM6 where INLINEFORM7 are the weights for the LSTM. This is shown in part 1 of Figure FIGREF4 . Similarly we obtain image embeddings INLINEFORM8 & INLINEFORM9 and caption embeddings INLINEFORM10 & INLINEFORM11 . DISPLAYFORM0 The Mixture module brings the image and caption embeddings to a joint feature embedding space. The input to the module is the embeddings obtained from the representation module. We have evaluated four different approaches for fusion viz., joint, element-wise addition, hadamard and attention method. Each of these variants receives image features INLINEFORM0 & the caption embedding INLINEFORM1 , and outputs a fixed dimensional feature vector INLINEFORM2 . The Joint method concatenates INLINEFORM3 & INLINEFORM4 and maps them to a fixed length feature vector INLINEFORM5 as follows: DISPLAYFORM0 where INLINEFORM0 is the 4096-dimensional convolutional feature from the FC7 layer of pretrained VGG-19 Net BIBREF43 . INLINEFORM1 are the weights and INLINEFORM2 is the bias for different layers. INLINEFORM3 is the concatenation operator. Similarly, We obtain context vectors INLINEFORM0 & INLINEFORM1 for the supporting and contrasting exemplars. Details for other fusion methods are present in supplementary.The aim of the triplet network BIBREF44 is to obtain context vectors that bring the supporting exemplar embeddings closer to the target embedding and vice-versa. This is obtained as follows: DISPLAYFORM0 where INLINEFORM0 is the euclidean distance between two embeddings INLINEFORM1 and INLINEFORM2 . M is the training dataset that contains all set of possible triplets. INLINEFORM3 is the triplet loss function. This is decomposed into two terms, one that brings the supporting sample closer and one that pushes the contrasting sample further. This is given by DISPLAYFORM0 Here INLINEFORM0 represent the euclidean distance between the target and supporting sample, and target and opposing sample respectively. The parameter INLINEFORM1 controls the separation margin between these and is obtained through validation data. Decoder: Question Generator The role of decoder is to predict the probability for a question, given INLINEFORM0 . RNN provides a nice way to perform conditioning on previous state value using a fixed length hidden vector. The conditional probability of a question token at particular time step INLINEFORM1 is modeled using an LSTM as used in machine translation BIBREF38 . At time step INLINEFORM2 , the conditional probability is denoted by INLINEFORM3 , where INLINEFORM4 is the hidden state of the LSTM cell at time step INLINEFORM5 , which is conditioned on all the previously generated words INLINEFORM6 . The word with maximum probability in the probability distribution of the LSTM cell at step INLINEFORM7 is fed as an input to the LSTM cell at step INLINEFORM8 as shown in part 3 of Figure FIGREF4 . At INLINEFORM9 , we are feeding the output of the mixture module to LSTM. INLINEFORM10 are the predicted question tokens for the input image INLINEFORM11 . Here, we are using INLINEFORM12 and INLINEFORM13 as the special token START and STOP respectively. 
The softmax probability for the predicted question token at different time steps is given by the following equations where LSTM refers to the standard LSTM cell equations: INLINEFORM14 Where INLINEFORM0 is the probability distribution over all question tokens. INLINEFORM1 is cross entropy loss. Cost function Our objective is to minimize the total loss, that is the sum of cross entropy loss and triplet loss over all training examples. The total loss is: DISPLAYFORM0 where INLINEFORM0 is the total number of samples, INLINEFORM1 is a constant, which controls both the loss. INLINEFORM2 is the triplet loss function EQREF13 . INLINEFORM3 is the cross entropy loss between the predicted and ground truth questions and is given by: INLINEFORM4 where, INLINEFORM0 is the total number of question tokens, INLINEFORM1 is the ground truth label. The code for MDN-VQG model is provided . Variations of Proposed Method While, we advocate the use of multimodal differential network for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture. These are as follows: Tag Net: In this variant, we consider extracting the part-of-speech (POS) tags for the words present in the caption and obtaining a Tag embedding by considering different methods of combining the one-hot vectors. Further details and experimental results are present in the supplementary. This Tag embedding is then combined with the image embedding and provided to the decoder network. Place Net: In this variant we explore obtaining embeddings based on the visual scene understanding. This is obtained using a pre-trained PlaceCNN BIBREF45 that is trained to classify 365 different types of scene categories. We then combine the activation map for the input image and the VGG-19 based place embedding to obtain the joint embedding used by the decoder. Differential Image Network: Instead of using multimodal differential network for generating embeddings, we also evaluate differential image network for the same. In this case, the embedding does not include the caption but is based only on the image feature. We also experimented with using multiple exemplars and random exemplars. Further details, pseudocode and results regarding these are present in the supplementary material. Dataset We conduct our experiments on Visual Question Generation (VQG) dataset BIBREF5 , which contains human annotated questions based on images of MS-COCO dataset. This dataset was developed for generating natural and engaging questions based on common sense reasoning. We use VQG-COCO dataset for our experiments which contains a total of 2500 training images, 1250 validation images, and 1250 testing images. Each image in the dataset contains five natural questions and five ground truth captions. It is worth noting that the work of BIBREF36 also used the questions from VQA dataset BIBREF1 for training purpose, whereas the work by BIBREF5 uses only the VQG-COCO dataset. VQA-1.0 dataset is also built on images from MS-COCO dataset. It contains a total of 82783 images for training, 40504 for validation and 81434 for testing. Each image is associated with 3 questions. We used pretrained caption generation model BIBREF13 to extract captions for VQA dataset as the human annotated captions are not there in the dataset. We also get good results on the VQA dataset (as shown in Table TABREF26 ) which shows that our method doesn't necessitate the presence of ground truth captions. 
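Before moving on to training, the overall objective described in the Cost function subsection above (a cross-entropy term over the decoded question tokens plus a weighted triplet term) can be sketched as follows. The margin, the weighting constant and the tensor shapes are illustrative assumptions and not the authors' exact settings; the triplet term is written as one standard margin-based instantiation of the loss in the text.

```python
import torch
import torch.nn.functional as F

def triplet_loss(t, s, c, margin=0.5):
    """Margin-based triplet hinge: pull the supporting embedding s towards the
    target embedding t and push the contrasting embedding c away.
    (One standard instantiation; the margin value is an assumption.)"""
    d_pos = F.pairwise_distance(t, s)
    d_neg = F.pairwise_distance(t, c)
    return F.relu(d_pos - d_neg + margin).mean()

def total_loss(logits, question_tokens, t_emb, s_emb, c_emb, lam=0.1):
    """Cross-entropy over decoded question tokens plus a weighted triplet loss.

    logits:          (batch, seq_len, vocab) scores from the LSTM decoder
    question_tokens: (batch, seq_len) ground-truth token ids
    lam:             weighting constant between the two terms (assumed value)
    """
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                         question_tokens.reshape(-1))
    return ce + lam * triplet_loss(t_emb, s_emb, c_emb)

# Greedy (argmax) decoding, which the paper reports working better than
# sampling, would simply feed logits[:, t, :].argmax(-1) as the input token
# for step t + 1 until the STOP token or the maximum length is reached.
```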
We train our model separately for VQG-COCO and VQA dataset. Inference We made use of the 1250 validation images to tune the hyperparameters and are providing the results on test set of VQG-COCO dataset. During inference, We use the Representation module to find the embeddings for the image and ground truth caption without using the supporting and contrasting exemplars. The mixture module provides the joint representation of the target image and ground truth caption. Finally, the decoder takes in the joint features and generates the question. We also experimented with the captions generated by an Image-Captioning network BIBREF13 for VQG-COCO dataset and the result for that and training details are present in the supplementary material. Experiments We evaluate our proposed MDN method in the following ways: First, we evaluate it against other variants described in section SECREF19 and SECREF10 . Second, we further compare our network with state-of-the-art methods for VQA 1.0 and VQG-COCO dataset. We perform a user study to gauge human opinion on naturalness of the generated question and analyze the word statistics in Figure FIGREF22 . This is an important test as humans are the best deciders of naturalness. We further consider the statistical significance for the various ablations as well as the state-of-the-art models. The quantitative evaluation is conducted using standard metrics like BLEU BIBREF46 , METEOR BIBREF47 , ROUGE BIBREF48 , CIDEr BIBREF49 . Although these metrics have not been shown to correlate with `naturalness' of the question these still provide a reasonable quantitative measure for comparison. Here we only provide the BLEU1 scores, but the remaining BLEU-n metric scores are present in the supplementary. We observe that the proposed MDN provides improved embeddings to the decoder. We believe that these embeddings capture instance specific differential information that helps in guiding the question generation. Details regarding the metrics are given in the supplementary material. Ablation Analysis We considered different variations of our method mentioned in section SECREF19 and the various ways to obtain the joint multimodal embedding as described in section SECREF10 . The results for the VQG-COCO test set are given in table TABREF24 . In this table, every block provides the results for one of the variations of obtaining the embeddings and different ways of combining them. We observe that the Joint Method (JM) of combining the embeddings works the best in all cases except the Tag Embeddings. Among the ablations, the proposed MDN method works way better than the other variants in terms of BLEU, METEOR and ROUGE metrics by achieving an improvement of 6%, 12% and 18% in the scores respectively over the best other variant. Baseline and State-of-the-Art The comparison of our method with various baselines and state-of-the-art methods is provided in table TABREF26 for VQA 1.0 and table TABREF27 for VQG-COCO dataset. The comparable baselines for our method are the image based and caption based models in which we use either only the image or the caption embedding and generate the question. In both the tables, the first block consists of the current state-of-the-art methods on that dataset and the second contains the baselines. We observe that for the VQA dataset we achieve an improvement of 8% in BLEU and 7% in METEOR metric scores over the baselines, whereas for VQG-COCO dataset this is 15% for both the metrics. 
We improve over the previous state-of-the-art BIBREF35 for VQA dataset by around 6% in BLEU score and 10% in METEOR score. In the VQG-COCO dataset, we improve over BIBREF5 by 3.7% and BIBREF36 by 3.5% in terms of METEOR scores. Statistical Significance Analysis We have analysed Statistical Significance BIBREF50 of our MDN model for VQG for different variations of the mixture module mentioned in section SECREF10 and also against the state-of-the-art methods. The Critical Difference (CD) for Nemenyi BIBREF51 test depends upon the given INLINEFORM0 (confidence level, which is 0.05 in our case) for average ranks and N (number of tested datasets). If the difference in the rank of the two methods lies within CD, then they are not significantly different and vice-versa. Figure FIGREF29 visualizes the post-hoc analysis using the CD diagram. From the figure, it is clear that MDN-Joint works best and is statistically significantly different from the state-of-the-art methods. Perceptual Realism A human is the best judge of naturalness of any question, We evaluated our proposed MDN method using a `Naturalness' Turing test BIBREF52 on 175 people. People were shown an image with 2 questions just as in figure FIGREF1 and were asked to rate the naturalness of both the questions on a scale of 1 to 5 where 1 means `Least Natural' and 5 is the `Most Natural'. We provided 175 people with 100 such images from the VQG-COCO validation dataset which has 1250 images. Figure FIGREF30 indicates the number of people who were fooled (rated the generated question more or equal to the ground truth question). For the 100 images, on an average 59.7% people were fooled in this experiment and this shows that our model is able to generate natural questions. Conclusion In this paper we have proposed a novel method for generating natural questions for an image. The approach relies on obtaining multimodal differential embeddings from image and its caption. We also provide ablation analysis and a detailed comparison with state-of-the-art methods, perform a user study to evaluate the naturalness of our generated questions and also ensure that the results are statistically significant. In future, we would like to analyse means of obtaining composite embeddings. We also aim to consider the generalisation of this approach to other vision and language tasks. Supplementary Material Section SECREF8 will provide details about training configuration for MDN, Section SECREF9 will explain the various Proposed Methods and we also provide a discussion in section regarding some important questions related to our method. We report BLEU1, BLEU2, BLEU3, BLEU4, METEOR, ROUGE and CIDER metric scores for VQG-COCO dataset. We present different experiments with Tag Net in which we explore the performance of various tags (Noun, Verb, and Question tags) and different ways of combining them to get the context vectors. 
Multimodal Differential Network Algorithm 1 (MDN) proceeds as follows: (1) find the supporting and contrasting exemplars for the target image; (2) compute the triplet embeddings with the representation module; (3) compute the triplet fusion embeddings with the mixture module; (4) compute the triplet loss; and (5) decode the question sentence with the LSTM decoder. The Triplet Fusion subroutine takes an image feature (14x14x512) and a caption feature (1x512), matches their dimensions to 196x512, and then fuses them: if flag==Joint Fusion, the joint embedding is computed (with MDN-Mul and MDN-Add as element-wise variants); if flag==Attention Fusion, the attention-based fused embedding is computed. Dataset and Training Details Dataset We conduct our experiments on two types of datasets: the VQA dataset BIBREF1 , which contains human-annotated questions based on images from the MS-COCO dataset, and the VQG-COCO dataset of natural questions BIBREF55 . VQA dataset The VQA dataset BIBREF1 is built on complex images from the MS-COCO dataset. It contains a total of 204721 images, out of which 82783 are for training, 40504 for validation and 81434 for testing. Each image in the MS-COCO dataset is associated with 3 questions and each question has 10 possible answers. So there are 248349 QA pairs for training, 121512 QA pairs for validation and 244302 QA pairs for testing. We used a pre-trained caption generation model BIBREF53 to extract captions for the VQA dataset. VQG dataset The VQG-COCO dataset BIBREF55 is developed for generating natural and engaging questions that are based on common sense reasoning. This dataset contains a total of 2500 training images, 1250 validation images and 1250 testing images. Each image in the dataset contains 5 natural questions. Training Configuration We used the RMSPROP optimizer to update the model parameters and configured the hyper-parameter values as follows: INLINEFORM23 to train the classification network. To train the triplet model, we again used RMSPROP to optimize the model parameters and configured the hyper-parameter values as: INLINEFORM24 . We also used learning rate decay to decrease the learning rate on every epoch by a factor given by: INLINEFORM25 where the values a=1500 and b=1250 are set empirically. Ablation Analysis of Model While we advocate the use of the multimodal differential network (MDN) for generating embeddings that can be used by the decoder for generating questions, we also evaluate several variants of this architecture, namely (a) Differential Image Network, (b) Tag net and (c) Place net. These are described in detail as follows: Differential Image Network For obtaining the exemplar-image-based context embedding, we propose a triplet network consisting of three networks: a target net, a supporting net and an opposing net. All three networks are designed with convolutional neural networks and share the same parameters. The weights of this network are learnt through end-to-end learning using a triplet loss. The aim is to obtain latent weight vectors that bring the supporting exemplar close to the target image and enhance the difference from opposing examples. More formally, given an image INLINEFORM26 we obtain an embedding INLINEFORM27 using a CNN that we parameterize through a function INLINEFORM28 where INLINEFORM29 are the weights of the CNN. This is illustrated in figure FIGREF43 . Tag net The tag net consists of two parts: a Context Extractor and a Tag Embedding Net. This is illustrated in figure FIGREF45 . 
Extract Context: The first step is to extract the caption of the image using the NeuralTalk2 BIBREF53 model. We then find the part-of-speech (POS) tags present in the caption. POS taggers have been developed for two well-known corpora, the Brown Corpus and the Penn Treebank. For our work, we use the Brown Corpus tags. The tags are clustered into three categories, namely Noun tags, Verb tags and Question tags (What, Where, ...). The Noun tag consists of all the nouns and pronouns present in the caption sentence and, similarly, the Verb tag consists of the verbs and adverbs present in the caption sentence. The Question tags consist of the 7 well-known question words, i.e., why, how, what, when, where, who and which. Each tag token is represented as a one-hot vector of the dimension of the vocabulary size. For generalization, we consider 5 tokens from each category of tags. Tag Embedding Net: The embedding network consists of a word embedding layer followed by a temporal convolutional neural network and a max-pooling layer. In the first step, the sparse high-dimensional one-hot vector is transformed into a dense low-dimensional vector using the word embedding. After this, we apply temporal convolution on the word embedding vectors. The uni-gram, bi-gram and tri-gram features are computed by applying convolution filters of size 1, 2 and 3 respectively. Finally, we apply max-pooling on this to get a vector representation of the tags as shown in figure FIGREF45 . We concatenate all the tag words followed by a fully connected layer to get a feature dimension of 512. We also explored joint networks based on concatenation of all the tags, on element-wise addition and on element-wise multiplication of the tag vectors. However, we observed that convolution followed by max pooling and joint concatenation gives better performance based on the CIDEr score. INLINEFORM30 Where T_CNN is the temporal convolutional neural network applied on the word embedding vectors with kernel size three. Place net Visual object and scene recognition plays a crucial role in understanding an image. Here, places in the image are labeled with scene semantic categories BIBREF45 , comprising the large and diverse types of environments in the world, such as amusement park, tower, swimming pool, shoe shop, cafeteria, rain-forest, conference center, fish pond, etc. We therefore use the different types of scene semantic categories present in the image as a place-based context to generate natural questions. Places365 is a convolutional neural network modeled to classify 365 types of scene categories and is trained on the Places2 dataset, which consists of 1.8 million scene images. We use a pre-trained VGG16-places365 network to obtain place-based context embedding features for the various scene categories present in the image. The context feature INLINEFORM31 is obtained by: INLINEFORM32 Where INLINEFORM33 is Place365_CNN. We extract INLINEFORM34 features of dimension 14x14x512 for the attention model and FC8 features of dimension 365 for the joint, addition and hadamard models of places365. Finally, we use a linear transformation to obtain a 512 dimensional vector. We explored using CONV5 with feature dimension 14x14x512, FC7 with dimension 4096 and FC8 with feature dimension 365 of places365. Ablation Analysis Sampling Exemplar: KNN vs ITML Our method is aimed at using efficient exemplar-based retrieval techniques. We have experimented with various exemplar methods, such as ITML BIBREF40 based metric learning for image features and KNN based approaches. 
We observed that the KNN-based approach (k-d tree) with a Euclidean metric is an efficient method for finding exemplars. We also observed that ITML is computationally expensive and depends on the training procedure. The table provides the experimental results for the Differential Image Network variant with k (number of exemplars) = 2 and the Hadamard method: Question Generation approaches: Sampling vs Argmax We obtained the decoding using the standard practice followed in the literature BIBREF38 . This method selects the argmax sentence. We also evaluated our method by sampling from the probability distributions and provide the results for our proposed MDN-Joint method on the VQG dataset as follows: How are exemplars improving Embedding In the multimodal differential network, we use exemplars and train them using a triplet loss. It is known that using a triplet network, we can learn a representation that accentuates how the image is closer to a supporting exemplar as against the opposing exemplar BIBREF42 , BIBREF41 . The joint embedding is obtained between the image and language representations. Therefore the improved representation helps in obtaining an improved context vector. Further, we show that this also results in improving VQG. Are exemplars required? We had similar concerns and validated this point by using random exemplars as the nearest neighbors for MDN (k=R in table TABREF35 ). In this case the method performs similarly to the baseline. This suggests that with random exemplars, the model learns to ignore the cue. Are captions necessary for our method? This is not actually necessary. In our method, we have used an existing image captioning method BIBREF13 to generate captions for images that did not have them. For the VQG dataset, captions were available and we used them, but for the VQA dataset captions were not available and we generated captions while training. We provide detailed evidence with respect to caption-question pairs to ensure that we are generating novel questions. While the caption provides a scene description, our proposed method generates semantically meaningful and novel questions. Examples for Figure 1 of main paper: First Image:- Caption- A young man skateboarding around little cones. Our Question- Is this a skateboard competition? Second Image:- Caption- A small child is standing on a pair of skis. Our Question:- How old is that little girl? Intuition behind Triplet Network: The intuition behind the use of triplet networks is made clear in the paper BIBREF41 that first advocated their use. The main idea is that when we learn distance functions that are “close” for similar and “far” for dissimilar representations, it is not clear with respect to what measure close and far are defined. By incorporating a triplet, we learn distance functions that capture that “A is more similar to B as compared to C”. Learning such measures allows us to bring the target image-caption joint embedding closer to supporting exemplars as compared to contrasting exemplars. Analysis of Network Analysis of Tag Context A tag is a language-based context. These tags are extracted from the caption, except the question-tags, which are fixed as the 7 'Wh words' (What, Why, Where, Who, When, Which and How). We have experimented with Noun tags, Verb tags and 'Wh-word' tags as shown in the tables. We have also experimented within each tag category by varying the number of tags from 1 to 7. We combined the different tags using 1D-convolution, concatenation, and addition of all the tags and observed that the concatenation mechanism gives better results. 
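One possible PyTorch rendering of the tag embedding net discussed in the preceding sections (word embedding, temporal convolutions of kernel sizes 1, 2 and 3, max-pooling, and concatenation into a 512-dimensional feature) is sketched below. The vocabulary size, embedding width and filter counts are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class TagEmbeddingNet(nn.Module):
    """Sketch of the tag embedding net: embedding -> temporal convolutions of
    kernel size 1/2/3 (uni/bi/tri-gram features) -> max-pool -> concat -> FC."""

    def __init__(self, vocab_size=10000, emb_dim=128, n_filters=128, out_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=k, padding=k - 1)
             for k in (1, 2, 3)])
        self.fc = nn.Linear(3 * n_filters, out_dim)

    def forward(self, tag_tokens):            # (batch, num_tags), e.g. 5 tags
        x = self.embed(tag_tokens)            # (batch, num_tags, emb_dim)
        x = x.transpose(1, 2)                 # (batch, emb_dim, num_tags)
        pooled = [conv(x).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))   # (batch, out_dim)

tags = torch.randint(0, 10000, (4, 5))        # 4 examples, 5 tag tokens each
print(TagEmbeddingNet()(tags).shape)          # torch.Size([4, 512])
```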
As we can see in table TABREF33 , taking Nouns, Verbs and Wh-Words as context, we achieve significant improvements in the BLEU, METEOR and CIDEr scores over the basic models which only take the image and the caption respectively. Taking Nouns generated from the captions and questions of the corresponding training example as context, we achieve an increase of 1.6% in BLEU score, 2% in METEOR and 34.4% in CIDEr score over the basic Image model. Similarly, taking Verbs as context gives us an increase of 1.3% in BLEU score, 2.1% in METEOR and 33.5% in CIDEr score over the basic Image model. The best result comes when we take 3 Wh-Words as context and apply the Hadamard model with concatenation of the 3 Wh-words. In Table TABREF34 we also show the results when we take more than one word as context. Here we show that for 3 words, i.e., 3 nouns, 3 verbs and 3 Wh-words, the Concatenation model performs the best. In this table the conv model uses 1D convolution to combine the tags and the joint model combines all the tags. Analysis of Context: Exemplars In the Multimodal Differential Network and the Differential Image Network, we use exemplar images (target, supporting and opposing images) to obtain the differential context. We have performed experiments based on a single exemplar (K=1), which is one supporting and one opposing image along with the target image, and based on two exemplars (K=2), i.e., two supporting and two opposing images along with a single target image. Similarly, we have performed experiments for K=3 and K=4 as shown in table TABREF35 . Mixture Module: Other Variations The Hadamard method uses element-wise multiplication whereas the Addition method uses element-wise addition in place of the concatenation operator of the Joint method. The Hadamard method finds a correlation between the image feature and the caption feature vector while the Addition method learns a resultant vector. In the attention method, the output INLINEFORM35 is the weighted average of the attention probability vector INLINEFORM36 and the convolutional features INLINEFORM37 . The attention probability vector weights the contribution of each convolutional feature based on the caption vector. This attention method is similar to the stacked attention method BIBREF54 . The attention mechanism is given by: DISPLAYFORM0 where INLINEFORM38 is the 14x14x512-dimensional convolution feature map from the fifth convolution layer of VGG-19 Net for image INLINEFORM39 and INLINEFORM40 is the caption context vector. The attention probability vector INLINEFORM41 is a 196-dimensional vector. INLINEFORM42 are the weights and INLINEFORM43 is the bias for the different layers. We evaluate the different approaches and provide results for the same. Here INLINEFORM44 represents element-wise addition. Evaluation Metrics Our task is similar to the encoder-decoder framework of machine translation, so we use the same evaluation metrics that are used in machine translation. BLEU BIBREF46 is the first metric we use to measure the correlation between a generated question and the ground truth question. The BLEU score measures precision, i.e., how many words in the predicted question appear in the reference question. The BLEU-n score measures n-gram precision by counting co-occurrences with the reference sentences; we evaluate BLEU scores for n from 1 to 4. The mechanism of the ROUGE-n BIBREF48 score is similar to BLEU-n, except that it measures recall instead of the precision measured by BLEU. 
That is, it measures how many words from the reference question appear in the predicted question. Another version of the ROUGE metric is ROUGE-L, which measures the longest common sub-sequence present in the generated question. The METEOR BIBREF47 score is another useful evaluation metric that calculates the similarity between the generated question and the reference one by considering synonyms, stemming and paraphrases. The METEOR score measures the word matches between the predicted question and the reference question. In VQG, it computes the word match score between the predicted question and the five reference questions. The CIDEr BIBREF49 score is a consensus-based evaluation metric. It measures human-likeness, that is, whether the sentence could have been written by a human. The consensus is measured by how often the n-grams in the predicted question appear in the reference questions; n-grams that appear frequently across all reference questions are treated as less informative and contribute less to the CIDEr score. We provide our results using all these metrics and compare them with the existing baselines.
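For readers who want to reproduce the automatic evaluation, a small sketch of how BLEU-style scores can be computed with NLTK is shown below. It is illustrative only, uses toy token lists in place of real generated and reference questions, and is not the exact evaluation code used in the paper.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "what is the name of the cat".split(),
    "is that cat sleeping".split(),
]                                            # token lists of reference questions
hypothesis = "is the cat sleeping".split()   # generated question tokens

smooth = SmoothingFunction().method1
bleu1 = sentence_bleu(references, hypothesis, weights=(1, 0, 0, 0),
                      smoothing_function=smooth)
bleu4 = sentence_bleu(references, hypothesis, weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=smooth)
print(round(bleu1, 3), round(bleu4, 3))
```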
Exemplars aim to provide appropriate context; the joint image-caption embedding for the supporting exemplar is closer to that of the target image-caption
af82043e7d046c2fb1ed86ef9b48c35492e6a48c
af82043e7d046c2fb1ed86ef9b48c35492e6a48c_0
Q: Is this a task other people have worked on? Text: Introduction The social web has become a common means for seeking romantic companionship, made evident by the wide assortment of online dating sites that are available on the Internet. As such, the notion of relationship recommendation systems is not only interesting but also highly applicable. This paper investigates the possibility and effectiveness of a deep learning based relationship recommendation system. An overarching research question is whether modern artificial intelligence (AI) techniques, given social profiles, can successfully approximate successful relationships and measure the relationship compatibility of two users. Prior works in this area BIBREF0 , BIBREF1 , BIBREF2 , BIBREF0 have been mainly considered the `online dating recommendation' problem, i.e., focusing on the reciprocal domain of dating social networks (DSN) such as Tinder and OKCupid. While the functionality and mechanics of dating sites differ across the spectrum, the main objective is usually to facilitate communication between users, who are explicitly seeking relationships. Another key characteristic of many DSNs is the functionality that enables a user to express interest to another user, e.g., swiping right on Tinder. Therefore, many of prior work in this area focus on reciprocal recommendation, i.e., predicting if two users will like or text each other. Intuitively, we note that likes and replies on DSNs are not any concrete statements of compatibility nor evidence of any long-term relationship. For instance, a user may have many reciprocal matches on Tinder but eventually form meaningful friendships or relationships with only a small fraction. Our work, however, focuses on a seemingly similar but vastly different problem. Instead of relying on reciprocal signals from DSNs, our work proposes a novel distant supervision scheme, constructing a dataset of real world couples from regular social networks (RSN). Our distant supervision scheme is based on Twitter, searching for tweets such as `good night baby love you ' and `darling i love you so much ' to indicate that two users are in a stable and loving relationship (at least at that time). Using this labeled dataset, we train a distant supervision based learning to rank model to predict relationship compatibility between two users using their social profiles. The key idea is that social profiles contain cues pertaining to personality and interests that may be a predictor if whether two people are romantically compatible. Moreover, unlike many prior works that operate on propriety datasets BIBREF1 , BIBREF2 , BIBREF0 , our dataset is publicly and legally obtainable via the official Twitter API. In this work, we construct the first public dataset of approximately 2 million tweets for the task of relationship recommendation. Another key advantage is that our method trains on regular social networks, which spares itself from the inherent problems faced by DSNs, e.g., deceptive self-presentation, harassment, bots, etc. BIBREF3 . More specifically, self-presented information on DSNs might be inaccurate with the sole motivation of appearing more attractive BIBREF4 , BIBREF5 . In our work, we argue that measuring the compatibility of two users on RSN might be more suitable, eliminating any potential explicit self-presentation bias. Intuitively, social posts such as tweets can reveal information regarding personality, interests and attributes BIBREF6 , BIBREF7 . 
Finally, we propose CoupleNet, an end-to-end deep learning based architecture for estimating the compatibility of two users on RSNs. CoupleNet takes the social profiles of two users as an input and computes a compatibility score. This score can then be used to serve a ranked list to users and subsequently embedded in some kind of `who to follow' service. CoupleNet is characterized by its Coupled Attention, which learns to pay attention to parts of a user's profile dynamically based on the current candidate user. CoupleNet also does not require any feature engineering and is a proof-of-concept of a completely text-based relationship recommender system. Additionally, CoupleNet is also capable of providing explainable recommendations which we further elaborate in our qualitative experiments. Our Contributions This section provides an overview of the main contributions of this work. We propose a novel problem of relationship recommendation (RSR). Different from the reciprocal recommendation problem on DSNs, our RSR task operates on regular social networks (RSN), estimating long-term and serious relationship compatibility based on social posts such as tweets. We propose a novel distant supervision scheme to construct the first publicly available (distributable in the form of tweet ids) dataset for the RSR task. Our dataset, which we call the LoveBirds2M dataset consists of approximately 2 million tweets. We propose a novel deep learning model for the task of RSR. Our model, the CoupleNet uses hierarchical Gated Recurrent Units (GRUs) and coupled attention layers to model the interactions between two users. To the best of our knowledge, this is the first deep learning model for both RSR and reciprocal recommendation problems. We evaluate several strong machine learning and neural baselines on the RSR task. This includes the recently proposed DeepCoNN (Deep Co-operative Neural Networks) BIBREF8 for item recommendation. CoupleNet significantly outperforms DeepCoNN with a $200\%$ relative improvement in precision metrics such as Hit Ratio (HR@N). Overall findings show that a text-only deep learning system for RSR task is plausible and reasonably effective. We show that CoupleNet produces explainable recommendation by analyzing the attention maps of the coupled attention layers. Related Work In this section, we review existing literature that is related to our work. Reciprocal and Dating Recommendation Prior works on online dating recommendation BIBREF0 , BIBREF9 , BIBREF2 , BIBREF10 mainly focus on designing systems for dating social networks (DSN), i.e., websites whereby users are on for the specific purpose of finding a potential partner. Moreover, all existing works have primarily focused on the notion of reciprocal relationships, e.g., a successful signal implied a two way signal (likes or replies) between two users. Tu et al. BIBREF9 proposed a recommendation system based on Latent Dirichlet Allocation (LDA) to match users based on messaging and conversational history between users. Xia et al. BIBREF0 , BIBREF1 cast the dating recommendation problem into a link prediction task, proposing a graph-based approach based on user interactions. The CCR (Content-Collaborative Reciprocal Recommender System) BIBREF10 was proposed by Akehurtst et al. for the task of reciprocal recommendation, utilizing content-based features (user profile similarity) and collaborative filtering features (user-user interactions). 
However, all of their approaches operate on proprietary datasets obtained via collaboration with online dating sites. This hinders research efforts in this domain. Our work proposes a different direction from the standard reciprocal recommendation (RR) models. The objective of our work is fundamentally different, i.e., instead of finding users that might reciprocate to each other, we learn to functionally approximate the essence of a good (possibly stable and serious) relationship, learning a compatibility score for two users given their regular social profiles (e.g., Twitter). To the best of our knowledge, our work is the first to build a relationship recommendation model based on a distant supervision signal from real world relationships. Hence, we distinguish our work from all existing works on online dating recommendation. Moreover, our dataset is obtained legally via the official Twitter API and can be distributed for future research. Unlike prior work BIBREF0 which might invoke privacy concerns, especially with the usage of conversation history, the users employed in our study have public Twitter feeds. We note that publicly available Twitter datasets have been the cornerstone of many scientific studies, especially in the fields of social science and natural language processing (NLP). Across the scientific literature, several other aspects of online dating have been extensively studied. Nagarajan and Hearst BIBREF11 studied self-presentation on online dating sites by specifically examining language on dating profiles. Hancock et al. presented an analysis of deception and lying on online dating profiles BIBREF5 , reporting that at least $50\%$ of participants provide deceptive information pertaining to physical attributes such as height, weight or age. Toma et al. BIBREF4 investigated the correlation between linguistic cues and deception on online dating profiles. Maldeniya et al. BIBREF12 studied how textual similarity between user profiles impacts the likelihood of reciprocal behavior. A recent work by Cobb and Kohno BIBREF13 provided an extensive study which tries to understand users’ privacy preferences and practices in online dating. Finally, BIBREF14 studied the impacts of relationship breakups on Twitter, revealing many crucial insights pertaining to the social and linguistic behaviour of couples that have just broken up. In order to do so, they collect likely couple pairs and monitor them over a period of time. Notably, our data collection procedure is reminiscent of theirs, i.e., using keyword-based filters to find highly likely couple pairs. However, their work utilizes a second-stage crowdworker-based evaluation to check for breakups. User Profiling and Friend Recommendation Our work is a cross between user profiling and user match-making systems. An earlier work, BIBREF15 , proposed a gradient-boosted learning-to-rank model for match-making users on a dating forum. While the authors ran experiments on a dating service website, they drew parallels with other match-making services such as job-seeking forums. The user profiling aspect of our work comes from the fact that we use social networks to learn user representations. As such, our approach performs both user profiling and match-making within an end-to-end framework. BIBREF7 proposed a deep learning personality detection system which is trained on social posts on Weibo and Twitter. BIBREF6 proposed a Twitter personality detection system based on machine learning models. 
BIBREF16 learned multi-view embeddings of Twitter users using canonical correlation analysis for friend recommendation. From an application perspective, our work is also highly related to `People you might know' or `who to follow' (WTF) services on RSNs BIBREF17 albeit taking a romantic twist. In practical applications, our RSN based relationship recommender can either be deployed as part of a WTF service, or to increase the visibility of the content of users with high compatibility score. Deep Learning and Collaborative Ranking One-class collaborative filtering (also known as collaborative ranking) BIBREF18 is a central research problem in IR. In general, deep learning BIBREF19 , BIBREF20 , BIBREF21 has also been recently very popular for collaborative ranking problems today. However, to the best of our knowledge, our work is the first deep learning based approach for the online dating domain. BIBREF22 provides a comprehensive overview of deep learning methods for CF. Notably, our approach also follows the neural IR approach which is mainly concerned with modeling document-query pairs BIBREF23 , BIBREF24 , BIBREF25 or user-item pairs BIBREF8 , BIBREF26 since we deal with the textual domain. Finally, our work leverages recent advances in deep learning, namely Gated Recurrent Units BIBREF27 and Neural Attention BIBREF28 , BIBREF29 , BIBREF30 . The key idea of neural attention is to learn to attend to various segments of a document, eliminating noise and emphasizing the important segments for prediction. Problem Definition and Notation In this section, we introduce the formal problem definition of this work. Definition 3.1 Let $U$ be the set of Users. Let $s_i$ be the social profile of user $i$ which is denoted by $u_i \in U$ . Each social profile $s_i \in S$ contains $\eta $ documents. Each document $d_i \in s_i$ contains a maximum of $L$ words. Given a user $u_i$ and his or her social profile $s_i$ , the task of the Relationship Recommendation problem is to produce a ranked list of candidates based on a computed relevance score $s_i$0 where $s_i$1 is the social profile of the candidate user $s_i$2 . $s_i$3 is a parameterized function. There are mainly three types of learning to rank methods, namely pointwise, pairwise and list-wise. Pointwise considers each user pair individually, computing a relevance score solely based on the current sample, i.e., binary classification. Pairwise trains via noise constrastive estimation, which often minimizes a loss function like the margin based hinge loss. List-wise considers an entire list of candidates and is seldom employed due to the cumbersome constraints that stem from implementation efforts. Our proposed CoupleNet employs a pairwise paradigm. The intuition for this is that, relationship recommendation is considered very sparse and has very imbalanced classes (for each user, only one ground truth exists). Hence, training binary classification models suffers from class imbalance. Moreover, the good performance of pairwise learning to rank is also motivated by our early experiments. The Love Birds Dataset Since there are no publicly available datasets for training relationship recommendation models, we construct our own. The goal is to construct a list of user pairs in which both users are in relationship. Our dataset is constructed via distant supervision from Twitter. We call this dataset the Love Birds dataset. 
This not only references the metaphorical meaning of the phrase `love birds' but also deliberately references the fact that the Twitter icon is a bird. This section describes the construction of our dataset. Figure 1 describes the overall process of our distant supervision framework. Distant Supervision Using the Twitter public API, we collected tweets with emojis contains the keyword `heart' in its description. The key is to find tweets where a user expresses love to another user. We observed that there are countless tweets such as `good night baby love you ' and `darling i love you so much ' on Twitter. As such, the initial list of tweets is crawled by watching heart and love-related emojis, e.g., , , etc. By collecting tweets containing these emojis, we form our initial candidate list of couple tweets (tweets in which two people in a relationship send to each other). Through this process, we collected 10 million tweets over a span of a couple of days. Each tweet will contain a sender and a target (the user mentioned and also the target of affection). We also noticed that the love related emojis do not necessarily imply a romantic relationship between two users. For instance, we noticed that a large percentage of such tweets are affection towards family members. Given the large corpus of candidates, we can apply a stricter filtering rule to obtain true couples. To this end, we use a ban list of words such as 'bro', 'sis', `dad', `mum' and apply regular expression based filtering on the candidates. We also observed a huge amount of music related tweets, e.g., `I love this song so much !'. Hence, we also included music-related keywords such as `perform', `music', `official' and `song'. Finally, we also noticed that people use the heart emoji frequently when asking for someone to follow them back. As such, we also ban the word `follow'. We further restricted tweets to contain only a single mention. Intuitively, mentioning more than one person implies a group message rather than a couple tweet. We also checked if one user has a much higher follower count over the other user. In this case, we found that this is because people send love messages to popular pop idols (we found that a huge bulk of crawled tweets came from fangirls sending love message to @harrystylesofficial). Any tweet with a user containing more than 5K followers is being removed from the candidate list. Forming Couple Pairs Finally, we arrive at 12K tweets after aggressive filtering. Using the 12K `cleaned' couple tweets, we formed a list of couples. We sorted couples in alphabetical order, i.e., (clara, ben) becomes (ben, clara) and removed duplicate couples to ensure that there are no `bidirectional' pairs in the dataset. For each user on this list, we crawled their timeline and collected 200 latest tweets from their timeline. Subsequently, we applied further preprocessing to remove explicit couple information. Notably, we do not differentiate between male and female users (since twitter API does not provide this information either). The signal for distant supervision can be thought of as an explicit signal which is commonplace in recommendation problems that are based on explicit feedback (user ratings, reviews, etc.). In this case, an act (tweet) of love / affection is the signal used. We call this explicit couple information. To ensure that there are no additional explicit couple information in each user's timeline, we removed all tweets with any words of affection (heart-related emojis, `love', `dear', etc.). 
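The first-stage filtering heuristics described above can be summarised in a short sketch. The keyword and emoji lists below are abbreviated versions of those described in the text, the tweet objects are assumed to follow the standard Twitter API v1.1 fields, and the follower check is simplified to the sender only; none of this is the exact implementation.

```python
HEART_EMOJIS = {"\u2764\ufe0f", "\U0001f495", "\U0001f49b"}   # abbreviated watch list
BAN_WORDS = {"bro", "sis", "dad", "mum", "mom",               # kinship terms
             "perform", "music", "official", "song",          # music-related
             "follow"}                                        # follow-back requests

def is_couple_tweet(tweet, max_followers=5000):
    """Heuristic filter for candidate couple tweets (a sketch of the rules
    described in the text, not the exact pipeline)."""
    text = tweet["text"].lower()
    mentions = tweet["entities"]["user_mentions"]
    if not any(e in tweet["text"] for e in HEART_EMOJIS):
        return False
    if any(w in text.split() for w in BAN_WORDS):
        return False
    if len(mentions) != 1:                      # exactly one mentioned user
        return False
    if tweet["user"]["followers_count"] > max_followers:
        return False                            # drop celebrity-directed tweets
    return True

def couple_pair(tweet):
    """Return an alphabetically sorted (user_a, user_b) pair for deduplication,
    so bidirectional pairs collapse to a single entry."""
    a = tweet["user"]["screen_name"].lower()
    b = tweet["entities"]["user_mentions"][0]["screen_name"].lower()
    return tuple(sorted((a, b)))
```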
We also masked all mentions with the @USER symbol. This is to ensure that there is no explicit leak of signals in the final dataset. Naturally, a more accurate method is to determine the date in which users got to know each other and then subsequently construct timelines based on tweets prior to that date. Unfortunately, there is no automatic and trivial way to easily determine this information. Consequently, a fraction of their timeline would possibly have been tweeted when the users have already been together in a relationship. As such, in order to remove as much 'couple' signals, we try our best to mask such information. Why Twitter? Finally, we answer the question of why Twitter was chosen as our primary data source. One key desiderata was that the data should be public, differentiating ourselves from other works that use proprietary datasets BIBREF0 , BIBREF9 . In designing our experiments, we considered two other popular social platforms, i.e., Facebook and Instagram. Firstly, while Facebook provides explicit relationship information, we found that there is a lack of personal, personality-revealing posts on Facebook. For a large majority of users, the only signals on Facebook mainly consist of shares and likes of articles. The amount of original content created per user is extremely low compared to Twitter whereby it is trivial to obtain more than 200 tweets per user. Pertaining to Instagram, we found that posts are also generally much sparser especially in regards to frequency, making it difficult to amass large amounts of data per user. Moreover, Instagram adds a layer of difficulty as Instagram is primarily multi-modal. In our Twitter dataset, we can easily mask explicit couple information by keyword filters. However, it is non-trivial to mask a user's face on an image. Nevertheless, we would like to consider Instagram as an interesting line of future work. Dataset Statistics Our final dataset consists of 1.858M tweets (200 tweets per user). The total number of users is 9290 and 4645 couple pairs. The couple pairs are split into training, testing and development with a 80/10/10 split. The total vocabulary size (after lowercasing) is 2.33M. Ideally, more user pairs could be included in the dataset. However, we also note that the dataset is quite large (almost 2 million tweets) already, posing a challenge for standard hardware with mid-range graphic cards. Since this is the first dataset created for this novel problem, we leave the construction of a larger benchmark for future work. Our Proposed Approach In this section, we introduce our deep learning architecture - the CoupleNet. Overall, our neural architecture is a hierarchical recurrent model BIBREF28 , utilizing multi-layered attentions at different hierarchical levels. An overview of the model architecture is illustrated in Figure 2 . There are two sides of the network, one for each user. Our network follows a `Siamese' architecture, with shared parameters for each side of the network. A single data input to our model comprises user pairs ( $U1, U2$ ) (couples) and ( $U1, U3$ ) (negative samples). Each user has $K$ tweets each with a maximum length of $L$ . The value of $K$ and $L$ are tunnable hyperparameters. Embedding Layer For each user, the inputs to our network are a matrix of indices, each corresponding to a specific word in the dictionary. The embedding matrix $\textbf {W} \in \mathbb {R}^{d \times |V|}$ acts as a look-up whereby each index selects a $d$ dimensional vector, i.e., the word representation. 
Thus, for each user, we have $K \times L$ vectors of dimension size $d$ . The embedding layer is shared for all users and is initialized with pretrained word vectors. Learning Tweet Representations For each user, the output of the embedding layer is a tensor of shape $K \times L \times d$ . We pass each tweet through a recurrent neural network. More specifically, we use Gated Recurrent Units (GRU) encoders with attentional pooling to learn a $n$ dimensional vector for each tweet. The GRU accepts a sequence of vectors and recursively composes each input vector into a hidden state. The recursive operation of the GRU is defined as follows: $ z_t &= \sigma (W_z x_t + U_z h_{t-1} + b_z) \\ r_t &= \sigma (W_r x_t + U_r h_{t-1} + b_r) \\ \hat{h_t} &= tanh (W_h \: x_t + U_h (r_t h_{t-1}) + b_h) \\ h_t &= z_t \: h_{t-1} + (1-z_t) \: \hat{h_t} $ where $h_t$ is the hidden state at time step $t$ , $z_t$ and $r_t$ are the update gate and reset gate at time step $t$ respectively. $\sigma $ is the sigmoid function. $x_t$ is the input to the GRU unit at time step $t$ . Note that time step is analogous to parsing a sequence of words sequentially in this context. $W_z, W_r \in \mathbb {R}^{d \times n}, W_h \in \mathbb {R}^{n \times n}$ are parameters of the GRU layer. The output of each GRU is a sequence of hidden vectors $h_1, h_2 \cdots h_L \in \textbf {H}$ , where $\textbf {H} \in \mathbb {R}^{L \times n}$ . Each hidden vector is $n$ dimensions, which corresponds to the parameter size of the GRU. To learn a single $n$ dimensional vector, the last hidden vector $h_L$ is typically considered. However, a variety of pooling functions such as the average pooling, max pooling or attentional pooling can be adopted to learn more informative representations. More specifically, neural attention mechanisms are applied across the matrix $\textbf {H}$ , learning a weighted representation of all hidden vectors. Intuitively, this learns to select more informative words to be passed to subsequent layers, potentially reducing noise and improving model performance. $ \textbf {Y} = \text{tanh}(W_y \: \textbf {H}) \:\:;\:\: a= \text{softmax}(w^{\top } \: \textbf {Y}) \:\:;\:\: r = \textbf {H}\: a^{\top } $ where $W_y \in \mathbb {R}^{n \times n}, w \in \mathbb {R}^{n}$ are the parameters of the attention pooling layer. The output $r \in \mathbb {R}^{n}$ is the final vector representation of the tweet. Note that the parameters of the attentional pooling layer are shared across all tweets and across both users. Learning User Representations Recall that each user is represented by $K$ tweets and for each tweet we have a $n$ dimensional vector. Let $t^i_1, t^i_2 \cdots t^i_K$ be all the tweets for a given user $i$ . In order to learn a fixed $n$ dimensional vector for each user, we require a pooling function across each user's tweet embeddings. In order to do so, we use a Coupled Attention Layer that learns to attend to U1 based on U2 (and vice versa). Similarly, for the negative sample, coupled attention is applied to (U1, U3) instead. However, we only describe the operation of (U1, U2) for the sake of brevity. The key intuition behind the coupled attention layer is to learn attentional representations of U1 with respect to U2 (and vice versa). Intuitively, this compares each tweet of U1 with each tweet of U2 and learns to weight each tweet based on this grid-wise comparison scheme. 
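Before turning to the details of the coupled attention layer, a condensed sketch of the per-tweet encoder described above (a GRU over the word embeddings followed by attentional pooling of the hidden states) is given below. The vocabulary size and dimensions are illustrative, and the embedding table is randomly initialised rather than loaded from pretrained vectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TweetEncoder(nn.Module):
    """GRU over word embeddings + attentional pooling of the hidden states
    (a sketch of the tweet-level encoder; sizes are assumptions)."""

    def __init__(self, vocab_size=50000, d=100, n=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.gru = nn.GRU(d, n, batch_first=True)
        self.W_y = nn.Linear(n, n, bias=False)
        self.w = nn.Linear(n, 1, bias=False)

    def forward(self, words):                 # (batch, L) token ids
        H, _ = self.gru(self.embed(words))    # (batch, L, n) hidden states
        Y = torch.tanh(self.W_y(H))           # (batch, L, n)
        a = F.softmax(self.w(Y), dim=1)       # (batch, L, 1) attention weights
        r = (a * H).sum(dim=1)                # (batch, n) pooled tweet vector
        return r

enc = TweetEncoder()
tweets = torch.randint(0, 50000, (32, 10))    # 32 tweets, 10 tokens each
print(enc(tweets).shape)                      # torch.Size([32, 100])
```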
Let U1 and U2 be represented by a sequence of $K$ tweets (each of which is a $n$ dimensional vector) and let $T_1, T_2 \in \mathbb {R}^{k \times n}$ be the tweet matrix for U1 and U2 respectively. For each tweet pair ( $t^{1}_i, t^{2}_j$ ), we utilize a feed-forward neural network to learn a similarity score between each tweet. As such, each value of the similarity grid is computed: $$s_{ij} = W_{c} \: [t^{1}_i; t^{2}_j] + b_c$$ (Eq. 28) where $W_c \in \mathbb {R}^{n \times 1}$ and $b_c \in \mathbb {R}^{1}$ are parameters of the feed-forward neural network. Note that these parameters are shared across all tweet pair comparisons. The score $s_{ij}$ is a scalar value indicating the similarity between tweet $i$ of U1 and tweet $j$ of U2. Given the similarity matrix $\textbf {S} \in \mathbb {R}^{K \times K}$ , the strongest signals across each dimension are aggregated using max pooling. For example, by taking a max over the columns of S, we regard the importance of tweet $i$ of U1 as the strongest influence it has over all tweets of U2. The result of this aggregation is two $K$ length vectors which are used to attend over the original sequence of tweets. The following operations describe the aggregation functions: $$a^{row} = \text{smax}(\max _{row} \textbf {S}) \:\:\:\text{and}\:\:\: a^{col} = \text{smax}(\max _{col} \textbf {S})$$ (Eq. 30) where $a^{row}, a^{col} \in \mathbb {R}^{K}$ and smax is the softmax function. Subsequently, both of these vectors are used to attentively pool the tweet vectors of each user. $ u_1 = T_1 \: a^{col} \:\:\text{and}\:\:u_2 = T_2 \: a^{row} $ where $u_1, u_2 \in \mathbb {R}^{n}$ are the final user representations for U1 and U2. Learning to Rank and Training Procedure Given embeddings $u_1, u_2, u_3$ , we introduce our similarity modeling layer and learning to rank objective. Given $u_1$ and $u_2$ , the similarity between each user pair is modeled as follows: $$s(u_1, u_2) = \frac{u_i \cdot u_2}{|u_1| |u_2|}$$ (Eq. 32) which is the cosine similarity function. Subsequently, the pairwise ranking loss is optimized. We use the margin-based hinge loss to optimize our model. $$J = \max \lbrace 0, \lambda - s(u_1,u_2) + s(u_1, u_3) \rbrace $$ (Eq. 33) where $\lambda $ is the margin hyperparameter, $s(u_1, u_2)$ is the similarity score for the ground truth (true couples) and $s(u_1, u_3)$ is the similarity score for the negative sample. This function aims to discriminate between couples and non-couples by increasing the margin between the ranking scores of these user pairs. Parameters of the network can be optimized efficiently with stochastic gradient descent (SGD). Empirical Evaluation Our experiments are designed to answer the following Research Questions (RQs). Experimental Setup All empirical evaluation is conducted on our LoveBirds dataset which has been described earlier. This section describes the evaluation metrics used and evaluation procedure. Our problem is posed as a learning-to-rank problem. As such, the evaluation metrics used are as follows: Hit Ratio @N is the ratio of test samples which are correctly retrieved within the top $N$ users. We evaluate on $N=10,5,3$ . Accuracy is the number of test samples that have been correctly ranked in the top position. Mean Reciprocal Rank (MRR) is a commonly used information retrieval metric. The reciprocal rank of a single test sample is the multiplicative inverse of the rank. The MRR is computed by $\frac{1}{Q} \sum ^{|Q|}_{i=1} \frac{1}{rank_i}$ . Mean Rank is the average rank of all test samples. 
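Before describing the experimental procedure, the pieces introduced above — the coupled attention over the two users' tweet matrices, cosine scoring, and the margin-based hinge objective — can be tied together in a short sketch. This is a simplified rendering rather than the released implementation; dimensions are illustrative and the margin is one of the values the paper tunes over.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoupledAttention(nn.Module):
    """Grid-wise comparison of two users' tweet matrices (K x n each),
    max-pooled into per-user attention weights, then attentive pooling."""

    def __init__(self, n=100):
        super().__init__()
        self.compare = nn.Linear(2 * n, 1)    # the W_c, b_c comparison network

    def forward(self, T1, T2):                # (batch, K, n) each
        K = T1.size(1)
        t1 = T1.unsqueeze(2).expand(-1, -1, K, -1)     # (batch, K, K, n)
        t2 = T2.unsqueeze(1).expand(-1, K, -1, -1)     # (batch, K, K, n)
        S = self.compare(torch.cat([t1, t2], dim=-1)).squeeze(-1)  # (batch, K, K)
        a1 = F.softmax(S.max(dim=2).values, dim=1)     # weights for user 1's tweets
        a2 = F.softmax(S.max(dim=1).values, dim=1)     # weights for user 2's tweets
        u1 = (a1.unsqueeze(-1) * T1).sum(dim=1)        # (batch, n)
        u2 = (a2.unsqueeze(-1) * T2).sum(dim=1)        # (batch, n)
        return u1, u2

def hinge_loss(u1, u2, u3, margin=0.2):
    """Pairwise ranking loss over (couple, negative-sample) cosine scores."""
    pos = F.cosine_similarity(u1, u2)
    neg = F.cosine_similarity(u1, u3)
    return F.relu(margin - pos + neg).mean()
```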
Our experimental procedure samples 100 users per test sample and ranks the golden sample amongst the 100 negative samples. In this section, we discuss the algorithms and baselines compared. Notably, there are no established benchmarks for this new problem. As such, we create 6 baselines to compare against our proposed CoupleNet. RankSVM (Tf-idf) - This model is a RankSVM (Support Vector Machine) trained on tf-idf vectors. This model is known to be a powerful vector space model (VSM) baseline. The feature vector of each user is a $k$ dimensional vector, representing the top- $k$ most common n-grams. The n-gram range is set to (1,3) and $k$ is set to 5000 in our experiments. Following the original implementation, the kernel of RankSVM is a linear kernel. RankSVM (Embed) - This model is a RankSVM model trained on pretrained (static, un-tuned) word embeddings. For each user pair, the feature vector is the sum of all words of both users. MLP (Embed) - This is a Multi-layered Perceptron (MLP) model that learns to non-linearly project static word embedding. Each word embedding is projected using 2 layered MLP with ReLU activations. The user representation is the sum of all transformed word embeddings. DeepCoNN (Deep Co-operative Neural Networks) BIBREF8 is a convolutional neural network (CNN). CNNs learn n-gram features by sliding weights across an input. In this model, all of a user's tweets are concatenated and encoded into a $d$ dimensional vector via a convolutional encoder. We use a fixed filter width of 3. DeepCoNN was originally proposed for item recommendation task using reviews. In our context, we adapt the DeepCoNN for our RSR task (tweets are analogous to reviews). Given the different objectives (MSE vs ranking), we also switch the factorization machine (FM) layer for the cosine similarity. The number of filters is 100. A max pooling layer is used to aggregate features. Baseline Gated Recurrent Unit (GRU) - We compare with a baseline GRU model. Similar to the DeepCoNN model, the baseline GRU considers a user to be a concatenation of all the user's tweets. The size of the recurrent cell is 100 dimensions. Hierarchical GRU (H-GRU) - This model learns user representations by first encoding each tweet with a GRU encoder. The tweet embedding is the last hidden state of the GRU. Subsequently, all tweet embeddings are summed. This model serves as an ablation baseline of our model, i.e., removing all attentional pooling functions. All models were implemented in Tensorflow on a Linux machine. For all neural network models, we follow a Siamese architecture (shared parameters for both users) and mainly vary the neural encoder. The cosine ranking function and hinge loss are then used to optimize all models. We train all models with the Adam BIBREF31 optimizer with a learning rate of $10^{-3}$ since this learning rate consistently produced the best results across all models. The batch size is tuned amongst $\lbrace 16,32,64\rbrace $ and models are trained for 10 epochs. We report the result based on the best performance on the development set. The margin is tuned amongst $\lbrace 0.1, 0.2, 0.5\rbrace $ . All model parameters are initialized with Gaussian distributions with a mean of 0 and standard deviation of $0.1$ . The L2 regularization is set to $10^{-8}$ . We use a dropout of $0.5$ after the convolution or recurrent layers. A dropout of $0.8$ is set after the Coupled Attention layer in our model. Text is tokenized with NLTK's tweet tokenizer. 
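The ranking-based evaluation procedure described above (scoring the ground-truth partner against 100 sampled negatives and computing HR@N, accuracy, MRR and mean rank) could be sketched as follows; the `score` function, user identifiers and random embeddings are placeholders rather than the actual experimental code.

```python
import random
import numpy as np

def evaluate(score, test_pairs, all_users, n_negatives=100, seed=0):
    """Rank the true partner against sampled negatives; return HR@N, accuracy, MRR, mean rank.
    `score(u, v)` is any compatibility scorer, e.g. cosine over learned user embeddings."""
    rng = random.Random(seed)
    hits = {1: 0, 3: 0, 5: 0, 10: 0}
    rr, ranks = [], []
    for u, partner in test_pairs:
        negatives = rng.sample([c for c in all_users if c not in (u, partner)], n_negatives)
        scores = [score(u, c) for c in [partner] + negatives]
        rank = 1 + sum(s > scores[0] for s in scores[1:])   # rank of the golden partner
        ranks.append(rank)
        rr.append(1.0 / rank)
        for n in hits:
            hits[n] += int(rank <= n)
    q = len(test_pairs)
    return {"HR@10": hits[10] / q, "HR@5": hits[5] / q, "HR@3": hits[3] / q,
            "Accuracy": hits[1] / q, "MRR": float(np.mean(rr)), "MeanRank": float(np.mean(ranks))}

# Illustrative usage with random vectors standing in for learned user representations.
emb = {u: np.random.default_rng(u).normal(size=16) for u in range(300)}
cos = lambda a, b: float(emb[a] @ emb[b] / (np.linalg.norm(emb[a]) * np.linalg.norm(emb[b])))
print(evaluate(cos, [(0, 1), (2, 3), (4, 5)], list(emb.keys())))
```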
We initialize the word embedding matrix with GloVe BIBREF32 trained on a Twitter corpus. All words that do not appear more than 5 times are assigned unknown tokens. All tweets are truncated at a fixed length of 10 tokens. Early experiments found that raising the number of tokens per tweet does not improve the performance. The number of tweets per user is tuned amongst $\lbrace 10,20,50,100,150,200\rbrace $ and reported in our experimental results. Discussion and Analysis Figure 3 reports the experimental results on the LoveBirds2M dataset. For all baselines and evaluation metrics, we compare across different settings of $\eta $ , the number of tweets per user that is used to train the model. Firstly, we observe that CoupleNet significantly outperforms most of the baselines. Across most metrics, there is almost a $180\%-200\%$ relative improvement over DeepCoNN, the state-of-the-art model for item recommendation with text data. The performance improvement over the baseline GRU model is also extremely large, i.e., with a relative improvement of approximately 4 times across all metrics. This shows that concatenating all of a user's tweets into a single document severely hurts performance. We believe that this is due to the inability of recurrent models to handle long sequences. Moreover, DeepCoNN performs about 2 times better than the baseline GRU model. On the other hand, we observe that H-GRU significantly improves over the baseline GRU model. In the H-GRU model, sequences are only $L=10$ long but are encoded $K$ times with shared parameters. On the other hand, the GRU model has to process $K \times L$ words, which inevitably causes performance to drop significantly. While the performance of the H-GRU model is reasonable, it is still significantly outperformed by our CoupleNet. We believe this is due to the incorporation of the attentional pooling layers in our model, which allows it to eliminate noise and focus on the important keywords. A surprising and notably strong baseline is the MLP (Embed) model, which outperforms DeepCoNN but still performs much worse than CoupleNet. On the other hand, RankSVM (Embed) performs poorly. We believe that this is attributed to the insufficiency of the linear kernel of the SVM. Since RankSVM and MLP are trained on the same features, we believe that the nonlinear ReLU transformations of the MLP improve the performance significantly. Moreover, the MLP model has 2 layers, which learn different levels of abstraction. Finally, the performance of RankSVM (Tf-idf) is also poor. However, we observe that RankSVM (Tf-idf) slightly outperforms RankSVM (Embed) occasionally. While other models display a clear trend in performance with respect to the number of tweets, the performance of RankSVM (Tf-idf) and RankSVM (Embed) seems to fluctuate across the number of user tweets. Finally, we observe a clear trend in performance gain with respect to the number of user tweets. This is intuitive because more tweets provide the model with greater insight into the user's interest and personality, allowing a better match to be made. The improvement seems to follow a logarithmic scale, which suggests diminishing returns beyond a certain number of tweets. Finally, we report the time cost of CoupleNet. With 200 tweets per user, the cost of training is approximately 2 minutes per epoch on a medium-grade GPU. This is much faster than expected because GRUs benefit from parallelism as they can process multiple tweets simultaneously. 
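A possible sketch of the preprocessing settings described at the start of this passage (NLTK tweet tokenization with lowercasing, a frequency cut-off of 5 for the vocabulary, truncation to 10 tokens per tweet and $K$ tweets per user) is given below; the special tokens, function names and padding scheme are our own assumptions.

```python
from collections import Counter
from nltk.tokenize import TweetTokenizer

MAX_LEN = 10            # L: tokens kept per tweet
MIN_COUNT = 5           # words appearing <= 5 times are mapped to the unknown token
TWEETS_PER_USER = 200   # K, tuned amongst {10, 20, 50, 100, 150, 200}

tokenizer = TweetTokenizer(preserve_case=False)   # lowercase while tokenizing

def build_vocab(all_tweets):
    counts = Counter(tok for tw in all_tweets for tok in tokenizer.tokenize(tw))
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, c in counts.items():
        if c > MIN_COUNT:
            vocab[word] = len(vocab)
    return vocab

def encode_tweet(tweet, vocab):
    toks = tokenizer.tokenize(tweet)[:MAX_LEN]
    ids = [vocab.get(t, vocab["<unk>"]) for t in toks]
    return ids + [vocab["<pad>"]] * (MAX_LEN - len(ids))   # pad to the fixed length L

def encode_user(tweets, vocab):
    return [encode_tweet(t, vocab) for t in tweets[:TWEETS_PER_USER]]   # K x L index matrix

# Tiny demo (with such a small corpus every word falls below MIN_COUNT and maps to <unk>).
corpus = ["good morning @USER", "BTS comeback was amazing"]
vocab = build_vocab(corpus)
print(encode_tweet("good morning @USER", vocab))
```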
Ablation Study In this section, we study the component-wise effectiveness of CoupleNet. We removed layers from CoupleNet in order to empirically motivate the design of each component. Firstly, we switched CoupleNet to a pointwise classification model, minimizing a cross entropy loss. We found that this halves the performance. As such, we observe the importance of pairwise ranking. Secondly, we swapped the cosine similarity for an MLP layer with scalar sigmoid activation (to ensure inputs lie within $[0,1]$ ). We also found that the performance drops significantly. Finally, we also observe that the attention layers of CoupleNet contribute substantially to the performance of the model. More specifically, removing both the GRU attention and coupled attention layers causes performance to drop by 13.9%. Removing the coupled attention results in a performance degradation of $2.5\%$ , while removing the GRU attention drops performance by $3.9\%$ . It also seems that dropping both degrades performance more than expected (i.e., more than a straightforward summation of the individual degradations). Overall Quantitative Findings In this subsection, we describe the overall findings of our quantitative experiments. Overall, the best HR@10 score for CoupleNet is about $64\%$ , i.e., if an application were to recommend the top 10 prospective partners to a user, then the ground truth would appear in this list $64\%$ of the time. Moreover, the accuracy is $25\%$ (ranking out of 100 candidates), which is also reasonably high. Given the intrinsic difficulty of the problem, we believe that the performance of CoupleNet on this new problem is encouraging and promising. To answer RQ1, we believe that text-based deep learning systems for relationship recommendation are plausible. However, special care has to be taken, i.e., model selection matters. The performance significantly improves when we include more tweets per user. This answers RQ2. This is intuitive since more tweets would enable better and more informative user representations, leading to a better matching performance. Qualitative Analysis In this section, we describe several insights and observations based on real examples from our LoveBirds2M dataset. One key advantage of CoupleNet is a greater extent of explainability due to the coupled attention mechanism. More specifically, we are able to identify which of each user's tweets contributed the most to the user representation and the overall prediction. By analyzing the attention output of user pairs, we are able to derive qualitative insights. As an overall conclusion to answer RQ3 (which will be elaborated on in the subsequent subsections), we found that CoupleNet is capable of explainable recommendations if there are explicit matching signals such as user interest and demographic similarity between user pairs. Finally, we discuss some caveats and limitations of our approach. Mutual Interest between Couples is Captured in CoupleNet We observed that CoupleNet is able to capture the mutual interest between couples. Table 2 shows an example from the LoveBirds2M dataset. In general, we found that most user pairs have noisy tweets. However, we also observed that whenever couple pairs have mutual interests, CoupleNet is able to assign a high attention weight to the relevant tweets. For example, in Table 2 , both users are fans of BTS, a Korean pop idol group. As such, tweets related to BTS are surfaced to the top via coupled attention. 
In the first tweet of User 1, tweets related to two entities, seokjin and hoseok, are ranked high (both entities are members of the pop idol group). This ascertains that CoupleNet is able to, to some extent, explain why two users are matched. This also validates the usage of our coupled attention mechanism. For instance, we could infer that User1 and User2 are matched because of their mutual interest in BTS. A limitation is that it is difficult to interpret why the other tweets (such as a thank you without much context, or supporting your family) were ranked highly. CoupleNet Infers User Attribute and Demographic by Word Usage We also discovered that CoupleNet learns to match users with similar attributes and demographics. For example, high school students will be recommended high school students at a higher probability. Note that location, age or any other information is not provided to CoupleNet. In other words, user attribute and demographic are solely inferred via a user's tweets. In Table 3 , we report an example in which the top-ranked tweets (via coupled attention) are high school related tweets (homecoming, high school reception). This shows two things: (1) the coupled attention shows that the following 3 tweets were the most important tweets for prediction and (2) CoupleNet learns to infer user attribute and demographic without being explicitly provided with such information. We also note that both users seem to have strongly positive tweets being ranked highly in their attention scores which might hint at the role of sentiment and mood in making prediction. CoupleNet Ranks Successfully Even Without Explicit Signals It is intuitive that not every user will post interest or demographic revealing tweets. For instance, some users might exclusively post about their emotions. When analyzing the ranking outputs of CoupleNet, we found that, interestingly, CoupleNet can successfully rank couple pairs even when there seem to be no explicit matching signal in the social profiles of both users. Table 4 shows an example where two user profiles do not share any explicit matching signals. User E and User F are a ground truth couple pair and the prediction of CoupleNet ranks User E with User F at the top position. The top tweets of User E and User F are mostly emotional tweets that are non-matching. Through this case, we understand that CoupleNet does not simply match people with similar emotions together. Notably, relationship recommendation is also a problem that humans may struggle with. Many times, the reason why two people are in a relationship may be implicit or unclear (even to humans). As such, the fact that CoupleNet ranks couple pairs correctly even when there is no explicit matching signals hints at its ability to go beyond simple keyword matching. In this case, we believe `hidden' (latent) patterns (such as emotions and personality) of the users are being learned and modeled in order to make recommendations. This shows that CoupleNet is not simply acting as a text-matching algorithm and learning features beyond that. Side Note, Caveats and Limitations While we show that our approach is capable of producing interpretable results (especially when explicit signals exist), the usefulness of its explainability may still have limitations, e.g., consider Table 4 where it is clear that the results are not explainable. Firstly, there might be a complete absence of any interpretable content in two user's profiles in the first place. 
Secondly, explaining relationships is also challenging for humans. As such, we recommend that the outputs of CoupleNet be used only as a reference. Given that a user's profile may easily contain hundreds to thousands of tweets, one possible use is to use this ranked list to enable more efficient analysis by humans (such as social scientists or linguists). We believe our work provides a starting point for explainable relationship recommendation. Conclusion We introduced a new problem of relationship recommendation. In order to construct a dataset, we employ a novel distant supervision scheme to obtain real world couples from social media. We proposed the first deep learning model for text-based relationship recommendation. Our deep learning model, CoupleNet, is characterized by its usage of hierarchical attention-based GRUs and coupled attention layers. Performance evaluation is overall optimistic and promising. Despite huge class imbalance, our approach is able to recommend at a reasonable precision ( $64\%$ at HR@10 and $25\%$ accuracy while being ranked against 100 negative samples). Finally, our qualitative analysis shows three key findings: (1) CoupleNet finds mutual interests between users for match-making, (2) CoupleNet infers user attributes and demographics in order to make recommendations, and (3) CoupleNet can successfully match-make couples even when there are no explicit matching signals in their social profiles, possibly leveraging emotion- and personality-based latent features for prediction.
No
1bc8904118eb87fa5949ad7ce5b28ad3b3082bd0
1bc8904118eb87fa5949ad7ce5b28ad3b3082bd0_0
Q: Where did they get the data for this project? Text: Introduction The social web has become a common means for seeking romantic companionship, made evident by the wide assortment of online dating sites that are available on the Internet. As such, the notion of relationship recommendation systems is not only interesting but also highly applicable. This paper investigates the possibility and effectiveness of a deep learning based relationship recommendation system. An overarching research question is whether modern artificial intelligence (AI) techniques, given social profiles, can successfully approximate successful relationships and measure the relationship compatibility of two users. Prior works in this area BIBREF0 , BIBREF1 , BIBREF2 , BIBREF0 have been mainly considered the `online dating recommendation' problem, i.e., focusing on the reciprocal domain of dating social networks (DSN) such as Tinder and OKCupid. While the functionality and mechanics of dating sites differ across the spectrum, the main objective is usually to facilitate communication between users, who are explicitly seeking relationships. Another key characteristic of many DSNs is the functionality that enables a user to express interest to another user, e.g., swiping right on Tinder. Therefore, many of prior work in this area focus on reciprocal recommendation, i.e., predicting if two users will like or text each other. Intuitively, we note that likes and replies on DSNs are not any concrete statements of compatibility nor evidence of any long-term relationship. For instance, a user may have many reciprocal matches on Tinder but eventually form meaningful friendships or relationships with only a small fraction. Our work, however, focuses on a seemingly similar but vastly different problem. Instead of relying on reciprocal signals from DSNs, our work proposes a novel distant supervision scheme, constructing a dataset of real world couples from regular social networks (RSN). Our distant supervision scheme is based on Twitter, searching for tweets such as `good night baby love you ' and `darling i love you so much ' to indicate that two users are in a stable and loving relationship (at least at that time). Using this labeled dataset, we train a distant supervision based learning to rank model to predict relationship compatibility between two users using their social profiles. The key idea is that social profiles contain cues pertaining to personality and interests that may be a predictor if whether two people are romantically compatible. Moreover, unlike many prior works that operate on propriety datasets BIBREF1 , BIBREF2 , BIBREF0 , our dataset is publicly and legally obtainable via the official Twitter API. In this work, we construct the first public dataset of approximately 2 million tweets for the task of relationship recommendation. Another key advantage is that our method trains on regular social networks, which spares itself from the inherent problems faced by DSNs, e.g., deceptive self-presentation, harassment, bots, etc. BIBREF3 . More specifically, self-presented information on DSNs might be inaccurate with the sole motivation of appearing more attractive BIBREF4 , BIBREF5 . In our work, we argue that measuring the compatibility of two users on RSN might be more suitable, eliminating any potential explicit self-presentation bias. Intuitively, social posts such as tweets can reveal information regarding personality, interests and attributes BIBREF6 , BIBREF7 . 
Finally, we propose CoupleNet, an end-to-end deep learning based architecture for estimating the compatibility of two users on RSNs. CoupleNet takes the social profiles of two users as an input and computes a compatibility score. This score can then be used to serve a ranked list to users and subsequently embedded in some kind of `who to follow' service. CoupleNet is characterized by its Coupled Attention, which learns to pay attention to parts of a user's profile dynamically based on the current candidate user. CoupleNet also does not require any feature engineering and is a proof-of-concept of a completely text-based relationship recommender system. Additionally, CoupleNet is also capable of providing explainable recommendations which we further elaborate in our qualitative experiments. Our Contributions This section provides an overview of the main contributions of this work. We propose a novel problem of relationship recommendation (RSR). Different from the reciprocal recommendation problem on DSNs, our RSR task operates on regular social networks (RSN), estimating long-term and serious relationship compatibility based on social posts such as tweets. We propose a novel distant supervision scheme to construct the first publicly available (distributable in the form of tweet ids) dataset for the RSR task. Our dataset, which we call the LoveBirds2M dataset consists of approximately 2 million tweets. We propose a novel deep learning model for the task of RSR. Our model, the CoupleNet uses hierarchical Gated Recurrent Units (GRUs) and coupled attention layers to model the interactions between two users. To the best of our knowledge, this is the first deep learning model for both RSR and reciprocal recommendation problems. We evaluate several strong machine learning and neural baselines on the RSR task. This includes the recently proposed DeepCoNN (Deep Co-operative Neural Networks) BIBREF8 for item recommendation. CoupleNet significantly outperforms DeepCoNN with a $200\%$ relative improvement in precision metrics such as Hit Ratio (HR@N). Overall findings show that a text-only deep learning system for RSR task is plausible and reasonably effective. We show that CoupleNet produces explainable recommendation by analyzing the attention maps of the coupled attention layers. Related Work In this section, we review existing literature that is related to our work. Reciprocal and Dating Recommendation Prior works on online dating recommendation BIBREF0 , BIBREF9 , BIBREF2 , BIBREF10 mainly focus on designing systems for dating social networks (DSN), i.e., websites whereby users are on for the specific purpose of finding a potential partner. Moreover, all existing works have primarily focused on the notion of reciprocal relationships, e.g., a successful signal implied a two way signal (likes or replies) between two users. Tu et al. BIBREF9 proposed a recommendation system based on Latent Dirichlet Allocation (LDA) to match users based on messaging and conversational history between users. Xia et al. BIBREF0 , BIBREF1 cast the dating recommendation problem into a link prediction task, proposing a graph-based approach based on user interactions. The CCR (Content-Collaborative Reciprocal Recommender System) BIBREF10 was proposed by Akehurtst et al. for the task of reciprocal recommendation, utilizing content-based features (user profile similarity) and collaborative filtering features (user-user interactions). 
However, all of their approaches operate on a propriety dataset obtained via collaboration with online dating sites. This hinders research efforts in this domain. Our work proposes a different direction from the standard reciprocal recommendation (RR) models. The objective of our work is fundamentally different, i.e., instead of finding users that might reciprocate to each other, we learn to functionally approximate the essence of a good (possibly stable and serious) relationship, learning a compatibility score for two users given their regular social profiles (e.g., Twitter). To the best of our knowledge, our work is the first to build a relationship recommendation model based on a distant supervision signal on real world relationships. Hence, we distinguish our work from all existing works on online dating recommendation. Moreover, our dataset is obtained legally via the official twitter API and can be distributed for future research. Unlike prior work BIBREF0 which might invoke privacy concerns especially with the usage of conversation history, the users employed in our study have public twitter feeds. We note that publicly available twitter datasets have been the cornerstone of many scientific studies especially in the fields of social science and natural language processing (NLP). Across scientific literature, several other aspects of online dating have been extensively studied. Nagarajan and Hearst BIBREF11 studied self-presentation on online dating sites by specifically examining language on dating profiles. Hancock et al. presented an analysis on deception and lying on online dating profiles BIBREF5 , reporting that at least $50\%$ of participants provide deceptive information pertaining to physical attributes such as height, weight or age. Toma et al. BIBREF4 investigated the correlation between linguistic cues and deception on online dating profiles. Maldeniya et al. BIBREF12 studied how textual similarity between user profiles impacts the likelihood of reciprocal behavior. A recent work by Cobb and Kohno BIBREF13 provided an extensive study which tries to understand users’ privacy preferences and practices in online dating. Finally, BIBREF14 studied the impacts of relationship breakups on Twitter, revealing many crucial insights pertaining to the social and linguistic behaviour of couples that have just broken up. In order to do so, they collect likely couple pairs and monitor them over a period of time. Notably, our data collection procedure is reminscent of theirs, i.e., using keyword-based filters to find highly likely couple pairs. However, their work utilizes a second stage crowdworker based evaluation to check for breakups. User Profiling and Friend Recommendation Our work is a cross between user profiling and user match-making systems. An earlier work, BIBREF15 proposed a gradient-boosted learning-to-rank model for match-making users on a dating forum. While the authors ran experiments on a dating service website, the authors drew parallels with other match-making services such as job-seeking forums. The user profiling aspect in our work comes from the fact that we use social networks to learn user representations. As such, our approach performs both user profiling and then match-making within an end-to-end framework. BIBREF7 proposed a deep learning personality detection system which is trained on social posts on Weibo and Twitter. BIBREF6 proposed a Twitter personality detection system based on machine learning models. 
BIBREF16 learned multi-view embeddings of Twitter users using canonical correlation analysis for friend recommendation. From an application perspective, our work is also highly related to `People you might know' or `who to follow' (WTF) services on RSNs BIBREF17 albeit taking a romantic twist. In practical applications, our RSN based relationship recommender can either be deployed as part of a WTF service, or to increase the visibility of the content of users with high compatibility score. Deep Learning and Collaborative Ranking One-class collaborative filtering (also known as collaborative ranking) BIBREF18 is a central research problem in IR. In general, deep learning BIBREF19 , BIBREF20 , BIBREF21 has also been recently very popular for collaborative ranking problems today. However, to the best of our knowledge, our work is the first deep learning based approach for the online dating domain. BIBREF22 provides a comprehensive overview of deep learning methods for CF. Notably, our approach also follows the neural IR approach which is mainly concerned with modeling document-query pairs BIBREF23 , BIBREF24 , BIBREF25 or user-item pairs BIBREF8 , BIBREF26 since we deal with the textual domain. Finally, our work leverages recent advances in deep learning, namely Gated Recurrent Units BIBREF27 and Neural Attention BIBREF28 , BIBREF29 , BIBREF30 . The key idea of neural attention is to learn to attend to various segments of a document, eliminating noise and emphasizing the important segments for prediction. Problem Definition and Notation In this section, we introduce the formal problem definition of this work. Definition 3.1 Let $U$ be the set of Users. Let $s_i$ be the social profile of user $i$ which is denoted by $u_i \in U$ . Each social profile $s_i \in S$ contains $\eta $ documents. Each document $d_i \in s_i$ contains a maximum of $L$ words. Given a user $u_i$ and his or her social profile $s_i$ , the task of the Relationship Recommendation problem is to produce a ranked list of candidates based on a computed relevance score $s_i$0 where $s_i$1 is the social profile of the candidate user $s_i$2 . $s_i$3 is a parameterized function. There are mainly three types of learning to rank methods, namely pointwise, pairwise and list-wise. Pointwise considers each user pair individually, computing a relevance score solely based on the current sample, i.e., binary classification. Pairwise trains via noise constrastive estimation, which often minimizes a loss function like the margin based hinge loss. List-wise considers an entire list of candidates and is seldom employed due to the cumbersome constraints that stem from implementation efforts. Our proposed CoupleNet employs a pairwise paradigm. The intuition for this is that, relationship recommendation is considered very sparse and has very imbalanced classes (for each user, only one ground truth exists). Hence, training binary classification models suffers from class imbalance. Moreover, the good performance of pairwise learning to rank is also motivated by our early experiments. The Love Birds Dataset Since there are no publicly available datasets for training relationship recommendation models, we construct our own. The goal is to construct a list of user pairs in which both users are in relationship. Our dataset is constructed via distant supervision from Twitter. We call this dataset the Love Birds dataset. 
This not only references the metaphorical meaning of the phrase `love birds' but also deliberately references the fact that the Twitter icon is a bird. This section describes the construction of our dataset. Figure 1 describes the overall process of our distant supervision framework. Distant Supervision Using the Twitter public API, we collected tweets with emojis contains the keyword `heart' in its description. The key is to find tweets where a user expresses love to another user. We observed that there are countless tweets such as `good night baby love you ' and `darling i love you so much ' on Twitter. As such, the initial list of tweets is crawled by watching heart and love-related emojis, e.g., , , etc. By collecting tweets containing these emojis, we form our initial candidate list of couple tweets (tweets in which two people in a relationship send to each other). Through this process, we collected 10 million tweets over a span of a couple of days. Each tweet will contain a sender and a target (the user mentioned and also the target of affection). We also noticed that the love related emojis do not necessarily imply a romantic relationship between two users. For instance, we noticed that a large percentage of such tweets are affection towards family members. Given the large corpus of candidates, we can apply a stricter filtering rule to obtain true couples. To this end, we use a ban list of words such as 'bro', 'sis', `dad', `mum' and apply regular expression based filtering on the candidates. We also observed a huge amount of music related tweets, e.g., `I love this song so much !'. Hence, we also included music-related keywords such as `perform', `music', `official' and `song'. Finally, we also noticed that people use the heart emoji frequently when asking for someone to follow them back. As such, we also ban the word `follow'. We further restricted tweets to contain only a single mention. Intuitively, mentioning more than one person implies a group message rather than a couple tweet. We also checked if one user has a much higher follower count over the other user. In this case, we found that this is because people send love messages to popular pop idols (we found that a huge bulk of crawled tweets came from fangirls sending love message to @harrystylesofficial). Any tweet with a user containing more than 5K followers is being removed from the candidate list. Forming Couple Pairs Finally, we arrive at 12K tweets after aggressive filtering. Using the 12K `cleaned' couple tweets, we formed a list of couples. We sorted couples in alphabetical order, i.e., (clara, ben) becomes (ben, clara) and removed duplicate couples to ensure that there are no `bidirectional' pairs in the dataset. For each user on this list, we crawled their timeline and collected 200 latest tweets from their timeline. Subsequently, we applied further preprocessing to remove explicit couple information. Notably, we do not differentiate between male and female users (since twitter API does not provide this information either). The signal for distant supervision can be thought of as an explicit signal which is commonplace in recommendation problems that are based on explicit feedback (user ratings, reviews, etc.). In this case, an act (tweet) of love / affection is the signal used. We call this explicit couple information. To ensure that there are no additional explicit couple information in each user's timeline, we removed all tweets with any words of affection (heart-related emojis, `love', `dear', etc.). 
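The filtering heuristics described in this section could be approximated with a sketch like the following; the ban list is abbreviated, the regular expressions and helper names are ours, and the initial emoji-based collection step is omitted.

```python
import re

BAN_WORDS = {"bro", "sis", "dad", "mum",               # family-related terms (abbreviated list)
             "perform", "music", "official", "song",   # music-related tweets
             "follow"}                                  # follow-back requests
MAX_FOLLOWERS = 5000
MENTION_RE = re.compile(r"@\w+")

def is_couple_tweet(text, sender_followers, target_followers):
    """Heuristic filter for 'couple tweets' exchanged between two users in a relationship."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & BAN_WORDS:
        return False
    if len(MENTION_RE.findall(text)) != 1:             # exactly one mentioned user
        return False
    if max(sender_followers, target_followers) > MAX_FOLLOWERS:
        return False                                   # likely fan-to-celebrity tweet
    return True

def dedupe_couples(pairs):
    """Sort each (sender, target) pair alphabetically and drop bidirectional duplicates."""
    return sorted({tuple(sorted(p)) for p in pairs})

print(is_couple_tweet("good night baby love you @USER", 310, 450))   # True
print(is_couple_tweet("i love this song so much @USER", 310, 450))   # False (music-related)
print(dedupe_couples([("clara", "ben"), ("ben", "clara")]))          # [('ben', 'clara')]
```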
We also masked all mentions with the @USER symbol. This is to ensure that there is no explicit leak of signals in the final dataset. Naturally, a more accurate method is to determine the date in which users got to know each other and then subsequently construct timelines based on tweets prior to that date. Unfortunately, there is no automatic and trivial way to easily determine this information. Consequently, a fraction of their timeline would possibly have been tweeted when the users have already been together in a relationship. As such, in order to remove as much 'couple' signals, we try our best to mask such information. Why Twitter? Finally, we answer the question of why Twitter was chosen as our primary data source. One key desiderata was that the data should be public, differentiating ourselves from other works that use proprietary datasets BIBREF0 , BIBREF9 . In designing our experiments, we considered two other popular social platforms, i.e., Facebook and Instagram. Firstly, while Facebook provides explicit relationship information, we found that there is a lack of personal, personality-revealing posts on Facebook. For a large majority of users, the only signals on Facebook mainly consist of shares and likes of articles. The amount of original content created per user is extremely low compared to Twitter whereby it is trivial to obtain more than 200 tweets per user. Pertaining to Instagram, we found that posts are also generally much sparser especially in regards to frequency, making it difficult to amass large amounts of data per user. Moreover, Instagram adds a layer of difficulty as Instagram is primarily multi-modal. In our Twitter dataset, we can easily mask explicit couple information by keyword filters. However, it is non-trivial to mask a user's face on an image. Nevertheless, we would like to consider Instagram as an interesting line of future work. Dataset Statistics Our final dataset consists of 1.858M tweets (200 tweets per user). The total number of users is 9290 and 4645 couple pairs. The couple pairs are split into training, testing and development with a 80/10/10 split. The total vocabulary size (after lowercasing) is 2.33M. Ideally, more user pairs could be included in the dataset. However, we also note that the dataset is quite large (almost 2 million tweets) already, posing a challenge for standard hardware with mid-range graphic cards. Since this is the first dataset created for this novel problem, we leave the construction of a larger benchmark for future work. Our Proposed Approach In this section, we introduce our deep learning architecture - the CoupleNet. Overall, our neural architecture is a hierarchical recurrent model BIBREF28 , utilizing multi-layered attentions at different hierarchical levels. An overview of the model architecture is illustrated in Figure 2 . There are two sides of the network, one for each user. Our network follows a `Siamese' architecture, with shared parameters for each side of the network. A single data input to our model comprises user pairs ( $U1, U2$ ) (couples) and ( $U1, U3$ ) (negative samples). Each user has $K$ tweets each with a maximum length of $L$ . The value of $K$ and $L$ are tunnable hyperparameters. Embedding Layer For each user, the inputs to our network are a matrix of indices, each corresponding to a specific word in the dictionary. The embedding matrix $\textbf {W} \in \mathbb {R}^{d \times |V|}$ acts as a look-up whereby each index selects a $d$ dimensional vector, i.e., the word representation. 
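As a complement to the filtering sketch above, the timeline-cleaning step described at the beginning of this passage (masking mentions with the @USER placeholder and dropping affection-revealing tweets) might look as follows; the affection word and emoji list shown is only an illustrative subset.

```python
import re

MENTION_RE = re.compile(r"@\w+")
# Affection markers removed from timelines (illustrative subset; the full list also
# covers further heart-related emojis and words of affection).
AFFECTION_RE = re.compile(r"\b(love|dear)\b|❤|💕|😍", re.IGNORECASE)

def mask_mentions(tweet):
    """Replace every @mention with the @USER placeholder."""
    return MENTION_RE.sub("@USER", tweet)

def clean_timeline(tweets):
    """Drop affection-revealing tweets, then mask mentions in the remaining ones."""
    return [mask_mentions(t) for t in tweets if not AFFECTION_RE.search(t)]

timeline = ["hey @clara see you at practice", "love you so much @clara ❤"]
print(clean_timeline(timeline))   # ['hey @USER see you at practice']
```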
Thus, for each user, we have $K \times L$ vectors of dimension size $d$ . The embedding layer is shared for all users and is initialized with pretrained word vectors. Learning Tweet Representations For each user, the output of the embedding layer is a tensor of shape $K \times L \times d$ . We pass each tweet through a recurrent neural network. More specifically, we use Gated Recurrent Units (GRU) encoders with attentional pooling to learn a $n$ dimensional vector for each tweet. The GRU accepts a sequence of vectors and recursively composes each input vector into a hidden state. The recursive operation of the GRU is defined as follows: $ z_t &= \sigma (W_z x_t + U_z h_{t-1} + b_z) \\ r_t &= \sigma (W_r x_t + U_r h_{t-1} + b_r) \\ \hat{h_t} &= tanh (W_h \: x_t + U_h (r_t h_{t-1}) + b_h) \\ h_t &= z_t \: h_{t-1} + (1-z_t) \: \hat{h_t} $ where $h_t$ is the hidden state at time step $t$ , $z_t$ and $r_t$ are the update gate and reset gate at time step $t$ respectively. $\sigma $ is the sigmoid function. $x_t$ is the input to the GRU unit at time step $t$ . Note that time step is analogous to parsing a sequence of words sequentially in this context. $W_z, W_r \in \mathbb {R}^{d \times n}, W_h \in \mathbb {R}^{n \times n}$ are parameters of the GRU layer. The output of each GRU is a sequence of hidden vectors $h_1, h_2 \cdots h_L \in \textbf {H}$ , where $\textbf {H} \in \mathbb {R}^{L \times n}$ . Each hidden vector is $n$ dimensions, which corresponds to the parameter size of the GRU. To learn a single $n$ dimensional vector, the last hidden vector $h_L$ is typically considered. However, a variety of pooling functions such as the average pooling, max pooling or attentional pooling can be adopted to learn more informative representations. More specifically, neural attention mechanisms are applied across the matrix $\textbf {H}$ , learning a weighted representation of all hidden vectors. Intuitively, this learns to select more informative words to be passed to subsequent layers, potentially reducing noise and improving model performance. $ \textbf {Y} = \text{tanh}(W_y \: \textbf {H}) \:\:;\:\: a= \text{softmax}(w^{\top } \: \textbf {Y}) \:\:;\:\: r = \textbf {H}\: a^{\top } $ where $W_y \in \mathbb {R}^{n \times n}, w \in \mathbb {R}^{n}$ are the parameters of the attention pooling layer. The output $r \in \mathbb {R}^{n}$ is the final vector representation of the tweet. Note that the parameters of the attentional pooling layer are shared across all tweets and across both users. Learning User Representations Recall that each user is represented by $K$ tweets and for each tweet we have a $n$ dimensional vector. Let $t^i_1, t^i_2 \cdots t^i_K$ be all the tweets for a given user $i$ . In order to learn a fixed $n$ dimensional vector for each user, we require a pooling function across each user's tweet embeddings. In order to do so, we use a Coupled Attention Layer that learns to attend to U1 based on U2 (and vice versa). Similarly, for the negative sample, coupled attention is applied to (U1, U3) instead. However, we only describe the operation of (U1, U2) for the sake of brevity. The key intuition behind the coupled attention layer is to learn attentional representations of U1 with respect to U2 (and vice versa). Intuitively, this compares each tweet of U1 with each tweet of U2 and learns to weight each tweet based on this grid-wise comparison scheme. 
Let U1 and U2 be represented by a sequence of $K$ tweets (each of which is a $n$ dimensional vector) and let $T_1, T_2 \in \mathbb {R}^{k \times n}$ be the tweet matrix for U1 and U2 respectively. For each tweet pair ( $t^{1}_i, t^{2}_j$ ), we utilize a feed-forward neural network to learn a similarity score between each tweet. As such, each value of the similarity grid is computed: $$s_{ij} = W_{c} \: [t^{1}_i; t^{2}_j] + b_c$$ (Eq. 28) where $W_c \in \mathbb {R}^{n \times 1}$ and $b_c \in \mathbb {R}^{1}$ are parameters of the feed-forward neural network. Note that these parameters are shared across all tweet pair comparisons. The score $s_{ij}$ is a scalar value indicating the similarity between tweet $i$ of U1 and tweet $j$ of U2. Given the similarity matrix $\textbf {S} \in \mathbb {R}^{K \times K}$ , the strongest signals across each dimension are aggregated using max pooling. For example, by taking a max over the columns of S, we regard the importance of tweet $i$ of U1 as the strongest influence it has over all tweets of U2. The result of this aggregation is two $K$ length vectors which are used to attend over the original sequence of tweets. The following operations describe the aggregation functions: $$a^{row} = \text{smax}(\max _{row} \textbf {S}) \:\:\:\text{and}\:\:\: a^{col} = \text{smax}(\max _{col} \textbf {S})$$ (Eq. 30) where $a^{row}, a^{col} \in \mathbb {R}^{K}$ and smax is the softmax function. Subsequently, both of these vectors are used to attentively pool the tweet vectors of each user. $ u_1 = T_1 \: a^{col} \:\:\text{and}\:\:u_2 = T_2 \: a^{row} $ where $u_1, u_2 \in \mathbb {R}^{n}$ are the final user representations for U1 and U2. Learning to Rank and Training Procedure Given embeddings $u_1, u_2, u_3$ , we introduce our similarity modeling layer and learning to rank objective. Given $u_1$ and $u_2$ , the similarity between each user pair is modeled as follows: $$s(u_1, u_2) = \frac{u_i \cdot u_2}{|u_1| |u_2|}$$ (Eq. 32) which is the cosine similarity function. Subsequently, the pairwise ranking loss is optimized. We use the margin-based hinge loss to optimize our model. $$J = \max \lbrace 0, \lambda - s(u_1,u_2) + s(u_1, u_3) \rbrace $$ (Eq. 33) where $\lambda $ is the margin hyperparameter, $s(u_1, u_2)$ is the similarity score for the ground truth (true couples) and $s(u_1, u_3)$ is the similarity score for the negative sample. This function aims to discriminate between couples and non-couples by increasing the margin between the ranking scores of these user pairs. Parameters of the network can be optimized efficiently with stochastic gradient descent (SGD). Empirical Evaluation Our experiments are designed to answer the following Research Questions (RQs). Experimental Setup All empirical evaluation is conducted on our LoveBirds dataset which has been described earlier. This section describes the evaluation metrics used and evaluation procedure. Our problem is posed as a learning-to-rank problem. As such, the evaluation metrics used are as follows: Hit Ratio @N is the ratio of test samples which are correctly retrieved within the top $N$ users. We evaluate on $N=10,5,3$ . Accuracy is the number of test samples that have been correctly ranked in the top position. Mean Reciprocal Rank (MRR) is a commonly used information retrieval metric. The reciprocal rank of a single test sample is the multiplicative inverse of the rank. The MRR is computed by $\frac{1}{Q} \sum ^{|Q|}_{i=1} \frac{1}{rank_i}$ . Mean Rank is the average rank of all test samples. 
Our experimental procedure samples 100 users per test sample and ranks the golden sample amongst the 100 negative samples. In this section, we discuss the algorithms and baselines compared. Notably, there are no established benchmarks for this new problem. As such, we create 6 baselines to compare against our proposed CoupleNet. RankSVM (Tf-idf) - This model is a RankSVM (Support Vector Machine) trained on tf-idf vectors. This model is known to be a powerful vector space model (VSM) baseline. The feature vector of each user is a $k$ dimensional vector, representing the top- $k$ most common n-grams. The n-gram range is set to (1,3) and $k$ is set to 5000 in our experiments. Following the original implementation, the kernel of RankSVM is a linear kernel. RankSVM (Embed) - This model is a RankSVM model trained on pretrained (static, un-tuned) word embeddings. For each user pair, the feature vector is the sum of all words of both users. MLP (Embed) - This is a Multi-layered Perceptron (MLP) model that learns to non-linearly project static word embedding. Each word embedding is projected using 2 layered MLP with ReLU activations. The user representation is the sum of all transformed word embeddings. DeepCoNN (Deep Co-operative Neural Networks) BIBREF8 is a convolutional neural network (CNN). CNNs learn n-gram features by sliding weights across an input. In this model, all of a user's tweets are concatenated and encoded into a $d$ dimensional vector via a convolutional encoder. We use a fixed filter width of 3. DeepCoNN was originally proposed for item recommendation task using reviews. In our context, we adapt the DeepCoNN for our RSR task (tweets are analogous to reviews). Given the different objectives (MSE vs ranking), we also switch the factorization machine (FM) layer for the cosine similarity. The number of filters is 100. A max pooling layer is used to aggregate features. Baseline Gated Recurrent Unit (GRU) - We compare with a baseline GRU model. Similar to the DeepCoNN model, the baseline GRU considers a user to be a concatenation of all the user's tweets. The size of the recurrent cell is 100 dimensions. Hierarchical GRU (H-GRU) - This model learns user representations by first encoding each tweet with a GRU encoder. The tweet embedding is the last hidden state of the GRU. Subsequently, all tweet embeddings are summed. This model serves as an ablation baseline of our model, i.e., removing all attentional pooling functions. All models were implemented in Tensorflow on a Linux machine. For all neural network models, we follow a Siamese architecture (shared parameters for both users) and mainly vary the neural encoder. The cosine ranking function and hinge loss are then used to optimize all models. We train all models with the Adam BIBREF31 optimizer with a learning rate of $10^{-3}$ since this learning rate consistently produced the best results across all models. The batch size is tuned amongst $\lbrace 16,32,64\rbrace $ and models are trained for 10 epochs. We report the result based on the best performance on the development set. The margin is tuned amongst $\lbrace 0.1, 0.2, 0.5\rbrace $ . All model parameters are initialized with Gaussian distributions with a mean of 0 and standard deviation of $0.1$ . The L2 regularization is set to $10^{-8}$ . We use a dropout of $0.5$ after the convolution or recurrent layers. A dropout of $0.8$ is set after the Coupled Attention layer in our model. Text is tokenized with NLTK's tweet tokenizer. 
We initialize the word embedding matrix with Glove BIBREF32 trained on Twitter corpus. All words that do not appear more than 5 times are assigned unknown tokens. All tweets are truncated at a fixed length of 10 tokens. Early experiments found that raising the number of tokens per tweet does not improve the performance. The number of tweets per user is tuned amongst $\lbrace 10,20,50,100,150,200\rbrace $ and reported in our experimental results. Discussion and Analysis Figure 3 reports the experimental results on the LoveBirds2M dataset. For all baselines and evaluation metrics, we compare across different settings of $\eta $ , the number of tweets per user that is used to train the model. Firstly, we observe that CoupleNet significantly outperforms most of the baselines. Across most metrics, there is almost a $180\%-200\%$ relative improvement over DeepCoNN, the state-of-the-art model for item recommendation with text data. The performance improvement over the baseline GRU model is also extremely large, i.e., with a relative improvement of approximately 4 times across all metrics. This shows that concatenating all of a user's tweets into a single document severely hurts performance. We believe that this is due to the inability of recurrent models to handle long sequences. Moreover, the DeepCoNN performs about 2 times better than the baseline GRU model. On the other hand, we observe that H-GRU significantly improves the baseline GRU model. In the H-GRU model, sequences are only $L=10$ long but are encoded $K$ times with shared parameters. On the other hand, the GRU model has to process $K \times L$ words, which inevitably causes performance to drop significantly. While the performance of the H-GRU model is reasonable, it is still significantly outperformed by our CoupleNet. We believe this is due to the incorporation of the attentional pooling layers in our model, which allows it to eliminate noise and focus on the important keywords. A surprising and notable strong baseline is the MLP (Embed) model which outperforms DeepCoNN but still performs much worse than CoupleNet. On the other hand, RankSVM (Embed) performs poorly. We believe that this is attributed to the insufficiency of the linear kernel of the SVM. Since RankSVM and MLP are trained on the same features, we believe that nonlinear ReLU transformations of the MLP improve the performance significantly. Moreover, the MLP model has 2 layers, which learn different levels of abstractions. Finally, the performance of RankSVM (Tf-idf) is also poor. However, we observe that RankSVM (Tf-idf) slightly outperforms RankSVM (Embed) occasionally. While other models display a clear trend in performance with respect to the number of tweets, the performance of RankSVM (Tf-idf) and RankSVM (Embed) seem to fluctuate across the number of user tweets. Finally, we observe a clear trend in performance gain with respect to the number of user tweets. This is intuitive because more tweets provide the model with greater insight into the user's interest and personality, allowing a better match to be made. The improvement seems to follow a logarithmic scale which suggests diminishing returns beyond a certain number of tweets. Finally, we report the time cost of CoupleNet. With 200 tweets per user, the cost of training is approximately $\approx 2$ mins per epoch on a medium grade GPU. This is much faster than expected because GRUs benefit from parallism as they can process multiple tweets simultaneously. 
Ablation Study In this section, we study the component-wise effectiveness of CoupleNet. We removed layers from CoupleNet in order to empirically motivate the design of each component. Firstly, we switched CoupleNet to a pointwise classification model, minimizing a cross entropy loss. We found that this halves the performance. As such, we observe the importance of pairwise ranking. Secondly, we swapped cosine similarity for a MLP layer with scalar sigmoid activation (to ensure inputs lie within $[0,1]$ ). We also found that the performance drops significantly. Finally, we also observe that the attention layers of CoupleNet contribute substantially to the performance of the model. More specifically, removing both the GRU attention and coupled attention layers cause performance to drop by 13.9%. Removing the couple attention suffers a performance degrade of $2.5\%$ while removing the GRU attention drops performance by $3.9\%$ . It also seems that dropping both degrades performance more than expected (not a straightforward summation of performance degradation). Overall Quantitative Findings In this subsection, we describe the overall findings of our quantitative experiments. Overall, the best HR@10 score for CoupleNet is about $64\%$ , i.e., if an application would to recommend the top 10 prospective partners to a user, then the ground truth will appear in this list $64\%$ of the time. Moreover, the accuracy is $25\%$ (ranking out of 100 candidates) which is also reasonably high. Given the intrinsic difficulty of the problem, we believe that the performance of CoupleNet on this new problem is encouraging and promising. To answer RQ1, we believe that text-based deep learning systems for relationship recommendation are plausible. However, special care has to be taken, i.e., model selection matters. The performance significantly improves when we include more tweets per user. This answers RQ2. This is intuitive since more tweets would enable better and more informative user representations, leading to a better matching performance. Qualitative Analysis In this section, we describe several insights and observations based on real examples from our LoveBirds20 dataset. One key advantage of CoupleNet is a greater extent of explainability due to the coupled attention mechanism. More specifically, we are able to obtain which of each user's tweets contributed the most to the user representation and the overall prediction. By analyzing the attention output of user pairs, we are able to derive qualitative insights. As an overall conclusion to answer RQ3 (which will be elaborated by in the subsequent subsections), we found that CoupleNet is capable of explainable recommendations if there are explicit matching signals such as user interest and demographic similarity between user pairs. Finally, we discuss some caveats and limitations of our approach. Mutual Interest between Couples is Captured in CoupleNet We observed the CoupleNet is able to capture the mutual interest between couples. Table 2 shows an example from the LoveBirds2M dataset. In general, we found that most user pairs have noisy tweets. However, we also observed that whenever couple pairs have mutual interest, CoupleNet is able to assign a high attention weight to the relevant tweets. For example, in Table 2 , both couples are fans of BTS, a Korean pop idol group. As such, tweets related to BTS are surfaced to the top via coupled attention. 
In the first tweet of User 1, tweets related to two entities, seokjin and hoseok, are ranked high (both entities are members of the pop idol group). This ascertains that CoupleNet is able to, to some extent, explain why two users are matched. This also validates the usage of our coupled attention mechanism. For instance, we could infer that User1 and User2 are matched because of their mutual interest in BTS. A limitation is that it is difficult to interpret why the other tweets (such as a thank you without much context, or supporting your family) were ranked highly. CoupleNet Infers User Attribute and Demographic by Word Usage We also discovered that CoupleNet learns to match users with similar attributes and demographics. For example, high school students will be recommended high school students at a higher probability. Note that location, age or any other information is not provided to CoupleNet. In other words, user attribute and demographic are solely inferred via a user's tweets. In Table 3 , we report an example in which the top-ranked tweets (via coupled attention) are high school related tweets (homecoming, high school reception). This shows two things: (1) the coupled attention shows that the following 3 tweets were the most important tweets for prediction and (2) CoupleNet learns to infer user attribute and demographic without being explicitly provided with such information. We also note that both users seem to have strongly positive tweets being ranked highly in their attention scores which might hint at the role of sentiment and mood in making prediction. CoupleNet Ranks Successfully Even Without Explicit Signals It is intuitive that not every user will post interest or demographic revealing tweets. For instance, some users might exclusively post about their emotions. When analyzing the ranking outputs of CoupleNet, we found that, interestingly, CoupleNet can successfully rank couple pairs even when there seem to be no explicit matching signal in the social profiles of both users. Table 4 shows an example where two user profiles do not share any explicit matching signals. User E and User F are a ground truth couple pair and the prediction of CoupleNet ranks User E with User F at the top position. The top tweets of User E and User F are mostly emotional tweets that are non-matching. Through this case, we understand that CoupleNet does not simply match people with similar emotions together. Notably, relationship recommendation is also a problem that humans may struggle with. Many times, the reason why two people are in a relationship may be implicit or unclear (even to humans). As such, the fact that CoupleNet ranks couple pairs correctly even when there is no explicit matching signals hints at its ability to go beyond simple keyword matching. In this case, we believe `hidden' (latent) patterns (such as emotions and personality) of the users are being learned and modeled in order to make recommendations. This shows that CoupleNet is not simply acting as a text-matching algorithm and learning features beyond that. Side Note, Caveats and Limitations While we show that our approach is capable of producing interpretable results (especially when explicit signals exist), the usefulness of its explainability may still have limitations, e.g., consider Table 4 where it is clear that the results are not explainable. Firstly, there might be a complete absence of any interpretable content in two user's profiles in the first place. 
Secondly, explaining relationships is also challenging for humans. As such, we recommend that the outputs of CoupleNet be used only as a reference. Given that a user's profile may easily contain hundreds to thousands of tweets, one possible use is to leverage this ranked list to enable more efficient analysis by humans (such as social scientists or linguists). We believe our work provides a starting point for explainable relationship recommendation. Conclusion We introduced a new problem of relationship recommendation. In order to construct a dataset, we employed a novel distant supervision scheme to obtain real-world couples from social media. We proposed the first deep learning model for text-based relationship recommendation. Our deep learning model, CoupleNet, is characterized by its use of hierarchical attention-based GRUs and coupled attention layers. The overall performance evaluation is promising: despite huge class imbalance, our approach is able to recommend at a reasonable precision ($64\%$ at HR@10 and $25\%$ accuracy when ranking against 100 negative samples). Finally, our qualitative analysis shows three key findings: (1) CoupleNet finds mutual interests between users for match-making, (2) CoupleNet infers user attributes and demographics in order to make recommendations, and (3) CoupleNet can successfully match-make couples even when there are no explicit matching signals in their social profiles, possibly leveraging emotion- and personality-based latent features for prediction.
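The qualitative analysis above relies on ranking each user's tweets by their coupled-attention weight to see which tweets drove a prediction. Below is a minimal sketch of that post-hoc step, assuming the per-tweet attention weights have already been extracted from the model; the variable names and example values are illustrative only.

```python
import numpy as np

def top_attended_tweets(tweets, attention_weights, k=3):
    """Surface the k tweets with the highest attention weight for one user.

    tweets: list of tweet strings for the user.
    attention_weights: array of per-tweet weights taken from the coupled
    attention layer (assumed already normalized, e.g. via softmax).
    """
    order = np.argsort(attention_weights)[::-1][:k]   # indices of largest weights first
    return [(tweets[i], float(attention_weights[i])) for i in order]

# Illustrative usage with made-up weights.
weights = np.array([0.05, 0.62, 0.08, 0.25])
tweets = ["thank you!", "so excited for the concert", "homework...", "supporting your family"]
print(top_attended_tweets(tweets, weights, k=2))
```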
Twitter
5dc1aca619323ea0d4717d1f825606b2b7c21f01
5dc1aca619323ea0d4717d1f825606b2b7c21f01_0
Q: Which major geographical regions are studied? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
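The preprocessing pipeline described above normalizes tweets and uses Viterbi segmentation with corpus word statistics to split hashtags. The following is a rough, simplified sketch of those two steps (a unigram-only Viterbi segmenter over Brown-corpus counts plus regex tag replacement); the smoothing, corpus choice, and tag names are assumptions rather than the authors' exact pipeline, which also uses bigram statistics and a sentiment-aware tokenizer.

```python
import re
import math
from collections import Counter

import nltk
# nltk.download("brown")  # word frequencies; the paper refers to "the NLTK English Corpus"

WORDS = Counter(w.lower() for w in nltk.corpus.brown.words() if w.isalpha())
TOTAL = sum(WORDS.values())

def word_prob(w):
    # Laplace-smoothed unigram probability; the smoothing choice is an assumption,
    # and the bigram statistics described in the text are omitted here.
    return (WORDS[w] + 1) / (TOTAL + len(WORDS))

def segment(text, max_len=20):
    """Unigram Viterbi segmentation, e.g. for splitting hashtags like 'metoomovement'."""
    n = len(text)
    best = [0.0] + [-math.inf] * n   # best log-probability of a segmentation ending at i
    back = [0] * (n + 1)             # start index of the last word in that segmentation
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            score = best[j] + math.log(word_prob(text[j:i]))
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

def normalize(tweet):
    """Lower-case and replace URLs/mentions with designated tags, as described above."""
    tweet = re.sub(r"https?://\S+", "<url>", tweet)
    tweet = re.sub(r"@\w+", "<user>", tweet)
    return tweet.lower()

print(segment("metoomovement"))   # e.g. ['me', 'too', 'movement']
```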
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
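The topic-modeling step described above fits LDA over a TF-IDF-weighted corpus and selects the number of topics with the highest coherence score. A compact sketch of that procedure with gensim might look as follows; the topic range, number of passes, and the "c_v" coherence measure are illustrative choices not specified in the text.

```python
from gensim import corpora, models
from gensim.models import CoherenceModel

def fit_lda_with_tfidf(tokenized_tweets, topic_range=range(2, 11)):
    """LDA over a TF-IDF-weighted corpus, keeping the topic count with the
    highest coherence score, as in the topic-modeling step above."""
    dictionary = corpora.Dictionary(tokenized_tweets)
    bow_corpus = [dictionary.doc2bow(doc) for doc in tokenized_tweets]
    tfidf_corpus = models.TfidfModel(bow_corpus)[bow_corpus]  # discount common #MeToo terms

    best_model, best_score = None, float("-inf")
    for k in topic_range:
        lda = models.LdaModel(corpus=tfidf_corpus, id2word=dictionary,
                              num_topics=k, random_state=0, passes=5)
        score = CoherenceModel(model=lda, texts=tokenized_tweets,
                               dictionary=dictionary, coherence="c_v").get_coherence()
        if score > best_score:
            best_model, best_score = lda, score
    return best_model, best_score

# Usage with already-preprocessed, tokenized tweets:
# lda, coherence = fit_lda_with_tfidf(tokenized_tweets)
# print(lda.print_topics(num_words=8))
```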
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
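For verbs outside the annotated connotation-frame lexicon, the method above backs off to GloVe nearest neighbours and normalizes a weighted sum of their scores (Equation 1). A simplified sketch of that back-off is given below, assuming the embeddings and the annotated lexicon are available as plain dictionaries; the neighbourhood size k and the use of cosine similarity as the weight are assumptions.

```python
import numpy as np

def predict_verb_sentiment(verb, glove, annotated, k=5):
    """Back off to GloVe nearest neighbours for verbs outside the annotated set.

    glove: dict mapping word -> 200-d vector (e.g. GloVe Twitter embeddings).
    annotated: dict mapping verb -> connotation-frame sentiment score.
    """
    if verb in annotated:            # indicator I = 1 when the verb is annotated
        return annotated[verb]
    if verb not in glove:
        return 0.0                   # neutral fallback for out-of-vocabulary verbs
    v = glove[verb]
    sims = []
    for w, score in annotated.items():
        if w in glove:
            u = glove[w]
            sim = np.dot(v, u) / (np.linalg.norm(v) * np.linalg.norm(u) + 1e-8)
            sims.append((sim, score))
    sims.sort(reverse=True)
    top = sims[:k]                   # nearest annotated neighbours, i.e. gamma(w)
    weights = np.array([s for s, _ in top])
    scores = np.array([sc for _, sc in top])
    # Normalized weighted sum of the neighbours' annotated scores.
    return float(np.dot(weights, scores) / (np.abs(weights).sum() + 1e-8))
```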
These users also show multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We further examine the emotion features in the later results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized count of governmental reported cases and the regional feature to be statistically significant predictors of the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, a change in the number of reported cases corresponds to a considerable change in the number of #MeToo users on Twitter, as the p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years: even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West, and South regions increases the likelihood of posting about sexual harassment (positive coefficients) relative to the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of the users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to have encountered the perpetrators outside the college and work environment. The sentiment scores for the affected entities and the verbs of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates with recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conduct by university faculty, based on data from federal investigations and the relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming amount of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences account for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority contain insinuations and sarcasm regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed light on the ongoing problems in academia and contribute to future sociological studies using big-data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature and might have unanticipated effects on the addressed users. 
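The regression analysis described above fits a linear model of the normalized #MeToo user count on college attributes, with region as a categorical factor compared against the Midwest. A sketch of that fit with statsmodels is shown below on synthetic stand-in data; the column names and value ranges are assumptions, and the real table would be built from the U.S. News and Campus Safety statistics cited in the text.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the per-college feature table (one row per college).
rng = np.random.default_rng(0)
n = 200
colleges = pd.DataFrame({
    "metoo_users_per_student": rng.uniform(0, 0.05, n),   # unique #MeToo users / enrollment
    "cases_per_student": rng.uniform(0, 0.01, n),          # reported rape-related cases / enrollment
    "enrollment": rng.integers(2_000, 40_000, n),
    "male_female_ratio": rng.uniform(0.6, 1.4, n),
    "private": rng.integers(0, 2, n),                       # 1 = private, 0 = public
    "region": rng.choice(["Northeast", "South", "West", "Midwest"], n),
})

# C(region, Treatment('Midwest')) makes the Midwest the reference level, so the
# regional coefficients are read relative to it, as in the discussion above.
model = smf.ols(
    "metoo_users_per_student ~ cases_per_student + enrollment"
    " + male_female_ratio + private + C(region, Treatment('Midwest'))",
    data=colleges,
).fit()
print(model.summary())  # per-attribute coefficients and p-values
```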
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of officially reported cases in the government data. This is a positive sign, suggesting that the higher education system is moving in the right direction in effectively utilizing Title IV, a portion of the Education Amendments Act of 1972, which requires colleges to submit their sexual misconduct reports to officials and to protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault, such as region, reaction, and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by the literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns in the assault cases. We believe our methodology for defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social-media-driven approach is highly useful in facilitating crime-related sociology research on a large scale and across a broad spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising community awareness of ongoing social problems. Last but not least, many other aspects of the text data from social media, which could provide many interesting insights into sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that, with our current dataset, an extension that takes advantage of cutting-edge linguistic techniques will be the next step toward addressing the previously unanswered questions and uncovering deeper meanings of the tweets on sexual harassment.
Northeast U.S., South U.S., West U.S., and Midwest U.S.
dd5c9a370652f6550b4fd13e2ac317eaf90973a8
dd5c9a370652f6550b4fd13e2ac317eaf90973a8_0
Q: How strong is the correlation between the prevalence of the #MeToo movement and official reports [of sexual harassment]? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
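Because college follower lists overlap and tweets are frequently reposted, the collection step above reduces the combined scrape to unique tweets and unique users. A trivial sketch of that deduplication with pandas is shown below; the column names and toy rows are assumptions, not the actual scraped schema.

```python
import pandas as pd

# Hypothetical frame of scraped tweets gathered across overlapping follower lists.
scraped = pd.DataFrame({
    "user_id":  ["u1", "u1", "u2", "u3", "u3"],
    "tweet_id": ["t1", "t1", "t2", "t3", "t4"],
    "text":     ["#MeToo ...", "#MeToo ...", "me too.", "my story #metoo", "another #metoo post"],
})

# Keep one row per tweet and one row per user, as the collection step describes.
unique_tweets = scraped.drop_duplicates(subset="tweet_id")
unique_users = scraped.drop_duplicates(subset="user_id")
print(len(unique_tweets), "unique tweets,", len(unique_users), "unique users")
```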
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users. 
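The statistically significant link reported above between the normalized reported-case count and the normalized #MeToo user count can also be summarized as a plain Pearson correlation across the colleges (the figure given for this study is 0.9098). A minimal sketch with a hypothetical per-college table follows; the toy values are placeholders for the 200 colleges' statistics.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-college table: each row would hold one of the 200 colleges,
# with its normalized #MeToo user count and normalized reported-case count.
colleges = pd.DataFrame({
    "metoo_users_per_student": [0.012, 0.030, 0.008, 0.021, 0.017],
    "cases_per_student":       [0.0009, 0.0024, 0.0006, 0.0016, 0.0013],
})

r, p_value = pearsonr(colleges["metoo_users_per_student"],
                      colleges["cases_per_student"])
print(f"Pearson r = {r:.4f}, p = {p_value:.2e}")  # the study reports r of about 0.91
```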
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of official reported cases from the government data. This is a positive sign suggesting that the higher education system is moving into a right direction to effectively utilize Title IV, a portion of the Education Amendments Act of 1972, which requests colleges to submit their sexual misconduct reports to the officials and protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault such as region, reaction and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by various literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns of the assaulting cases. We believe our methodologies on defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social media-driven approach is highly useful in facilitating crime-related sociology research on a large scale and spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising awareness in the community on concurrent social problems. Last but not least, many other aspects of the text data from social media, which could provide many interesting insights on sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that with our current dataset, an extension to take advantage of cutting-edge linguistic techniques will be the next step to address the previously unanswered questions and uncover deeper meanings of the tweets on sexual harassment.
0.9098 correlation
39c78924df095c92e058ffa5a779de597e8c43f4
39c78924df095c92e058ffa5a779de597e8c43f4_0
Q: How are the topics embedded in the #MeToo tweets extracted? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
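The TRIPS logical forms described above recover an action-agent-affected structure from each tweet. The snippet below is not the TRIPS parser; it is only a rough dependency-parsing stand-in (using spaCy) that illustrates the kind of verb-predicate/agent/affected triples being targeted, without TRIPS's ontology, speech-act identification, or logical-form machinery.

```python
import spacy

# python -m spacy download en_core_web_sm  (small English model)
nlp = spacy.load("en_core_web_sm")

def extract_events(tweet):
    """Rough agent-predicate-affected extraction via dependency labels."""
    events = []
    for token in nlp(tweet):
        if token.pos_ == "VERB":
            agent = [c.text for c in token.children if c.dep_ == "nsubj"]
            affected = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            if agent or affected:
                events.append({"agent": agent,
                               "predicate": token.lemma_,
                               "affected": affected})
    return events

print(extract_events("He harassed me at work."))
# e.g. [{'agent': ['He'], 'predicate': 'harass', 'affected': ['me']}]
```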
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users. 
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of officially reported cases in the government data. This is a positive sign suggesting that the higher education system is moving in the right direction to effectively utilize Title IV, a portion of the Education Amendments Act of 1972, which requires colleges to submit their sexual misconduct reports to officials and protect the victims. In addition, we capture several geographic and behavioral characteristics of #MeToo users related to sexual assault, such as region, reaction and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by the literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns in the assault cases. We believe our methodology for defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social-media-driven approach is highly useful in facilitating crime-related sociological research at a large scale and across a broad spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising community awareness of ongoing social problems. Last but not least, many other aspects of the text data from social media, which could provide many interesting insights into sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that, with our current dataset, an extension that takes advantage of cutting-edge linguistic techniques will be the natural next step to address the previously unanswered questions and uncover deeper meanings in the tweets on sexual harassment.
Using Latent Dirichlet Allocation on TF-IDF transformed from the corpus
a95188a0f35d3cb3ca70ae1527d57ac61710afa3
a95188a0f35d3cb3ca70ae1527d57ac61710afa3_0
Q: How many tweets are explored in this paper? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
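The hashtag-splitting step in the preprocessing pipeline described above is essentially Viterbi word segmentation over corpus statistics. The sketch below shows a minimal unigram-only segmenter, using counts from NLTK's Brown corpus as a stand-in for the statistics used in the paper; the bigram term and the other normalization steps (spell correction, lower-casing, tag substitution) are omitted.

```python
# Hedged sketch of Viterbi word segmentation for splitting hashtags.
# Uses unigram counts from NLTK's Brown corpus as a stand-in for the
# statistics described in the paper; bigrams are omitted for brevity.
import math
from collections import Counter
import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)
COUNTS = Counter(w.lower() for w in brown.words())
TOTAL = sum(COUNTS.values())

def log_prob(word):
    # Unseen words get a length-penalized floor probability.
    count = COUNTS.get(word, 0)
    if count == 0:
        return math.log(1.0 / (TOTAL * 10 ** len(word)))
    return math.log(count / TOTAL)

def segment(text, max_word_len=20):
    """Return the most probable segmentation of `text` (e.g. a de-hashed hashtag)."""
    n = len(text)
    best = [0.0] + [-math.inf] * n        # best[i] = best log-prob of text[:i]
    back = [0] * (n + 1)                  # back[i] = start index of the last word
    for i in range(1, n + 1):
        for j in range(max(0, i - max_word_len), i):
            score = best[j] + log_prob(text[j:i])
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

print(segment("metoomovement"))   # e.g. ['me', 'too', 'movement'] (depends on corpus stats)
```

A production segmenter would also interpolate bigram probabilities, as the paper describes, which reduces implausible splits of frequent short words.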
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
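TRIPS itself is a heavyweight semantic parser, so as a lightweight stand-in for the action-agent-affected structure discussed above, the sketch below pulls (agent, verb, affected) triples out of spaCy's dependency parse. This is not the TRIPS LF pipeline the paper uses and will miss many constructions that TRIPS handles; it is only meant to make the relational structure concrete.

```python
# Simplified stand-in for the action-agent-affected extraction described above,
# using spaCy's dependency parser instead of the TRIPS parser the paper uses.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def extract_triples(text):
    """Yield (agents, verb_lemma, affected) triples for verbs with a subject or object."""
    doc = nlp(text)
    for token in doc:
        if token.pos_ != "VERB":
            continue
        agents = [c.text for c in token.children if c.dep_ == "nsubj"]
        affected = [c.text for c in token.children
                    if c.dep_ in ("dobj", "obj", "nsubjpass")]
        if agents or affected:
            yield (agents, token.lemma_, affected)

for triple in extract_triples("He harassed me at a party."):
    print(triple)        # e.g. (['He'], 'harass', ['me'])
```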
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
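The topical themes reported above come from LDA fitted on a TF-IDF-transformed corpus, with the number of topics chosen by coherence score. A gensim-based sketch of such a pipeline is given below; the tokenization is deliberately simplified and the parameter values are arbitrary, so this approximates the described setup rather than reproducing the authors' implementation.

```python
# Hedged gensim sketch of the LDA-over-TF-IDF pipeline described in the Methodology.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel, TfidfModel

def topic_model(tweets, topic_range=range(3, 11)):
    """Fit LDA on TF-IDF weighted tweets; pick the topic count with the best coherence."""
    texts = [t.lower().split() for t in tweets]          # simplified tokenization
    dictionary = Dictionary(texts)
    bow = [dictionary.doc2bow(t) for t in texts]
    tfidf_corpus = TfidfModel(bow)[bow]                  # discount common #MeToo terms

    best_model, best_score = None, float("-inf")
    for k in topic_range:
        lda = LdaModel(tfidf_corpus, num_topics=k, id2word=dictionary,
                       passes=10, random_state=0)
        score = CoherenceModel(model=lda, texts=texts, dictionary=dictionary,
                               coherence="c_v").get_coherence()
        if score > best_score:
            best_model, best_score = lda, score
    return best_model, best_score

# Usage (tweet strings assumed available):
#   model, score = topic_model(list_of_tweet_strings)
#   for idx, topic in model.print_topics(num_words=8):
#       print(idx, topic)
```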
These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users. 
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of official reported cases from the government data. This is a positive sign suggesting that the higher education system is moving into a right direction to effectively utilize Title IV, a portion of the Education Amendments Act of 1972, which requests colleges to submit their sexual misconduct reports to the officials and protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault such as region, reaction and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by various literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns of the assaulting cases. We believe our methodologies on defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social media-driven approach is highly useful in facilitating crime-related sociology research on a large scale and spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising awareness in the community on concurrent social problems. Last but not least, many other aspects of the text data from social media, which could provide many interesting insights on sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that with our current dataset, an extension to take advantage of cutting-edge linguistic techniques will be the next step to address the previously unanswered questions and uncover deeper meanings of the tweets on sexual harassment.
60,000
a1557ec0f3deb1e4cd1e68f4880dcecda55656dd
a1557ec0f3deb1e4cd1e68f4880dcecda55656dd_0
Q: Which geographical regions correlate to the trend? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users. 
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of official reported cases from the government data. This is a positive sign suggesting that the higher education system is moving into a right direction to effectively utilize Title IV, a portion of the Education Amendments Act of 1972, which requests colleges to submit their sexual misconduct reports to the officials and protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault such as region, reaction and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by various literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns of the assaulting cases. We believe our methodologies on defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social media-driven approach is highly useful in facilitating crime-related sociology research on a large scale and spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are highly useful for raising awareness in the community on concurrent social problems. Last but not least, many other aspects of the text data from social media, which could provide many interesting insights on sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that with our current dataset, an extension to take advantage of cutting-edge linguistic techniques will be the next step to address the previously unanswered questions and uncover deeper meanings of the tweets on sexual harassment.
Northeast U.S., West U.S. and South U.S.
096f5c59f43f49cab1ef37126341c78f272c0e26
096f5c59f43f49cab1ef37126341c78f272c0e26_0
Q: How many followers did they analyze? Text: Introduction Sexual harassment is defined as "bullying or coercion of a sexual nature, or the unwelcome or inappropriate promise of rewards in exchange for sexual favors." In fact, it is an ongoing problem in the U.S., especially within the higher education community. According to the National Sexual Violence Resource Center (NSRVC), one in five women and one in sixteen men are sexually assaulted while they are attending college. In addition to the prevalence of campus sexual harassment, it has been shown to have detrimental effects on student's well-being, including health-related disorders and psychological distress BIBREF0, BIBREF1. However, these studies on college sexual misconduct usually collect data based on questionnaires from a small sample of the college population, which might not be sufficiently substantial to capture the big picture of sexual harassment risk of the entire student body. Alternatively, social media opens up new opportunities to gather a larger and more comprehensive amount of data and mitigate the risk of false or inaccurate narratives from the studied subjects. On October 15 of 2017, prominent Hollywood actress Alyssa Milano, by accusing Oscar-winning film producer, Harvey Weinstein, for multiple sexual impropriety attempts on herself and many other women in the film industry, ignited the "MeToo" trend on social media that called for women and men to share their own sexual harassment experience. According to CNN, over 1.7 million users had used the hash-tag in 85 countries. Benefiting from the tremendous amount of data supplied by this trend and the existing state-of-the-art semantic parser and generative statistical models, we propose a new approach to characterizing sexual harassment by mining the tweets from college users with the hash-tag #metoo on Twitter. Our main contributions are several folds. We investigate campus sexual harassment using a big-data approach by collecting data from Twitter. We employ traditional topic modeling and linear regression methods on a new dataset to highlight patterns of the ongoing troubling social behaviors at both institutional and individual levels. We propose a novel approach to combining domain-general deep semantic parsing and sentiment analysis to dissect personal narratives. Related Work Previous works for sexual misconduct in academia and workplace dated back to last few decades, when researchers studied the existence, as well as psychometric and demographic insights regarding this social issue, based on survey and official data BIBREF2, BIBREF3, BIBREF4. However, these methods of gathering data are limited in scale and might be influenced by the psychological and cognitive tendencies of respondents not to provide faithful answers BIBREF5. The ubiquity of social media has motivated various research on widely-debated social topics such as gang violence, hate code, or presidential election using Twitter data BIBREF6, BIBREF7, BIBREF8, BIBREF9. Recently, researchers have taken the earliest steps to understand sexual harassment using textual data on Twitter. Using machine learning techniques, Modrek and Chakalov (2019) built predictive models for the identification and categorization of lexical items pertaining to sexual abuse, while analysis on semantic contents remains untouched BIBREF10. Despite the absence of Twitter data, Field et al. 
(2019) did a study more related to ours as they approach to the subject geared more towards linguistics tasks such as event, entity and sentiment analysis BIBREF11. Their work on event-entity extraction and contextual sentiment analysis has provided many useful insights, which enable us to tap into the potential of our Twitter dataset. There are several novelties in our approach to the #MeToo problem. Our target population is restricted to college followers on Twitter, with the goal to explore people's sentiment towards the sexual harassment they experienced and its implication on the society's awareness and perception of the issue. Moreover, the focus on the sexual harassment reality in colleges calls for an analysis on the metadata of this demographics to reveal meaningful knowledge of their distinctive characteristics BIBREF12. Dataset ::: Data Collection In this study, we limit the sample size to the followers identified as English speakers in the U.S. News Top 200 National Universities. We utilize the Jefferson-Henrique script, a web scraper designed for Twitter to retrieve a total of over 300,000 #MeToo tweets from October 15th, when Alyssa Milano posted the inceptive #MeToo tweet, to November 15th of 2017 to cover a period of a month when the trend was on the rise and attracting mass concerns. Since the lists of the followers of the studied colleges might overlap and many Twitter users tend to reiterate other's tweets, simply putting all the data collected together could create a major redundancy problem. We extract unique users and tweets from the combined result set to generate a dataset of about 60,000 unique tweets, pertaining to 51,104 unique users. Dataset ::: Text Preprocessing We pre-process the Twitter textual data to ensure that its lexical items are to a high degree lexically comparable to those of natural language. This is done by performing sentiment-aware tokenization, spell correction, word normalization, segmentation (for splitting hashtags) and annotation. The implemented tokenizer with SentiWordnet corpus BIBREF13 is able to avoid splitting expressions or words that should be kept intact (as one token), and identify most emoticons, emojis, expressions such as dates, currencies, acronyms, censored words (e.g. s**t), etc. In addition, we perform modifications on the extracted tokens. For spelling correction, we compose a dictionary for the most commonly seen abbreviations, censored words and elongated words (for emphasis, e.g. "reallyyy"). The Viterbi algorithm is used for word segmentation, with word statistics (unigrams and bigrams) computed from the NLTK English Corpus to obtain the most probable segmentation posteriors from the unigrams and bigrams probabilities. Moreover, all texts are lower-cased, and URLs, emails and mentioned usernames are replaced with common designated tags so that they would not need to be annotated by the semantic parser. Dataset ::: College Metadata The meta-statistics on the college demographics regarding enrollment, geographical location, private/public categorization and male-to-female ratio are obtained. Furthermore, we acquire the Campus Safety and Security Survey dataset from the official U.S. Department of Education website and use rape-related cases statistic as an attribute to complete the data for our linear regression model. The number of such reported cases by these 200 colleges in 2015 amounts to 2,939. 
Methodology ::: Regression Analysis We examine other features regarding the characteristics of the studied colleges, which might be significant factors of sexual harassment. Four factual attributes pertaining to the 200 colleges are extracted from the U.S. News Statistics, which consists of Undergraduate Enrollment, Male/Female Ratio, Private/Public, and Region (Northeast, South, West, and Midwest). We also use the normalized rape-related cases count (number of cases reported per student enrolled) from the stated government resource as another attribute to examine the proximity of our dataset to the official one. This feature vector is then fitted in a linear regression to predict the normalized #metoo users count (number of unique users who posted #MeToo tweets per student enrolled) for each individual college. Methodology ::: Labeling Sexual Harassment Per our topic modeling results, we decide to look deeper into the narratives of #MeToo users who reveal their personal stories. We examine 6,760 tweets from the most relevant topic of our LDA model, and categorize them based on the following metrics: harassment types (verbal, physical, and visual abuse) and context (peer-to-peer, school employee or work employer, and third-parties). These labels are based on definitions by the U.S. Dept. of Education BIBREF14. Methodology ::: Topic Modeling on #MeToo Tweets In order to understand the latent topics of those #MeToo tweets for college followers, we first utilize Latent Dirichlet Allocation (LDA) to label universal topics demonstrated by the users. We determine the optimal topic number by selecting the one with the highest coherence score. Since certain words frequently appear in those #MeToo tweets (e.g., sexual harassment, men, women, story, etc.), we transform our corpus using TF-IDF, a term-weighting scheme that discounts the influence of common terms. Methodology ::: Semantic Parsing with TRIPS Learning deep meaning representations, which enables the preservation of rich semantic content of entities, meaning ambiguity resolution and partial relational understanding of texts, is one of the challenges that the TRIPS parser BIBREF15 is tasked to tackle. This kind of meaning is represented by TRIPS Logical Form (LF), which is a graph-based representation that serves as the interface between structural analysis of text (i.e., parse) and the subsequent use of the information to produce knowledge. The LF graphs are obtained by using the semantic types, roles and rule-based relations defined by the TRIPS Ontology BIBREF15 at its core in combination with various linguistic techniques such as Dialogue Act Identification, Dependency Parsing, Named Entity Recognition, and Crowd-sourced Lexicon (Wordnet). Figure 1 illustrates an example of the TRIPS LF graph depicting the meaning of the sentence "He harassed me," where the event described though the speech act TELL (i.e. telling a story) is the verb predicate HARASS, which is caused by the agent HE and influences the affected (also called "theme" in traditional literature) ME. As seen from the previously discussed example, the action-agent-affected relational structure is applicable to even the simplest sentences used for storytelling, and it is in fact very common for humans to encounter in both spoken and written languages. This makes it well suited for event extraction from short texts, useful for analyzing tweets with Twitter's 280 character limit. 
Therefore, our implementation of TRIPS parser is particularly tailored for identifying the verb predicates in tweets and their corresponding agent-affected arguments (with $82.4\%$ F1 score), so that we can have a solid ground for further analysis. Methodology ::: Connotation Frames and Sentiment Analysis In order to develop an interpretable analysis that focuses on sentiment scores pertaining to the entities and events mentioned in the narratives, as well as the perceptions of readers on such events, we draw from existing literature on connotation frames: a set of verbs annotated according to what they imply about semantically dependent entities. Connotation frames, first introduced by Rashkin, Singh, and Choi (2016), provides a framework for analyzing nuanced dimensions in text by combining polarity annotations with frame semantics (Fillmore 1982). More specifically, verbs are annotated across various dimensions and perspectives so that a verb might elicit a positive sentiment for its subject (i.e. sympathy) but imply a negative effect for its object. We target the sentiments towards the entities and verb predicates through a pre-collected set of 950 verbs that have been annotated for these traits, which can be more clearly demonstrated through the example "He harassed me.": ${Sentiment(\textrm {verb}) -}$: something negative happened to the writer. $Sentiment(\textrm {affected}) -$: the writer (affected) most likely feels negative about the event. $Perspective(\textrm {affected} \rightarrow \textrm {agent})-$: the writer most likely has negative feelings towards the agent as a result of the event. $Perspective(\textrm {reader} \rightarrow \textrm {affected})-$: the reader most likely view the agent as the antagonist. $Perspective(\textrm {affected} \rightarrow \textrm {affected})+$: the reader most likely feels sympathetic towards the writer. In addition to extracting sentiment scores from the pre-annotated corpus, we also need to predict sentiment scores of unknown verbs. To achieve this task, we rely on the 200-dimensional GloVe word embeddings BIBREF16, pretrained on their Twitter dataset, to compute the scores of the nearest neighboring synonyms contained in the annotated verb set and normalize their weighted sum to get the resulting sentiment (Equation 1). where $\mathcal {I}=\mathbf {1_{w \in \mathcal {A}}}$ is the indicator function for whether verb predicate $w$ is in the annotation set $\mathcal {A}$, $\gamma (w)$ is the set of nearest neighbors $e$'s of verb $w$. Because our predictive model computes event-entity sentiment scores and generates verb predicate knowledge simultaneously, it is sensitive to data initialization. Therefore, we train the model iteratively on a number of random initialization to achieve the best results. Experimental Results ::: Topical Themes of #MeToo Tweets The results of LDA on #MeToo tweets of college users (Table 1) fall into the same pattern as the research of Modrek and Chakalov (2019), which suggests that a large portion of #MeToo tweets on Twitter focuses on sharing personal traumatic stories about sexual harassment BIBREF10. In fact, in our top 5 topics, Topics 1 and 5 mainly depict gruesome stories and childhood or college time experience. This finding seems to support the validity of the Twitter sample of Modrek and Chakalov (2019), where 11% discloses personal sexual harassment memories and 5.8% of them was in formative years BIBREF10. 
These users also shows multiple emotions toward this movement, such as compassion (topic 2), determination (topic 3), and hope (topic 4). We will further examine the emotion features in the latter results. Experimental Results ::: Regression Result Observing the results of the linear regression in Table 2, we find the normalized governmental reported cases count and regional feature to be statistically significant on the sexual harassment rate in the Twitter data ($p-value<0.05$). Specifically, the change in the number of reported cases constitutes a considerable change in the number of #MeToo users on Twitter as p-value is extremely small at $5.7e-13$. This corresponds to the research by Napolitano (2014) regarding the "Yes means yes" movement in higher education institutes in recent years, as even with some limitations and inconsistency, the sexual assault reporting system is gradually becoming more rigorous BIBREF17. Meanwhile, attending colleges in the Northeast, West and South regions increases the possibility of posting about sexual harassment (positive coefficients), over the Midwest region. This finding is interesting and warrants further scrutiny. Experimental Results ::: Event-Entity Sentiment Analysis We discover that approximately half of users who detailed their sexual harassment experiences with the #MeToo hashtag suffered from physical aggression. Also, more than half of them claimed to encounter the perpetrators outside the college and work environment. The sentimental score for the affected entities and the verb of cases pertaining to faculty are strictly negative, suggesting that academic personnel's actions might be described as more damaging to the students' mental health. This finding resonates a recent research by Cantapulo et al. regarding the potential hazard of sexual harassment conducts by university faculties using data from federal investigation and relevant social science literature BIBREF18. Furthermore, many in this group tend to mention their respective age, typically between 5 and 20 (24% of the studied subset). This observation reveals an alarming number of child and teenager sexual abuse, indicating that although college students are not as prone to sexual harassment from their peers and teachers, they might still be traumatized by their childhood experiences. In addition, although verbal abuse experiences accounts for a large proportion of the tweets, it is challenging to gain sentiment insights into them, as the majority of them contains insinuations and sarcasms regarding sexual harassment. This explains why the sentiment scores of the events and entities are very close to neutral. Experimental Results ::: Limitations and Ethical Implications Our dataset is taken from only a sample of a specific set of colleges, and different samples might yield different results. Our method of identifying college students is simple, and might not reflect the whole student population. Furthermore, the majority of posts on Twitter are short texts (under 50 words). This factor, according to previous research, might hamper the performance of the LDA results, despite the use of the TF-IDF scheme BIBREF19. Furthermore, while the main goal of this paper is to shed lights to the ongoing problems in the academia and contribute to the future sociological study using big data analysis, our dataset might be misused for detrimental purposes. Also, data regarding sexual harassment is sensitive in nature, and might have unanticipated effects on those addressed users. 
Conclusion In this study, we discover a novel correlation between the number of college users who participate in the #MeToo movement and the number of officially reported cases from the government data. This is a positive sign, suggesting that the higher education system is moving in the right direction toward effectively utilizing Title IV, a portion of the Education Amendments Act of 1972, which requires colleges to submit their sexual misconduct reports to officials and to protect the victims. In addition, we capture several geographic and behavioral characteristics of the #MeToo users related to sexual assault, such as region, reaction, and narrative content following the trend, as well as sentiment and social interactions, some of which are supported by the literature on sexual harassment. Importantly, our semantic analysis reveals interesting patterns in the assault cases. We believe our methodology for defining these #MeToo users and their features will be applicable to further studies on this and other alarming social issues. Furthermore, we find that the social media-driven approach is highly useful in facilitating crime-related sociology research at a large scale and across a broad spectrum. Moreover, since social networks appeal to a broad audience, especially those outside academia, studies using these resources are also valuable for raising community awareness of ongoing social problems. Last but not least, many other aspects of the text data from social media, which could provide interesting insights on sexual harassment, remain largely untouched. In the future, we intend to explore more sophisticated language features and implement more supervised models with advanced neural network parsing and classification. We believe that, with our current dataset, an extension that takes advantage of cutting-edge linguistic techniques will be the next step to address the previously unanswered questions and uncover deeper meanings of the tweets on sexual harassment.
51,104
c348a8c06e20d5dee07443e962b763073f490079
c348a8c06e20d5dee07443e962b763073f490079_0
Q: What two components are included in their proposed framework? Text: Introduction Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, attracts great attentions from both research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides the large-scale manually created datasets for model training and testing of machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessary in the passages in the MS-MARCO dataset. Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages. Existing methods for the MS-MARCO dataset usually follow the extraction based approach for single passage in the SQuAD dataset. It formulates the task as predicting the start and end positions of the answer in the passage. However, as defined in the MS-MARCO dataset, the answer may come from multiple spans, and the system needs to elaborate the answer using words in the passages and words from the questions as well as words that cannot be found in the passages or questions. Table 1 shows several examples from the MS-MARCO dataset. Except in the first example the answer is an exact text span in the passage, in other examples the answers need to be synthesized or generated from the question and passage. In the second example the answer consists of multiple text spans (hereafter evidence snippets) from the passage. In the third example, the answer contains words from the question. In the fourth example, the answer has words that cannot be found in the passages or question. In the last example, all words are not in the passages or questions. In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers. Specifically, we develop the answer extraction model with state-of-the-art attention based neural networks which predict the start and end positions of evidence snippets. As multiple passages are provided for each question in the MS-MARCO dataset, we propose incorporating passage ranking as an additional task to improve the results of evidence extraction under a multi-task learning framework. We use the bidirectional recurrent neural networks (RNN) for the word-level representation, and then apply the attention mechanism BIBREF2 to incorporate matching information from question to passage at the word level. Next, we predict start and end positions of the evidence snippet by pointer networks BIBREF3 . Moreover, we aggregate the word-level matching information of each passage using the attention pooling, and use the passage-level representation to rank all candidate passages as an additional task. For the answer synthesis, we apply the sequence-to-sequence model to synthesize the final answer based on the extracted evidence. 
The question and passage are encoded by a bi-directional RNN in which the start and end positions of extracted snippet are labeled as features. We combine the question and passage information in the encoding part to initialize the attention-equipped decoder to generate the answer. We conduct experiments on the MS-MARCO dataset. The results show our extraction-then-synthesis framework outperforms our baselines and all other existing methods in terms of ROUGE-L and BLEU-1. Our contributions can be summarized as follows: Related Work Benchmark datasets play an important role in recent progress in reading comprehension and question answering research. BIBREF4 release MCTest whose goal is to select the best answer from four options given the question and the passage. CNN/Daily-Mail BIBREF5 and CBT BIBREF6 are the cloze-style datasets in which the goal is to predict the missing word (often a named entity) in a passage. Different from above datasets, the SQuAD dataset BIBREF0 whose answer can be much longer phrase is more challenging. The answer in SQuAD is a segment of text, or span, from the corresponding reading passage. Similar to the SQuAD, MS-MARCO BIBREF1 is the reading comprehension dataset which aims to answer the question given a set of passages. The answer in MS-MARCO is generated by human after reading all related passages and not necessarily sub-spans of the passages. To the best of our knowledge, the existing works on the MS-MARCO dataset follow their methods on the SQuAD. BIBREF7 combine match-LSTM and pointer networks to produce the boundary of the answer. BIBREF8 and BIBREF9 employ variant co-attention mechanism to match the question and passage mutually. BIBREF8 propose a dynamic pointer network to iteratively infer the answer. BIBREF10 apply an additional gate to the attention-based recurrent networks and propose a self-matching mechanism for aggregating evidence from the whole passage, which achieves the state-of-the-art result on SQuAD dataset. Other works which only focus on the SQuAD dataset may also be applied on the MS-MARCO dataset BIBREF11 , BIBREF12 , BIBREF13 . The sequence-to-sequence model is widely-used in many tasks such as machine translation BIBREF14 , parsing BIBREF15 , response generation BIBREF16 , and summarization generation BIBREF17 . We use it to generate the synthetic answer with the start and end positions of the evidence snippet as features. Our Approach Following the overview in Figure 1 , our approach consists of two parts as evidence extraction and answer synthesis. The two parts are trained in two stages. The evidence extraction part aims to extract evidence snippets related to the question and passage. The answer synthesis part aims to generate the answer based on the extracted evidence snippets. We propose a multi-task learning framework for the evidence extraction shown in Figure 15 , and use the sequence-to-sequence model with additional features of the start and end positions of the evidence snippet for the answer synthesis shown in Figure 3 . Gated Recurrent Unit We use Gated Recurrent Unit (GRU) BIBREF18 instead of basic RNN. Equation 8 describes the mathematical model of the GRU. $r_t$ and $z_t$ are the gates and $h_t$ is the hidden state. $$z_t &= \sigma (W_{hz} h_{t-1} + W_{xz} x_t + b_z)\nonumber \\ r_t &= \sigma (W_{hr} h_{t-1} + W_{xr} x_t + b_r)\nonumber \\ \hat{h_t} &= \Phi (W_h (r_t \odot h_{t-1}) + W_x x_t + b)\nonumber \\ h_t &= (1-z_t)\odot h_{t-1} + z_t \odot \hat{h_t}$$ (Eq. 
8) Evidence Extraction We propose a multi-task learning framework for evidence extraction. Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct. To this end, we propose improving text span prediction with passage ranking. Specifically, as shown in Figure 2 , in addition to predicting a text span, we apply another task to rank candidate passages with the passage-level representation. Consider a question Q = $\lbrace w_t^Q\rbrace _{t=1}^m$ and a passage P = $\lbrace w_t^P\rbrace _{t=1}^n$ , we first convert the words to their respective word-level embeddings and character-level embeddings. The character-level embeddings are generated by taking the final hidden states of a bi-directional GRU applied to embeddings of characters in the token. We then use a bi-directional GRU to produce new representation $u^Q_1, \dots , u^Q_m$ and $u^P_1, \dots , u^P_n$ of all words in the question and passage respectively: $$u_t^Q = \mathrm {BiGRU}_Q(u_{t - 1}^Q, [e_t^Q,char_t^Q]) \nonumber \\ u_t^P = \mathrm {BiGRU}_P(u_{t - 1}^P, [e_t^P,char_t^P])$$ (Eq. 11) Given question and passage representation $\lbrace u_t^Q\rbrace _{t=1}^m$ and $\lbrace u_t^P\rbrace _{t=1}^n$ , BIBREF2 propose generating sentence-pair representation $\lbrace v_t^P\rbrace _{t=1}^n$ via soft-alignment of words in the question and passage as follows: $$v_t^P = \mathrm {GRU} (v_{t-1}^P, c^Q_t)$$ (Eq. 12) where $c^Q_t=att(u^Q, [u_t^P, v_{t-1}^P])$ is an attention-pooling vector of the whole question ( $u^Q$ ): $$s_j^t &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_u^Q u_j^Q + W_u^P u_t^P) \nonumber \\ a_i^t &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^m \mathrm {exp}(s_j^t) \nonumber \\ c^Q_t &= \Sigma _{i=1}^m a_i^t u_i^Q$$ (Eq. 13) BIBREF19 introduce match-LSTM, which takes $u_j^P$ as an additional input into the recurrent network. BIBREF10 propose adding gate to the input ( $[u_t^P, c^Q_t]$ ) of RNN to determine the importance of passage parts. $$&g_t = \mathrm {sigmoid}(W_g [u_t^P, c^Q_t]) \nonumber \\ &[u_t^P, c^Q_t]^* = g_t\odot [u_t^P, c^Q_t] \nonumber \\ &v_t^P = \mathrm {GRU} (v_{t-1}^P, [u_t^P, c^Q_t]^*)$$ (Eq. 14) We use pointer networks BIBREF3 to predict the position of evidence snippets. Following the previous work BIBREF7 , we concatenate all passages to predict one span for the evidence snippet prediction. Given the representation $\lbrace v_t^P\rbrace _{t=1}^N$ where $N$ is the sum of the length of all passages, the attention mechanism is utilized as a pointer to select the start position ( $p^1$ ) and end position ( $p^2$ ), which can be formulated as follows: $$s_j^t &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_h^{P} v_j^P + W_{h}^{a} h_{t-1}^a) \nonumber \\ a_i^t &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^N \mathrm {exp}(s_j^t) \nonumber \\ p^t &= \mathrm {argmax}(a_1^t, \dots , a_N^t)$$ (Eq. 16) Here $h_{t-1}^a$ represents the last hidden state of the answer recurrent network (pointer network). The input of the answer recurrent network is the attention-pooling vector based on current predicted probability $a^t$ : $$c_t &= \Sigma _{i=1}^N a_i^t v_i^P \nonumber \\ h_t^a &= \mathrm {GRU}(h_{t-1}^a, c_t)$$ (Eq. 17) When predicting the start position, $h_{t-1}^a$ represents the initial hidden state of the answer recurrent network. We utilize the question vector $r^Q$ as the initial state of the answer recurrent network. 
$r^Q = att(u^Q, v^Q_r)$ is an attention-pooling vector of the question based on the parameter $v^Q_r$ : $$s_j &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_u^{Q} u_j^Q + W_{v}^{Q} v_r^Q) \nonumber \\ a_i &= \mathrm {exp}(s_i) / \Sigma _{j=1}^m \mathrm {exp}(s_j) \nonumber \\ r^Q &= \Sigma _{i=1}^m a_i u_i^Q$$ (Eq. 18) For this part, the objective function is to minimize the following cross entropy: $$\mathcal {L}_{AP} = -\Sigma _{t=1}^{2}\Sigma _{i=1}^{N}[y^t_i\log a^t_i + (1-y^t_i)\log (1-a^t_i)]$$ (Eq. 19) where $y^t_i \in \lbrace 0,1\rbrace $ denotes a label. $y^t_i=1$ means $i$ is a correct position, otherwise $y^t_i=0$ . In this part, we match the question and each passage from word level to passage level. Firstly, we use the question representation $r^Q$ to attend words in each passage to obtain the passage representation $r^P$ where $r^P = att(v^P, r^Q)$ . $$s_j &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_v^{P} v_j^P + W_{v}^{Q} r^Q) \nonumber \\ a_i &= \mathrm {exp}(s_i) / \Sigma _{j=1}^n \mathrm {exp}(s_j) \nonumber \\ r^P &= \Sigma _{i=1}^n a_i v_i^P$$ (Eq. 21) Next, the question representation $r^Q$ and the passage representation $r^P$ are combined to pass two fully connected layers for a matching score, $$g = v_g^{\mathrm {T}}(\mathrm {tanh}(W_g[r^Q,r^P]))$$ (Eq. 22) For one question, each candidate passage $P_i$ has a matching score $g_i$ . We normalize their scores and optimize following objective function: $$\hat{g}_i = \mathrm {exp}(g_i) / \Sigma _{j=1}^k \mathrm {exp}(g_j) \nonumber \\ \mathcal {L}_{PR} = -\sum _{i=1}^{k}[y_i\log \hat{g}_i + (1-y_i)\log (1-\hat{g}_i)]$$ (Eq. 23) where $k$ is the number of passages. $y_i \in \lbrace 0,1\rbrace $ denotes a label. $y_i=1$ means $P_i$ is the correct passage, otherwise $y_i=0$ . The evident extraction part is trained by minimizing joint objective functions: $$\mathcal {L}_{E} = r \mathcal {L}_{AP} + (1-r) \mathcal {L}_{PR}$$ (Eq. 25) where $r$ is the hyper-parameter for weights of two loss functions. Answer Synthesis As shown in Figure 3 , we use the sequence-to-sequence model to synthesize the answer with the extracted evidences as features. We first produce the representation $h_{t}^P$ and $h_{t}^Q$ of all words in the passage and question respectively. When producing the answer representation, we combine the basic word embedding $e_t^p$ with additional features $f_t^s$ and $f_t^e$ to indicate the start and end positions of the evidence snippet respectively predicted by evidence extraction model. $f_t^s =1$ and $f_t^e =1$ mean the position $t$ is the start and end of the evidence span, respectively. $$&h_{t}^P =\mathrm {BiGRU}(h_{t-1}^P, [e_t^p,f_t^s,f_t^e]) \nonumber \\ &h_{t}^Q = \mathrm {BiGRU}(h_{t-1}^Q,e_t^Q)$$ (Eq. 27) On top of the encoder, we use GRU with attention as the decoder to produce the answer. At each decoding time step $t$ , the GRU reads the previous word embedding $ w_{t-1} $ and previous context vector $ c_{t-1} $ as inputs to compute the new hidden state $ d_{t} $ . To initialize the GRU hidden state, we use a linear layer with the last backward encoder hidden state $ \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^P $ and $ \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^Q $ as input: $$d_{t} &= \text{GRU}(w_{t-1}, c_{t-1}, d_{t-1}) \nonumber \\ d_{0} &= \tanh (W_{d}[\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^P,\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^Q] + b)$$ (Eq. 28) where $ W_{d} $ is the weight matrix and $ b $ is the bias vector. 
The context vector $ c_{t} $ for current time step $ t $ is computed through the concatenate attention mechanism BIBREF14 , which matches the current decoder state $ d_{t} $ with each encoder hidden state $ h_{t} $ to get the weighted sum representation. Here $h_{i}$ consists of the passage representation $h_{t}^P$ and the question representation $h_{t}^Q$ . $$s^t_j &= v_{a}^{\mathrm {T}}\tanh (W_{a}d_{t-1} + U_{a}h_{j}) \nonumber \\ a^t_i &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^n \mathrm {exp}(s_j^t) \nonumber \\ c_{t} &= \Sigma _{i = 1}^{n} a^t_ih_{i}$$ (Eq. 30) We then combine the previous word embedding $ w_{t-1} $ , the current context vector $ c_{t} $ , and the decoder state $ d_{t} $ to construct the readout state $ r_{t} $ . The readout state is then passed through a maxout hidden layer BIBREF20 to predict the next word with a softmax layer over the decoder vocabulary. $$r_{t} &= W_{r}w_{t-1} + U_{r}c_{t} + V_{r}d_{t} \nonumber \\ m_{t} &= [\max \lbrace r_{t, 2j-1}, r_{t, 2j}\rbrace ]^{\mathrm {T}} \nonumber \\ p(y_{t} &\vert y_{1}, \dots , y_{t-1}) = \text{softmax}(W_{o}m_{t})$$ (Eq. 31) where $ W_{a} $ , $ U_{a} $ , $ W_{r} $ , $ U_{r} $ , $ V_{r} $ and $ W_{o} $ are parameters to be learned. Readout state $ r_{t} $ is a $ 2d $ -dimensional vector, and the maxout layer (Equation 31 ) picks the max value for every two numbers in $ r_{t} $ and produces a d-dimensional vector $ m_{t} $ . Our goal is to maximize the output probability given the input sentence. Therefore, we optimize the negative log-likelihood loss function: $$\mathcal {L}_{S}= - \frac{1}{\vert \mathcal {D} \vert } \Sigma _{(X, Y) \in \mathcal {D}} \log p(Y|X)$$ (Eq. 32) where $\mathcal {D}$ is the set of data. $X$ represents the question and passage including evidence snippets, and $Y$ represents the answer. Experiment We conduct our experiments on the MS-MARCO dataset BIBREF1 . We compare our extraction-then-synthesis framework with pure extraction model and other baseline methods on the leaderboard of MS-MARCO. Experimental results show that our model achieves better results in official evaluation metrics. We also conduct ablation tests to verify our method, and compare our framework with the end-to-end generation framework. Dataset and Evaluation Metrics For the MS-MARCO dataset, the questions are user queries issued to the Bing search engine and the context passages are from real web documents. The data has been split into a training set (82,326 pairs), a development set (10,047 pairs) and a test set (9,650 pairs). The answers are human-generated and not necessarily sub-spans of the passages so that the metrics in the official tool of MS-MARCO evaluation are BLEU BIBREF21 and ROUGE-L BIBREF22 . In the official evaluation tool, the ROUGE-L is calculated by averaging the score per question, however, the BLEU is normalized with all questions. We hold that the answer should be evaluated case-by-case in the reading comprehension task. Therefore, we mainly focus on the result in the ROUGE-L. Implementation Details The evidence extraction and the answer synthesis are trained in two stages. For evidence extraction, since the answers are not necessarily sub-spans of the passages, we choose the span with the highest ROUGE-L score with the reference answer as the gold span in the training. Moreover, we only use the data whose ROUGE-L score of chosen text span is higher than 0.7, therefore we only use 71,417 training pairs in our experiments. For answer synthesis, the training data consists of two parts. 
First, for all passages in the training data, we choose the best span with highest ROUGE-L score as the evidence, and use the corresponding reference answer as the output. We only use the data whose ROUGE-L score of chosen evidence snippet is higher than 0.5. Second, we apply our evidence extraction model to all training data to obtain the extracted span. Then we treat the passage to which this span belongs as the input. For answer extraction, we use 300-dimensional uncased pre-trained GloVe embeddings BIBREF23 for both question and passage without update during training. We use zero vectors to represent all out-of-vocabulary words. Hidden vector length is set to 150 for all layers. We also apply dropout BIBREF24 between layers, with dropout rate 0.1. The weight $r$ is set to 0.8. For answer synthesis, we use an identical vocabulary set for the input and output collected from the training data. We set the vocabulary size to 30,000 according to the frequency and the other words are set to $<$ unk $>$ . All word embeddings are updated during the training. We set the word embedding size to 300, set the feature embedding size of start and end positions of the extracted snippet to 50, and set all GRU hidden state sizes to 150. The model is optimized using AdaDelta BIBREF25 with initial learning rate of 1.0. All hyper-parameters are selected on the MS-MARCO development set. When decoding, we first run our extraction model to obtain the extracted span, and run our synthesis model with the extracted result and the passage that contains this span. We use the beam search with beam size of 12 to generate the sequence. After the sequence-to-sequence model, we post-process the sequence with following rules: We only keep once if the sequence-to-sequence model generates duplicated words or phrases. For all “ $<$ unk $>$ ” and the word as well as phrase which are not existed in the extracted answer, we try to refine it by finding a word or phrase with the same adjacent words in the extracted span and passage. If the generated answer only contains a single word “ $<$ unk $>$ ”, we use the extracted span as the final answer. Baseline Methods We conduct experiments with following settings: S-Net (Extraction): the model that only has the evidence extraction part. S-Net: the model that consists of the evidence extraction part and the answer synthesis part. We implement two state-of-the-art baselines on reading comprehension, namely BiDAF BIBREF9 and Prediction BIBREF7 , to extract text spans as evidence snippets. Moreover, we implement a baseline that only has the evidence extraction part without the passage ranking. Then we apply the answer synthesis part on top of their results. We also compare with other methods on the MS-MARCO leaderboard, including FastQAExt BIBREF26 , ReasoNet BIBREF27 , and R-Net BIBREF10 . Result Table 2 shows the results on the MS-MARCO test data. Our extraction model achieves 41.45 and 44.08 in terms of ROUGE-L and BLEU-1, respectively. Next we train the model 30 times with the same setting, and select models using a greedy search. We sum the probability at each position of each single model to decide the ensemble result. Finally we select 13 models for ensemble, which achieves 42.92 and 44.97 in terms of ROUGE-L and BLEU-1, respectively, which achieves the state-of-the-art results of the extraction model. Then we test our synthesis model based on the extracted evidence. 
Our synthesis model achieves 3.78% and 3.73% improvement on the single model and ensemble model in terms of ROUGE-L, respectively. Our best result achieves 46.65 in terms of ROUGE-L and 44.78 in terms of BLEU-1, which outperforms all existing methods with a large margin and are very close to human performance. Moreover, we observe that our method only achieves significant improvement in terms of ROUGE-L compared with our baseline. The reason is that our synthesis model works better when the answer is short, which almost has no effect on BLEU as it is normalized with all questions. Since answers on the test set are not published, we analyze our model on the development set. Table 3 shows results on the development set in terms of ROUGE-L. As we can see, our method outperforms the baseline and several strong state-of-the-art systems. For the evidence extraction part, our proposed multi-task learning framework achieves 42.23 and 44.11 for the single and ensemble model in terms of ROUGE-L. For the answer synthesis, the single and ensemble models improve 3.72% and 3.65% respectively in terms of ROUGE-L. We observe the consistent improvement when applying our answer synthesis model to other answer span prediction models, such as BiDAF and Prediction. Discussion We analyze the result of incorporating passage ranking as an additional task. We compare our multi-task framework with two baselines as shown in Table 4 . For passage selection, our multi-task model achieves the accuracy of 38.9, which outperforms the pure answer prediction model with 4.3. Moreover, jointly learning the answer prediction part and the passage ranking part is better than solving this task by two separated steps because the answer span can provide more information with stronger supervision, which benefits the passage ranking part. The ROUGE-L is calculated by the best answer span in the selected passage, which shows our multi-task learning framework has more potential for better answer. We compare the result of answer extraction and answer synthesis in different categories grouped by the upper bound of extraction method in Table 5 . For the question whose answer can be exactly matched in the passage, our answer synthesis model performs slightly worse because the sequence-to-sequence model makes some deviation when copying extracted evidences. In other categories, our synthesis model achieves more or less improvement. For the question whose answer can be almost found in the passage (ROUGE-L $\ge $ 0.8), our model achieves 0.2 improvement even though the space that can be raised is limited. For the question whose upper performance via answer extraction is between 0.6 and 0.8, our model achieves a large improvement of 2.0. Part of questions in the last category (ROUGE-L $<$ 0.2) are the polar questions whose answers are “yes” or “no”. Although the answer is not in the passage or question, our synthesis model can easily solve this problem and determine the correct answer through the extracted evidences, which leads to such improvement in this category. However, in these questions, answers are too short to influence the final score in terms of BLEU because it is normalized in all questions. Moreover, the score decreases due to the penalty of length. Due to the limitation of BLEU, we only report the result in terms of ROUGE-L in our analysis. We compare our extraction-then-synthesis model with several end-to-end generation models in Table 6 . S2S represents the sequence-to-sequence framework shown in Figure 3 . 
The difference between our synthesis model and the other entries in Table 6 is the information we use in the encoding part. The authors of MS-MARCO publish a baseline of training a sequence-to-sequence model with the question and answer, which only achieves 8.9 in terms of ROUGE-L. Adding all passages to the sequence-to-sequence model improves the result substantially, to 28.75. Then we only use the question and the selected passage to generate the answer. The only difference from our synthesis model is that we add the position features to the basic sequence-to-sequence model. The result is still worse than our synthesis model by a large margin, which shows that the matching between question and passage is very important for generating the answer. Next, we build an end-to-end framework combining matching and generation. We apply the sequence-to-sequence model on top of the matching information by taking the question-sensitive passage representation $v^P_t$ in Equation 14 as the input of the sequence-to-sequence model, which only achieves 6.28 in terms of ROUGE-L. The above results show the effectiveness of our model that solves this task in two steps. In the future, we hope reinforcement learning can help connect evidence extraction and answer synthesis. Conclusion and Future Work In this paper, we propose S-Net, an extraction-then-synthesis framework, for machine reading comprehension. The extraction model aims to match the question and passage and predict the most important sub-spans in the passage related to the question as evidence. Then, the synthesis model synthesizes the question information and the evidence snippet to generate the final answer. We propose a multi-task learning framework to improve the evidence extraction model by passage ranking to extract the evidence snippet, and use the sequence-to-sequence model for answer synthesis. We conduct experiments on the MS-MARCO dataset. Results demonstrate that our approach outperforms the pure answer extraction model and other existing methods. We only annotate one evidence snippet in the sequence-to-sequence model for synthesizing the answer, which cannot solve the question whose answer comes from multiple evidence snippets, such as the second example in Table 1 . Our extraction model is based on the pointer network which selects the evidence by predicting the start and end positions of the text span. Therefore, the top candidates are similar, as they usually share the same start or end positions. By ranking separate candidates for predicting evidence snippets, we can annotate multiple evidence snippets as features in the sequence-to-sequence model for questions in this category in the future. Acknowledgement We thank the MS-MARCO organizers for help in submissions.
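As a closing illustration of the attention-as-pointer step at the heart of the evidence extraction model described above (Eq. 16 and 17), the sketch below scores every passage position against the current pointer state, softmaxes the scores, and takes the argmax as the predicted boundary. All weights, dimensions, and names are invented for illustration; this is not the authors' implementation, and the GRU update of the pointer state between the start and end predictions is omitted for brevity.

```python
# Minimal NumPy sketch of one pointer step from the evidence extraction model.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_step(v_P, h_prev, W_P, W_a, v):
    """Return (predicted position, attention weights, context vector)."""
    # s_j = v^T tanh(W_P v_j^P + W_a h_{t-1}^a) for every passage position j
    scores = np.tanh(v_P @ W_P.T + h_prev @ W_a.T) @ v
    attn = softmax(scores)           # a^t over all N concatenated passage positions
    position = int(np.argmax(attn))  # p^t, the predicted start (or end) index
    context = attn @ v_P             # c_t, the attention-pooling vector fed to the pointer GRU
    return position, attn, context

# Toy example with N = 7 positions and hidden size d = 4 (random parameters).
rng = np.random.default_rng(0)
d, N = 4, 7
v_P = rng.normal(size=(N, d))        # passage representations v^P
h_prev = rng.normal(size=d)          # initial pointer state, i.e. the question vector r^Q
W_P, W_a, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
start, attn, context = pointer_step(v_P, h_prev, W_P, W_a, v)
```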
evidence extraction and answer synthesis
0300cf768996849cab7463d929afcb0b09c9cf2a
0300cf768996849cab7463d929afcb0b09c9cf2a_0
Q: Which framework they propose in this paper? Text: Introduction Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, attracts great attentions from both research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides the large-scale manually created datasets for model training and testing of machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessary in the passages in the MS-MARCO dataset. Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages. Existing methods for the MS-MARCO dataset usually follow the extraction based approach for single passage in the SQuAD dataset. It formulates the task as predicting the start and end positions of the answer in the passage. However, as defined in the MS-MARCO dataset, the answer may come from multiple spans, and the system needs to elaborate the answer using words in the passages and words from the questions as well as words that cannot be found in the passages or questions. Table 1 shows several examples from the MS-MARCO dataset. Except in the first example the answer is an exact text span in the passage, in other examples the answers need to be synthesized or generated from the question and passage. In the second example the answer consists of multiple text spans (hereafter evidence snippets) from the passage. In the third example, the answer contains words from the question. In the fourth example, the answer has words that cannot be found in the passages or question. In the last example, all words are not in the passages or questions. In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers. Specifically, we develop the answer extraction model with state-of-the-art attention based neural networks which predict the start and end positions of evidence snippets. As multiple passages are provided for each question in the MS-MARCO dataset, we propose incorporating passage ranking as an additional task to improve the results of evidence extraction under a multi-task learning framework. We use the bidirectional recurrent neural networks (RNN) for the word-level representation, and then apply the attention mechanism BIBREF2 to incorporate matching information from question to passage at the word level. Next, we predict start and end positions of the evidence snippet by pointer networks BIBREF3 . Moreover, we aggregate the word-level matching information of each passage using the attention pooling, and use the passage-level representation to rank all candidate passages as an additional task. For the answer synthesis, we apply the sequence-to-sequence model to synthesize the final answer based on the extracted evidence. 
extraction-then-synthesis framework
dd8f72cb3c0961b5ca1413697a00529ba60571fe
dd8f72cb3c0961b5ca1413697a00529ba60571fe_0
Q: Why MS-MARCO is different from SQuAD? Text: Introduction Machine reading comprehension BIBREF0 , BIBREF1 , which attempts to enable machines to answer questions after reading a passage or a set of passages, attracts great attentions from both research and industry communities in recent years. The release of the Stanford Question Answering Dataset (SQuAD) BIBREF0 and the Microsoft MAchine Reading COmprehension Dataset (MS-MARCO) BIBREF1 provides the large-scale manually created datasets for model training and testing of machine learning (especially deep learning) algorithms for this task. There are two main differences in existing machine reading comprehension datasets. First, the SQuAD dataset constrains the answer to be an exact sub-span in the passage, while words in the answer are not necessary in the passages in the MS-MARCO dataset. Second, the SQuAD dataset only has one passage for a question, while the MS-MARCO dataset contains multiple passages. Existing methods for the MS-MARCO dataset usually follow the extraction based approach for single passage in the SQuAD dataset. It formulates the task as predicting the start and end positions of the answer in the passage. However, as defined in the MS-MARCO dataset, the answer may come from multiple spans, and the system needs to elaborate the answer using words in the passages and words from the questions as well as words that cannot be found in the passages or questions. Table 1 shows several examples from the MS-MARCO dataset. Except in the first example the answer is an exact text span in the passage, in other examples the answers need to be synthesized or generated from the question and passage. In the second example the answer consists of multiple text spans (hereafter evidence snippets) from the passage. In the third example, the answer contains words from the question. In the fourth example, the answer has words that cannot be found in the passages or question. In the last example, all words are not in the passages or questions. In this paper, we present an extraction-then-synthesis framework for machine reading comprehension shown in Figure 1 , in which the answer is synthesized from the extraction results. We build an evidence extraction model to predict the most important sub-spans from the passages as evidence, and then develop an answer synthesis model which takes the evidence as additional features along with the question and passage to further elaborate the final answers. Specifically, we develop the answer extraction model with state-of-the-art attention based neural networks which predict the start and end positions of evidence snippets. As multiple passages are provided for each question in the MS-MARCO dataset, we propose incorporating passage ranking as an additional task to improve the results of evidence extraction under a multi-task learning framework. We use the bidirectional recurrent neural networks (RNN) for the word-level representation, and then apply the attention mechanism BIBREF2 to incorporate matching information from question to passage at the word level. Next, we predict start and end positions of the evidence snippet by pointer networks BIBREF3 . Moreover, we aggregate the word-level matching information of each passage using the attention pooling, and use the passage-level representation to rank all candidate passages as an additional task. For the answer synthesis, we apply the sequence-to-sequence model to synthesize the final answer based on the extracted evidence. 
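Before the details of each stage, a minimal schematic of how extraction-then-synthesis inference could be wired together is sketched below. The wrapper classes and method names (ExtractionModel-like `extractor`, `predict_span`, `generate`) are placeholders for illustration, not the authors' released code.

```python
# Schematic of extraction-then-synthesis inference (illustrative only).
# `extractor` and `synthesizer` are hypothetical wrappers around the
# evidence-extraction and answer-synthesis networks described in the paper.

def answer_question(question, passages, extractor, synthesizer):
    # Stage 1: evidence extraction.
    # The extractor scores candidate spans over the concatenated passages and,
    # as an auxiliary task, ranks the candidate passages themselves.
    span = extractor.predict_span(question, passages)   # (passage_id, start, end)

    # Stage 2: answer synthesis.
    # The synthesizer reads the question and the selected passage, with the
    # predicted start/end positions marked as extra input features, and
    # generates the final answer word by word.
    passage = passages[span.passage_id]
    answer = synthesizer.generate(question, passage,
                                  start=span.start, end=span.end)
    return answer
```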
The question and passage are encoded by a bi-directional RNN in which the start and end positions of extracted snippet are labeled as features. We combine the question and passage information in the encoding part to initialize the attention-equipped decoder to generate the answer. We conduct experiments on the MS-MARCO dataset. The results show our extraction-then-synthesis framework outperforms our baselines and all other existing methods in terms of ROUGE-L and BLEU-1. Our contributions can be summarized as follows: Related Work Benchmark datasets play an important role in recent progress in reading comprehension and question answering research. BIBREF4 release MCTest whose goal is to select the best answer from four options given the question and the passage. CNN/Daily-Mail BIBREF5 and CBT BIBREF6 are the cloze-style datasets in which the goal is to predict the missing word (often a named entity) in a passage. Different from above datasets, the SQuAD dataset BIBREF0 whose answer can be much longer phrase is more challenging. The answer in SQuAD is a segment of text, or span, from the corresponding reading passage. Similar to the SQuAD, MS-MARCO BIBREF1 is the reading comprehension dataset which aims to answer the question given a set of passages. The answer in MS-MARCO is generated by human after reading all related passages and not necessarily sub-spans of the passages. To the best of our knowledge, the existing works on the MS-MARCO dataset follow their methods on the SQuAD. BIBREF7 combine match-LSTM and pointer networks to produce the boundary of the answer. BIBREF8 and BIBREF9 employ variant co-attention mechanism to match the question and passage mutually. BIBREF8 propose a dynamic pointer network to iteratively infer the answer. BIBREF10 apply an additional gate to the attention-based recurrent networks and propose a self-matching mechanism for aggregating evidence from the whole passage, which achieves the state-of-the-art result on SQuAD dataset. Other works which only focus on the SQuAD dataset may also be applied on the MS-MARCO dataset BIBREF11 , BIBREF12 , BIBREF13 . The sequence-to-sequence model is widely-used in many tasks such as machine translation BIBREF14 , parsing BIBREF15 , response generation BIBREF16 , and summarization generation BIBREF17 . We use it to generate the synthetic answer with the start and end positions of the evidence snippet as features. Our Approach Following the overview in Figure 1 , our approach consists of two parts as evidence extraction and answer synthesis. The two parts are trained in two stages. The evidence extraction part aims to extract evidence snippets related to the question and passage. The answer synthesis part aims to generate the answer based on the extracted evidence snippets. We propose a multi-task learning framework for the evidence extraction shown in Figure 15 , and use the sequence-to-sequence model with additional features of the start and end positions of the evidence snippet for the answer synthesis shown in Figure 3 . Gated Recurrent Unit We use Gated Recurrent Unit (GRU) BIBREF18 instead of basic RNN. Equation 8 describes the mathematical model of the GRU. $r_t$ and $z_t$ are the gates and $h_t$ is the hidden state. $$z_t &= \sigma (W_{hz} h_{t-1} + W_{xz} x_t + b_z)\nonumber \\ r_t &= \sigma (W_{hr} h_{t-1} + W_{xr} x_t + b_r)\nonumber \\ \hat{h_t} &= \Phi (W_h (r_t \odot h_{t-1}) + W_x x_t + b)\nonumber \\ h_t &= (1-z_t)\odot h_{t-1} + z_t \odot \hat{h_t}$$ (Eq. 
8) Evidence Extraction We propose a multi-task learning framework for evidence extraction. Unlike the SQuAD dataset, which only has one passage given a question, there are several related passages for each question in the MS-MARCO dataset. In addition to annotating the answer, MS-MARCO also annotates which passage is correct. To this end, we propose improving text span prediction with passage ranking. Specifically, as shown in Figure 2 , in addition to predicting a text span, we apply another task to rank candidate passages with the passage-level representation. Consider a question Q = $\lbrace w_t^Q\rbrace _{t=1}^m$ and a passage P = $\lbrace w_t^P\rbrace _{t=1}^n$ , we first convert the words to their respective word-level embeddings and character-level embeddings. The character-level embeddings are generated by taking the final hidden states of a bi-directional GRU applied to embeddings of characters in the token. We then use a bi-directional GRU to produce new representation $u^Q_1, \dots , u^Q_m$ and $u^P_1, \dots , u^P_n$ of all words in the question and passage respectively: $$u_t^Q = \mathrm {BiGRU}_Q(u_{t - 1}^Q, [e_t^Q,char_t^Q]) \nonumber \\ u_t^P = \mathrm {BiGRU}_P(u_{t - 1}^P, [e_t^P,char_t^P])$$ (Eq. 11) Given question and passage representation $\lbrace u_t^Q\rbrace _{t=1}^m$ and $\lbrace u_t^P\rbrace _{t=1}^n$ , BIBREF2 propose generating sentence-pair representation $\lbrace v_t^P\rbrace _{t=1}^n$ via soft-alignment of words in the question and passage as follows: $$v_t^P = \mathrm {GRU} (v_{t-1}^P, c^Q_t)$$ (Eq. 12) where $c^Q_t=att(u^Q, [u_t^P, v_{t-1}^P])$ is an attention-pooling vector of the whole question ( $u^Q$ ): $$s_j^t &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_u^Q u_j^Q + W_u^P u_t^P) \nonumber \\ a_i^t &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^m \mathrm {exp}(s_j^t) \nonumber \\ c^Q_t &= \Sigma _{i=1}^m a_i^t u_i^Q$$ (Eq. 13) BIBREF19 introduce match-LSTM, which takes $u_j^P$ as an additional input into the recurrent network. BIBREF10 propose adding gate to the input ( $[u_t^P, c^Q_t]$ ) of RNN to determine the importance of passage parts. $$&g_t = \mathrm {sigmoid}(W_g [u_t^P, c^Q_t]) \nonumber \\ &[u_t^P, c^Q_t]^* = g_t\odot [u_t^P, c^Q_t] \nonumber \\ &v_t^P = \mathrm {GRU} (v_{t-1}^P, [u_t^P, c^Q_t]^*)$$ (Eq. 14) We use pointer networks BIBREF3 to predict the position of evidence snippets. Following the previous work BIBREF7 , we concatenate all passages to predict one span for the evidence snippet prediction. Given the representation $\lbrace v_t^P\rbrace _{t=1}^N$ where $N$ is the sum of the length of all passages, the attention mechanism is utilized as a pointer to select the start position ( $p^1$ ) and end position ( $p^2$ ), which can be formulated as follows: $$s_j^t &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_h^{P} v_j^P + W_{h}^{a} h_{t-1}^a) \nonumber \\ a_i^t &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^N \mathrm {exp}(s_j^t) \nonumber \\ p^t &= \mathrm {argmax}(a_1^t, \dots , a_N^t)$$ (Eq. 16) Here $h_{t-1}^a$ represents the last hidden state of the answer recurrent network (pointer network). The input of the answer recurrent network is the attention-pooling vector based on current predicted probability $a^t$ : $$c_t &= \Sigma _{i=1}^N a_i^t v_i^P \nonumber \\ h_t^a &= \mathrm {GRU}(h_{t-1}^a, c_t)$$ (Eq. 17) When predicting the start position, $h_{t-1}^a$ represents the initial hidden state of the answer recurrent network. We utilize the question vector $r^Q$ as the initial state of the answer recurrent network. 
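To make the pointer-network boundary prediction concrete, a minimal PyTorch sketch is given below: each passage position is scored against the current pointer state, the boundary is chosen by argmax, and the attention-pooled context updates the pointer GRU. The module structure, the 2h-dimensional passage representation, and the use of the question summary vector as the initial state are illustrative assumptions rather than the reported configuration.

```python
import torch
import torch.nn as nn

class SpanPointer(nn.Module):
    """Minimal pointer network for start/end prediction (illustrative sketch)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.W_h = nn.Linear(2 * hidden_size, hidden_size, bias=False)  # for v_j^P
        self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)      # for h_{t-1}^a
        self.v = nn.Linear(hidden_size, 1, bias=False)
        self.cell = nn.GRUCell(2 * hidden_size, hidden_size)

    def forward(self, vP, rQ):
        # vP: (N, 2h) question-aware representation over all concatenated passages.
        # rQ: (h,) question summary used as initial pointer state (assumed size).
        h = rQ
        positions, probs = [], []
        for _ in range(2):                       # one step for start, one for end
            s = self.v(torch.tanh(self.W_h(vP) + self.W_a(h))).squeeze(-1)  # (N,)
            a = torch.softmax(s, dim=0)          # attention over all positions
            positions.append(torch.argmax(a).item())
            probs.append(a)
            c = (a.unsqueeze(-1) * vP).sum(dim=0)          # attention-pooled input
            h = self.cell(c.unsqueeze(0), h.unsqueeze(0)).squeeze(0)
        return positions, probs                  # [start, end], position distributions
```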
$r^Q = att(u^Q, v^Q_r)$ is an attention-pooling vector of the question based on the parameter $v^Q_r$ : $$s_j &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_u^{Q} u_j^Q + W_{v}^{Q} v_r^Q) \nonumber \\ a_i &= \mathrm {exp}(s_i) / \Sigma _{j=1}^m \mathrm {exp}(s_j) \nonumber \\ r^Q &= \Sigma _{i=1}^m a_i u_i^Q$$ (Eq. 18) For this part, the objective function is to minimize the following cross entropy: $$\mathcal {L}_{AP} = -\Sigma _{t=1}^{2}\Sigma _{i=1}^{N}[y^t_i\log a^t_i + (1-y^t_i)\log (1-a^t_i)]$$ (Eq. 19) where $y^t_i \in \lbrace 0,1\rbrace $ denotes a label. $y^t_i=1$ means $i$ is a correct position, otherwise $y^t_i=0$ . In this part, we match the question and each passage from word level to passage level. Firstly, we use the question representation $r^Q$ to attend words in each passage to obtain the passage representation $r^P$ where $r^P = att(v^P, r^Q)$ . $$s_j &= \mathrm {v}^\mathrm {T}\mathrm {tanh}(W_v^{P} v_j^P + W_{v}^{Q} r^Q) \nonumber \\ a_i &= \mathrm {exp}(s_i) / \Sigma _{j=1}^n \mathrm {exp}(s_j) \nonumber \\ r^P &= \Sigma _{i=1}^n a_i v_i^P$$ (Eq. 21) Next, the question representation $r^Q$ and the passage representation $r^P$ are combined to pass two fully connected layers for a matching score, $$g = v_g^{\mathrm {T}}(\mathrm {tanh}(W_g[r^Q,r^P]))$$ (Eq. 22) For one question, each candidate passage $P_i$ has a matching score $g_i$ . We normalize their scores and optimize following objective function: $$\hat{g}_i = \mathrm {exp}(g_i) / \Sigma _{j=1}^k \mathrm {exp}(g_j) \nonumber \\ \mathcal {L}_{PR} = -\sum _{i=1}^{k}[y_i\log \hat{g}_i + (1-y_i)\log (1-\hat{g}_i)]$$ (Eq. 23) where $k$ is the number of passages. $y_i \in \lbrace 0,1\rbrace $ denotes a label. $y_i=1$ means $P_i$ is the correct passage, otherwise $y_i=0$ . The evident extraction part is trained by minimizing joint objective functions: $$\mathcal {L}_{E} = r \mathcal {L}_{AP} + (1-r) \mathcal {L}_{PR}$$ (Eq. 25) where $r$ is the hyper-parameter for weights of two loss functions. Answer Synthesis As shown in Figure 3 , we use the sequence-to-sequence model to synthesize the answer with the extracted evidences as features. We first produce the representation $h_{t}^P$ and $h_{t}^Q$ of all words in the passage and question respectively. When producing the answer representation, we combine the basic word embedding $e_t^p$ with additional features $f_t^s$ and $f_t^e$ to indicate the start and end positions of the evidence snippet respectively predicted by evidence extraction model. $f_t^s =1$ and $f_t^e =1$ mean the position $t$ is the start and end of the evidence span, respectively. $$&h_{t}^P =\mathrm {BiGRU}(h_{t-1}^P, [e_t^p,f_t^s,f_t^e]) \nonumber \\ &h_{t}^Q = \mathrm {BiGRU}(h_{t-1}^Q,e_t^Q)$$ (Eq. 27) On top of the encoder, we use GRU with attention as the decoder to produce the answer. At each decoding time step $t$ , the GRU reads the previous word embedding $ w_{t-1} $ and previous context vector $ c_{t-1} $ as inputs to compute the new hidden state $ d_{t} $ . To initialize the GRU hidden state, we use a linear layer with the last backward encoder hidden state $ \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^P $ and $ \scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^Q $ as input: $$d_{t} &= \text{GRU}(w_{t-1}, c_{t-1}, d_{t-1}) \nonumber \\ d_{0} &= \tanh (W_{d}[\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^P,\scalebox {-1}[1]{\vec{\scalebox {-1}[1]{h}}}_{1}^Q] + b)$$ (Eq. 28) where $ W_{d} $ is the weight matrix and $ b $ is the bias vector. 
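The synthesis encoder of Equation 27 and the decoder initialization of Equation 28 could be realized as in the following sketch: the binary start/end flags are embedded and concatenated to the word embeddings before a bidirectional GRU, and the first backward hidden states of passage and question initialize the decoder. The concrete sizes and the embedding treatment of the flags are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SynthesisEncoder(nn.Module):
    """Bi-GRU encoder with start/end evidence flags as extra features (sketch)."""
    def __init__(self, vocab_size, emb_size=300, feat_size=50, hidden_size=150):
        # Sizes are illustrative defaults, not necessarily the reported setting.
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_size)
        self.flag_emb = nn.Embedding(2, feat_size)     # 0/1 start flag, 0/1 end flag
        self.passage_rnn = nn.GRU(emb_size + 2 * feat_size, hidden_size,
                                  bidirectional=True, batch_first=True)
        self.question_rnn = nn.GRU(emb_size, hidden_size,
                                   bidirectional=True, batch_first=True)
        # Linear layer mapping the two backward states to the decoder init d_0.
        self.init_proj = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, passage_ids, start_flags, end_flags, question_ids):
        # passage_ids/question_ids: LongTensors of token ids; flags: LongTensors of 0/1.
        p_in = torch.cat([self.word_emb(passage_ids),
                          self.flag_emb(start_flags),
                          self.flag_emb(end_flags)], dim=-1)
        hP, _ = self.passage_rnn(p_in)                     # (B, n, 2h)
        hQ, _ = self.question_rnn(self.word_emb(question_ids))
        # The backward direction's state at position 0 is the second half of h_1.
        h = hP.size(-1) // 2
        d0 = torch.tanh(self.init_proj(
            torch.cat([hP[:, 0, h:], hQ[:, 0, h:]], dim=-1)))
        return hP, hQ, d0
```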
The context vector $ c_{t} $ for current time step $ t $ is computed through the concatenate attention mechanism BIBREF14 , which matches the current decoder state $ d_{t} $ with each encoder hidden state $ h_{t} $ to get the weighted sum representation. Here $h_{i}$ consists of the passage representation $h_{t}^P$ and the question representation $h_{t}^Q$ . $$s^t_j &= v_{a}^{\mathrm {T}}\tanh (W_{a}d_{t-1} + U_{a}h_{j}) \nonumber \\ a^t_i &= \mathrm {exp}(s_i^t) / \Sigma _{j=1}^n \mathrm {exp}(s_j^t) \nonumber \\ c_{t} &= \Sigma _{i = 1}^{n} a^t_ih_{i}$$ (Eq. 30) We then combine the previous word embedding $ w_{t-1} $ , the current context vector $ c_{t} $ , and the decoder state $ d_{t} $ to construct the readout state $ r_{t} $ . The readout state is then passed through a maxout hidden layer BIBREF20 to predict the next word with a softmax layer over the decoder vocabulary. $$r_{t} &= W_{r}w_{t-1} + U_{r}c_{t} + V_{r}d_{t} \nonumber \\ m_{t} &= [\max \lbrace r_{t, 2j-1}, r_{t, 2j}\rbrace ]^{\mathrm {T}} \nonumber \\ p(y_{t} &\vert y_{1}, \dots , y_{t-1}) = \text{softmax}(W_{o}m_{t})$$ (Eq. 31) where $ W_{a} $ , $ U_{a} $ , $ W_{r} $ , $ U_{r} $ , $ V_{r} $ and $ W_{o} $ are parameters to be learned. Readout state $ r_{t} $ is a $ 2d $ -dimensional vector, and the maxout layer (Equation 31 ) picks the max value for every two numbers in $ r_{t} $ and produces a d-dimensional vector $ m_{t} $ . Our goal is to maximize the output probability given the input sentence. Therefore, we optimize the negative log-likelihood loss function: $$\mathcal {L}_{S}= - \frac{1}{\vert \mathcal {D} \vert } \Sigma _{(X, Y) \in \mathcal {D}} \log p(Y|X)$$ (Eq. 32) where $\mathcal {D}$ is the set of data. $X$ represents the question and passage including evidence snippets, and $Y$ represents the answer. Experiment We conduct our experiments on the MS-MARCO dataset BIBREF1 . We compare our extraction-then-synthesis framework with pure extraction model and other baseline methods on the leaderboard of MS-MARCO. Experimental results show that our model achieves better results in official evaluation metrics. We also conduct ablation tests to verify our method, and compare our framework with the end-to-end generation framework. Dataset and Evaluation Metrics For the MS-MARCO dataset, the questions are user queries issued to the Bing search engine and the context passages are from real web documents. The data has been split into a training set (82,326 pairs), a development set (10,047 pairs) and a test set (9,650 pairs). The answers are human-generated and not necessarily sub-spans of the passages so that the metrics in the official tool of MS-MARCO evaluation are BLEU BIBREF21 and ROUGE-L BIBREF22 . In the official evaluation tool, the ROUGE-L is calculated by averaging the score per question, however, the BLEU is normalized with all questions. We hold that the answer should be evaluated case-by-case in the reading comprehension task. Therefore, we mainly focus on the result in the ROUGE-L. Implementation Details The evidence extraction and the answer synthesis are trained in two stages. For evidence extraction, since the answers are not necessarily sub-spans of the passages, we choose the span with the highest ROUGE-L score with the reference answer as the gold span in the training. Moreover, we only use the data whose ROUGE-L score of chosen text span is higher than 0.7, therefore we only use 71,417 training pairs in our experiments. For answer synthesis, the training data consists of two parts. 
First, for all passages in the training data, we choose the best span with highest ROUGE-L score as the evidence, and use the corresponding reference answer as the output. We only use the data whose ROUGE-L score of chosen evidence snippet is higher than 0.5. Second, we apply our evidence extraction model to all training data to obtain the extracted span. Then we treat the passage to which this span belongs as the input. For answer extraction, we use 300-dimensional uncased pre-trained GloVe embeddings BIBREF23 for both question and passage without update during training. We use zero vectors to represent all out-of-vocabulary words. Hidden vector length is set to 150 for all layers. We also apply dropout BIBREF24 between layers, with dropout rate 0.1. The weight $r$ is set to 0.8. For answer synthesis, we use an identical vocabulary set for the input and output collected from the training data. We set the vocabulary size to 30,000 according to the frequency and the other words are set to $<$ unk $>$ . All word embeddings are updated during the training. We set the word embedding size to 300, set the feature embedding size of start and end positions of the extracted snippet to 50, and set all GRU hidden state sizes to 150. The model is optimized using AdaDelta BIBREF25 with initial learning rate of 1.0. All hyper-parameters are selected on the MS-MARCO development set. When decoding, we first run our extraction model to obtain the extracted span, and run our synthesis model with the extracted result and the passage that contains this span. We use the beam search with beam size of 12 to generate the sequence. After the sequence-to-sequence model, we post-process the sequence with following rules: We only keep once if the sequence-to-sequence model generates duplicated words or phrases. For all “ $<$ unk $>$ ” and the word as well as phrase which are not existed in the extracted answer, we try to refine it by finding a word or phrase with the same adjacent words in the extracted span and passage. If the generated answer only contains a single word “ $<$ unk $>$ ”, we use the extracted span as the final answer. Baseline Methods We conduct experiments with following settings: S-Net (Extraction): the model that only has the evidence extraction part. S-Net: the model that consists of the evidence extraction part and the answer synthesis part. We implement two state-of-the-art baselines on reading comprehension, namely BiDAF BIBREF9 and Prediction BIBREF7 , to extract text spans as evidence snippets. Moreover, we implement a baseline that only has the evidence extraction part without the passage ranking. Then we apply the answer synthesis part on top of their results. We also compare with other methods on the MS-MARCO leaderboard, including FastQAExt BIBREF26 , ReasoNet BIBREF27 , and R-Net BIBREF10 . Result Table 2 shows the results on the MS-MARCO test data. Our extraction model achieves 41.45 and 44.08 in terms of ROUGE-L and BLEU-1, respectively. Next we train the model 30 times with the same setting, and select models using a greedy search. We sum the probability at each position of each single model to decide the ensemble result. Finally we select 13 models for ensemble, which achieves 42.92 and 44.97 in terms of ROUGE-L and BLEU-1, respectively, which achieves the state-of-the-art results of the extraction model. Then we test our synthesis model based on the extracted evidence. 
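As a small illustration of the ensembling step described above, which sums the per-position probabilities of the single models before choosing the boundary, a sketch could look like the following. The `span_probs` method is hypothetical, and constraining the end position to follow the start is an illustrative choice rather than a documented detail.

```python
import torch

def ensemble_span(models, question, passages):
    """Sum start/end position distributions over models, then pick the boundary.

    Illustrative sketch: each model is assumed to expose a hypothetical
    `span_probs` method returning (start_probs, end_probs) over the
    concatenated passage positions.
    """
    start_sum, end_sum = None, None
    for model in models:
        start_probs, end_probs = model.span_probs(question, passages)
        start_sum = start_probs if start_sum is None else start_sum + start_probs
        end_sum = end_probs if end_sum is None else end_sum + end_probs
    start = torch.argmax(start_sum).item()
    # Illustrative constraint: the end position must not precede the start.
    end = start + torch.argmax(end_sum[start:]).item()
    return start, end
```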
Our synthesis model achieves 3.78% and 3.73% improvement on the single model and ensemble model in terms of ROUGE-L, respectively. Our best result achieves 46.65 in terms of ROUGE-L and 44.78 in terms of BLEU-1, which outperforms all existing methods by a large margin and is very close to human performance. Moreover, we observe that our method only achieves a significant improvement in terms of ROUGE-L compared with our baseline. The reason is that our synthesis model works better when the answer is short, which has almost no effect on BLEU because BLEU is normalized over all questions. Since answers on the test set are not published, we analyze our model on the development set. Table 3 shows results on the development set in terms of ROUGE-L. As we can see, our method outperforms the baseline and several strong state-of-the-art systems. For the evidence extraction part, our proposed multi-task learning framework achieves 42.23 and 44.11 for the single and ensemble model in terms of ROUGE-L. For the answer synthesis, the single and ensemble models improve by 3.72% and 3.65% respectively in terms of ROUGE-L. We observe consistent improvement when applying our answer synthesis model to other answer span prediction models, such as BiDAF and Prediction. Discussion We analyze the result of incorporating passage ranking as an additional task. We compare our multi-task framework with two baselines as shown in Table 4 . For passage selection, our multi-task model achieves an accuracy of 38.9, which outperforms the pure answer prediction model by 4.3. Moreover, jointly learning the answer prediction part and the passage ranking part is better than solving this task in two separate steps, because the answer span provides more information with stronger supervision, which benefits the passage ranking part. The ROUGE-L is calculated with the best answer span in the selected passage, which shows that our multi-task learning framework has more potential for finding better answers. We compare the results of answer extraction and answer synthesis in different categories grouped by the upper bound of the extraction method in Table 5 . For questions whose answer can be exactly matched in the passage, our answer synthesis model performs slightly worse because the sequence-to-sequence model introduces some deviation when copying the extracted evidence. In the other categories, our synthesis model achieves some improvement. For questions whose answer can almost be found in the passage (ROUGE-L $\ge $ 0.8), our model achieves 0.2 improvement even though the room for improvement is limited. For questions whose upper performance via answer extraction is between 0.6 and 0.8, our model achieves a large improvement of 2.0. Part of the questions in the last category (ROUGE-L $<$ 0.2) are polar questions whose answers are “yes” or “no”. Although the answer is not in the passage or question, our synthesis model can easily solve this problem and determine the correct answer from the extracted evidence, which leads to the improvement in this category. However, in these questions, answers are too short to influence the final score in terms of BLEU because it is normalized over all questions. Moreover, the score decreases due to the length penalty. Due to this limitation of BLEU, we only report the result in terms of ROUGE-L in our analysis. We compare our extraction-then-synthesis model with several end-to-end generation models in Table 6 . S2S represents the sequence-to-sequence framework shown in Figure 3 . 
The difference between our synthesis model and all the entries in Table 6 is the information we use in the encoding part. The authors of MS-MARCO publish a baseline that trains a sequence-to-sequence model with the question and answer, which only achieves 8.9 in terms of ROUGE-L. Adding all passages to the sequence-to-sequence model improves the result considerably, to 28.75. Then we only use the question and the selected passage to generate the answer. The only difference from our synthesis model is that we add the position features to the basic sequence-to-sequence model. The result is still worse than our synthesis model by a large margin, which shows that the matching between question and passage is very important for generating the answer. Next, we build an end-to-end framework combining matching and generation. We apply the sequence-to-sequence model on top of the matching information by taking the question-sensitive passage representation $v^P_t$ in Equation 14 as the input of the sequence-to-sequence model, which only achieves 6.28 in terms of ROUGE-L. The above results show the effectiveness of our model, which solves the task in two steps. In the future, we hope that reinforcement learning can strengthen the connection between evidence extraction and answer synthesis. Conclusion and Future Work In this paper, we propose S-Net, an extraction-then-synthesis framework, for machine reading comprehension. The extraction model aims to match the question and passage and predict the most important sub-spans in the passage related to the question as evidence. Then, the synthesis model synthesizes the question information and the evidence snippet to generate the final answer. We propose a multi-task learning framework that improves the evidence extraction model with passage ranking when extracting the evidence snippet, and we use the sequence-to-sequence model for answer synthesis. We conduct experiments on the MS-MARCO dataset. Results demonstrate that our approach outperforms the pure answer extraction model and other existing methods. We only annotate one evidence snippet in the sequence-to-sequence model for synthesizing the answer, which cannot handle questions whose answers come from multiple pieces of evidence, such as the second example in Table 1. Our extraction model is based on the pointer network, which selects the evidence by predicting the start and end positions of the text span. Therefore the top candidates are similar, as they usually share the same start or end positions. By ranking separate candidates for predicting evidence snippets, we can annotate multiple evidence snippets as features in the sequence-to-sequence model for questions in this category in the future. Acknowledgement We thank the MS-MARCO organizers for help in submissions.
There are several related passages for each question in the MS-MARCO dataset, and MS-MARCO also annotates which passage is correct.
fbd094918b493122b3bba99cefe5da80cf88959c
fbd094918b493122b3bba99cefe5da80cf88959c_0
Q: Did they experiment with pre-training schemes? Text: Introduction Sentiment analysis and emotion recognition, as two closely related subfields of affective computing, play a key role in the advancement of artificial intelligence BIBREF0 . However, the complexity and ambiguity of natural language constitutes a wide range of challenges for computational systems. In the past years irony and sarcasm detection have received great traction within the machine learning and NLP community BIBREF1 , mainly due to the high frequency of sarcastic and ironic expressions in social media. Their linguistic collocation inclines to flip polarity in the context of sentiment analysis, which makes machine-based irony detection critical for sentiment analysis BIBREF2 , BIBREF3 . Irony is a profoundly pragmatic and versatile linguistic phenomenon. As its foundations usually lay beyond explicit linguistic patterns in re-constructing contextual dependencies and latent meaning, such as shared knowledge or common knowledge BIBREF1 , automatically detecting it remains a challenging task in natural language processing. In this paper, we introduce our system for the shared task of Irony detection in English tweets, a part of the 2018 SemEval BIBREF4 . We note that computational approaches to automatically detecting irony often deploy expensive feature-engineered systems which rely on a rich body of linguistic and contextual cues BIBREF5 , BIBREF6 . The advent of Deep Learning applied to NLP has introduced models that have succeeded in large part because they learn and use their own continuous numeric representations BIBREF7 of words BIBREF8 , offering us the dream of forgetting manually-designed features. To this extent, in this paper we propose a representation learning approach for irony detection, which relies on a bidirectional LSTM and pre-trained word embeddings. Data and pre-processing For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 . The corpus comprises different types of irony: Verbal irony is often referred to as an utterance that conveys the opposite meaning of what of literally expressed BIBREF9 , BIBREF10 , e.g. I love annoying people. Situational irony appears in settings, that diverge from the expected BIBREF11 , e.g. an old man who won the lottery and died the next day. The latter does not necessarily exhibit polarity contrast or other typical linguistic features, which makes it particularly difficult to classify correctly. For the pre-processing we used the Natural Language Toolkit BIBREF12 . As a first step, we removed the following words and hashtagged words: not, sarc, sarcasm, irony, ironic, sarcastic and sarcast, in order to ascertain a clean corpus without topic-related triggers. To ease the tokenizing process with the NLTK TweetTokenizer, we replaced two spaces with one space and removed usernames and urls, as they do not generally provide any useful information for detecting irony. We do not stem or lowercase the tokens, since some patterns within that scope might serve as an indicator for ironic tweets, for instance a word or a sequence of words, in which all letters are capitalized BIBREF13 . 
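A minimal sketch of this pre-processing pipeline is shown below. The regular expressions and the double-space handling are my own approximation of the described steps, not the authors' code, and only the NLTK TweetTokenizer call reflects the tool they name.

```python
import re
from nltk.tokenize import TweetTokenizer

# Trigger terms removed (with or without a leading '#') to avoid topic leakage.
TRIGGERS = {"not", "sarc", "sarcasm", "irony", "ironic", "sarcastic", "sarcast"}
tokenizer = TweetTokenizer()

def preprocess(tweet):
    # Approximation of the described cleaning steps (not the authors' exact code).
    tweet = re.sub(r"@\w+", "", tweet)            # drop usernames
    tweet = re.sub(r"https?://\S+", "", tweet)    # drop urls
    tweet = tweet.replace("  ", " ")              # replace two spaces with one
    tokens = tokenizer.tokenize(tweet)
    # Keep original casing: capitalization patterns can signal irony.
    return [t for t in tokens if t.lstrip("#").lower() not in TRIGGERS]

print(preprocess("I just LOVE mondays... #not #irony @someone http://t.co/x"))
```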
Proposed Approach The goal of the subtask A was to build a binary classification system that predicts if a tweet is ironic or non-ironic. In the following sections, we first describe the dataset provided for the task and our pre-processing pipeline. Later, we lay out the proposed model architecture, our experiments and results. Word representation Representation learning approaches usually require extensive amounts of data to derive proper results. Moreover, previous studies have shown that initializing representations using random values generally causes the performance to drop. For these reasons, we rely on pre-trained word embeddings as a means of providing the model the adequate setting. We experiment with GloVe BIBREF14 for small sizes, namely 25, 50 and 100. This is based on previous work showing that representation learning models based on convolutional neural networks perform well compared to traditional machine learning methods with a significantly smaller feature vector size, while at the same time preventing over-fitting and accelerates computation (e.g BIBREF2 . GloVe embeddings are trained on a dataset of 2B tweets, with a total vocabulary of 1.2 M tokens. However, we observed a significant overlap with the vocabulary extracted from the shared task dataset. To deal with out-of-vocabulary terms that have a frequency above a given threshold, we create a new vector which is initialized based on the space described by the infrequent words in GloVe. Concretely, we uniformly sample a vector from a sphere centered in the centroid of the 10% less frequent words in the GloVe vocabulary, whose radius is the mean distance between the centroid and all the words in the low frequency set. For the other case, we use the special UNK token. To maximize the knowledge that may be recovered from the pre-trained embeddings, specially for out-of-vocabulary terms, we add several token-level and sentence-level binary features derived from simple linguistic patterns, which are concatenated to the corresponding vectors. If the token is fully lowercased. If the Token is fully uppercased. If only the first letter is capitalized. If the token contains digits. If any token is fully lowercased. If any token is fully uppercased. If any token appears more than once. Model architecture Recurrent neural networks are powerful sequence learning models that have achieved excellent results for a variety of difficult NLP tasks BIBREF15 . In particular, we use the last hidden state of a bidirectional LSTM architecture BIBREF16 to obtain our tweet representations. This setting is currently regarded as the state-of-the-art BIBREF17 for the task on other datasets. To avoid over-fitting we use Dropout BIBREF18 and for training we set binary cross-entropy as a loss function. For evaluation we use our own wrappers of the the official evaluation scripts provided for the shared tasks, which are based on accuracy, precision, recall and F1-score. Experimental setup Our model is implemented in PyTorch BIBREF19 , which allowed us to easily deal with the variable tweet length due to the dynamic nature of the platform. We experimented with different values for the LSTM hidden state size, as well as for the dropout probability, obtaining best results for a dropout probability of INLINEFORM0 and 150 units for the the hidden vector. We trained our models using 80% of the provided data, while the remaining 20% was used for model development. 
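Returning to the word-representation step above, the out-of-vocabulary initialization can be sketched as follows. Here `glove` is an assumed NumPy matrix of pre-trained vectors ordered from most to least frequent, and the sampling detail (uniform random direction with a uniform radius) is one plausible reading of the description rather than the authors' exact procedure.

```python
import numpy as np

def oov_vector(glove, rng=np.random.default_rng(0)):
    """Sample an embedding near the low-frequency region of GloVe (sketch).

    `glove` is assumed to be a (V, d) matrix ordered from most to least
    frequent word; the last 10% of rows form the low-frequency set.
    """
    low_freq = glove[int(0.9 * len(glove)):]            # 10% least frequent words
    centroid = low_freq.mean(axis=0)
    radius = np.linalg.norm(low_freq - centroid, axis=1).mean()
    direction = rng.standard_normal(glove.shape[1])
    direction /= np.linalg.norm(direction)
    # Uniform radius in [0, radius]: one plausible reading of "sample from a sphere".
    return centroid + rng.uniform(0.0, radius) * direction
```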
We used Adam BIBREF20 , with a learning rate of INLINEFORM1 and early stopping when performance did not improve on the development set. Using embeddings of size 100 provided better results in practice. Our final best model is an ensemble of four models with the same architecture but different random initialization. To compare our results, we use the provided baseline, which is a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs. For pre-processing, in this case we do not preserve casing and delete English stopwords. Results To understand how our strategies to recover more information from the pre-trained word embeddings affected the results, we ran ablation studies to compare how the token-level and sentence-level features contributed to the performance. Table TABREF16 summarizes the impact of these features in terms of F1-score on the validation set. We see that sentence-level features had a positive yet small impact, while token-level features seemed to actually hurt the performance. We think that since the task is performed at the sentence-level, probably features that capture linguistic phenomena at the same level provide useful information to the model, while the contributions of other finer granularity features seem to be too specific for the model to leverage on. Table TABREF17 summarizes our best single-model results on the validation set (20% of the provided data) compared to the baseline, as well as the official results of our model ensemble on the test data. Out of 43 teams our system ranked 421st with an official F1-score of 0.2905 on the test set. Although our model outperforms the baseline in the validation set in terms of F1-score, we observe important drops for all metrics compared to the test set, showing that the architecture seems to be unable to generalize well. We think these results highlight the necessity of an ad-hoc architecture for the task as well as the relevance of additional information. The work of BIBREF21 offers interesting contributions in these two aspects, achieving good results for a range of tasks that include sarcasm detection, using an additional attention layer over a BiLSTM like ours, while also pre-training their model on an emoji-based dataset of 1246 million tweets. Moreover, we think that due to the complexity of the problem and the size of the training data in the context of deep learning better results could be obtained with additional resources for pre-training. Concretely, we see transfer learning as one option to add knowledge from a larger, related dataset could significantly improve the results BIBREF22 . Manually labeling and checking data is a vastly time-consuming effort. Even if noisy, collecting a considerably larger self-annotated dataset such as in BIBREF23 could potentially boost model performance. Conclusion In this paper we presented our system to SemEval-2018 shared task on irony detection in English tweets (subtask A), which leverages on a BiLSTM and pre-trained word embeddings for representation learning, without using human-engineered features. Our results showed that although the generalization capabilities of the model are limited, there are clear future directions to improve. In particular, access to more training data and the deployment of methods like transfer learning seem to be promising directions for future research in representation learning-based sarcasm detection.
No
78661bdd4d11148e07bdf17141cf088db4ad60c6
78661bdd4d11148e07bdf17141cf088db4ad60c6_0
Q: What were their results on the test set? Text: Introduction Sentiment analysis and emotion recognition, as two closely related subfields of affective computing, play a key role in the advancement of artificial intelligence BIBREF0 . However, the complexity and ambiguity of natural language constitutes a wide range of challenges for computational systems. In the past years irony and sarcasm detection have received great traction within the machine learning and NLP community BIBREF1 , mainly due to the high frequency of sarcastic and ironic expressions in social media. Their linguistic collocation inclines to flip polarity in the context of sentiment analysis, which makes machine-based irony detection critical for sentiment analysis BIBREF2 , BIBREF3 . Irony is a profoundly pragmatic and versatile linguistic phenomenon. As its foundations usually lay beyond explicit linguistic patterns in re-constructing contextual dependencies and latent meaning, such as shared knowledge or common knowledge BIBREF1 , automatically detecting it remains a challenging task in natural language processing. In this paper, we introduce our system for the shared task of Irony detection in English tweets, a part of the 2018 SemEval BIBREF4 . We note that computational approaches to automatically detecting irony often deploy expensive feature-engineered systems which rely on a rich body of linguistic and contextual cues BIBREF5 , BIBREF6 . The advent of Deep Learning applied to NLP has introduced models that have succeeded in large part because they learn and use their own continuous numeric representations BIBREF7 of words BIBREF8 , offering us the dream of forgetting manually-designed features. To this extent, in this paper we propose a representation learning approach for irony detection, which relies on a bidirectional LSTM and pre-trained word embeddings. Data and pre-processing For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 . The corpus comprises different types of irony: Verbal irony is often referred to as an utterance that conveys the opposite meaning of what of literally expressed BIBREF9 , BIBREF10 , e.g. I love annoying people. Situational irony appears in settings, that diverge from the expected BIBREF11 , e.g. an old man who won the lottery and died the next day. The latter does not necessarily exhibit polarity contrast or other typical linguistic features, which makes it particularly difficult to classify correctly. For the pre-processing we used the Natural Language Toolkit BIBREF12 . As a first step, we removed the following words and hashtagged words: not, sarc, sarcasm, irony, ironic, sarcastic and sarcast, in order to ascertain a clean corpus without topic-related triggers. To ease the tokenizing process with the NLTK TweetTokenizer, we replaced two spaces with one space and removed usernames and urls, as they do not generally provide any useful information for detecting irony. We do not stem or lowercase the tokens, since some patterns within that scope might serve as an indicator for ironic tweets, for instance a word or a sequence of words, in which all letters are capitalized BIBREF13 . 
Proposed Approach The goal of the subtask A was to build a binary classification system that predicts if a tweet is ironic or non-ironic. In the following sections, we first describe the dataset provided for the task and our pre-processing pipeline. Later, we lay out the proposed model architecture, our experiments and results. Word representation Representation learning approaches usually require extensive amounts of data to derive proper results. Moreover, previous studies have shown that initializing representations using random values generally causes the performance to drop. For these reasons, we rely on pre-trained word embeddings as a means of providing the model the adequate setting. We experiment with GloVe BIBREF14 for small sizes, namely 25, 50 and 100. This is based on previous work showing that representation learning models based on convolutional neural networks perform well compared to traditional machine learning methods with a significantly smaller feature vector size, while at the same time preventing over-fitting and accelerates computation (e.g BIBREF2 . GloVe embeddings are trained on a dataset of 2B tweets, with a total vocabulary of 1.2 M tokens. However, we observed a significant overlap with the vocabulary extracted from the shared task dataset. To deal with out-of-vocabulary terms that have a frequency above a given threshold, we create a new vector which is initialized based on the space described by the infrequent words in GloVe. Concretely, we uniformly sample a vector from a sphere centered in the centroid of the 10% less frequent words in the GloVe vocabulary, whose radius is the mean distance between the centroid and all the words in the low frequency set. For the other case, we use the special UNK token. To maximize the knowledge that may be recovered from the pre-trained embeddings, specially for out-of-vocabulary terms, we add several token-level and sentence-level binary features derived from simple linguistic patterns, which are concatenated to the corresponding vectors. If the token is fully lowercased. If the Token is fully uppercased. If only the first letter is capitalized. If the token contains digits. If any token is fully lowercased. If any token is fully uppercased. If any token appears more than once. Model architecture Recurrent neural networks are powerful sequence learning models that have achieved excellent results for a variety of difficult NLP tasks BIBREF15 . In particular, we use the last hidden state of a bidirectional LSTM architecture BIBREF16 to obtain our tweet representations. This setting is currently regarded as the state-of-the-art BIBREF17 for the task on other datasets. To avoid over-fitting we use Dropout BIBREF18 and for training we set binary cross-entropy as a loss function. For evaluation we use our own wrappers of the the official evaluation scripts provided for the shared tasks, which are based on accuracy, precision, recall and F1-score. Experimental setup Our model is implemented in PyTorch BIBREF19 , which allowed us to easily deal with the variable tweet length due to the dynamic nature of the platform. We experimented with different values for the LSTM hidden state size, as well as for the dropout probability, obtaining best results for a dropout probability of INLINEFORM0 and 150 units for the the hidden vector. We trained our models using 80% of the provided data, while the remaining 20% was used for model development. 
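For concreteness, the token-level and sentence-level binary features listed above could be computed as in the following sketch; the helper functions are my own formulation of the listed patterns, not the authors' implementation.

```python
def token_features(token):
    # Four token-level binary indicators, concatenated to the word vector.
    return [
        int(token.islower()),                       # fully lowercased
        int(token.isupper()),                       # fully uppercased
        int(token.istitle()),                       # only first letter capitalized
        int(any(ch.isdigit() for ch in token)),     # contains digits
    ]

def sentence_features(tokens):
    # Three sentence-level binary indicators shared by every token in the tweet.
    return [
        int(any(t.islower() for t in tokens)),      # any token fully lowercased
        int(any(t.isupper() for t in tokens)),      # any token fully uppercased
        int(len(set(tokens)) < len(tokens)),        # any token appears more than once
    ]

print(token_features("LOL"), sentence_features(["so", "FUN", "so", "fun"]))
```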
We used Adam BIBREF20 , with a learning rate of INLINEFORM1 and early stopping when performance did not improve on the development set. Using embeddings of size 100 provided better results in practice. Our final best model is an ensemble of four models with the same architecture but different random initialization. To compare our results, we use the provided baseline, which is a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs. For pre-processing, in this case we do not preserve casing and delete English stopwords. Results To understand how our strategies to recover more information from the pre-trained word embeddings affected the results, we ran ablation studies to compare how the token-level and sentence-level features contributed to the performance. Table TABREF16 summarizes the impact of these features in terms of F1-score on the validation set. We see that sentence-level features had a positive yet small impact, while token-level features seemed to actually hurt the performance. We think that since the task is performed at the sentence-level, probably features that capture linguistic phenomena at the same level provide useful information to the model, while the contributions of other finer granularity features seem to be too specific for the model to leverage on. Table TABREF17 summarizes our best single-model results on the validation set (20% of the provided data) compared to the baseline, as well as the official results of our model ensemble on the test data. Out of 43 teams our system ranked 421st with an official F1-score of 0.2905 on the test set. Although our model outperforms the baseline in the validation set in terms of F1-score, we observe important drops for all metrics compared to the test set, showing that the architecture seems to be unable to generalize well. We think these results highlight the necessity of an ad-hoc architecture for the task as well as the relevance of additional information. The work of BIBREF21 offers interesting contributions in these two aspects, achieving good results for a range of tasks that include sarcasm detection, using an additional attention layer over a BiLSTM like ours, while also pre-training their model on an emoji-based dataset of 1246 million tweets. Moreover, we think that due to the complexity of the problem and the size of the training data in the context of deep learning better results could be obtained with additional resources for pre-training. Concretely, we see transfer learning as one option to add knowledge from a larger, related dataset could significantly improve the results BIBREF22 . Manually labeling and checking data is a vastly time-consuming effort. Even if noisy, collecting a considerably larger self-annotated dataset such as in BIBREF23 could potentially boost model performance. Conclusion In this paper we presented our system to SemEval-2018 shared task on irony detection in English tweets (subtask A), which leverages on a BiLSTM and pre-trained word embeddings for representation learning, without using human-engineered features. Our results showed that although the generalization capabilities of the model are limited, there are clear future directions to improve. In particular, access to more training data and the deployment of methods like transfer learning seem to be promising directions for future research in representation learning-based sarcasm detection.
an official F1-score of 0.2905 on the test set
95d98b2a7fbecd1990ec9a070f9d5624891a4f26
95d98b2a7fbecd1990ec9a070f9d5624891a4f26_0
Q: What is the size of the dataset? Text: Introduction Sentiment analysis and emotion recognition, as two closely related subfields of affective computing, play a key role in the advancement of artificial intelligence BIBREF0 . However, the complexity and ambiguity of natural language constitutes a wide range of challenges for computational systems. In the past years irony and sarcasm detection have received great traction within the machine learning and NLP community BIBREF1 , mainly due to the high frequency of sarcastic and ironic expressions in social media. Their linguistic collocation inclines to flip polarity in the context of sentiment analysis, which makes machine-based irony detection critical for sentiment analysis BIBREF2 , BIBREF3 . Irony is a profoundly pragmatic and versatile linguistic phenomenon. As its foundations usually lay beyond explicit linguistic patterns in re-constructing contextual dependencies and latent meaning, such as shared knowledge or common knowledge BIBREF1 , automatically detecting it remains a challenging task in natural language processing. In this paper, we introduce our system for the shared task of Irony detection in English tweets, a part of the 2018 SemEval BIBREF4 . We note that computational approaches to automatically detecting irony often deploy expensive feature-engineered systems which rely on a rich body of linguistic and contextual cues BIBREF5 , BIBREF6 . The advent of Deep Learning applied to NLP has introduced models that have succeeded in large part because they learn and use their own continuous numeric representations BIBREF7 of words BIBREF8 , offering us the dream of forgetting manually-designed features. To this extent, in this paper we propose a representation learning approach for irony detection, which relies on a bidirectional LSTM and pre-trained word embeddings. Data and pre-processing For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3 . The corpus comprises different types of irony: Verbal irony is often referred to as an utterance that conveys the opposite meaning of what of literally expressed BIBREF9 , BIBREF10 , e.g. I love annoying people. Situational irony appears in settings, that diverge from the expected BIBREF11 , e.g. an old man who won the lottery and died the next day. The latter does not necessarily exhibit polarity contrast or other typical linguistic features, which makes it particularly difficult to classify correctly. For the pre-processing we used the Natural Language Toolkit BIBREF12 . As a first step, we removed the following words and hashtagged words: not, sarc, sarcasm, irony, ironic, sarcastic and sarcast, in order to ascertain a clean corpus without topic-related triggers. To ease the tokenizing process with the NLTK TweetTokenizer, we replaced two spaces with one space and removed usernames and urls, as they do not generally provide any useful information for detecting irony. We do not stem or lowercase the tokens, since some patterns within that scope might serve as an indicator for ironic tweets, for instance a word or a sequence of words, in which all letters are capitalized BIBREF13 . Proposed Approach The goal of the subtask A was to build a binary classification system that predicts if a tweet is ironic or non-ironic. 
In the following sections, we first describe the dataset provided for the task and our pre-processing pipeline. Later, we lay out the proposed model architecture, our experiments and results. Word representation Representation learning approaches usually require extensive amounts of data to derive proper results. Moreover, previous studies have shown that initializing representations using random values generally causes the performance to drop. For these reasons, we rely on pre-trained word embeddings as a means of providing the model the adequate setting. We experiment with GloVe BIBREF14 for small sizes, namely 25, 50 and 100. This is based on previous work showing that representation learning models based on convolutional neural networks perform well compared to traditional machine learning methods with a significantly smaller feature vector size, while at the same time preventing over-fitting and accelerates computation (e.g BIBREF2 . GloVe embeddings are trained on a dataset of 2B tweets, with a total vocabulary of 1.2 M tokens. However, we observed a significant overlap with the vocabulary extracted from the shared task dataset. To deal with out-of-vocabulary terms that have a frequency above a given threshold, we create a new vector which is initialized based on the space described by the infrequent words in GloVe. Concretely, we uniformly sample a vector from a sphere centered in the centroid of the 10% less frequent words in the GloVe vocabulary, whose radius is the mean distance between the centroid and all the words in the low frequency set. For the other case, we use the special UNK token. To maximize the knowledge that may be recovered from the pre-trained embeddings, specially for out-of-vocabulary terms, we add several token-level and sentence-level binary features derived from simple linguistic patterns, which are concatenated to the corresponding vectors. If the token is fully lowercased. If the Token is fully uppercased. If only the first letter is capitalized. If the token contains digits. If any token is fully lowercased. If any token is fully uppercased. If any token appears more than once. Model architecture Recurrent neural networks are powerful sequence learning models that have achieved excellent results for a variety of difficult NLP tasks BIBREF15 . In particular, we use the last hidden state of a bidirectional LSTM architecture BIBREF16 to obtain our tweet representations. This setting is currently regarded as the state-of-the-art BIBREF17 for the task on other datasets. To avoid over-fitting we use Dropout BIBREF18 and for training we set binary cross-entropy as a loss function. For evaluation we use our own wrappers of the the official evaluation scripts provided for the shared tasks, which are based on accuracy, precision, recall and F1-score. Experimental setup Our model is implemented in PyTorch BIBREF19 , which allowed us to easily deal with the variable tweet length due to the dynamic nature of the platform. We experimented with different values for the LSTM hidden state size, as well as for the dropout probability, obtaining best results for a dropout probability of INLINEFORM0 and 150 units for the the hidden vector. We trained our models using 80% of the provided data, while the remaining 20% was used for model development. We used Adam BIBREF20 , with a learning rate of INLINEFORM1 and early stopping when performance did not improve on the development set. Using embeddings of size 100 provided better results in practice. 
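A minimal PyTorch sketch of the described classifier, with pre-trained embeddings feeding a bidirectional LSTM whose final hidden states are concatenated, followed by dropout and a sigmoid output trained with binary cross-entropy, is given below. The dropout value and the omission of the extra binary features are assumptions, since not every detail is recoverable from the text.

```python
import torch
import torch.nn as nn

class IronyBiLSTM(nn.Module):
    """BiLSTM irony classifier sketch: last hidden states -> dropout -> sigmoid."""
    def __init__(self, pretrained_emb, hidden_size=150, dropout=0.5):
        # dropout=0.5 is an assumption; the reported value is not recoverable here.
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained_emb, freeze=False)
        self.lstm = nn.LSTM(pretrained_emb.size(1), hidden_size,
                            bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(2 * hidden_size, 1)

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.emb(token_ids))
        # h_n: (2, B, hidden) -> concatenate forward and backward final states.
        feats = self.dropout(torch.cat([h_n[0], h_n[1]], dim=-1))
        return self.out(feats).squeeze(-1)          # logits for BCEWithLogitsLoss

# Hypothetical usage with stand-in data: 100-d embeddings, Adam, BCE loss.
emb = torch.randn(5000, 100)                        # placeholder for GloVe vectors
model = IronyBiLSTM(emb)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters())
logits = model(torch.randint(0, 5000, (8, 20)))     # a fake batch of 8 tweets
loss = loss_fn(logits, torch.randint(0, 2, (8,)).float())
```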
Our final best model is an ensemble of four models with the same architecture but different random initialization. To compare our results, we use the provided baseline, which is a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs. For pre-processing, in this case we do not preserve casing and delete English stopwords. Results To understand how our strategies to recover more information from the pre-trained word embeddings affected the results, we ran ablation studies to compare how the token-level and sentence-level features contributed to the performance. Table TABREF16 summarizes the impact of these features in terms of F1-score on the validation set. We see that sentence-level features had a positive yet small impact, while token-level features seemed to actually hurt the performance. We think that since the task is performed at the sentence-level, probably features that capture linguistic phenomena at the same level provide useful information to the model, while the contributions of other finer granularity features seem to be too specific for the model to leverage on. Table TABREF17 summarizes our best single-model results on the validation set (20% of the provided data) compared to the baseline, as well as the official results of our model ensemble on the test data. Out of 43 teams our system ranked 421st with an official F1-score of 0.2905 on the test set. Although our model outperforms the baseline in the validation set in terms of F1-score, we observe important drops for all metrics compared to the test set, showing that the architecture seems to be unable to generalize well. We think these results highlight the necessity of an ad-hoc architecture for the task as well as the relevance of additional information. The work of BIBREF21 offers interesting contributions in these two aspects, achieving good results for a range of tasks that include sarcasm detection, using an additional attention layer over a BiLSTM like ours, while also pre-training their model on an emoji-based dataset of 1246 million tweets. Moreover, we think that due to the complexity of the problem and the size of the training data in the context of deep learning better results could be obtained with additional resources for pre-training. Concretely, we see transfer learning as one option to add knowledge from a larger, related dataset could significantly improve the results BIBREF22 . Manually labeling and checking data is a vastly time-consuming effort. Even if noisy, collecting a considerably larger self-annotated dataset such as in BIBREF23 could potentially boost model performance. Conclusion In this paper we presented our system to SemEval-2018 shared task on irony detection in English tweets (subtask A), which leverages on a BiLSTM and pre-trained word embeddings for representation learning, without using human-engineered features. Our results showed that although the generalization capabilities of the model are limited, there are clear future directions to improve. In particular, access to more training data and the deployment of methods like transfer learning seem to be promising directions for future research in representation learning-based sarcasm detection.
a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided
586566de02abdf20b7bfd0d5a43ba93cb02795c3
586566de02abdf20b7bfd0d5a43ba93cb02795c3_0
Q: What was the baseline model? Text: Introduction Sentiment analysis and emotion recognition, as two closely related subfields of affective computing, play a key role in the advancement of artificial intelligence BIBREF0. However, the complexity and ambiguity of natural language constitute a wide range of challenges for computational systems. In the past years, irony and sarcasm detection have gained great traction within the machine learning and NLP community BIBREF1, mainly due to the high frequency of sarcastic and ironic expressions in social media. Such expressions tend to flip polarity in the context of sentiment analysis, which makes machine-based irony detection critical for sentiment analysis BIBREF2, BIBREF3. Irony is a profoundly pragmatic and versatile linguistic phenomenon. As its foundations usually lie beyond explicit linguistic patterns, in contextual dependencies and latent meaning such as shared or common knowledge BIBREF1, automatically detecting it remains a challenging task in natural language processing. In this paper, we introduce our system for the shared task of Irony detection in English tweets, part of SemEval-2018 BIBREF4. We note that computational approaches to automatically detecting irony often deploy expensive feature-engineered systems which rely on a rich body of linguistic and contextual cues BIBREF5, BIBREF6. The advent of Deep Learning applied to NLP has introduced models that have succeeded in large part because they learn and use their own continuous numeric representations BIBREF7 of words BIBREF8, offering us the dream of forgetting manually-designed features. To this end, in this paper we propose a representation learning approach for irony detection, which relies on a bidirectional LSTM and pre-trained word embeddings. Data and pre-processing For the shared task, a balanced dataset of 2,396 ironic and 2,396 non-ironic tweets is provided. The ironic corpus was constructed by collecting self-annotated tweets with the hashtags #irony, #sarcasm and #not. The tweets were then cleaned and manually checked and labeled, using a fine-grained annotation scheme BIBREF3. The corpus comprises different types of irony: Verbal irony is often referred to as an utterance that conveys the opposite meaning of what is literally expressed BIBREF9, BIBREF10, e.g. I love annoying people. Situational irony appears in settings that diverge from the expected BIBREF11, e.g. an old man who won the lottery and died the next day. The latter does not necessarily exhibit polarity contrast or other typical linguistic features, which makes it particularly difficult to classify correctly. For the pre-processing we used the Natural Language Toolkit BIBREF12. As a first step, we removed the following words and hashtagged words: not, sarc, sarcasm, irony, ironic, sarcastic and sarcast, in order to ascertain a clean corpus without topic-related triggers. To ease the tokenizing process with the NLTK TweetTokenizer, we replaced two spaces with one space and removed usernames and urls, as they do not generally provide any useful information for detecting irony. We do not stem or lowercase the tokens, since some patterns within that scope might serve as an indicator for ironic tweets, for instance a word or a sequence of words in which all letters are capitalized BIBREF13. Proposed Approach The goal of subtask A was to build a binary classification system that predicts if a tweet is ironic or non-ironic.
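A sketch of the pre-processing described above is shown next. The exact regular expressions and the order of the steps are not given in the text, so those details are assumptions; casing is deliberately preserved and no stemming is applied, as stated.

import re
from nltk.tokenize import TweetTokenizer

# Topic-related trigger words removed from the corpus, with or without a leading '#'.
TRIGGERS = {"not", "sarc", "sarcasm", "irony", "ironic", "sarcastic", "sarcast"}

tokenizer = TweetTokenizer()  # casing is kept on purpose; no stemming or lowercasing

def preprocess(tweet):
    tweet = re.sub(r"@\w+", "", tweet)          # remove usernames
    tweet = re.sub(r"https?://\S+", "", tweet)  # remove urls
    tweet = tweet.replace("  ", " ")            # replace two spaces with one space
    tokens = tokenizer.tokenize(tweet)
    return [t for t in tokens if t.lstrip("#").lower() not in TRIGGERS]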
a non-parameter optimized linear-kernel SVM that uses TF-IDF bag-of-word vectors as inputs
dfd9302615b27abf8cbef1a2f880a73dd5f0c753
dfd9302615b27abf8cbef1a2f880a73dd5f0c753_0
Q: What models are evaluated with QAGS? Text: Introduction Automatic summarization aims to produce summaries that are succinct, coherent, relevant, and — crucially — factually correct. Recent progress in conditional text generation has led to models that can generate fluent, topical summaries BIBREF2. However, model-generated summaries frequently contain factual inconsistencies, limiting their applicability BIBREF3. The problem of factual inconsistency is due in part to the lack of automatic evaluation metrics that can detect such errors. Standard metrics for evaluating generated text are predominantly based on counting $n$-grams, which weigh all $n$-grams equally and are insensitive to semantic errors. This inadequacy leaves human evaluation as the primary method for evaluating the factual consistencies, which has been noted to be challenging even for humans BIBREF4, BIBREF5, in addition to being slow and costly. We argue that evaluation metrics that are able to capture subtle semantic errors are required to build better models. In this work, we introduce a general framework for evaluating conditional text generation that is designed to detect factual inconsistencies in generated text with respect to some input. Our framework consists of three steps: (1) Given a generated text, a question generation (QG) model generates a set of questions about the text. (2) We then use question answering (QA) models to answer these questions given both the input and the generated text. (3) A quality score is computed based on the similarity of corresponding answers. This approach leverages recent progress in QA and QG to ask and answer human readable, on-topic questions BIBREF6, BIBREF7. It only assumes access to a question answering dataset to train the QG and QA models, and is applicable to any modality where a QA model is available, e.g. text, images, or knowledge graphs. We use this framework to develop QAGS (Question Answering and Generation for Summarization), a metric for evaluating the factual consistency of abstractive document summaries. Compared to commonly used automatic metrics such as ROUGE BIBREF8, QAGS shows dramatically higher correlations with human judgements of factuality, for example achieving a Pearson correlation coefficient of 54.52 on the CNN/DailyMail summarization task, compared to 17.72 for ROUGE-2. QAGS also achieves new state-of-the-art results on evaluating the factuality of summaries, outperforming recently proposed NLI models for this task BIBREF5. Finally, we analyse the robustness of QAGS through an ablation study. QAGS shows robustness to the quality of the underlying QG and QA models, the domain of the models, and the number of questions asked. Even under the worst ablation settings, QAGS still has stronger correlation with human judgments than other automatic metrics. Overall, we contribute the following: (1) We introduce QAGS, an automatic model-based evaluation metric for measuring the factual consistency of model-generated text. (2) We collect a new set of human judgments of factual consistency of model-generated summaries for two summarization datasets. We demonstrate that QAGS correlates with these judgments significantly better than other automatic metrics. (3) We show via ablations that QAGS is robust to a number of factors including underlying model quality and domain mismatch. (4) We analyze the questions and answers produced in computing QAGS to illustrate which parts of summaries are inconsistent. (5) We will release models and code to compute QAGS. 
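The three-step procedure just outlined can be written as a small driver function. The question generation, question answering, and similarity components are deliberately left abstract here, so the callables below are placeholders rather than the released implementation.

from typing import Callable, List

def consistency_score(
    source: str,
    generated: str,
    generate_questions: Callable[[str], List[str]],  # step 1: questions about the generated text
    answer: Callable[[str, str], str],               # step 2: (question, context) -> answer
    similarity: Callable[[str, str], float],         # step 3: compare corresponding answers
) -> float:
    # Average agreement between answers grounded in the input and in the generation.
    questions = generate_questions(generated)
    if not questions:
        return 0.0
    scores = [similarity(answer(q, source), answer(q, generated)) for q in questions]
    return sum(scores) / len(scores)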
Background: Automatically Evaluating Machine Generated Text Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion. ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references. BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching. We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. Second, given a reference to compare against, $n$-gram based approach weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. A Framework for Automatically Evaluating Factual Consistency We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable. This framework addresses the two issues with $n$-gram based approaches. 
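The negation example above can be checked directly with a few lines. The helper below is a toy stand-in for ROUGE-n-style n-gram F1, not any official implementation, and uses plain whitespace tokenization.

from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def ngram_f1(reference, candidate, n):
    # Clipped n-gram overlap turned into an F1 score, in the spirit of ROUGE-n.
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum((ref & cand).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "I am writing my paper in Vancouver ."
candidate = "I am not writing my paper in Vancouver ."
print(round(ngram_f1(reference, candidate, 1), 2))  # ~0.94: unigram overlap stays high
print(round(ngram_f1(reference, candidate, 2), 2))  # ~0.80: bigram overlap stays high too

Despite the flipped meaning, both scores remain close to 1, which is exactly the insensitivity the framework is designed to address.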
Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. QAGS Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. QAGS ::: Question Generation To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low quality questions as follows. First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answers candidates using spaCy. Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer. QAGS ::: Question Answering We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. QAGS ::: Answer Similarity We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as QAGS ::: The QAGS Score Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3. Experiments ::: Human Evaluation We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. Experiments ::: Human Evaluation ::: Datasets We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5. 
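A sketch of the answer similarity function D and the final averaging step follows. Whitespace tokenization and the absence of answer normalization are simplifications compared to standard extractive-QA evaluation scripts.

from collections import Counter

def token_f1(answer_a: str, answer_b: str) -> float:
    # Token-level F1 between two answer strings: the similarity function D.
    tokens_a = answer_a.lower().split()
    tokens_b = answer_b.lower().split()
    overlap = sum((Counter(tokens_a) & Counter(tokens_b)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(tokens_b)
    recall = overlap / len(tokens_a)
    return 2 * precision * recall / (precision + recall)

def qags_score(answers_from_source, answers_from_summary):
    # Average answer similarity over the K questions kept after filtering.
    pairs = list(zip(answers_from_source, answers_from_summary))
    if not pairs:
        return 0.0
    return sum(token_f1(a, b) for a, b in pairs) / len(pairs)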
CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from BIBREF17. XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2. Experiments ::: Human Evaluation ::: Annotation Protocol We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect 3 annotations per summary. To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences. Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4. Experiments ::: Experimental Details ::: Question Generation We use fairseq BIBREF20 to fine-tune a pretrained BART language model on NewsQA BIBREF21, a dataset consisting of CNN articles and crowdsourced questions. For each summary, we use 10 answer candidates and generate questions using beam search with width 10, for a total of 100 question candidates. After filtering, we use the $K = 20$ most probable questions. If a summary has too few filtered questions, we randomly sample questions to reach the required number. For details, see Appendix SECREF11. Experiments ::: Experimental Details ::: Question Answering We train QA models by fine-tuning BERT BIBREF6 on SQuAD2.0 BIBREF22. We use the large-uncased BERT variant via the transformers library BIBREF23. Experiments ::: Experimental Details ::: Baselines We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. Experiments ::: Results We present results in Table . QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM. 
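The aggregation of the three crowd judgments into a single per-summary correctness score, as described above, amounts to a majority vote per sentence followed by an average over sentences. The nested-list input format below is an assumption about how the raw annotations are stored.

def correctness_score(annotations):
    # annotations[s][w]: binary judgment of worker w on summary sentence s (1 = consistent).
    sentence_scores = []
    for votes in annotations:
        majority = 1 if sum(votes) * 2 > len(votes) else 0  # majority vote per sentence
        sentence_scores.append(majority)
    return sum(sentence_scores) / len(sentence_scores)      # average over summary sentences

# Three workers, three summary sentences: the majority finds two of the three consistent.
print(correctness_score([[1, 1, 0], [1, 1, 1], [0, 1, 0]]))  # -> 0.666...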
On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. Experiments ::: Ablations A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors. Experiments ::: Ablations ::: Model Quality We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table . Experiments ::: Ablations ::: Domain Effects Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. Experiments ::: Ablations ::: Number of Questions Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. 
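The correlation numbers quoted in this section are Pearson coefficients between per-summary metric scores and the aggregated human correctness scores, apparently reported scaled by 100. A sketch of that computation, with made-up scores, is shown below.

from scipy.stats import pearsonr

def metric_correlation(metric_scores, human_scores):
    # Pearson correlation between a metric and human judgments, scaled by 100 as in the text.
    r, p_value = pearsonr(metric_scores, human_scores)
    return 100 * r, p_value

# Toy example with invented scores for five summaries.
qags_scores = [0.62, 0.35, 0.80, 0.15, 0.55]
human_scores = [0.66, 0.33, 1.00, 0.00, 0.66]
print(metric_correlation(qags_scores, human_scores))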
We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness. Experiments ::: Ablations ::: Answer Similarity Metric Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. Re-ranking with QAGS Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. Qualitative Analysis ::: Interpreting QAGS The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table . On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. Qualitative Analysis ::: Error Analysis The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. 
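The sentence-ranking comparison described above reduces to counting how often a metric scores the consistent sentence above the inconsistent one. In the sketch below, score is a placeholder for any metric (QAGS, an NLI model, FactCC, and so on) rather than a specific implementation.

from typing import Callable, Iterable, Tuple

def ranking_accuracy(
    triplets: Iterable[Tuple[str, str, str]],  # (source, consistent summary, inconsistent summary)
    score: Callable[[str, str], float],        # higher = judged more factually consistent
) -> float:
    triplets = list(triplets)
    correct = sum(
        1 for source, consistent, inconsistent in triplets
        if score(source, consistent) > score(source, inconsistent)
    )
    return correct / len(triplets)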
We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful. Qualitative Analysis ::: Limitations We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. Related Work Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train a NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristic. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence. Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. 
Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. Conclusion We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: The questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation. Human Evaluation Task Design We restrict our pool of workers to US-based workers. Workers are required to have at least 1000 approved HITs with an acceptance rate of at least 98%. The base reward for our task is $0.15. For each summary, we include automatic quality checks: time checks (workers who complete the task in under 30s fail the check); attention checks (we include exact copies of article sentences and corrupted mixtures of two article sentences as positive and negative control tasks; if a worker fails to answer both of these examples correctly, they fail the check); and explanation checks (for each sentence in the summary, the worker is required to provide a short explanation of their decision). If a worker passes all checks, they are awarded a $0.85 bonus, totalling $1.00 per correct annotation. According to turkerview.com, workers of our HIT are paid well in excess of $15.00 on average. We show our annotation interfaces for the annotation task for CNN/DM and XSUM in Figures FIGREF27 and FIGREF28, respectively. We use slightly different instructions to accommodate the quirks of each dataset. For XSUM, we prepend the reference “summary” back onto the source article, as without it, workers were struggling to identify factual inconsistencies. Model and Generation Details ::: Question Generation We fine-tune BART for question generation using the same tuning hyperparameters as the original work. We optimize label-smoothed cross entropy with smoothing parameter 0.1 BIBREF41 and a peak learning rate of 2e-5. We optimize for 100k steps with 5k warmup steps, and use the model with the best perplexity on the development set. To turn NewsQA into an answer-conditional QG dataset, we concatenate the answer to the source article with a special marker token in between. We then concatenate another special marker token and the question. At test time, we get 10 named entities and noun phrases as answer candidates using the en-web-sm spaCy model, downsampling if there are more than 10 and randomly duplicating some answers if there are fewer than 10. The model predicts the question after seeing an answer and the article. During decoding, we use beam search with beam size 10, length penalty 1.0, and trigram repetition blocking. We experimented with top-$k$ BIBREF39 and top-$p$ BIBREF38 sampling, but the resulting questions, while diverse, were quite noisy.
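The answer-candidate extraction and the answer-conditional QG input described above could look like the sketch below. The spaCy model name (the standard small English model) and the marker token string are assumptions rather than the released code.

import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed to be the small English model referred to in the text
MARKER = "<sep>"                    # placeholder for the special marker token

def answer_candidates(summary, k=10, seed=0):
    # Named entities and noun phrases from the summary, adjusted to exactly k candidates.
    doc = nlp(summary)
    spans = [ent.text for ent in doc.ents] + [chunk.text for chunk in doc.noun_chunks]
    spans = list(dict.fromkeys(spans))   # deduplicate while keeping order
    if not spans:
        return []
    rng = random.Random(seed)
    if len(spans) > k:
        return rng.sample(spans, k)      # downsample when there are more than k
    while len(spans) < k:                # randomly duplicate answers when there are fewer than k
        spans.append(rng.choice(spans))
    return spans

def qg_input(answer, article):
    # Answer-conditional input: answer, marker, article; the model is trained to emit the question.
    return f"{answer} {MARKER} {article}"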
Generations have a minimum length of 8 and a maximum length of 60. To filter the questions, we first use simple heuristics: we remove everything after the first question mark in a question, we remove exact duplicates, and we remove questions shorter than three tokens long. For the remaining questions, we use our QA model to answer each question and we remove the questions which the QA model deems unanswerable. We then take the top 20 most probable questions, randomly sampling some of the filtered questions if there are too few. Model and Generation Details ::: Question Answering We fine-tune BERT for question answering following the original work. We optimize using AdamW BIBREF40 with an initial learning rate of 5e-5. We train for 3 epochs, with a warmup ratio of 0.1. We use the model with the best development set performance. We use SQuAD2.0 because we found the unanswerable questions useful for filtering: questions based on hallucinated facts in the summary should be unanswerable using the source article. Similar to the QG setting, we append the question and answer to the source article with intervening special marker tokens.
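The question filtering pipeline described above is sketched below. Here is_answerable stands in for a call to the QA model on the source article and is not a real API; candidates are assumed to arrive as (question, log-probability) pairs from beam search.

from typing import Callable, List, Tuple

def filter_questions(
    candidates: List[Tuple[str, float]],   # (question, log-probability) pairs from beam search
    is_answerable: Callable[[str], bool],  # backed by the QA model; False means "no answer"
    k: int = 20,
) -> List[str]:
    kept, seen = [], set()
    for question, logp in candidates:
        if "?" in question:
            question = question.split("?", 1)[0] + "?"  # remove everything after the first question mark
        if question in seen:                            # drop exact duplicates
            continue
        if len(question.split()) < 3:                   # drop questions shorter than three tokens
            continue
        if not is_answerable(question):                 # drop questions the QA model deems unanswerable
            continue
        seen.add(question)
        kept.append((question, logp))
    kept.sort(key=lambda pair: pair[1], reverse=True)   # keep the k most probable questions
    return [question for question, _ in kept[:k]]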
bert-large-wwm, bert-base, bert-large
e09dcb6fc163bba7d704178e7edba2e630b573c2
e09dcb6fc163bba7d704178e7edba2e630b573c2_0
Q: Do they use crowdsourcing to collect human judgements?
(5) We will release models and code to compute QAGS. Background: Automatically Evaluating Machine Generated Text Standard approaches to evaluating generated text are primarily based on counting $n$-gram overlap. These methods assume access to one or more reference texts, and score a generated summary based on the precision and recall of all reference $n$-grams in the generated summary. We briefly describe the most common metrics in this family, and refer readers to BIBREF9 for further discussion. ROUGE BIBREF8 was developed specifically for evaluating automatic summarization, and its variants are the de facto standard for such. The most common variant is ROUGE-$n$ (typically $n \in \lbrace 1, 2\rbrace $), which computes the F1 score for all reference $n$-grams in the generated summary. ROUGE-$L$, another commonly used variant, is the length of the longest common subsequence (possibly non-consecutive) between a summary and references. BLEU BIBREF10 is closely related to ROUGE but was developed for machine translation. BLEU computes the precision of the reference $n$-grams in the generated summary. METEOR BIBREF11 extends BLEU by using an alignment between the generated text and a reference, as well as using stemming and synonym replacement for more flexible $n$-gram matching. We identify two key deficiencies when using these $n$-gram based evaluation metrics to detect factual inconsistencies in generated text. First, these metrics require one or more reference texts to compare against. Obtaining references can be expensive and challenging, and as such many text generation datasets contain only a single reference. This problem is exacerbated with high-entropy generation tasks, such as summarization or dialogue, where there is a very large number of acceptable outputs. In these settings, comparing against a single reference is woefully inadequate. Second, given a reference to compare against, $n$-gram based approach weigh all portions of the text equally, even when only a small fraction of the $n$-grams carry most of the semantic content. Factual inconsistencies caused by minor changes may be drowned out by otherwise high $n$-gram overlap, making these metrics insensitive to these errors. For example, the sentences “I am writing my paper in Vancouver.” and “I am not writing my paper in Vancouver.” share nearly all unigrams and bigrams despite having the opposite meaning. A Framework for Automatically Evaluating Factual Consistency We introduce a framework for automatically detecting factual inconsistencies in generated text while also addressing the deficiencies of current approaches. Let $X$ and $Y$ be sequences of tokens coming from a vocabulary $V$ where $X$ is a source text and $Y$ is a summary of $X$. We define $p(Q|Y)$ as a distribution over all possible questions $Q$ given summary $Y$, and $p(A|Q, X)$ and $p(A|Q, Y)$ as distributions over all possible answers $A$ to a particular question $Q$ given either the source $X$ or the summary $Y$. We constrain the questions $Q$ and answers $A$ to also be sequences of tokens from $V$. Then the factual consistency of the summary $Y$ is where $D$ is some function measuring the similarity of the two answer distributions. This expression is maximized when $Y$ contains a subset of the information in $X$ such that it produces the same answer for any question from $p(Q|Y)$. This happens trivially when $Y=X$, e.g. we take $X$ as its own summary, but we usually have other desiderata of $Y$ such that this solution is undesirable. 
This framework addresses the two issues with $n$-gram based approaches. Instead of requiring a reference to compare against, our framework asks questions based on the generation itself, and compares answers with the provided source text. Also, the use of questions focuses the metric on the semantically relevant parts of the generated text, rather than weighting all parts of the text equally. In practice, exactly computing the expectation in Equation DISPLAY_FORM4 is intractable due to the large space of possible questions. One potential workaround is to randomly sample questions from $p(Q|Y)$, but this suffers from high variance and requires many samples to obtain a good estimate. Instead, we focus on producing highly probable questions, e.g. as produced by beam search, which may be biased in the limit, but will require fewer questions to estimate because of the higher quality of the questions. QAGS Using this framework requires specifying the question distribution $p(Q|Y)$, the answer distribution $p(A|Q, Y)$ (or $X$), and the answer similarity function $D$. We apply this framework to summarization to develop QAGS and describe our instantiations of these components. QAGS ::: Question Generation To instantiate $p(Q|Y)$, we draw on recent work on automatic question generation (QG), which models this distribution using neural seq2seq models BIBREF12, BIBREF13. We over-sample questions, and then filter out low quality questions as follows. First, we train and generate from answer-conditional QG models: The model receives both the answer and the source article, and is trained to maximize the likelihood of the paired question. At test time, we extract named entities and noun phrases as answers candidates using spaCy. Second, we filter out low-quality questions using a number of heuristics, such as duplicates and questions less than three tokens long. We also found it useful to run the QA model (see next section) on all of the candidate questions, and filter out questions for which the QA model predicted no answer. QAGS ::: Question Answering We instantiate the answer distributions $p(A|Q,*)$ as extractive QA models, for simplicity. We use extractive QA because we assume the facts are represented as text spans in the article and summary. Future work should explore using abstractive QA models, which could match paraphrases of the same answer. QAGS ::: Answer Similarity We use token-level F1 to compare answers, which is standard for extractive QA and equivalent to defining $D$ as QAGS ::: The QAGS Score Given these components, we obtain the QAGS score of a generation by (1) generating $K$ questions conditioned on the summary, (2) answering the questions using both the source article and the summary to get two sets of answers, (3) comparing corresponding answers using the answer similarity metric, and (4) averaging the answer similarity metric over all questions. We depict this process in Figure FIGREF3. Experiments ::: Human Evaluation We test whether QAGS accurately measures the factual consistency of a summary with respect to a source article by computing correlations with human judgments of factual consistency. Experiments ::: Human Evaluation ::: Datasets We evaluate on two abstractive summarization datasets, CNN/Daily Mail BIBREF0, BIBREF14 and XSUM BIBREF1. 
Abstractive summarization is particularly interesting because factual consistency with the original text is crucial to usability, and a lack of such consistency has plagued abstractive neural summarization models BIBREF15, BIBREF16, BIBREF5. CNN/DM is a standard dataset for summarization that consists of CNN and DailyMail articles. Each reference summary consists of the concatenation of three editor-written, bullet point highlights. For summaries, we use 235 test outputs from BIBREF17. XSUM was created by taking the first sentence of a news article as the summary, and using the rest of the article as the source. Consequently, XSUM summaries are significantly more abstractive than those of CNN/DM, and extractive summarization models perform poorly on this dataset. We found that while the XSUM summaries are more abstractive, frequently there are facts (e.g. first names) in the summary that are not available in the “article”. This quirk made it especially difficult for humans and QAGS to tell when factual errors were being made by the summarization model. To remedy this, for human evaluation and QAGS, we prepend the summary back to the “article”. We use a subset of 239 test outputs from BART fine-tuned on XSUM BIBREF2. Experiments ::: Human Evaluation ::: Annotation Protocol We collect human judgments on Amazon Mechanical Turk via ParlAI BIBREF18. We present summaries one sentence at a time, along with the entire article. For each summary sentence, the annotator makes a binary decision as to whether the sentence is factually consistent with the article. Workers are instructed to mark non-grammatical sentences as not consistent, and copies of article sentences as consistent. Workers are paid $1 per full summary annotated. See Appendix SECREF10 for further details. We collect 3 annotations per summary. To obtain a single “correctness” score per summary, we first take the majority vote for each sentence, then average the binary scores across summary sentences. Inter-annotator agreement as measured by Krippendorff's $\alpha $ is 0.51 and 0.34 for CNN/DM and XSUM, respectively indicating “moderate” and “fair” agreement BIBREF19. While not perfect, these agreement numbers are in-line with similar figures from previous work on summarization evaluation BIBREF4. Experiments ::: Experimental Details ::: Question Generation We use fairseq BIBREF20 to fine-tune a pretrained BART language model on NewsQA BIBREF21, a dataset consisting of CNN articles and crowdsourced questions. For each summary, we use 10 answer candidates and generate questions using beam search with width 10, for a total of 100 question candidates. After filtering, we use the $K = 20$ most probable questions. If a summary has too few filtered questions, we randomly sample questions to reach the required number. For details, see Appendix SECREF11. Experiments ::: Experimental Details ::: Question Answering We train QA models by fine-tuning BERT BIBREF6 on SQuAD2.0 BIBREF22. We use the large-uncased BERT variant via the transformers library BIBREF23. Experiments ::: Experimental Details ::: Baselines We compare against a number of automatic evaluation metrics: ROUGE BIBREF8, METEOR BIBREF11, BLEU BIBREF10, and BERTScore BIBREF24. The latter uses BERT representations to compute an alignment between generation and reference tokens, and which is then used to compute a soft version of unigram F1. We use the large-uncased BERT variant. Experiments ::: Results We present results in Table . 
QAGS strongly outperforms other automatic evaluation metrics in terms of correlation with human judgments of factual consistency. BLEU and ROUGE perform comparably, and lower order $n$-gram metrics work better. BERTScore matches the best $n$-gram metrics on CNN/DM, but the worst overall on XSUM. On CNN/DM, QAGS obtains nearly twice the correlation of the next best automatic metric (BLEU-1). We speculate that this large increase is due to the sensitivity of the QA model to the sentence fusing behavior exhibited in many summarization models trained on CNN/DM BIBREF25. When two sentences are fused to produce an incorrect summary statement, the QA model produces different answers than when using the source article versus when using the summary. On XSUM, all metrics correlate worse with human judgments than on CNN/DM, which reflects the fact that XSUM is more abstractive. QAGS still outperforms the next best automatic metric. Experiments ::: Ablations A potential issue with model-based evaluation is that the quality of the evaluation metric may depend heavily on specific hyperparameter settings. We explore whether this is true with QAGS by performing ablations on several factors. Experiments ::: Ablations ::: Model Quality We first consider the degree to which the quality of the underlying models impacts their evaluation capabilities. For QA quality, we answer this question by training QA models of varying quality by fine-tuning different versions of BERT on SQuAD. We present results in Table . The QA models perform similarly despite substantially different performances on the SQuAD development set. Surprisingly, using the best QA model (bert-large-wwm) does not lead to the best correlations with human judgments. On CNN/DM, bert-large-wwm slightly underperforms bert-base and bert-large. On XSUM, bert-base slightly outperforms the other two BERT variants. These results indicate that QAGS is fairly robust to the quality of the underlying QA model, though we note that BERT is a strong QA baseline, and using weaker QA models might lead to larger performance dropoffs. To ablate QG quality, we use models with increasing perplexity on the NewsQA development set. Results in Table show that QAGS is robust to the QG model quality, with some decrease in correlation with human judgments as perplexity increases on CNN/DM, and no clear trend on XSUM. Even the weakest QG model still significantly outperforms all other automatic metrics in Table . Experiments ::: Ablations ::: Domain Effects Our approach relies on having a labeled dataset to train QG and QA models. However, for relatively niche domains, such a labeled QA/QG dataset may not exist. Instead, we may need to resort to using models trained on out-of-domain data, leading to domain shift effects that negatively impact the quality of the QAGS scores. We simulate this setting by fine-tuning the QG model on SQuAD, which is of similar size to NewsQA but drawn from Wikipedia articles rather than CNN articles, which exactly matches the genre of the summarization datasets. Evaluating with this QG model, we get correlations of 51.53 and 15.28 with human judgments on CNN/DM and XSUM respectively, versus 54.53 and 17.49 when using the NewsQA-tuned QG model. The drop in performance indicates a negative domain shift effect. However using the SQuAD-tuned QG model still substantially outperforms all other automatic metrics, again pointing to the robustness of QAGS. 
Experiments ::: Ablations ::: Number of Questions Next, we investigate the correlation with human judgments when varying the number of questions used. Results in Table show that increasing the number of questions used improves correlations with human judgments. We observe a large increase when moving from 10 to 20 questions, and a smaller increase from 20 to 50 questions, indicating decreasing marginal benefit moving beyond 50 questions. With just 5 questions, QAGS still substantially outperforms other automatic metrics, indicating its robustness. Experiments ::: Ablations ::: Answer Similarity Metric Finally, we consider using exact match as an alternative answer similarity metric. Exact match is another common evaluation metric for extractive QA, and is more restrictive than F1. When using EM, we obtain Pearson correlations with human judgments of 45.97 and 18.10 on CNN/DM and XSUM, as opposed to 54.53 and 17.49 when using F1. Re-ranking with QAGS Several works explore the use of natural language inference (NLI) models to detect factual consistency in generated text BIBREF26, BIBREF16. We compare against these methods by evaluating on the sentence ranking experiment from BIBREF16. The experiment uses 373 triplets of source sentences from CNN/DM and two summary sentences generated from the model from BIBREF27. One summary sentence is factually consistent with the source sentence, and the other is inconsistent. A metric (or model) is evaluated based on how often it ranks the consistent sentence higher than the inconsistent sentence. We present the results in Table . Results using two NLI models fine-tuned on MultiNLI BIBREF28, BERT NLI and ESIM BIBREF29, are from BIBREF16. FactCC BIBREF5 is an NLI-based fact-checking model that is trained on a dataset tailor made for detecting factual inconsistencies in generated text. QAGS outperforms these methods, while requiring no special supervision for this task. Qualitative Analysis ::: Interpreting QAGS The questions and answers produced in computing QAGS are directly interpretable, and highlight errors in summaries. We present examples of articles, summaries, and the QAGS questions and answers in Table . On the first example (Table , top), QAGS detects several factual inconsistencies in the generated summary: The summary mistakes the first name of the attacker, the location of the attack, and the weapons used. Because the QG model focuses on these details, QAGS is able to correctly penalize the summary for its hallucinations. Because the answer candidates used are mostly named entities and noun phrases, QAGS is particularly effective at detecting errors of this kind. Using more diverse answer candidates may broaden the set of inconsistencies that QAGS is able to detect. The second example (Table , bottom), illustrates failure modes of QAGS. For example, the QA model incorrectly marks question 2 as unanswerable. On question 4, both answers produced are correct, but because they have no common tokens, they are marked inconsistent by QAGS. Qualitative Analysis ::: Error Analysis The interpretability of QAGS allows for error analysis on the metric. We manually annotate 400 triplets of generated questions, article answers, and summary answers that are produced in computing QAGS on the XSUM summaries, and label them by the quality of the generated questions, predicted answers, and answer similarity scores. 
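The two answer similarity metrics compared in the ablation above, token-level F1 and exact match, follow the standard extractive-QA definitions. The sketch below is a generic implementation of those comparisons for a pair of answer strings; the whitespace tokenization and lowercasing are assumptions, and this is not the authors' code.

```python
from collections import Counter

def exact_match(pred, gold):
    return float(pred.strip().lower() == gold.strip().lower())

def token_f1(pred, gold):
    """Token-level F1 between two answer strings (SQuAD-style)."""
    pred_tokens = pred.lower().split()
    gold_tokens = gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Answers to the same generated question, from the article vs. the summary.
print(exact_match("the White House", "White House"))          # 0.0
print(round(token_f1("the White House", "White House"), 2))   # 0.8
```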
Among the generated questions, 8.75% are nonsensical, while 3.00% are well-formed but unanswerable using the generated summary they were conditioned upon. These figures indicate that the vast majority of questions are understandable and on-topic. We frequently observe multiple questions with slightly different wordings, which is likely due to the low number of answer candidates in XSUM summaries (which are one sentence long) and due to beam search. 8.25% of questions are well-formed but unanswerable using the source, which is usually due to a hallucinated fact in the summary that the QG model turns into a question. Among predicted answers, 1.75% of questions are potentially answerable using the summary, but are incorrectly answered. This percentage increases to 32.50% for the article, which indicates that the transfer ability of the QA model is lacking. In a small number of cases, we found that while a question had a single answer in the summary, it could have multiple answers in the article. Finally, for 8.00% of the examples, the question is answered correctly using both the article and summary, but the answers have high lexical variation such that F1 score fails to detect their similarity. While this happens in a relatively small number of cases, exploring similarity metrics other than $n$-gram based approaches could be useful. Qualitative Analysis ::: Limitations We emphasize that QAGS and our overall framework are specifically designed to detect factual inconsistencies in generated summaries relative to the source article. QAGS does not measure other desirable properties of generated text, including fluency, readability, or factual recall. We therefore recommend using QAGS in conjunction with complementary evaluation metrics. The choices of QG and QA models in QAGS are particular to abstractive summarization and may require adaptation to be used for other conditional text generation tasks. For example, we expect that extractive summarization models may obtain nearly perfect QAGS scores because facts and statements are directly copied from the source article. Related Work Automatic summarization and its evaluation are long-standing lines of work in NLP, dating at least as far back as the Document Understanding Conferences BIBREF30. The primary evaluation metric then and now is ROUGE BIBREF8, though much work has demonstrated the limited ability of ROUGE and its relatives to evaluate summaries BIBREF31, BIBREF32, BIBREF33. Other metrics have focused on specific aspects of summarization quality, including content selection BIBREF34, relevance prediction BIBREF4, and many more. There has been a recent resurgence of work leveraging NLU models for evaluating the factuality of generated text. BIBREF35 use information extraction models to measure factual overlap, but facts are restricted to pre-defined schemas. BIBREF16 investigate the use of NLI models to evaluate the factual correctness of CNN/DM summaries, and conclude that current NLI models are too brittle to be reliably used in this manner. BIBREF5 train a NLI-based fact-checking model by building a dataset of factual inconsistencies based on noise heuristic. Our QA approach allows a finer-grained analysis, because NLI operates on complete sentences, whereas QAGS can ask many questions about the same sentence. Most relatedly, BIBREF36 and BIBREF37 use QA models to evaluate summarization. We diverge from these works in two important ways. 
First, both works use Cloze-style questions, which are generated by masking entities in either the source document or the reference summary. We instead generate the questions with a model, allowing a much greater range of questions. Second, we produce questions conditioned on the generated summary, rather than the reference summary or source article. Producing questions from the generated summary is more appropriate for verifying the accuracy of the text, whereas using the reference or source measures content selection. Conclusion We introduce a framework for automatically detecting factual inconsistencies in conditionally generated texts and use this framework to develop QAGS, a metric for measuring inconsistencies in abstractive summarization. QAGS correlates with human judgments of factuality significantly better than standard automatic evaluation metrics for summarization, and outperforms related NLI-based approaches to factual consistency checking. QAGS is naturally interpretable: the questions and answers produced in computing QAGS indicate which tokens in a generated summary are inconsistent and why. Error analysis shows that future work should explore improved QA models. Our approach can also be applied to diverse modalities, such as translation and image captioning. Overall, we believe QAGS is useful in quantifying and incentivizing factually consistent text generation. Human Evaluation Task Design We restrict our pool of workers to US-based workers. Workers are required to have at least 1000 approved HITs with an acceptance rate of at least 98%. The base reward for our task is $0.15. For each summary, we include automatic quality checks: time checks, where workers who complete the task in under 30s fail the check; attention checks, where we include exact copies of article sentences and corrupted mixtures of two article sentences as positive and negative control tasks, and a worker who fails to answer both of these examples correctly fails the check; and explanation checks, where for each sentence in the summary the worker is required to provide a short explanation of their decision. If a worker passes all checks, they are awarded a $0.85 bonus, totalling $1.00 per correct annotation. According to turkerview.com, workers of our HIT are paid well in excess of $15.00 on average. We show our annotation interfaces for the annotation task for CNN/DM and XSUM respectively in Figures FIGREF27 and FIGREF28. We use slightly different instructions to accommodate the quirks of each dataset. For XSUM, we prepend the reference “summary” back onto the source article, as without it, workers struggled to identify factual inconsistencies. Model and Generation Details ::: Question Generation We fine-tune BART for question generation using the same tuning hyperparameters as the original work. We optimize label-smoothed cross entropy with smoothing parameter 0.1 BIBREF41 and a peak learning rate of 2e-5. We optimize for 100k steps with 5k warmup steps, and use the model with the best perplexity on the development set. To turn NewsQA into an answer-conditional QG dataset, we concatenate the answer to the source article with a special marker token in between. We then concatenate another special marker token and the question. At test time, we extract 10 named entities and noun phrases as answer candidates using the en-web-sm spaCy model, downsampling if there are more than 10 and randomly duplicating some answers if there are fewer than 10. The model predicts the question after seeing an answer and the article.
During decoding, we use beam search with beam size 10, length penalty 1.0, and trigram repetition blocking. We experimented with top-$k$ BIBREF39 and top-$p$ BIBREF38 sampling, but the resulting questions, while diverse, were quite noisy. Generations have minimum length 8 and maximum length 60. To filter the questions, we first use simple heuristics, including removing everything after the first question mark in a question, removing exact duplicates, and removing questions shorter than three tokens long. For the remaining questions, we use our QA model to answer each question and remove questions that the QA model deems unanswerable. We then take the top 20 most probable questions, randomly sampling some of the filtered questions if there are too few. Model and Generation Details ::: Question Answering We fine-tune BERT for question answering following the original work. We optimize using AdamW BIBREF40 with an initial learning rate of 5e-5. We train for 3 epochs, with a warmup ratio of 0.1. We use the model with the best development set performance. We use SQuAD2.0 because we found the unanswerable questions useful for filtering: questions based on hallucinated facts in the summary should be unanswerable using the source article. Similar to the QG setting, we append the question and answer to the source article with intervening special marker tokens.
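The question filtering pipeline described in this appendix (truncating at the first question mark, dropping duplicates and very short questions, discarding questions the QA model finds unanswerable, then keeping the 20 most probable and back-filling if needed) can be summarized as a small post-processing routine. The sketch below is illustrative only: the `is_answerable` callback, the (text, log-probability) candidate format, and the back-fill behavior are assumptions, not the authors' implementation.

```python
import random

def filter_questions(candidates, is_answerable, k=20):
    """candidates: list of (question_text, generation_log_prob) pairs.
    is_answerable: callback that runs the QA model over the summary and
    returns False when the question is predicted to be unanswerable."""
    seen, kept, dropped = set(), [], []
    for text, score in candidates:
        # Keep only the text up to and including the first question mark.
        if "?" in text:
            text = text[: text.index("?") + 1]
        # Drop exact duplicates and questions shorter than three tokens.
        if text in seen or len(text.split()) < 3:
            continue
        seen.add(text)
        (kept if is_answerable(text) else dropped).append((text, score))
    # Take the K most probable questions that survived filtering.
    kept.sort(key=lambda x: x[1], reverse=True)
    top = kept[:k]
    # Back-fill by randomly sampling filtered-out questions if too few remain.
    if len(top) < k and dropped:
        top += random.sample(dropped, min(k - len(top), len(dropped)))
    return [question for question, _ in top]
```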
Yes
c8f11561fc4da90bcdd72f76414421e1527c0287
c8f11561fc4da90bcdd72f76414421e1527c0287_0
Q: Which dataset(s) do they evaluate on? Text: Introduction Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0; these often include acoustic frontends, duration models, acoustic prediction models and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and result in unnatural synthesized speech. The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic-sounding speech, while at the same time eliminating the need for complex sub-systems that need to be developed and trained separately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem as there are multiple ways for the same text to be spoken. In addition, unlike end-to-end translation or speech recognition, TTS outputs are continuous, and output sequences are much longer than input sequences. Recent work on neural TTS can be split into two camps: in one camp, Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3; in the other camp, fully convolutional Seq2Seq models are used BIBREF2. Our model belongs to the first of these classes, using recurrent architectures. Specifically, we make the following contributions: Related Work Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's Tacotron BIBREF1 system. Their architecture is based on the original Seq2Seq framework. In addition to the encoder/decoder RNNs from the original Seq2Seq, they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to an audio wav via iterative algorithms such as Griffin Lim BIBREF8. A parallel work exploring a Seq2Seq RNN architecture for text-to-speech was called Char2Wav BIBREF3. This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is Gaussian mixture model (GMM) attention from Alex Graves's work. Their model mapped the text sequence to 80-dimensional vectors used for the WORLD vocoder BIBREF9, which inverts these vectors into an audio wave. More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10. The DeepVoice architecture is composed of causal 1-D convolution layers for both encoder and decoder. They utilized query-key attention similar to that from the transformer architecture BIBREF5. Another fully convolutional Seq2Seq architecture known as DCTTS was proposed BIBREF6. In this architecture they employ modules composed of causal 1-D convolution layers combined with highway networks. In addition, they introduced methods to help guide attention alignment early, as well as a forced incremental attention mechanism that ensures a monotonically increasing attention read as the model decodes during inference. Model Overview The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrograms from text.
The architecture is similar to that of Tacotron 2 BIBREF4. The generated mel spectrogram can be inverted either via iterative algorithms such as Griffin Lim or through more complex neural vocoder networks such as a mel-spectrogram-conditioned Wavenet BIBREF11. Figure FIGREF3 below shows the overall architecture of our model. Text Encoder The encoder acts to encode the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of an INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). Two linear projection layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension; these are the key and value vectors. DISPLAYFORM0 where INLINEFORM0 . Query-Key Attention Query-key attention is similar to that from transformers BIBREF5. Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of the previous decoder-rnn hidden state, INLINEFORM3 , and the attention-rnn hidden state, INLINEFORM4 . DISPLAYFORM0 Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 Note that, similar to transformers BIBREF5, we scale the dot-product by INLINEFORM0 to prevent the softmax function from entering regions where it has extremely small gradients. Decoder The decoder is an autoregressive recurrent neural network that predicts the mel spectrogram from the encoded input sentence one frame at a time. The decoder decodes the hidden representation from the encoder, with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRUs with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, is for computing attention-mechanism-related items such as the attention query INLINEFORM1 . DISPLAYFORM0 The second LSTM/GRU, the DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 A 2-layer dense prenet of dimensions (256,256) projects the previous mel spectrogram output INLINEFORM0 into hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce a useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consecutive mel frames at once into our hidden representation, which is faster, unlike Tacotron 2, which projects 1 mel frame at a time. The DecoderRNN's hidden state INLINEFORM0 is also projected to mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layers followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to mel spectrogram INLINEFORM3 , which is added to the linearly projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. DISPLAYFORM0 A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , the sigmoid function. This stop token allows the model to learn when to stop decoding during inference.
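The query-key attention described in this passage is the standard scaled dot-product attention. A minimal PyTorch sketch of that single-step computation is given below; the tensor shapes and variable names are illustrative assumptions rather than the paper's code.

```python
import math
import torch

def query_key_attention(query, keys, values):
    """Scaled dot-product attention for one decoding step.

    query:  (batch, 1, d)      - from the concatenated decoder/attention RNN states
    keys:   (batch, T_enc, d)  - encoder key projections
    values: (batch, T_enc, d)  - encoder value projections
    """
    d = query.size(-1)
    # Scale by sqrt(d) to keep the softmax out of regions with tiny gradients.
    scores = torch.bmm(query, keys.transpose(1, 2)) / math.sqrt(d)  # (batch, 1, T_enc)
    weights = torch.softmax(scores, dim=-1)
    context = torch.bmm(weights, values)                            # (batch, 1, d)
    return context, weights
```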
During inference, if the stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 Training and Loss The total loss on the model is computed as the sum of 3 component losses: 1. mean squared error (MSE) of the predicted and ground-truth mel spectrograms, 2. MSE of the linear spectrogram, and 3. binary cross-entropy loss of our stop token. The Adam optimizer is used to optimize the model with a learning rate of INLINEFORM0 . The model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn long-term sequences, the teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs. Proposed Improvements Our proposed improvements come from the observation that employing generic Seq2seq models for the TTS application misses out on further optimizations that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progresses nearly linearly in time with the sequence of output mel spectrogram frames. With this insight, we can make 3 modifications to the model that allow us to train faster while using a smaller model. Changes to Attention Mechanism In the original Tacotron 2, the attention mechanism used was location-sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from the transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than, say, machine translation, we employ query-key attention as it is simple to implement and requires fewer parameters than the original Bahdanau attention. Guided Attention Mask Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotonic as early as possible. As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created that applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set at 0.2. This modification dramatically speeds up attention alignment and model convergence. Figure 3 below shows the results visually. The two images are a side-by-side comparison of the model's attention after 10k training steps. The image on the left is from the model trained with the attention mask, and the image on the right is from the model trained without it. We can see that with the attention mask, clear attention alignment is achieved much faster. Forced Incremental Attention During inference, the attention INLINEFORM0 occasionally skips multiple characters or stalls on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal.
Forced incremental attention is implemented as follows: given INLINEFORM0 , the position of the character read at the INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e., INLINEFORM5 . Experiment Dataset The open-source LJSpeech dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs from a single female English speaker, collected from across 7 different non-fiction books. The total training data amounts to around 21 hours of audio. One thing to note is that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As with most things in deep learning, the better the data, the better the model and results. Experiment Procedure Our model was trained for 300 epochs with a batch size of 32. We used the pre-trained open-source implementation of Tacotron 2 (https://github.com/NVIDIA/tacotron2) as a baseline comparison. Note that this open-source version is trained for much longer (around 1000 epochs); however, due to our limited compute, we only trained our model up to 300 epochs. Evaluation Metrics We decided to evaluate our model against previous baselines on two fronts: Mean Opinion Score (MOS) and training speed. Typical TTS system evaluation is done with the mean opinion score (MOS). To compute this score, many samples from a TTS system are given to human evaluators and rated on a scale from 1 (Bad) to 5 (Excellent). The MOS is then computed as the arithmetic mean of these scores: DISPLAYFORM0 where INLINEFORM0 are the individual ratings for a given sample by N subjects. For TTS models from Google and Baidu, Amazon Mechanical Turk was used to collect MOS scores from a larger number of workers. However, due to our limited resources, we chose to collect MOS scores from friends and family (6 people in total). For the training time comparison, we take the training time to be the point at which attention alignment starts to become linear and clear. After digging through the git issues in the Tacotron 2 open-source implementation, we found a few posts where users posted their training curves and attention alignments during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models the training time is not comparable, as it either does not apply (e.g. parametric models) or is not reported (Tacotron with Griffin Lim, Deepvoice 3). For a direct comparison of model parameters between ours and the open-source Tacotron 2: our model contains 4.5 million parameters, whereas Tacotron 2 contains around 13 million parameters with the default settings. By helping our model learn attention alignment faster, we can afford to use a smaller overall model while achieving similar speech quality. Conclusion We introduce a new architecture for an end-to-end neural text-to-speech system. Our model relies on an RNN-based Seq2seq architecture with query-key attention. We introduce a novel guided attention mask to improve model training speed, which at the same time reduces model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and allow the model to generate coherent speech for very long sentences.
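The guided attention loss described in this record follows the formulation of BIBREF6 (DCTTS): a mask that is near zero on the diagonal and grows toward one away from it penalizes attention mass that falls far from a linear alignment, with the width parameter set to 0.2. The exact symbols are elided by the placeholders above, so the sketch below should be read as an illustration of that cited formulation under assumed tensor shapes, not a reproduction of the authors' code.

```python
import torch

def guided_attention_loss(attention, g=0.2):
    """attention: (N_chars, T_frames) alignment matrix from the attention module."""
    n_chars, t_frames = attention.shape
    n = torch.arange(n_chars).float().unsqueeze(1) / n_chars    # (N, 1) normalized char index
    t = torch.arange(t_frames).float().unsqueeze(0) / t_frames  # (1, T) normalized frame index
    # Mask is ~0 on the diagonal and approaches 1 away from it.
    mask = 1.0 - torch.exp(-((n - t) ** 2) / (2.0 * g ** 2))
    # Penalize attention weight that falls far from the diagonal.
    return (attention * mask).mean()
```

This extra term is simply added to the mel, linear, and stop-token losses during training.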
LJSpeech
51de39c8bad62d3cbfbec1deb74bd8a3ac5e69a8
51de39c8bad62d3cbfbec1deb74bd8a3ac5e69a8_0
Q: Which modifications do they make to well-established Seq2seq architectures? Text: Introduction Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech. The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic sounding speech, while at the same time eliminating the need for complex sub-systems that neede to be developed and trained seperately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem as there're multi ways for the same text to be spoken. In addtion, unlike end-to-end translation or speech recognition, TTS ouptuts are continuous, and output sequences are much longer than input squences. Recent work on neural TTS can be split into two camps, in one camp Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3 . In the other camp, full convolutional Seq2Seq models are used BIBREF2 . Our model belongs in the first of these classes using recurrent architectures. Specifically we make the following contributions: Related Work Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 . A parrallel work exploring Seq2Seq RNN architecture for text-to-speech was called Char2Wav BIBREF3 . This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is guassian mixture model (GMM) attention from Alex Grave's work. Their model mapped text sequence to 80 dimension vectors used for the WORLD Vocoder BIBREF9 , which invert these vectors into audio wave. More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10 . The deepvoice architecture is composed of causal 1-D convolution layers for both encoder and decoder. They utilized query-key attention similar to that from the transformer architecure BIBREF5 . Another fully convolutional Seq2Seq architecture known as DCTTS was proposed BIBREF6 . In this architecture they employ modules composed of Causal 1-D convolution layers combined with Highway networks. In addition they introduced methods for help guide attention alignments early. As well as a forced incremental attention mechanism that ensures monotonic increasing of attention read as the model decodes during inference. 
Model Overview The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 . Figure FIGREF3 below shows the overall architecture of our model. Text Encoder The encoder acts to encoder the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of a INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). two linear projections layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension, these are the key and value vectors. DISPLAYFORM0 where INLINEFORM0 . Query-Key Attention Query key attention is similar to that from transformers BIBREF5 . Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of previous decoder-rnn hidden state, INLINEFORM3 , combined with attention-rnn hidden state, INLINEFORM4 ). DISPLAYFORM0 Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 Note that similar to transformers BIBREF5 , we apply a scale the dot-product by INLINEFORM0 to prevent softmax function into regions where it has extremely small gradients. Decoder The decoder is an autoregressive recurrent neural network that predicts mel spectrogram from the encoded input sentence one frame at a time. The decoder decodes the hidden representation from the encoder, with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRU with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, is for computing attention-mechanism related items such as the attention query INLINEFORM1 . DISPLAYFORM0 The second LSTM/GRU, DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 A 2-layer dense prenet of dimensions (256,256) projects the previous mel spectrogram output INLINEFORM0 into hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consequetive mel frames at once into our hidden representation, which is faster and unlike Tacotron 2 which project 1 mel frame at at time. The DecoderRNN's hidden state INLINEFORM0 is also projected to mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layer followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to mel spectrogram INLINEFORM3 , which is added to the linear projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. 
DISPLAYFORM0 A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , or sigmoid function. This stop token allows the model to learn when to stop decoding during inference. During inference, if stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 Training and Loss Total loss on the model is computed as the sum of 3 component losses: 1. Mean-Squared-Error(MSE) of predicted and ground-truth mel spectrogram 2. MSE of Linear Spectrogram 3. Binary Cross Entropy Loss of our stop token. Adam optimizer is used to optimize the model with learning rate of INLINEFORM0 . Model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn for long term sequences, teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs. Proposed Improvements Our proposed improvements come from the observation that employing generic Seq2seq models for TTS application misses out on further optimization that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progress nearly linearly in time with the sequence of output mel spectrogram. With this insight, we can make 3 modifications to the model that allows us to train faster while using a a smaller model. Changes to Attention Mechanism In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than say machine translation, we employ query-key attention as it's simple to implement and requires less parameters than the original Bahdanau attention. Guided Attention Mask Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotoic as early as possible. As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 Where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set at 0.2. This modification dramatically speed up the attention alignment and model convergence. Figure 3 below shows the results visually. The two images are side by side comparison of the model's attention after 10k training steps. The image on the left is trained with the atention mask, and the image on the right is not. We can see that with the attention mask, clear attention alignment is achieved much faster. Forced Incremental Attention During inference, the attention INLINEFORM0 occasionally skips multiple charaters or stall on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal. 
The Forced incremental attention is implemented as follows: Given INLINEFORM0 , the position of character read at INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e INLINEFORM5 . Experiment Dataset The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female english speaker collect from across 7 different non-fictional books. The total training data time is around 21 hours of audio. One thing to note that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As most things with deep learning, the better the data, the better the model and results. Experiment Procedure Our model was trained for 300 epochs, with batch size of 32. We used pre-trained opensource implementation of Tactron 2 (https://github.com/NVIDIA/tacotron2) as baseline comparison. Note this open-source version is trained for much longer (around 1000 epochs) however due to our limited compute we only trained our model up to 300 epochs Evaluation Metrics We decide to evaluate our model against previous baselines on two fronts, Mean Opnion Score (MOS) and training speed. Typical TTS system evaluation is done with mean opinion score (MOS). To compute this score, many samples of a TTS system is given to human evaluators and rated on a score from 1 (Bad) to 5 (Excellent). the MOS is then computed as the arithmetic mean of these score: DISPLAYFORM0 Where INLINEFORM0 are individual ratings for a given sample by N subjects. For TTS models from google and Baidu, they utilized Amazon mechanical Turk to collect and generate MOS score from larger number of workers. However due to our limited resources, we chose to collect MOS score from friends and families (total 6 people). For training time comparison, we choose the training time as when attention alignment start to become linear and clear. After digging through the git issues in the Tacotron 2 open-source implementation, we found a few posts where users posted their training curve and attention alignment during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models the training time is not comparable as they either don't apply (e.g parametric model) or are not reported (Tacotron griffin lim, Deepvoice 3). Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality. Conclusion We introduce a new architecture for end-to-end neural text-to-speech system. Our model relies on RNN-based Seq2seq architecture with a query-key attention. We introduce novel guided attention mask to improve model training speed, and at the same time is able to reduce model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and allow model to generate coherent speech for very long sentences.
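The forced incremental attention rule stated at the start of this record leaves its exact condition and target inside placeholders. The sketch below shows one plausible instantiation, labeled as an assumption: if the attended character position at the current frame moves backwards or skips ahead by more than one, the attention for that frame is overridden to read the next character.

```python
import torch

def force_incremental(attention, prev_pos):
    """attention: (N_chars,) attention weights for the current output frame.
    prev_pos: character position attended at the previous frame.
    Returns the (possibly overridden) attention and the new position."""
    pos = int(torch.argmax(attention).item())
    if pos < prev_pos or pos > prev_pos + 1:
        # Assumed rule: force the model to read the next character.
        next_pos = min(prev_pos + 1, attention.numel() - 1)
        forced = torch.zeros_like(attention)
        forced[next_pos] = 1.0
        return forced, next_pos
    return attention, pos
```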
Replacing attention mechanism to query-key attention, and adding a loss to make the attention mask as diagonal as possible
d9cbcaf8f0457b4be59178446f1a280d17a923fa
d9cbcaf8f0457b4be59178446f1a280d17a923fa_0
Q: How do they measure the size of models? Text: Introduction Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech. The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic sounding speech, while at the same time eliminating the need for complex sub-systems that neede to be developed and trained seperately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem as there're multi ways for the same text to be spoken. In addtion, unlike end-to-end translation or speech recognition, TTS ouptuts are continuous, and output sequences are much longer than input squences. Recent work on neural TTS can be split into two camps, in one camp Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3 . In the other camp, full convolutional Seq2Seq models are used BIBREF2 . Our model belongs in the first of these classes using recurrent architectures. Specifically we make the following contributions: Related Work Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 . A parrallel work exploring Seq2Seq RNN architecture for text-to-speech was called Char2Wav BIBREF3 . This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is guassian mixture model (GMM) attention from Alex Grave's work. Their model mapped text sequence to 80 dimension vectors used for the WORLD Vocoder BIBREF9 , which invert these vectors into audio wave. More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10 . The deepvoice architecture is composed of causal 1-D convolution layers for both encoder and decoder. They utilized query-key attention similar to that from the transformer architecure BIBREF5 . Another fully convolutional Seq2Seq architecture known as DCTTS was proposed BIBREF6 . In this architecture they employ modules composed of Causal 1-D convolution layers combined with Highway networks. In addition they introduced methods for help guide attention alignments early. As well as a forced incremental attention mechanism that ensures monotonic increasing of attention read as the model decodes during inference. Model Overview The architecture of our model utilizes RNN-based Seq2Seq model for generating mel spectrogram from text. 
The architecture is similar to that of Tacotron 2 BIBREF4 . The generated mel spetrogram can either be inverted via iterative algorithms such as Griffin Lim, or through more complicated neural vocoder networks such as a mel spectrogram conditioned Wavenet BIBREF11 . Figure FIGREF3 below shows the overall architecture of our model. Text Encoder The encoder acts to encoder the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of a INLINEFORM0 -dim embedding layer that maps the input sequence into a dense vector. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). two linear projections layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension, these are the key and value vectors. DISPLAYFORM0 where INLINEFORM0 . Query-Key Attention Query key attention is similar to that from transformers BIBREF5 . Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of previous decoder-rnn hidden state, INLINEFORM3 , combined with attention-rnn hidden state, INLINEFORM4 ). DISPLAYFORM0 Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 Note that similar to transformers BIBREF5 , we apply a scale the dot-product by INLINEFORM0 to prevent softmax function into regions where it has extremely small gradients. Decoder The decoder is an autoregressive recurrent neural network that predicts mel spectrogram from the encoded input sentence one frame at a time. The decoder decodes the hidden representation from the encoder, with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRU with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, is for computing attention-mechanism related items such as the attention query INLINEFORM1 . DISPLAYFORM0 The second LSTM/GRU, DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 A 2-layer dense prenet of dimensions (256,256) projects the previous mel spectrogram output INLINEFORM0 into hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consequetive mel frames at once into our hidden representation, which is faster and unlike Tacotron 2 which project 1 mel frame at at time. The DecoderRNN's hidden state INLINEFORM0 is also projected to mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layer followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to mel spectrogram INLINEFORM3 , which is added to the linear projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. DISPLAYFORM0 A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , or sigmoid function. This stop token allows the model to learn when to stop decoding during inference. 
During inference, if stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 Training and Loss Total loss on the model is computed as the sum of 3 component losses: 1. Mean-Squared-Error(MSE) of predicted and ground-truth mel spectrogram 2. MSE of Linear Spectrogram 3. Binary Cross Entropy Loss of our stop token. Adam optimizer is used to optimize the model with learning rate of INLINEFORM0 . Model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn for long term sequences, teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs. Proposed Improvements Our proposed improvements come from the observation that employing generic Seq2seq models for TTS application misses out on further optimization that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progress nearly linearly in time with the sequence of output mel spectrogram. With this insight, we can make 3 modifications to the model that allows us to train faster while using a a smaller model. Changes to Attention Mechanism In the original Tacotron 2, the attention mechanism used was location sensitive attention BIBREF12 combined the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than say machine translation, we employ query-key attention as it's simple to implement and requires less parameters than the original Bahdanau attention. Guided Attention Mask Following the logic above, we utilize a similar method from BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotoic as early as possible. As seen from FIGREF24 , an attention loss mask, INLINEFORM0 , is created applies a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 Where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set at 0.2. This modification dramatically speed up the attention alignment and model convergence. Figure 3 below shows the results visually. The two images are side by side comparison of the model's attention after 10k training steps. The image on the left is trained with the atention mask, and the image on the right is not. We can see that with the attention mask, clear attention alignment is achieved much faster. Forced Incremental Attention During inference, the attention INLINEFORM0 occasionally skips multiple charaters or stall on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal. 
The Forced incremental attention is implemented as follows: Given INLINEFORM0 , the position of character read at INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e INLINEFORM5 . Experiment Dataset The open source LJSpeech Dataset was used to train our TTS model. This dataset contains around 13k <text,audio> pairs of a single female english speaker collect from across 7 different non-fictional books. The total training data time is around 21 hours of audio. One thing to note that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As most things with deep learning, the better the data, the better the model and results. Experiment Procedure Our model was trained for 300 epochs, with batch size of 32. We used pre-trained opensource implementation of Tactron 2 (https://github.com/NVIDIA/tacotron2) as baseline comparison. Note this open-source version is trained for much longer (around 1000 epochs) however due to our limited compute we only trained our model up to 300 epochs Evaluation Metrics We decide to evaluate our model against previous baselines on two fronts, Mean Opnion Score (MOS) and training speed. Typical TTS system evaluation is done with mean opinion score (MOS). To compute this score, many samples of a TTS system is given to human evaluators and rated on a score from 1 (Bad) to 5 (Excellent). the MOS is then computed as the arithmetic mean of these score: DISPLAYFORM0 Where INLINEFORM0 are individual ratings for a given sample by N subjects. For TTS models from google and Baidu, they utilized Amazon mechanical Turk to collect and generate MOS score from larger number of workers. However due to our limited resources, we chose to collect MOS score from friends and families (total 6 people). For training time comparison, we choose the training time as when attention alignment start to become linear and clear. After digging through the git issues in the Tacotron 2 open-source implementation, we found a few posts where users posted their training curve and attention alignment during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models the training time is not comparable as they either don't apply (e.g parametric model) or are not reported (Tacotron griffin lim, Deepvoice 3). Direct comparison of model parameters between ours and the open-source tacotron 2, our model contains 4.5 million parameters, whereas the Tacotron 2 contains around 13 million parameters with default setting. By helping our model learn attention alignment faster, we can afford to use a smaller overall model to achieve similar quality speech quality. Conclusion We introduce a new architecture for end-to-end neural text-to-speech system. Our model relies on RNN-based Seq2seq architecture with a query-key attention. We introduce novel guided attention mask to improve model training speed, and at the same time is able to reduce model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and allow model to generate coherent speech for very long sentences.
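Since this record's question concerns how model size is measured, and the text above compares raw parameter counts (4.5 million versus roughly 13 million), the snippet below shows the standard way such counts are obtained in PyTorch. It is a generic illustration, not code from either implementation; the toy module is only there to make the example runnable.

```python
import torch.nn as nn

def count_parameters(model: nn.Module, trainable_only: bool = True) -> int:
    """Return the total number of (trainable) parameters in a model."""
    params = (p for p in model.parameters() if p.requires_grad or not trainable_only)
    return sum(p.numel() for p in params)

# Example with a toy module; the real models report millions of parameters.
toy = nn.LSTM(input_size=80, hidden_size=256, bidirectional=True)
print(f"{count_parameters(toy):,} parameters")
```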
Direct comparison of model parameters
fc69f5d9464cdba6db43a525cecde2bf6ddaaa57
fc69f5d9464cdba6db43a525cecde2bf6ddaaa57_0
Q: Do they reduce the number of parameters in their architecture compared to other direct text-to-speech models? Text: Introduction Traditional text-to-speech (TTS) systems are composed of complex pipelines BIBREF0 , these often include accoustic frontends, duration model, acoustic prediction model and vocoder models. The complexity of the TTS problem coupled with the requirement for deep domain expertise means these systems are often brittle in design and results in un-natural synthesized speech. The recent push to utilize deep, end-to-end TTS architectures BIBREF1 BIBREF2 that can be trained on <text,audio> pairs shows that deep neural networks can indeed be used to synthesize realistic sounding speech, while at the same time eliminating the need for complex sub-systems that neede to be developed and trained seperately. The problem of TTS can be summed up as a signal-inversion problem: given a highly compressed source signal (text), we need to invert or "decompress" it into audio. This is a difficult problem as there're multi ways for the same text to be spoken. In addtion, unlike end-to-end translation or speech recognition, TTS ouptuts are continuous, and output sequences are much longer than input squences. Recent work on neural TTS can be split into two camps, in one camp Seq2Seq models with recurrent architectures are used BIBREF1 BIBREF3 . In the other camp, full convolutional Seq2Seq models are used BIBREF2 . Our model belongs in the first of these classes using recurrent architectures. Specifically we make the following contributions: Related Work Neural text-to-speech systems have garnered large research interest in the past 2 years. The first to fully explore this avenue of research was Google's tacotron BIBREF1 system. Their architecture based off the original Seq2Seq framework. In addition to encoder/decoder RNNs from the original Seq2Seq , they also included a bottleneck prenet module termed CBHG, which is composed of sets of 1-D convolution networks followed by highway residual layers. The attention mechanism follows the original Seq2Seq BIBREF7 mechanism (often termed Bahdanau attention). This is the first work to propose training a Seq2Seq model to convert text to mel spectrogram, which can then be converted to audio wav via iterative algorithms such as Griffin Lim BIBREF8 . A parrallel work exploring Seq2Seq RNN architecture for text-to-speech was called Char2Wav BIBREF3 . This work utilized a very similar RNN-based Seq2Seq architecture, albeit without any prenet modules. The attention mechanism is guassian mixture model (GMM) attention from Alex Grave's work. Their model mapped text sequence to 80 dimension vectors used for the WORLD Vocoder BIBREF9 , which invert these vectors into audio wave. More recently, a fully convolutional Seq2Seq architecture was investigated by Baidu Research BIBREF2 BIBREF10 . The deepvoice architecture is composed of causal 1-D convolution layers for both encoder and decoder. They utilized query-key attention similar to that from the transformer architecure BIBREF5 . Another fully convolutional Seq2Seq architecture known as DCTTS was proposed BIBREF6 . In this architecture they employ modules composed of Causal 1-D convolution layers combined with Highway networks. In addition they introduced methods for help guide attention alignments early. As well as a forced incremental attention mechanism that ensures monotonic increasing of attention read as the model decodes during inference. 
Model Overview The architecture of our model utilizes an RNN-based Seq2Seq model for generating mel spectrograms from text. The architecture is similar to that of Tacotron 2 BIBREF4. The generated mel spectrogram can either be inverted via iterative algorithms such as Griffin-Lim, or through more complicated neural vocoder networks such as a mel-spectrogram-conditioned WaveNet BIBREF11. Figure FIGREF3 below shows the overall architecture of our model. Text Encoder The encoder encodes the input text sequence into a compact hidden representation, which is consumed by the decoder at every decoding step. The encoder is composed of an INLINEFORM0 -dim embedding layer that maps the input sequence into dense vectors. This is followed by a 1-layer bidirectional LSTM/GRU with INLINEFORM1 hidden dim ( INLINEFORM2 hidden dim total for both directions). Two linear projection layers project the LSTM/GRU hidden output into two vectors INLINEFORM3 and INLINEFORM4 of the same INLINEFORM5 -dimension; these are the key and value vectors. DISPLAYFORM0 where INLINEFORM0 . Query-Key Attention Query-key attention is similar to that of transformers BIBREF5. Given INLINEFORM0 and INLINEFORM1 from the encoder, the query, INLINEFORM2 , is computed from a linear transform of the concatenation of the previous decoder-RNN hidden state, INLINEFORM3 , and the attention-RNN hidden state, INLINEFORM4 . DISPLAYFORM0 Given INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , the attention at each decoding step is computed by the scaled dot-product operation as: DISPLAYFORM0 Note that, similar to transformers BIBREF5, we scale the dot-product by INLINEFORM0 to prevent the softmax function from entering regions where it has extremely small gradients. Decoder The decoder is an autoregressive recurrent neural network that predicts the mel spectrogram from the encoded input sentence one frame at a time. The decoder decodes the hidden representation from the encoder, with the guidance of attention. The decoder is composed of two uni-directional LSTM/GRUs with INLINEFORM0 hidden dimensions. The first LSTM/GRU, called the AttentionRNN, is for computing attention-mechanism-related items such as the attention query INLINEFORM1 . DISPLAYFORM0 The second LSTM/GRU, the DecoderRNN, is used to compute the decoder hidden output, INLINEFORM0 . DISPLAYFORM0 A 2-layer dense prenet of dimensions (256, 256) projects the previous mel spectrogram output INLINEFORM0 into the hidden dimension INLINEFORM1 . Similar to Tacotron 2, the prenet acts as an information bottleneck to help produce a useful representation for the downstream attention mechanism. Our model differs from Tacotron 2 in that we jointly project 5 consecutive mel frames at once into our hidden representation, which is faster than Tacotron 2, which projects 1 mel frame at a time. The DecoderRNN's hidden state INLINEFORM0 is also projected to the mel spectrogram INLINEFORM1 . A residual post-net composed of 2 dense layers followed by a tanh activation function also projects the same decoder hidden state INLINEFORM2 to a mel spectrogram INLINEFORM3 , which is added to the linearly projected mel INLINEFORM4 to produce the final mel spectrogram INLINEFORM5 . DISPLAYFORM0 A linear spectrogram INLINEFORM0 is also computed from a linear projection of the decoder hidden state INLINEFORM1 . This acts as an additional condition on the decoder hidden input. 
DISPLAYFORM0 A single scalar stop token is computed from a linear projection of the decoder hidden state INLINEFORM0 to a scalar, followed by INLINEFORM1 , or sigmoid function. This stop token allows the model to learn when to stop decoding during inference. During inference, if the stop token output is INLINEFORM2 , we stop decoding. DISPLAYFORM0 Training and Loss The total loss on the model is computed as the sum of 3 component losses: 1. mean squared error (MSE) between the predicted and ground-truth mel spectrogram; 2. MSE of the linear spectrogram; 3. binary cross-entropy loss of the stop token. The Adam optimizer is used to optimize the model with a learning rate of INLINEFORM0 . The model is trained via teacher forcing, where the ground-truth mel spectrogram is supplied at every decoding step instead of the model's own predicted mel spectrogram. To ensure the model can learn long-term sequences, the teacher forcing ratio is annealed from 1.0 (full teacher forcing) to 0.2 (20 percent teacher forcing) over 300 epochs. Proposed Improvements Our proposed improvements come from the observation that employing generic Seq2Seq models for the TTS application misses out on further optimization that can be achieved when we consider the specific problem of TTS. Specifically, we notice that in TTS, unlike in applications like machine translation, the Seq2Seq attention mechanism should be mostly monotonic. In other words, when one reads a sequence of text, it is natural to assume that the text position progresses nearly linearly in time with the sequence of output mel spectrogram frames. With this insight, we can make 3 modifications to the model that allow us to train faster while using a smaller model. Changes to Attention Mechanism In the original Tacotron 2, the attention mechanism used was location-sensitive attention BIBREF12 combined with the original additive Seq2Seq BIBREF7 Bahdanau attention. We propose to replace this attention with the simpler query-key attention from the transformer model. As mentioned earlier, since for TTS the attention mechanism is an easier problem than, say, machine translation, we employ query-key attention as it is simple to implement and requires fewer parameters than the original Bahdanau attention. Guided Attention Mask Following the logic above, we utilize a method similar to that of BIBREF6 that adds an additional guided attention loss to the overall loss objective, which acts to help the attention mechanism become monotonic as early as possible. As seen in FIGREF24 , an attention loss mask, INLINEFORM0 , is created and applied as a loss to force the attention alignment, INLINEFORM1 , to be nearly diagonal. That is: DISPLAYFORM0 where INLINEFORM0 , INLINEFORM1 is the INLINEFORM2 -th character, INLINEFORM3 is the max character length, INLINEFORM4 is the INLINEFORM5 -th mel frame, INLINEFORM6 is the max mel frame, and INLINEFORM7 is set to 0.2. This modification dramatically speeds up attention alignment and model convergence. Figure 3 below shows the results visually. The two images are a side-by-side comparison of the model's attention after 10k training steps. The image on the left is trained with the attention mask, and the image on the right is not. We can see that with the attention mask, clear attention alignment is achieved much faster. Forced Incremental Attention During inference, the attention INLINEFORM0 occasionally skips multiple characters or stalls on the same character for multiple output frames. To make generation more robust, we modify INLINEFORM1 during inference to force it to be diagonal. 
The forced incremental attention is implemented as follows: given INLINEFORM0 , the position of the character read at the INLINEFORM1 -th time frame, where INLINEFORM2 , if INLINEFORM3 , the current attention is forcibly set to INLINEFORM4 , so that attention is incremental, i.e. INLINEFORM5 . Experiment Dataset The open-source LJSpeech dataset was used to train our TTS model. This dataset contains around 13k <text, audio> pairs of a single female English speaker collected from 7 different non-fiction books. The total training data amounts to around 21 hours of audio. One thing to note is that since this is open-source audio recorded in a semi-professional setting, the audio quality is not as good as that of proprietary internal data from Google or Baidu. As with most things in deep learning, the better the data, the better the model and results. Experiment Procedure Our model was trained for 300 epochs, with a batch size of 32. We used a pre-trained open-source implementation of Tacotron 2 (https://github.com/NVIDIA/tacotron2) as the baseline comparison. Note that this open-source version is trained for much longer (around 1000 epochs); however, due to our limited compute we only trained our model up to 300 epochs. Evaluation Metrics We decided to evaluate our model against previous baselines on two fronts: Mean Opinion Score (MOS) and training speed. Typical TTS system evaluation is done with the mean opinion score (MOS). To compute this score, many samples of a TTS system are given to human evaluators and rated on a scale from 1 (Bad) to 5 (Excellent). The MOS is then computed as the arithmetic mean of these scores: DISPLAYFORM0 where INLINEFORM0 are individual ratings for a given sample by N subjects. The TTS models from Google and Baidu utilized Amazon Mechanical Turk to collect MOS scores from a larger number of workers. However, due to our limited resources, we chose to collect MOS scores from friends and family (6 people in total). For the training time comparison, we take the training time as the point at which the attention alignment starts to become linear and clear. After digging through the GitHub issues of the Tacotron 2 open-source implementation, we found a few posts where users posted their training curves and attention alignments during training (they also used the default batch size of 32). We used their training steps to roughly estimate the training time of Tacotron 2 when attention roughly aligns. For all other models the training time is not comparable, as it either does not apply (e.g., parametric models) or is not reported (Griffin-Lim Tacotron, Deep Voice 3). In a direct comparison of model parameters between ours and the open-source Tacotron 2, our model contains 4.5 million parameters, whereas Tacotron 2 contains around 13 million parameters with the default settings. By helping our model learn attention alignment faster, we can afford to use a smaller overall model while achieving similar speech quality. Conclusion We introduce a new architecture for an end-to-end neural text-to-speech system. Our model relies on an RNN-based Seq2Seq architecture with query-key attention. We introduce a novel guided attention mask to improve model training speed, which at the same time allows us to reduce model parameters. This allows our model to achieve attention alignment at least 3 times faster than previous RNN-based Seq2Seq models such as Tacotron 2. We also introduce forced incremental attention during synthesis to prevent attention alignment mistakes and allow the model to generate coherent speech for very long sentences.
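To make the two attention tricks above concrete, here is a minimal NumPy sketch of the guided attention mask and the forced incremental attention rule. The exact mask formula and jump condition are hidden behind the INLINEFORM placeholders in the text, so the Gaussian-style mask (following the DCTTS-style formulation cited as BIBREF6, with g = 0.2 as stated) and the {0, 1} jump tolerance used here are assumptions, as are the function names; this is an illustration, not the authors' released code.

```python
import numpy as np

def guided_attention_mask(max_chars, max_frames, g=0.2):
    # Soft penalty mask W[n, t] that grows as the attention cell moves away
    # from the diagonal; n and t are normalized character / mel-frame positions.
    n = np.arange(max_chars)[:, None] / max_chars
    t = np.arange(max_frames)[None, :] / max_frames
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))

def guided_attention_loss(attention, mask):
    # Element-wise product averaged over the alignment matrix; added to the total loss.
    return float(np.mean(attention * mask))

def forced_incremental_attention(attention_t, prev_pos):
    # Inference-time fix: if the attended character position jumps or regresses
    # (difference outside the assumed tolerance {0, 1}), overwrite the attention
    # for this frame with a one-hot at the next character.
    cur_pos = int(np.argmax(attention_t))
    if cur_pos - prev_pos not in (0, 1):
        cur_pos = min(prev_pos + 1, attention_t.shape[0] - 1)
        attention_t = np.zeros_like(attention_t)
        attention_t[cur_pos] = 1.0
    return attention_t, cur_pos
```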
Yes
e1f5531ed04d0aae1dfcb0559f1512a43134c43a
e1f5531ed04d0aae1dfcb0559f1512a43134c43a_0
Q: Do they use pretrained models? Text: Introduction Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. Introduction ::: First problem. The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. Introduction ::: Second problem. Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. Introduction ::: Third problem. 
Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: Introduction ::: Dataset. We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. Introduction ::: Evaluation. Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. Introduction ::: Entailment approach. Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. Related Work $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. 
DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. Benchmark the dataset In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. Benchmark the dataset ::: Topic detection ::: Yahoo. We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are balanced distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; For test, all 10 labels are included, with 10k instances for each. Then training sets are created on remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. 
Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to avoid the model over-fitting to one of them. The label-fully-unseen setup shares the same test and dev sets with label-partially-unseen, except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performances against each other; it shows the system's capabilities when seeing different numbers of classes. Benchmark the dataset ::: Emotion detection ::: UnifyEmotion. This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweets, emotional events, fairy tales and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (approximately 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1, since the labels in this dataset have an unbalanced distribution. We first directly list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by the following label-partially-unseen and label-fully-unseen setups of train. Label-partially-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. Benchmark the dataset ::: Situation detection Situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances. There are 11 situation types in total: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrorism”, “crime violence” and an extra type “none” – if none of the 11 types applies. This dataset is a multi-label classification task, and label-wise weighted F1 is the official evaluation. The train, test and dev sets are listed in Table TABREF22. Benchmark the dataset ::: Situation detection ::: Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets. Our three datasets cover single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in the “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic. Benchmark the evaluation How to evaluate a $\textsc {0shot-tc}$ system? This requires revisiting the original motivation of doing $\textsc {0shot-tc}$ research. 
As we discussed in Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned with an open-defined label, without any constrains on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. Benchmark the evaluation ::: Label-partially-unseen. This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. Benchmark the evaluation ::: Label-fully-unseen. In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine that learning a system through whatever approaches, then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks), therefore, we are encouraged to learn models with open-data or test-agnostic data. In this way, the learned models behave more like humans. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Convert labels into hypotheses. The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. 
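A minimal sketch of the two hypothesis-generation strategies just described (label name vs. WordNet definition). The aspect templates below are illustrative assumptions of the kind of wording in Table TABREF24, and the use of NLTK's WordNet interface is likewise an assumption, not the released code.

```python
# Assumes NLTK with the WordNet corpus installed (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

# Illustrative aspect interpretations; the paper's exact templates are in Table TABREF24.
ASPECT_TEMPLATES = {
    "topic": "this text is about {}",
    "emotion": "this text expresses {}",
    "situation": "the people there need {}",
}

def label_to_hypotheses(label, aspect):
    """Return the 'word' hypothesis and, when available, the 'definition' hypothesis."""
    template = ASPECT_TEMPLATES[aspect]
    hypotheses = [template.format(label)]              # fill in the label name
    synsets = wn.synsets(label.replace(" ", "_"))
    if synsets:                                        # fill in the WordNet gloss, if any
        hypotheses.append(template.format(synsets[0].definition()))
    return hypotheses

# e.g. label_to_hypotheses("sports", "topic")
# -> ["this text is about sports", "this text is about <first WordNet gloss of 'sports'>"]
```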
In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Convert classification data into entailment data. For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Entailment model learning. In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Harsh policy in testing. Since seen labels have annotated data for training, we adopt different policies to pick up seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If no positive labels, we choose “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. Experiments ::: Label-partially-unseen evaluation In this setup, there is annotated data for partial labels as train. So, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. Experiments ::: Label-partially-unseen evaluation ::: Baselines. Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. 
Binary-BERT: We fine-tune BERT on train, which yields a binary classifier for entailment or not; then we test it on test – picking the label with the maximal probability in single-label scenarios while choosing all the labels with an “entailment” decision in multi-label cases. Experiments ::: Label-partially-unseen evaluation ::: Discussion. The results of label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are close to keywords such as “basketball” in the Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evacuation”, etc.) can hardly find close words in the text through Word2Vec embeddings. On the contrary, “ESA” more easily relates a class such as “shelter” to keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly on the emotion detection problem. We suspect that emotion detection requires more entailment capability. For example, for the text snippet “when my brother was very late in arriving home from work”, its gold emotion “fear” requires some common-knowledge inference rather than just word-level semantic matching through Word2Vec and ESA. The supervised method “Binary-BERT” is indeed strong in learning seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. Our entailment models, especially the one pretrained on MNLI, generally achieve performance competitive with “Binary-BERT” on seen classes (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance on unseen classes by large margins. At this stage, fine-tuning an MNLI-based pretrained entailment model seems the most powerful option. Experiments ::: Label-fully-unseen evaluation Regarding this label-fully-unseen evaluation, apart from our entailment models and three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: We train a binary classifier based on BERT on a dataset collected from Wikipedia. Wikipedia is a general-purpose corpus that does not target any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is a popular way of creating training data for text categorization, as in BIBREF13. More specifically, we collected 100K articles along with the categories listed at the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Then each article and its positive/negative categories act as training pairs for the binary classifier. We notice that “Wikipedia-based” training indeed contributes a lot to the topic detection task; however, its performances on the emotion and situation detection problems are far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to the Wikipedia-based topic categorization task; emotion and situation categorizations, however, are relatively further away. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robustly on the three $\textsc {0shot-tc}$ aspects (except for RTE on emotion). Recall that they are not trained on any text classification data, and never know the domains or the aspects of the test. 
This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performances on all three tasks. An interesting phenomenon, comparing the label-partially-unseen results in Table TABREF30 and the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order for label-fully-unseen case: RTE $>$ FEVER $>$MNLI; on the contrary, if we fine-tune them on the label-partially-unseen case, the MNLI-based model performs best. This could be due to a possibility that, on one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work. Experiments ::: How do the generated hypotheses influence In Table TABREF24, we listed examples for converting class names into hypotheses. In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both “emotion” and “situation” detection tasks. However, the other two entailment models (RTE and FEVER) mostly prefer “word”. iii) Since it is unrealistic to adopt only one entailment model, such as from {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of different hypothesis generation approaches. In this work, our ensemble method reaches the top performance when combining the “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future. To better understand the impact of generated hypotheses, we dive into the performance of each labels, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model for label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their over-abstract meaning. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels mostly are common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters. Summary In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups. 
Therefore, we are benchmarking $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle the broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without the annotated data of seen labels. Acknowledgments The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
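As a recap of the inference procedure described above (hypothesis construction, a pretrained entailment scorer, and the $\alpha $-margin policy for arbitrating between seen and unseen labels in the single-label case), here is a minimal sketch. The `entailment_prob` argument stands in for a pretrained BERT entailment model, the 0.5 positivity cutoff is an assumption about the "entailment" decision, and the "none" fallback borrows the multi-label rule; this is not the released code.

```python
ALPHA = 0.05  # margin favoring unseen labels, tuned on dev in the paper

def classify_single_label(text, seen_labels, unseen_labels, hypothesis_of, entailment_prob):
    """hypothesis_of: label -> hypothesis string; entailment_prob: (premise, hypothesis) -> prob."""
    scores = {lab: entailment_prob(text, hypothesis_of(lab))
              for lab in list(seen_labels) + list(unseen_labels)}
    positive = [lab for lab, p in scores.items() if p >= 0.5]  # assumed "entailment" cutoff
    if not positive:
        return "none"  # no positive label; assumption borrowing the multi-label rule
    best_seen = max((l for l in positive if l in seen_labels), key=scores.get, default=None)
    best_unseen = max((l for l in positive if l in unseen_labels), key=scores.get, default=None)
    if best_seen is not None and best_unseen is not None:
        # Harsher rule for seen labels: they must beat the best unseen label by more than ALPHA.
        return best_seen if scores[best_seen] - scores[best_unseen] > ALPHA else best_unseen
    return best_seen if best_seen is not None else best_unseen
```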
Yes
4a4b7c0d3e7365440b49e9e6b67908ea5cea687d
4a4b7c0d3e7365440b49e9e6b67908ea5cea687d_0
Q: What are their baseline models? Text: Introduction Supervised text classification has achieved great success in the past decades due to the availability of rich training data and deep learning techniques. However, zero-shot text classification ($\textsc {0shot-tc}$) has attracted little attention despite its great potential in real world applications, e.g., the intent recognition of bank consumers. $\textsc {0shot-tc}$ is challenging because we often have to deal with classes that are compound, ultra-fine-grained, changing over time, and from different aspects such as topic, emotion, etc. Existing $\textsc {0shot-tc}$ studies have mainly the following three problems. Introduction ::: First problem. The $\textsc {0shot-tc}$ problem was modeled in a too restrictive vision. Firstly, most work only explored a single task, which was mainly topic categorization, e.g., BIBREF1, BIBREF2, BIBREF3. We argue that this is only the tiny tip of the iceberg for $\textsc {0shot-tc}$. Secondly, there is often a precondition that a part of classes are seen and their labeled instances are available to train a model, as we define here as Definition-Restrictive: Definition-Restrictive ($\textsc {0shot-tc}$). Given labeled instances belonging to a set of seen classes $S$, $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where $Y=S\cup U$; $U$ is a set of unseen classes and belongs to the same aspect as $S$. In this work, we formulate the $\textsc {0shot-tc}$ in a broader vision. As Figure FIGREF2 demonstrates, a piece of text can be assigned labels which interpret the text in different aspects, such as the “topic” aspect, the “emotion” aspect, or the “situation” aspect described in the text. Different aspects, therefore, differ in interpreting the text. For instance, by “topic”, it means “this text is about {health, finance $\cdots $}”; by “emotion”, it means “this text expresses a sense of {joy, anger, $\cdots $}”; by “situation”, it means “the people there need {shelter, medical assistance, $\cdots $}”. Figure FIGREF2 also shows another essential property of $\textsc {0shot-tc}$ – the applicable label space for a piece of text has no boundary, e.g., “this text is news”, “the situation described in this text is serious”, etc. Therefore, we argue that we have to emphasize a more challenging scenario to satisfy the real-world problems: seeing no labels, no label-specific training data. Here is our new $\textsc {0shot-tc}$ definition: Definition-Wild ($\textsc {0shot-tc}$). $\textsc {0shot-tc}$ aims at learning a classifier $f(\cdot ): X \rightarrow Y$, where classifier $f(\cdot )$ never sees $Y$-specific labeled data in its model development. Introduction ::: Second problem. Usually, conventional text classification denotes labels as indices {0,1,2, $\cdots $, $n$} without understanding neither the aspect's specific interpretation nor the meaning of the labels. This does not apply to $\textsc {0shot-tc}$ as we can not pre-define the size of the label space anymore, and we can not presume the availability of labeled data. Humans can easily decide the truth value of any upcoming labels because humans can interpret those aspects correctly and understand the meaning of those labels. The ultimate goal of $\textsc {0shot-tc}$ should be to develop machines to catch up with humans in this capability. To this end, making sure the system can understand the described aspect and the label meanings plays a key role. Introduction ::: Third problem. 
Prior work is mostly evaluated on different datasets and adopted different evaluation setups, which makes it hard to compare them fairly. For example, DBLPRiosK18 work on medical data while reporting R@K as metric; DBLPXiaZYCY18 work on SNIPS-NLU intent detection data while only unseen intents are in the label-searching space in evaluation. In this work, we benchmark the datasets and evaluation setups of $\textsc {0shot-tc}$. Furthermore, we propose a textual entailment approach to handle the $\textsc {0shot-tc}$ problem of diverse aspects in a unified paradigm. To be specific, we contribute in the following three aspects: Introduction ::: Dataset. We provide datasets for studying three aspects of $\textsc {0shot-tc}$: topic categorization, emotion detection, and situation frame detection – an event level recognition problem. For each dataset, we have standard split for train, dev, and test, and standard separation of seen and unseen classes. Introduction ::: Evaluation. Our standardized evaluations correspond to the Definition-Restrictive and Definition-Wild. i) Label-partially-unseen evaluation. This corresponds to the commonly studied $\textsc {0shot-tc}$ defined in Definition-Restrictive: for the set of labels of a specific aspect, given training data for a part of labels, predicting in the full label set. This is the most basic setup in $\textsc {0shot-tc}$. It checks whether the system can generalize to some labels in the same aspect. To satisfy Definition-Wild, we define a new evaluation: ii) Label-fully-unseen evaluation. In this setup, we assume the system is unaware of the upcoming aspects and can not access any labeled data for task-specific training. Introduction ::: Entailment approach. Our Definition-Wild challenges the system design – how to develop a $\textsc {0shot-tc}$ system, without accessing any task-specific labeled data, to deal with labels from diverse aspects? In this work, we propose to treat $\textsc {0shot-tc}$ as a textual entailment problem. This is to imitate how humans decide the truth value of labels from any aspects. Usually, humans understand the problem described by the aspect and the meaning of the label candidates. Then humans mentally construct a hypothesis by filling a label candidate, e.g., “sports”, into the aspect-defined problem “the text is about $\underline{?}$”, and ask ourselves if this hypothesis is true, given the text. We treat $\textsc {0shot-tc}$ as a textual entailment problem so that our model can gain knowledge from entailment datasets, and we show that it applies to both Definition-Restrictive and Definition-Wild. Overall, this work aims at benchmarking the research of $\textsc {0shot-tc}$ by providing standardized datasets, evaluations, and a state-of-the-art entailment system. All datasets and codes are released. Related Work $\textsc {Zero-stc}$ was first explored by the paradigm “Dataless Classification” BIBREF0. Dataless classification first maps the text and labels into a common space by Explicit Semantic Analysis (ESA) BIBREF4, then picks the label with the highest matching score. Dataless classification emphasizes that the representation of labels takes the equally crucial role as the representation learning of text. Then this idea was further developed in BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9. With the prevalence of word embeddings, more and more work adopts pretrained word embeddings to represent the meaning of words, so as to provide the models with the knowledge of labels BIBREF10, BIBREF2, BIBREF11, BIBREF12. 
DBLPYogatamaDLB17 build generative LSTM to generate text given the embedded labels. DBLPRiosK18 use label embedding to attend the text representation in the developing of a multi-label classifier. But they report R@K, so it is unclear whether the system can really predict unseen labels. DBLPXiaZYCY18 study the zero-shot intent detection problem. The learned representations of intents are still the sum of word embeddings. But during testing, the intent space includes only new intents; seen intents are not covered. All of these studies can only meet the definition in Definition-Restrictive, so they do not really generalize to open aspects of $\textsc {0shot-tc}$. JiangqngGuo enrich the embedding representations by incorporating class descriptions, class hierarchy, and the word-to-label paths in ConceptNet. DBLPMitchellSL18 assume that some natural language explanations about new labels are available. Then those explanations are parsed into formal constraints which are further combined with unlabeled data to yield new label oriented classifiers through posterior regularization. However, those explanatory statements about new labels are collected from crowd-sourcing. This limits its application in real world $\textsc {0shot-tc}$ scenarios. There are a few works that study a specific zero-shot problem by indirect supervision from other problems. DBLPLevySCZ17 and obamuyide2018zero study zero-shot relation extraction by converting it into a machine comprehension and textual entailment problem respectively. Then, a supervised system pretrained on an existing machine comprehension dataset or textual entailment dataset is used to do inference. Our work studies the $\textsc {0shot-tc}$ by formulating a broader vision: datasets of multiple apsects and evaluations. Other zero-shot problems studied in NLP involve entity typing BIBREF13, sequence labeling BIBREF14, etc. Benchmark the dataset In this work, we standardize the datasets for $\textsc {0shot-tc}$ for three aspects: topic detection, emotion detection, and situation detection. For each dataset, we insist on two principles: i) Label-partially-unseen: A part of labels are unseen. This corresponds to Definition-Restrictive, enabling us to check the performance of unseen labels as well as seen labels. ii) Label-fully-unseen: All labels are unseen. This corresponds to Definition-Wild, enabling us to check the system performance in test-agnostic setups. Benchmark the dataset ::: Topic detection ::: Yahoo. We use the large-scale Yahoo dataset released by DBLPZhangZL15. Yahoo has 10 classes: {“Society & Culture”, “Science & Mathematics”, “Health”, “Education & Reference”, “Computers & Internet”, “Sports”, “Business & Finance”, “Entertainment & Music”, “Family & Relationships”, “Politics & Government”}, with original split: 1.4M/60k in train/test (all labels are balanced distributed). We reorganize the dataset by first fixing the dev and test sets as follows: for dev, all 10 labels are included, with 6k labeled instances for each; For test, all 10 labels are included, with 10k instances for each. Then training sets are created on remaining instances as follows. For label-partially-unseen, we create two versions of Yahoo train for $\textsc {0shot-tc}$: Train-v0: 5 classes: {“Society & Culture”, “Health”, “Computers & Internet”, “Business & Finance”, “Family & Relationships”} are included; each is equipped with 130k labeled instances. 
Train-v1: 5 classes: { “Science & Mathematics”, “Education & Reference”, “Sports”, “Entertainment & Music”, “Politics & Government”} are included; each is equipped with 130k labeled instances. We always create two versions of train with non-overlapping labels so as to avoid the model over-fitting to one of them. The label-fully-unseen setup shares the same test and dev sets with label-partially-unseen, except that it has no training set. It is worth mentioning that our setup of label-partially-unseen and label-fully-unseen enables us to compare the performances against each other; it shows the system's capabilities when seeing different numbers of classes. Benchmark the dataset ::: Emotion detection ::: UnifyEmotion. This emotion dataset was released by DBLPBostanK18. It was constructed by unifying the emotion labels of multiple public emotion datasets. This dataset consists of text from multiple domains: tweets, emotional events, fairy tales and artificial sentences, and it contains 9 emotion types {“sadness”, “joy”, “anger”, “disgust”, “fear”, “surprise”, “shame”, “guilt”, “love”} and “none” (if no emotion applies). We remove the multi-label instances (approximately 4k) so that the remaining instances always have a single positive label. The official evaluation metric is label-weighted F1, since the labels in this dataset have an unbalanced distribution. We first directly list the fixed $\emph {test}$ and $\emph {dev}$ in Table TABREF9 and Table TABREF10, respectively. They are shared by the following label-partially-unseen and label-fully-unseen setups of train. Label-partially-unseen has the following two versions of train: Train-v0: 5 classes: {“sadness”, “anger”, “fear”, “shame”, “love”} are included. Train-v1: 4 classes: { “joy”, “disgust”, “surprise”, “guilt”} are included. For label-fully-unseen, no training set is provided. Benchmark the dataset ::: Situation detection Situation frame typing is one example of an event-type classification task. A situation frame studied here is a need situation such as the need for water or medical aid, or an issue situation such as crime violence BIBREF16, BIBREF17. It was originally designed for low-resource situation detection, where annotated data is unavailable. This is why it is particularly suitable for $\textsc {0shot-tc}$. We use the Situation Typing dataset released by mayhewuniversity. It has 5,956 labeled instances. There are 11 situation types in total: “food supply”, “infrastructure”, “medical assistance”, “search/rescue”, “shelter”, “utilities, energy, or sanitation”, “water supply”, “evacuation”, “regime change”, “terrorism”, “crime violence” and an extra type “none” – if none of the 11 types applies. This dataset is a multi-label classification task, and label-wise weighted F1 is the official evaluation. The train, test and dev sets are listed in Table TABREF22. Benchmark the dataset ::: Situation detection ::: Summary of @!START@$\textsc {0shot-tc}$@!END@ datasets. Our three datasets cover single-label classification (i.e., “topic” and “emotion”) and multi-label classification (i.e., “situation”). In addition, a “none” type is adopted in the “emotion” and “situation” tasks if no predefined types apply – this makes the problem more realistic. Benchmark the evaluation How to evaluate a $\textsc {0shot-tc}$ system? This requires revisiting the original motivation of doing $\textsc {0shot-tc}$ research. 
As we discussed in Introduction section, ideally, we aim to build a system that works like humans – figuring out if a piece of text can be assigned with an open-defined label, without any constrains on the domains and the aspects described by the labels. Therefore, we challenge the system in two setups: label-partially-unseen and label-fully-unseen. Benchmark the evaluation ::: Label-partially-unseen. This is the most common setup in existing $\textsc {0shot-tc}$ literature: for a given dataset of a specific problem such as topic categorization, emotion detection, etc, train a system on a part of the labels, then test on the whole label space. Usually all labels describe the same aspect of the text. Benchmark the evaluation ::: Label-fully-unseen. In this setup, we push “zero-shot” to the extreme – no annotated data for any labels. So, we imagine that learning a system through whatever approaches, then testing it on $\textsc {0shot-tc}$ datasets of open aspects. This label-fully-unseen setup is more like the dataless learning principle BIBREF0, in which no task-specific annotated data is provided for training a model (since usually this kind of model fails to generalize in other domains and other tasks), therefore, we are encouraged to learn models with open-data or test-agnostic data. In this way, the learned models behave more like humans. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ As one contribution of this work, we propose to deal with $\textsc {0shot-tc}$ as a textual entailment problem. It is inspired by: i) text classification is essentially a textual entailment problem. Let us think about how humans do classification: we mentally think “whether this text is about sport?”, or “whether this text expresses a specific feeling?”, or “whether the people there need water supply?” and so on. The reason that conventional text classification did not employ entailment approach is it always has pre-defined, fixed-size of classes equipped with annotated data. However, in $\textsc {0shot-tc}$, we can neither estimate how many and what classes will be handled nor have annotated data to train class-specific parameters. Textual entailment, instead, does not preordain the boundary of the hypothesis space. ii) To pursue the ideal generalization of classifiers, we definitely need to make sure that the classifiers understand the problem encoded in the aspects and understand the meaning of labels. Conventional supervised classifiers fail in this aspect since label names are converted into indices – this means the classifiers do not really understand the labels, let alone the problem. Therefore, exploring $\textsc {0shot-tc}$ as a textual entailment paradigm is a reasonable way to achieve generalization. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Convert labels into hypotheses. The first step of dealing with $\textsc {0shot-tc}$ as an entailment problem is to convert labels into hypotheses. To this end, we first convert each aspect into an interpretation (we discussed before that generally one aspect defines one interpretation). E.g., “topic” aspect to interpretation “the text is about the topic”. Table TABREF24 lists some examples for the three aspects: “topic”, “emotion” and “situation”. In this work, we just explored two simple methods to generate the hypotheses. As Table TABREF24 shows, one is to use the label name to complete the interpretation, the other is to use the label's definition in WordNet to complete the interpretation. 
In testing, once one of them results in an “entailment” decision, then we decide the corresponding label is positive. We can definitely create more natural hypotheses through crowd-sourcing, such as “food” into “the people there are starving”. Here we just set the baseline examples by automatic approaches, more explorations are left as future work, and we welcome the community to contribute. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Convert classification data into entailment data. For a data split (train, dev and test), each input text, acting as the premise, has a positive hypothesis corresponding to the positive label, and all negative labels in the data split provide negative hypotheses. Note that unseen labels do not provide negative hypotheses for instances in train. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Entailment model learning. In this work, we make use of the widely-recognized state of the art entailment technique – BERT BIBREF18, and train it on three mainstream entailment datasets: MNLI BIBREF19, GLUE RTE BIBREF20, BIBREF21 and FEVER BIBREF22, respectively. We convert all datasets into binary case: “entailment” vs. “non-entailment”, by changing the label “neutral” (if exist in some datasets) into “non-entailment”. For our label-fully-unseen setup, we directly apply this pretrained entailment model on the test sets of all $\textsc {0shot-tc}$ aspects. For label-partially-unseen setup in which we intentionally provide annotated data, we first pretrain BERT on the MNLI/FEVER/RTE, then fine-tune on the provided training data. An entailment model for @!START@$\textsc {0shot-tc}$@!END@ ::: Harsh policy in testing. Since seen labels have annotated data for training, we adopt different policies to pick up seen and unseen labels. To be specific, we pick a seen label with a harsher rule: i) In single-label classification, if both seen and unseen labels are predicted as positive, we pick the seen label only if its probability of being positive is higher than that of the unseen label by a hyperparameter $\alpha $. If only seen or unseen labels are predicted as positive, we pick the one with the highest probability; ii) In multi-label classification, if both seen and unseen labels are predicted as positive, we change the seen labels into “negative” if their probability of being positive is higher than that of the unseen label by less than $\alpha $. Finally, all labels labeled positive will be selected. If no positive labels, we choose “none” type. $\alpha $ = 0.05 in our systems, tuned on dev. Experiments ::: Label-partially-unseen evaluation In this setup, there is annotated data for partial labels as train. So, we report performance for unseen classes as well as seen classes. We compare our entailment approaches, trained separately on MNLI, FEVER and RTE, with the following baselines. Experiments ::: Label-partially-unseen evaluation ::: Baselines. Majority: the text picks the label of the largest size. ESA: A dataless classifier proposed in BIBREF0. It maps the words (in text and label names) into the title space of Wikipedia articles, then compares the text with label names. This method does not rely on train. We implemented ESA based on 08/01/2019 Wikipedia dump. There are about 6.1M words and 5.9M articles. Word2Vec BIBREF23: Both the representations of the text and the labels are the addition of word embeddings element-wisely. Then cosine similarity determines the labels. This method does not rely on train either. 
Binary-BERT: we fine-tune BERT on train, which yields a binary classifier deciding entailment or not; we then test it on test, picking the label with the maximal probability in single-label scenarios and choosing all labels with an “entailment” decision in multi-label cases.

Experiments ::: Label-partially-unseen evaluation ::: Discussion. The results for label-partially-unseen are listed in Table TABREF30. “ESA” performs slightly worse than “Word2Vec” in topic detection, mainly because the label names, i.e., topics such as “sports”, are closer to some keywords such as “basketball” in the Word2Vec space. However, “ESA” is clearly better than “Word2Vec” in situation detection; this should be mainly due to the fact that the label names (e.g., “shelter”, “evacuation”, etc.) can hardly find close words in the text through Word2Vec embeddings. On the contrary, “ESA” more easily brings a class such as “shelter” close to keywords like “earthquake”. Unfortunately, both Word2Vec and ESA work poorly for the emotion detection problem. We suspect that emotion detection requires more entailment capability. For example, for the text snippet “when my brother was very late in arriving home from work”, the gold emotion “fear” requires some common-knowledge inference, rather than just word-level semantic matching through Word2Vec or ESA. The supervised method “Binary-BERT” is indeed strong in learning seen-label-specific models – this is why it predicts very well for seen classes while performing much worse for unseen classes. Our entailment models, especially the one pretrained on MNLI, generally achieve performance competitive with “Binary-BERT” on seen labels (slightly worse on “topic” and “emotion” while clearly better on “situation”) and improve the performance on unseen labels by large margins. At this stage, fine-tuning an MNLI-pretrained entailment model appears to be the most powerful choice.

Experiments ::: Label-fully-unseen evaluation Regarding this label-fully-unseen evaluation, apart from our entailment models and the three unsupervised baselines “Majority”, “Word2Vec” and “ESA”, we also report the following baseline: Wikipedia-based: we train a BERT-based binary classifier on a dataset collected from Wikipedia. Wikipedia is a general-purpose corpus that does not target any specific $\textsc {0shot-tc}$ task. Collecting categorized articles from Wikipedia is a popular way of creating training data for text categorization, e.g., BIBREF13. More specifically, we collected 100K articles along with the categories listed at the bottom of each article. For each article, apart from its attached positive categories, we randomly sample three negative categories. Each article and its positive/negative categories then act as training pairs for the binary classifier. We notice that “Wikipedia-based” training indeed contributes a lot to the topic detection task; however, its performance on the emotion and situation detection problems is far from satisfactory. We believe this is mainly because the Yahoo-based topic categorization task is much closer to Wikipedia-based topic categorization, whereas emotion and situation categorization are relatively further away. Our entailment models, pretrained on MNLI/FEVER/RTE respectively, perform more robustly across the three $\textsc {0shot-tc}$ aspects (except for RTE on emotion). Recall that they are not trained on any text classification data and never see the domains or the aspects of the test data. This clearly shows the great promise of developing textual entailment models for $\textsc {0shot-tc}$. Our ensemble approach further boosts the performance on all three tasks.
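To make the label-fully-unseen inference concrete, the sketch below scores each label hypothesis against the input text with an off-the-shelf NLI checkpoint from the HuggingFace Transformers library; the checkpoint name (roberta-large-mnli) is an illustrative public stand-in rather than the exact BERT model trained in this work, and the helper functions are ours.

```python
# Minimal sketch: label-fully-unseen classification with a pretrained NLI model.
# Assumes the `transformers` and `torch` packages; "roberta-large-mnli" is an
# illustrative public checkpoint, not the exact model trained in this work.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

# Look up which output index means "entailment" for this particular checkpoint.
ENTAIL_IDX = next(i for i, name in model.config.id2label.items()
                  if name.lower().startswith("entail"))

def entailment_prob(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, ENTAIL_IDX].item()

def zero_shot_classify(text: str, label_to_hypothesis: dict) -> str:
    """Single-label case: return the label whose hypothesis is most entailed."""
    scores = {lab: entailment_prob(text, hyp)
              for lab, hyp in label_to_hypothesis.items()}
    return max(scores, key=scores.get)

# Example usage, with hypotheses generated as described earlier:
# zero_shot_classify("The team won the championship last night.",
#                    {"sports": "this text is about sports",
#                     "politics": "this text is about politics"})
```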
An interesting phenomenon, when comparing the label-partially-unseen results in Table TABREF30 with the label-fully-unseen results in Table TABREF32, is that the pretrained entailment models work in this order in the label-fully-unseen case: RTE $>$ FEVER $>$ MNLI; on the contrary, if we fine-tune them in the label-partially-unseen case, the MNLI-based model performs best. This could be because, on the one hand, the constructed situation entailment dataset is closer to the RTE dataset than to the MNLI dataset, so an RTE-based model can generalize well to situation data, but, on the other hand, it could also be more likely to over-fit the training set of “situation” during fine-tuning. A deeper exploration of this is left as future work.

Experiments ::: How do the generated hypotheses influence the performance In Table TABREF24, we listed examples of converting class names into hypotheses. In this work, we only tried to make use of the class names and their definitions in WordNet. Table TABREF33 lists the fine-grained performance of three ways of generating hypotheses: “word”, “definition”, and “combination” (i.e., word&definition). This table indicates that: i) Definition alone usually does not work well in any of the three tasks, no matter which pretrained entailment model is used; ii) Whether “word” alone or “word&definition” works better depends on the specific task and the pretrained entailment model. For example, the pretrained MNLI model prefers “word&definition” in both the “emotion” and “situation” detection tasks, whereas the other two entailment models (RTE and FEVER) mostly prefer “word”; iii) Since it is unrealistic to adopt only one entailment model, such as one of {RTE, FEVER, MNLI}, for any open $\textsc {0shot-tc}$ problem, an ensemble system should be preferred. However, the concrete implementation of the ensemble system also influences the strengths of the different hypothesis generation approaches. In this work, our ensemble method reaches its top performance when combining “word” and “definition”. More ensemble systems and hypothesis generation paradigms need to be studied in the future.

To better understand the impact of the generated hypotheses, we dive into the performance of each label, taking “situation detection” as an example. Figure FIGREF47 illustrates the separate F1 scores for each situation class, predicted by the ensemble model in the label-fully-unseen setup. This enables us to check in detail how easily the constructed hypotheses can be understood by the entailment model. Unfortunately, some classes are still challenging, such as “evacuation”, “infrastructure”, and “regime change”. This should be attributed to their overly abstract meanings. Some classes were well recognized, such as “water”, “shelter”, and “food”. One reason is that these labels are mostly common words – systems can more easily match them to the text; the other reason is that they are situation classes with higher frequencies (refer to Table TABREF22) – this is reasonable based on our common knowledge about disasters.

Summary In this work, we analyzed the problems of existing research on zero-shot text classification ($\textsc {0shot-tc}$): the restrictive problem definition, the weakness in understanding the problem and the labels' meaning, and the chaos of datasets and evaluation setups.
Therefore, we benchmarked $\textsc {0shot-tc}$ by standardizing the datasets and evaluations. More importantly, to tackle this broader-defined $\textsc {0shot-tc}$, we proposed a textual entailment framework which can work with or without annotated data for the seen labels.

Acknowledgments The authors would like to thank Jennifer Sheffield and the anonymous reviewers for insightful comments and suggestions. This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
Majority, ESA, Word2Vec, Binary-BERT
da845a2a930fd6a3267950bec5928205b6c6e8e8
da845a2a930fd6a3267950bec5928205b6c6e8e8_0
Q: How was speed measured? Text: Introduction Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. The lemma is also called the dictionary form, or citation form, and it refers to all words having the same meaning. Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and research in Arabic Information Retrieval (IR) systems shows the need for representing Arabic words at the lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1. In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2. The word stem is the core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of a word is different from its lemma; for example, the words believe, believed, believing, and unbelievable share the stem (believ-) and have the normalized word form (believe), standing for the infinitive of the verb (believe). While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with a (typically) different suffix to get the lemma. This extended abstract is organized as follows: Section SECREF2 shows some complexities of Arabic lemmatization and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built, and results and error analysis are reported in Section SECREF5; Section SECREF6 discusses the results and concludes the abstract.

Background Arabic is the largest Semitic language, spoken by more than 400 million people. It is one of the six official languages of the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون (wsyftHwn) “and they will open” has the triliteral root فتح (ftH) with the basic meaning of opening, the prefixes وس (ws) “and will”, the suffix ون (wn) “they”, the stem يفتح (yftH) “open”, and the lemma فتح (ftH) “the concept of opening”. IR systems typically cluster words together into groups according to three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field, which leads to high recall but low precision due to language complexity; for example, the words كتب، مكتبة، كتاب (ktb, mktbp, ktAb) “wrote, library, book” have the same root كتب (ktb) with the basic meaning of writing, so searching for any of these words by root retrieves the other words, which may not be desirable for many users. Other researchers show the importance of using the stem level for improving retrieval precision and recall, as stems capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture similar words having the same semantic meaning. For example, stem patterns for broken plurals are different from their singular patterns, e.g.
the plural أقلام (AqlAm) “pens” will not match the stem of its singular form قلم (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns than their perfect verbs; e.g., the verbs استطاع، يستطيع (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems. A lot of work has been done on word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic little work has been done, especially on lemmatization, and there is no open-source code or recent test data that can be used by other researchers for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers; it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots. Khoja's stemmer BIBREF4 and the Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, the MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words of Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2%, measured as the percentage of words where the chosen analysis (provided by the SAMA morphological analyzer BIBREF7) has the correct lemma. In this paper, we present open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization, allowing researchers to evaluate on the same dataset that we used and reproduce the same experiments.

Data Description To make the annotated data publicly available, we selected 70 news articles from the Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from 2013 to 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). The articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles each. Words are white-space- and punctuation-separated, and some spelling errors were corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization, as shown in Figure FIGREF2. As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added; it is used for evaluating our lemmatizer and comparing it with a state-of-the-art lemmatization system, MADAMIRA.

System Description We were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art segmentation system (MADAMIRA), which considers the surrounding context and many linguistic features. Their system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9. Our work can be considered an extension of word segmentation. From a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations, ordered by the number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words.
About 73% of the corpus is in MSA and covers a variety of genres like politics, economy, sports, society, etc., and the remaining part is mostly religious texts written in Classical Arabic (CA). The effectiveness of using this corpus for building a state-of-the-art diacritizer was shown in BIBREF10. For example, the word وبنود (wbnwd) “and items” is found 4 times in this corpus with two full diacritization forms, وَبُنُودِ، وَبُنُودٍ (wabunudi, wabunudK) “items, with different grammatical case endings”, which appeared 3 times and once respectively. All unique undiacritized words in this corpus were analyzed using the Buckwalter morphological analyzer, which gives all possible word diacritizations together with their segmentation, POS tag and lemma, as shown in Figure FIGREF3. The idea is to take the most frequent diacritized form for each word appearing in this corpus, and to find the morphological analysis with the highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred, and hence its lemma is بند (banod, bnd after diacritics removal) “item”. While comparing two diacritized forms from the corpus and from the Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemes; for example, while words are fully diacritized in the corpus, the Buckwalter analysis gives diacritics without case endings (i.e., without context) and removes short vowels in some cases, for example before long vowels and after the definite article ال (Al) “the”, etc. It is worth mentioning that there are many cases in the Buckwalter analysis where, for the input word, there are two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example, the word سيارة (syArp) “car” has two morphological analyses with different lemmas, namely سيار (syAr) “walker” and سيارة (syArp) “car”, in this order, while the second lemma is the most common one. To solve this problem, all these words were reported, the most frequent words were revised, and the order of lemmas was changed according to actual usage in the modern language. The lemmatization algorithm is summarized in Figure FIGREF4, and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html

Evaluation Data was formatted in a plain text format where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (e.g., different spellings of foreign named entities, as in هونغ كونغ، هونج كونج (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words; e.g., the lemmas في، فيما (fy, fymA) are both valid for the function word فيما (fymA) “while”). Table TABREF5 shows the results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives a +7% relative gain over MADAMIRA in the lemmatization task. In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes, compared to 2.5 hours for MADAMIRA, i.e., 75 times faster. The code is written entirely in Java without any external dependencies, which makes its integration into other systems quite simple.
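To make the lookup procedure described in the System Description concrete, here is a minimal Python sketch of the core idea (the released system itself is written in Java); the data structures standing in for the corpus statistics and the Buckwalter analyses, as well as the character-overlap similarity used as the matching score, are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of the out-of-context lemmatization lookup (the real system is Java).
# Illustrative inputs:
#   diacritization_counts[word] -> {diacritized_form: count}  (corpus statistics)
#   analyses[word] -> [(diacritized_form, lemma), ...]        (morphological analyses)
from difflib import SequenceMatcher

def most_frequent_diacritization(word, diacritization_counts):
    """Most common diacritized form of the word in the corpus, ignoring context."""
    forms = diacritization_counts.get(word)
    return max(forms, key=forms.get) if forms else None

def similarity(a, b):
    """Simple character-overlap proxy for the diacritization matching score."""
    return SequenceMatcher(None, a, b).ratio()

def lemmatize(word, diacritization_counts, analyses):
    """Lemma of the analysis whose diacritization best matches the word's
    most frequent diacritized form."""
    candidates = analyses.get(word, [])
    if not candidates:
        return word  # unanalyzable word: fall back to the surface form
    target = most_frequent_diacritization(word, diacritization_counts)
    if target is None:
        return candidates[0][1]  # no corpus evidence: take the first analysis
    best_form, best_lemma = max(candidates, key=lambda c: similarity(c[0], target))
    return best_lemma
```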
Error Analysis Most of the lemmatization errors in our system are due to the fact that we use the most common diacritization of a word without considering its context, which cannot resolve the ambiguity in cases like nouns and adjectives that share the same diacritization form; for example, the word أكاديمية (AkAdymyp) can be either a noun, whose lemma is أكاديمية (AkAdymyp) “academy”, or an adjective, whose lemma is أكاديمي (AkAdymy) “academic”. For MADAMIRA, most errors come from selecting the wrong Part-of-Speech (POS) for ambiguous words, and from foreign named entities. In the full paper, we will quantify the error cases in our lemmatizer and in MADAMIRA and give examples for each case, which can help in enhancing both systems.

Discussion In this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than the state-of-the-art system, MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form of a word and matching this form with the morphological analysis with the highest similarity score. We plan to study the performance if the algorithm is modified to provide diacritized lemmas, which can be useful for other applications.
how long it takes the system to lemmatize a set number of words
2fa0b9d0cb26e1be8eae7e782ada6820bc2c037f
2fa0b9d0cb26e1be8eae7e782ada6820bc2c037f_0
Q: What were their accuracy results on the task? Text: (same article as above)
97.32%
76ce9e02d97e2d77fe28c0fa78526809e7c195c6
76ce9e02d97e2d77fe28c0fa78526809e7c195c6_0
Q: What is the state of the art? Text: (same article as above)
MADAMIRA BIBREF6 system
64c7545ce349265e0c97fd6c434a5f8efdc23777
64c7545ce349265e0c97fd6c434a5f8efdc23777_0
Q: How was the dataset annotated? Text: (same article as above)
Lemmatization is done by an expert Arabic linguist where spelling corrections are marked, and lemmas are provided with full diacritization
47822fec590e840438a3054b7f512fec09dbd1e1
47822fec590e840438a3054b7f512fec09dbd1e1_0
Q: What is the size of the dataset? Text: Introduction Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. Lemma is also called dictionary form, or citation form, and it refers to all words having the same meaning. Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and researches in Arabic Information Retrieval (IR) systems show the need for representing Arabic words at lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1 . In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2 . Word stem is that core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of the word is different than its lemma, for example the words: believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe). While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with (typically) different suffix to get its lemma. This extended abstract is organized as follows: Section SECREF2 shows some complexities in building Arabic lemmatization, and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built and report results and error analysis in section SECREF5 ; and Section SECREF6 discusses the results and concludes the abstract. Background Arabic is the largest Semitic language spoken by more than 400 million people. It's one of the six official languages in the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow some templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون> (wsyftHwn) “and they will open” has the triliteral root فتح> (ftH), which has the basic meaning of opening, has prefixes وس> (ws) “and will”, suffixes ون> (wn) “they”, stem يفتح> (yftH) “open”, and lemma فتح> (ftH) “the concept of opening”. IR systems typically cluster words together into groups according to three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field which leads to high recall but low precision due to language complexity, for example words كتب، ٠كتبة، كتاب> (ktb, mktbp, ktAb) “wrote, library, book” have the same root كتب> (ktb) with the basic meaning of writing, so searching for any of these words by root, yields getting the other words which may not be desirable for many users. Other researchers show the importance of using stem level for improving retrieval precision and recall as they capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture similar words having the same semantic meaning. For example, stem patterns for broken plurals are different from their singular patterns, e.g. 
the plural أقلام> (AqlAm) “pens” will not match the stem of its singular form قلم> (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns from their perfect verbs, e.g. the verbs استطاع، يستطيع> (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems. A lot of work has been done on word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic, little work has been done, especially on lemmatization, and there is no open-source code or new testing data that can be used by other researchers for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers, and it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots. Khoja's stemmer BIBREF4 and the Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, the MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2%, measured as the percentage of words where the chosen analysis (provided by the SAMA morphological analyzer BIBREF7 ) has the correct lemma. In this paper, we present open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization, allowing researchers to evaluate using the same dataset that we used and to reproduce the same experiments. Data Description To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each. Words are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist, where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 . As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added, and it is used for evaluating our lemmatizer and comparing with the state-of-the-art lemmatization system, MADAMIRA. System Description We were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art system for segmentation (MADAMIRA), which considers surrounding context and many linguistic features. This system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9 . This work can be considered an extension of word segmentation. From a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations, ordered by the number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words.
About 73% of the corpus is in MSA and covers a variety of genres like politics, economy, sports, society, etc., and the remaining part is mostly religious texts written in classical Arabic (CA). The effectiveness of using this corpus in building a state-of-the-art diacritizer was proven in BIBREF10 . For example, the word وبنود> (wbnwd) “and items” is found 4 times in this corpus with two full diacritization forms وَبُنُودِ، وَبُنُودٍ> (wabunudi, wabunudK) “items, with different grammatical case endings”, which appeared 3 times and once respectively. All unique undiacritized words in this corpus were analyzed using the Buckwalter morphological analyzer, which gives all possible word diacritizations, and their segmentation, POS tag and lemma as shown in Figure FIGREF3 . The idea is to take the most frequent diacritized form for words appearing in this corpus, and find the morphological analysis with the highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred, and hence its lemma is بند> (banod, bnd after diacritics removal) “item”. While comparing two diacritized forms from the corpus and Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemas; for example, while words are fully diacritized in the corpus, Buckwalter analysis gives diacritics without case ending (i.e. without context), and removes short vowels in some cases, for example before long vowels, and after the definite article ال> (Al) “the”, etc. It is worth mentioning that there are many cases in Buckwalter analysis where, for the input word, there are two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example the word سيارة> (syArp) “car” has two morphological analyses with different lemmas, namely سيار> (syAr) “walker”, and سيارة> (syArp) “car” in this order, while the second lemma is the most common one. To solve this problem, all these words were reported, the top frequent words were revised, and the order of lemmas was changed according to actual usage in modern language. The lemmatization algorithm can be summarized in Figure FIGREF4 , and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html Evaluation Data was formatted in a plain text format where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different writings of foreign named entities, for example as in هونغ كونغ، هونج كونج> (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g. the lemmas في، فيما> (fy, fymA) are both valid for the function word فيما> (fymA) “while”). Table TABREF5 shows the results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives a +7% relative gain over MADAMIRA on the lemmatization task. In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes, compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependencies, which makes its integration in other systems quite simple.
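As a rough illustration of the lookup described above, the following Python sketch (not the authors' Java implementation) shows the two steps: pick the most frequent diacritized form of a surface word, then return the lemma of the analysis whose diacritization matches it best. The dictionaries diac_counts and analyses, and the character-overlap similarity function, are hypothetical stand-ins for the corpus counts, the Buckwalter analyses and the paper's exact matching rules.

```python
from collections import Counter
from difflib import SequenceMatcher

# Hypothetical resources standing in for the paper's data (Buckwalter transliteration):
# diac_counts[word] -> Counter of fully diacritized forms observed in the 9.7M-word corpus
# analyses[word]    -> list of (diacritized_form, lemma) pairs from a morphological analyzer
diac_counts = {"wbnwd": Counter({"wabunudi": 3, "wabunudK": 1})}
analyses = {"wbnwd": [("wabunud", "banod"), ("wabanad", "wabanad")]}

def similarity(a: str, b: str) -> float:
    """Crude stand-in for the paper's diacritization-matching score."""
    return SequenceMatcher(None, a, b).ratio()

def lemmatize(word: str) -> str:
    """Pick the corpus' most frequent diacritization of `word` (ignoring context),
    then return the lemma of the analysis whose diacritization matches it best."""
    if word not in diac_counts or word not in analyses:
        return word  # back off to the surface form
    most_frequent_diac, _ = diac_counts[word].most_common(1)[0]
    best_form, best_lemma = max(
        analyses[word], key=lambda fl: similarity(fl[0], most_frequent_diac)
    )
    return best_lemma

print(lemmatize("wbnwd"))  # -> 'banod', i.e. bnd "item" after diacritics removal
```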
Error Analysis Most of the lemmatization errors in our system are due to the fact that we use the most common diacritization of words without considering their contexts, which cannot resolve the ambiguity in cases like nouns and adjectives that share the same diacritization forms; for example the word أكاديمية> (AkAdymyp) can be either a noun whose lemma is أكاديمية> (AkAdymyp) “academy”, or an adjective whose lemma is أكاديمي> (AkAdymy) “academic”. For MADAMIRA, errors are mostly due to selecting the wrong Part-of-Speech (POS) for ambiguous words, and to foreign named entities. In the full paper, we will quantify error cases in our lemmatizer and MADAMIRA and give examples for each case, which can help in enhancing both systems. Discussion In this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than the state-of-the-art system, MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form for words and matching this form with the morphological analysis with the highest similarity score. We plan to study the performance if the algorithm is modified to provide diacritized lemmas, which can be useful for other applications.
Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each
989271972b3176d0a5dabd1cc0e4bdb671269c96
989271972b3176d0a5dabd1cc0e4bdb671269c96_0
Q: Where did they collect their dataset from? Text: Introduction Lemmatization is the process of finding the base form (or lemma) of a word by considering its inflected forms. Lemma is also called dictionary form, or citation form, and it refers to all words having the same meaning. Lemmatization is an important preprocessing step for many applications of text mining and question-answering systems, and research in Arabic Information Retrieval (IR) systems shows the need for representing Arabic words at lemma level for many applications, including keyphrase extraction BIBREF0 and machine translation BIBREF1 . In addition, lemmatization provides a productive way to generate generic keywords for search engines (SE) or labels for concept maps BIBREF2 . Word stem is that core part of the word that never changes even with morphological inflections; the part that remains after prefix and suffix removal. Sometimes the stem of the word is different from its lemma, for example the words: believe, believed, believing, and unbelievable share the stem (believ-), and have the normalized word form (believe) standing for the infinitive of the verb (believe). While stemming tries to remove prefixes and suffixes from words that appear with inflections in free text, lemmatization tries to replace word suffixes with a (typically) different suffix to get its lemma. This extended abstract is organized as follows: Section SECREF2 shows some complexities in building Arabic lemmatization, and surveys prior work on Arabic stemming and lemmatization; Section SECREF3 introduces the dataset that we created to test lemmatization accuracy; Section SECREF4 describes the algorithm of the system that we built, and we report results and error analysis in section SECREF5 ; and Section SECREF6 discusses the results and concludes the abstract. Background Arabic is the largest Semitic language spoken by more than 400 million people. It is one of the six official languages of the United Nations, and the fifth most widely spoken language after Chinese, Spanish, English, and Hindi. Arabic has a very rich morphology, both derivational and inflectional. Generally, Arabic words are derived from a root that uses three or more consonants to define a broad meaning or concept, and they follow some templatic morphological patterns. By adding vowels, prefixes and suffixes to the root, word inflections are generated. For instance, the word وسيفتحون> (wsyftHwn) “and they will open” has the triliteral root فتح> (ftH), which has the basic meaning of opening, has prefixes وس> (ws) “and will”, suffixes ون> (wn) “they”, stem يفتح> (yftH) “open”, and lemma فتح> (ftH) “the concept of opening”. IR systems typically cluster words together into groups according to three main levels: root, stem, or lemma. The root level is considered by many researchers in the IR field, which leads to high recall but low precision due to language complexity; for example, the words كتب، مكتبة، كتاب> (ktb, mktbp, ktAb) “wrote, library, book” have the same root كتب> (ktb) with the basic meaning of writing, so searching for any of these words by root yields the other words, which may not be desirable for many users. Other researchers show the importance of using the stem level for improving retrieval precision and recall as stems capture semantic similarity between inflected words. However, in Arabic, stem patterns may not capture similar words having the same semantic meaning. For example, stem patterns for broken plurals are different from their singular patterns, e.g.
the plural أقلام> (AqlAm) “pens” will not match the stem of its singular form قلم> (qlm) “pen”. The same applies to many imperfect verbs that have different stem patterns from their perfect verbs, e.g. the verbs استطاع، يستطيع> (AstTAE, ystTyE) “he could, he can” will not match because they have different stems. Indexing using lemmatization can enhance the performance of Arabic IR systems. A lot of work has been done on word stemming and lemmatization in different languages, for example the famous Porter stemmer for English, but for Arabic, little work has been done, especially on lemmatization, and there is no open-source code or new testing data that can be used by other researchers for word lemmatization. Xerox Arabic Morphological Analysis and Generation BIBREF3 is one of the early Arabic stemmers, and it uses morphological rules to obtain stems for nouns and verbs by looking into a table of thousands of roots. Khoja's stemmer BIBREF4 and the Buckwalter morphological analyzer BIBREF5 are other root-based analyzers and stemmers which use tables of valid combinations between prefixes and suffixes, prefixes and stems, and stems and suffixes. Recently, the MADAMIRA BIBREF6 system has been evaluated using a blind testset (25K words for Modern Standard Arabic (MSA) selected from the Penn Arabic Treebank (PATB)), and the reported accuracy was 96.2%, measured as the percentage of words where the chosen analysis (provided by the SAMA morphological analyzer BIBREF7 ) has the correct lemma. In this paper, we present open-source Java code to extract Arabic word lemmas, and a new publicly available testset for lemmatization, allowing researchers to evaluate using the same dataset that we used and to reproduce the same experiments. Data Description To make the annotated data publicly available, we selected 70 news articles from Arabic WikiNews site https://ar.wikinews.org/wiki. These articles cover recent news from year 2013 to year 2015 in multiple genres (politics, economics, health, science and technology, sports, arts, and culture). Articles contain 18,300 words, and they are evenly distributed among these 7 genres with 10 articles per each. Words are white-space and punctuation separated, and some spelling errors are corrected (1.33% of the total words) to have very clean test cases. Lemmatization is done by an expert Arabic linguist, where spelling corrections are marked, and lemmas are provided with full diacritization as shown in Figure FIGREF2 . As MSA is usually written without diacritics and IR systems normally remove all diacritics from search queries and indexed data as a basic preprocessing step, another column for the undiacritized lemma is added, and it is used for evaluating our lemmatizer and comparing with the state-of-the-art lemmatization system, MADAMIRA. System Description We were inspired by the work done by BIBREF8 for segmenting Arabic words out of context. They achieved an accuracy of almost 99%, slightly better than the state-of-the-art system for segmentation (MADAMIRA), which considers surrounding context and many linguistic features. This system shows enhancements in both Machine Translation and Information Retrieval tasks BIBREF9 . This work can be considered an extension of word segmentation. From a large diacritized corpus, we constructed a dictionary of words and their possible diacritizations, ordered by the number of occurrences of each diacritized form. This diacritized corpus was created by a commercial vendor and contains 9.7 million words with almost 200K unique surface words.
About 73% of the corpus is in MSA and covers a variety of genres like politics, economy, sports, society, etc., and the remaining part is mostly religious texts written in classical Arabic (CA). The effectiveness of using this corpus in building a state-of-the-art diacritizer was proven in BIBREF10 . For example, the word وبنود> (wbnwd) “and items” is found 4 times in this corpus with two full diacritization forms وَبُنُودِ، وَبُنُودٍ> (wabunudi, wabunudK) “items, with different grammatical case endings”, which appeared 3 times and once respectively. All unique undiacritized words in this corpus were analyzed using the Buckwalter morphological analyzer, which gives all possible word diacritizations, and their segmentation, POS tag and lemma as shown in Figure FIGREF3 . The idea is to take the most frequent diacritized form for words appearing in this corpus, and find the morphological analysis with the highest matching score between its diacritized form and the corpus word. This means that we search for the most common diacritization of the word regardless of its surrounding context. In the above example, the first solution is preferred, and hence its lemma is بند> (banod, bnd after diacritics removal) “item”. While comparing two diacritized forms from the corpus and Buckwalter analysis, special cases were applied to solve inconsistencies between the two diacritization schemas; for example, while words are fully diacritized in the corpus, Buckwalter analysis gives diacritics without case ending (i.e. without context), and removes short vowels in some cases, for example before long vowels, and after the definite article ال> (Al) “the”, etc. It is worth mentioning that there are many cases in Buckwalter analysis where, for the input word, there are two or more identical diacritizations with different lemmas, and the analyses of such words are provided without any meaningful order. For example the word سيارة> (syArp) “car” has two morphological analyses with different lemmas, namely سيار> (syAr) “walker”, and سيارة> (syArp) “car” in this order, while the second lemma is the most common one. To solve this problem, all these words were reported, the top frequent words were revised, and the order of lemmas was changed according to actual usage in modern language. The lemmatization algorithm can be summarized in Figure FIGREF4 , and the online system can be tested through the site http://alt.qcri.org/farasa/segmenter.html Evaluation Data was formatted in a plain text format where sentences are written on separate lines and words are separated by spaces, and the outputs of MADAMIRA and our system are compared against the undiacritized lemma for each word. For accurate results, all differences were revised manually to accept cases that should not be counted as errors (different writings of foreign named entities, for example as in هونغ كونغ، هونج كونج> (hwng kwng, hwnj kwnj) “Hong Kong”, or more than one accepted lemma for some function words, e.g. the lemmas في، فيما> (fy, fymA) are both valid for the function word فيما> (fymA) “while”). Table TABREF5 shows the results of testing our system and MADAMIRA on the WikiNews testset (for undiacritized lemmas). Our approach gives a +7% relative gain over MADAMIRA on the lemmatization task. In terms of speed, our system was able to lemmatize 7.4 million words on a personal laptop in almost 2 minutes, compared to 2.5 hours for MADAMIRA, i.e. 75 times faster. The code is written entirely in Java without any external dependencies, which makes its integration in other systems quite simple.
Error Analysis Most of the lemmatization errors in our system are due to the fact that we use the most common diacritization of words without considering their contexts, which cannot resolve the ambiguity in cases like nouns and adjectives that share the same diacritization forms; for example the word أكاديمية> (AkAdymyp) can be either a noun whose lemma is أكاديمية> (AkAdymyp) “academy”, or an adjective whose lemma is أكاديمي> (AkAdymy) “academic”. For MADAMIRA, errors are mostly due to selecting the wrong Part-of-Speech (POS) for ambiguous words, and to foreign named entities. In the full paper, we will quantify error cases in our lemmatizer and MADAMIRA and give examples for each case, which can help in enhancing both systems. Discussion In this paper, we introduce a new dataset for Arabic lemmatization and a very fast and accurate lemmatization algorithm that performs better than the state-of-the-art system, MADAMIRA. Both the dataset and the code will be publicly available. We show that to build an effective IR system for complex derivational languages like Arabic, there is a big need for very fast and accurate lemmatization algorithms, and we show that this can be achieved by considering only the most frequent diacritized form for words and matching this form with the morphological analysis with the highest similarity score. We plan to study the performance if the algorithm is modified to provide diacritized lemmas, which can be useful for other applications.
from Arabic WikiNews site https://ar.wikinews.org/wiki
26c64edbc5fa4cdded69ace66fdba64a9648b78e
26c64edbc5fa4cdded69ace66fdba64a9648b78e_0
Q: How much in-domain data is enough for joint models to outperform baselines? Text: Introduction Goal-oriented dialogue systems aim to automatically identify the intent of the user as expressed in natural language, extract associated arguments or slots, and take actions accordingly to satisfy the user’s requests BIBREF0. In such systems, the speakers' utterances are typically recognized using an ASR system. Then the intent of the speaker and related slots are identified from the recognized word sequence using an LU component. Finally, a dialogue manager (DM) interacts with the user (not necessarily in natural language) and helps the user achieve the task that the system is designed to support. As a result, the quality of ASR systems has a direct impact on downstream tasks such as LU and DM. This becomes more evident in cases where a generic ASR is used, instead of a domain-specific one BIBREF1. A standard approach to improve ASR output is to use an SLM or a neural model to re-rank different ASR hypotheses and use the one with the highest score for downstream tasks. Moreover, neural language correction models can also be trained to recover from the errors introduced by the ASR system via mapping ASR outputs to the ground-truth text in end-to-end speech recognition BIBREF2. In this paper we experiment with training ASR reranking/correction models jointly with LU tasks in an effort to improve both tasks simultaneously, towards End-to-End Spoken Language Understanding (SLU). The major contributions of this work are as follows: Presented a cascaded approach to first select the best ASR output and then perform LU Presented a novel alignment scheme to create a word confusion network from ASR n-best transcriptions to ensure consistency between model training and inference Proposed a framework for using ASR n-best output to improve end-to-end SLU by multi-task learning, i.e. ASR correction, and LU (intent and slot detection). Proposed several novel architectures adopting GPT BIBREF3 and Pointer network BIBREF4 with a 2D attention mechanism Comprehensive experimentation to compare different model architectures, uncover their strengths and weaknesses and demonstrate the effectiveness of End-to-End learning of ASR ranking/correction and LU models. Related Work Word Confusion Networks: A compact and normalized class of word lattices, called word confusion networks (WCNs) were initially proposed for improving ASR performance BIBREF5. WCNs are much smaller than ASR lattices but have better or comparable word and oracle accuracy, and because of this they have been used for many tasks, including SLU BIBREF6. However, to the best of our knowledge they have not been used with Neural Semantic Parsers implemented by Recurrent Neural Networks (RNNs) or similar architectures. The closest work would be BIBREF7, who propose to traverse an input lattice in topological order and use the RNN hidden state of the lattice final state as the dense vector representing the entire lattice. However, word confusion networks provide a much better and more efficient solution thanks to token alignments. We use this idea to first infer WCNs from ASR n-best and then directly use them for ASR correction and LU in joint fashion. ASR Correction: Neural language correction models have been widely used to tackle a variety of tasks including grammar correction, text or spelling correction and completion of ASR systems. 
BIBREF2, BIBREF8 are highly relevant to our work as they performed spelling correction on top of ASR errors to improve the quality of speech recognition. However, our work differs significantly from existing work as we tackle neural language correction together with a downstream task (LU in this case) in a multi-task learning setting. In addition, we use the alignment information contained in the n-best list by an inferred word confusion network and input all n-best into a single neural network. Re-ranking and Joint Modeling: BIBREF9 showed that n-best re-ranking helps in reducing WER, while BIBREF1, BIBREF10 showed that using ranking or in-domain language models or semantic parsers over n-best hypotheses significantly improves LU accuracy. Moreover, BIBREF11, BIBREF12 showcased the importance of context in ASR performance. However, none of the above-mentioned works involved joint or contextual modeling with end-to-end comparison. BIBREF13 showcased that audio features can be directly used for LU, however, such systems are less robust for task completion, especially those which involve multi-turn state tracking. Moreover, another objective of our research is to evaluate if generalized language models such as GPT BIBREF3 can be useful for joint ASR re-ranking and LU tasks. SLU Background and Baselines ::: ASR Ranking and Error Correction To prevent the propagation of ASR errors to downstream applications such as NLU in a dialogue system, ASR error correction BIBREF14, BIBREF15 has been explored extensively using a variety of approaches such as language modeling and neural language correction. In the following, we cover the formulation of ASR error corrections using both approaches. Language Modeling: Significant research has been conducted around count-based and neural LMs BIBREF16, BIBREF17. Even though RNN-LMs have significantly advanced the state of the art (through re-ranking and Seq2Seq architectures), they still do not fully preserve the context, especially in ASR for Dialogue Systems, wherein context for a word might not correspond to words immediately observed before. Bidirectional and Attention based Neural LMs such as Embeddings for Language Models (ELMo) and Contextual Word Vectors (Cove) have shown some improvements BIBREF18, BIBREF19. More recently, Transformer Networks based LMs such as Bidirectional Encoder Representations from Transformers (BERT) BIBREF20 and GPT BIBREF21, BIBREF3 have significantly outperformed most baselines in a variety of tasks. Statistical and Neural LMs for Re-ranking/Re-scoring: We trained a variety of LMs on the DSTC2 training data, which are then used for re-ranking the ASR hypotheses based on perplexity. We trained the following LMs: (1) Count based word level Statistical Language Model (SLM) (experimented with several context sizes with backoff) (2) Transformer based OpenAI GPT LM BIBREF3, which uses Multi-headed Self-attention over the context followed by position-wise Feed-Forward layers to generate distribution over output sequence. While the GPT is trained with sub-word level LMs as proposed in the initial architecture. We start with a pre-trained GPT-LM released by OpenAI BIBREF3 and then fine-tune on DSTC-2 data along with passing contextual information (past system and user turns along with current system turn separated by a special token) as input to the model. We experimented with the number of previous turns provided as context to the language model and picked the best configuration based on the development data. 
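As a sketch of the perplexity-based re-ranking described above, the snippet below scores each hypothesis with an off-the-shelf GPT-2 through the HuggingFace transformers library and keeps the lowest-perplexity one. This is only an approximation of the paper's setup, which fine-tunes the original OpenAI GPT on DSTC-2 and conditions on dialogue context; the example utterances are invented.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str, context: str = "") -> float:
    """Average-token perplexity of `text`; `context` holds previous turns.
    (A more careful version would exclude the context tokens from the loss.)"""
    ids = tokenizer(context + " " + text if context else text,
                    return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy over tokens
    return math.exp(loss.item())

def rerank(nbest, context=""):
    """Return the hypothesis with the lowest LM perplexity."""
    return min(nbest, key=lambda hyp: perplexity(hyp, context))

nbest = ["i want a cheap restaurant", "i won a cheap rest or rant"]
print(rerank(nbest, context="hello welcome to the cambridge restaurant system"))
```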
These LMs are used for re-ranking and obtaining the best hypothesis, which is then fed into a Bi-LSTM CRF BIBREF22 for intent and slot detection, which are used as baselines. Neural Language Correction (NLC): Neural language correction BIBREF23 aims at using neural architectures to map an input sentence $X=(x_1, \dots , x_{T_X})$ containing errors, to a ground-truth output sentence $Y=(y_1, \dots , y_{T_Y})$. We use WCN (inferred from the n-best) to align the n-best list with the ground-truth. This way, the input $X$ and output $Y$ will have the same length and they are aligned at word-level: namely $x_i$ and $y_i$ are highly plausible pairs. As a result, we can use the same RNN decoder for slot tagging as described in Section SECREF9. Note that sequence tagging architectures can be used for multi-task learning with multiple prediction heads of word-correction and IOB tag prediction. SLU Background and Baselines ::: Language Understanding The state-of-the-art in SLU relies on RNN or Transformer based approaches and its variations, which have first been used for slot filling by BIBREF24 and BIBREF25 simultaneously. More formally, to estimate the sequence of tags $Y = y_1, ..., y_n$ in the form of IOB labels as in BIBREF26 (with 3 outputs corresponding to `B', `I' and `O'), and corresponding to an input sequence of tokens $X = x_1, ..., x_n$, the RNN architecture consists of an input layer, a number of hidden layers, and an output layer. Nowadays, state-of-the-art slot filling methods usually rely on sequence models like RNNs BIBREF27, BIBREF28. Extensions include encoder-decoder models BIBREF29, BIBREF30, transformers BIBREF31, or memory BIBREF32. Historically, intent determination has been seen as a classification problem and slot filling as sequence classification problem, and in the pre-deep-learning era these two tasks were typically modeled separately. To this end BIBREF27 proposed a single RNN architecture that integrates intent detection and slot filling. The input of this RNN is the input sequence of words (e.g., user queries) and the output is the full semantic frame (intent and slots). Joint ASR Correction and NLU Models ::: Word Confusion Network and N-best Alignment N-best output from out of box ASR systems are usually not aligned. So, for WCN based models (Section SECREF14), an extra step is needed to align the n-best. Here's our approach: Use the word level Levenshtein distance to align every ASR hypothesis with the one-best hypothesis (as we do not have the transcription during testing). To unify these n-references, we merge insertions across all hypotheses to create a global reference $R_{global}$, which is then used to expand all the original n-best to obtain hypotheses of same length as $R_{global}$. During training, we align transcriptions with $R_{global}$ for and NLU tasks such as tagging experiments. Joint ASR Correction and NLU Models ::: GPT based Joint SLU As described in Section SECREF3, GPT based LM is used for re-scoring the n-best hypothesis. We extend the GPT-LM with three additional heads (Figure FIGREF12): Discriminatory Ranking, Dialogue Act Classification, and Slot Tagging. In addition to the likelihood of the sequence obtained from the LM, we train a discriminatory ranker to select the oracle. The ranker takes the last state (or `clf' token embedding) as input for each hypothesis and outputs 1 if it is oracle or 0 otherwise. Similarly, we sum the last state for all the hypotheses and use it for Dialogue Act classification. 
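A minimal Python sketch of this alignment scheme is given below, assuming whitespace-tokenized hypotheses: each hypothesis is aligned to the 1-best with word-level Levenshtein alignment, insertions are merged into a global reference, and all hypotheses are expanded to the same length, yielding one word bin per time step. It illustrates the idea only and is not the authors' implementation.

```python
EPS = "<eps>"

def word_align(ref, hyp):
    """Word-level Levenshtein alignment; returns (ref_tok, hyp_tok) pairs where
    EPS marks an insertion (ref side) or deletion (hyp side)."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]):
            pairs.append((ref[i - 1], hyp[j - 1])); i, j = i - 1, j - 1
        elif j > 0 and d[i][j] == d[i][j - 1] + 1:
            pairs.append((EPS, hyp[j - 1])); j -= 1   # insertion w.r.t. the 1-best
        else:
            pairs.append((ref[i - 1], EPS)); i -= 1   # deletion w.r.t. the 1-best
    return list(reversed(pairs))

def build_wcn(nbest):
    """Align every hypothesis to the 1-best, merge insertions into a global
    reference, and expand all hypotheses to the same length (one word bin per slot)."""
    one_best = nbest[0]
    aligned = [word_align(one_best, hyp) for hyp in nbest]
    # ins[k] = max number of insertions any hypothesis places before the k-th 1-best word
    ins = [0] * (len(one_best) + 1)
    for pairs in aligned:
        k = run = 0
        for ref_tok, _ in pairs:
            if ref_tok == EPS:
                run += 1
            else:
                ins[k] = max(ins[k], run); run, k = 0, k + 1
        ins[len(one_best)] = max(ins[len(one_best)], run)
    # Expand each hypothesis onto the global slots, padding with EPS
    expanded = []
    for pairs in aligned:
        row, k, run = [], 0, []
        for ref_tok, hyp_tok in pairs:
            if ref_tok == EPS:
                run.append(hyp_tok)
            else:
                row += run + [EPS] * (ins[k] - len(run)) + [hyp_tok]
                run, k = [], k + 1
        row += run + [EPS] * (ins[len(one_best)] - len(run))
        expanded.append(row)
    return list(zip(*expanded))  # one word bin (tuple over the n-best) per time step

bins = build_wcn([["book", "a", "table"], ["book", "table"], ["book", "uh", "a", "table"]])
for b in bins:
    print(b)
```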
For tagging, we use the transcription during training and hypothesis selected by the ranker during testing or validation. We add a Bi-LSTM layer on top of the embeddings obtained from GPT-LM to predict IOB tags. The model inputs are context (last system and user utterance, current system utterance) and n-best hypothesis, all separated by a delimiter used in the original GPT. Joint ASR Correction and NLU Models ::: Hierarchical CNN-RNN Neural Ranker Given the n-best as input, we built a multi-head hierarchical CNN-RNN (Hier-CNN-RNN) model to predict the index of the oracle directly. The nbest ASR hypothesis is first input to a 1D Convolutional Neural Network (CNN) to extract the n-gram information. The motivation to use CNN is to align the words in the n-best hypothesis since the convolutional filters are invariant to translation. The features extracted from CNN are then fed to a RNN to capture the sequential information in the sentences. The hidden states from RNN are concatenated together. The last hidden states from all n-best is averaged to predict the index of the oracle in n-best. For the joint model, the predicted oracle is fed into a LU head module to predict the intent and slots. The joint model did not perform well, so we have excluded it from the results in the interest of space. Joint ASR Correction and NLU Models ::: WCN Pointer Joint Neural Correction and NLU The WCN model, as illustrated in Figure FIGREF15, takes all the N-best in at the same time. Specifically, for a given n-best, a word confusion network alignment is constructed. Then, for each time step, the model concatenates the embeddings of all its n-best into a word bin and processes them through a multi-headed Bi-LSTM, where each hidden states is concatenated with embedding vectors as residual connection. Next, a multihead self attention layer is applied to all hidden states, which, in addition to predicting the IOB tags, generates the correct word based on vocabulary (word generation head) or predicts the index at which the correct word is found (pointer head) for each time step. If there is no correct words, we select first best. We append an EOS token in the last time step and use the last hidden state for intent prediction. The rationale behind this is that the correct word often exists in the WCN alignment but can be at different positions. Experiments and Results Data: We use DSTC-2 data BIBREF33, wherein the system provides information about restaurants that fit users' preferences (price, food type, and area) and for each user utterance the 10-best hypotheses are given. We modified the original labels for dialogue acts to a combination of dialogue act and slot type (e.g. dialogue act for “whats the price range" becomes “request_pricerange" instead of “request"), which gets us a total of 25 unique dialogue acts instead of initial the 14. Further, we address the slot detection problem as a slot tagging problem by using the slot annotations and converting them into IOB format. In our analysis, we ignore the cases that have empty n-best hypotheses or dialogue acts, and those with the following transcriptions: “noise", “unintelligible", “silence", “system", “inaudible", and “hello and welcome". This leads to 10,881 train, 9,159 test, and 3,560 development utterances. Our objective is not to out-perform the state-of-the-art approaches on DSTC-2 data, but to evaluate if we can leverage ASR n-best in a contextual manner for overall better LU through multi-task learning. 
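The following PyTorch sketch shows only the input/encoder/head shapes implied by this description: one word bin of n aligned tokens per time step, whose embeddings are concatenated and fed to a Bi-LSTM, with an IOB tagging head and a pointer head over the n-best positions. The multihead self-attention layer, residual connections, word-generation head and intent head of the full model are omitted, and all sizes are invented.

```python
import torch
import torch.nn as nn

class WCNJointModel(nn.Module):
    def __init__(self, vocab_size, n_best=10, emb_dim=64, hidden=128, n_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(n_best * emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden, n_tags)       # IOB tag per time step
        self.pointer_head = nn.Linear(2 * hidden, n_best)   # index of the correct word

    def forward(self, word_bins):
        # word_bins: (batch, time, n_best) token ids from the aligned WCN
        b, t, n = word_bins.shape
        emb = self.embed(word_bins).view(b, t, -1)   # concatenate the bin's embeddings
        states, _ = self.encoder(emb)                # (batch, time, 2*hidden)
        return self.tag_head(states), self.pointer_head(states)

model = WCNJointModel(vocab_size=1000)
tags, pointers = model(torch.randint(0, 1000, (2, 7, 10)))
print(tags.shape, pointers.shape)  # torch.Size([2, 7, 3]) torch.Size([2, 7, 10])
```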
We also plan to release the data for enabling future research. Baseline and Upper Bound: We obtain WER and sentence error rate (SER) to evaluate ASR and dialogue act accuracy (DA-Acc), tag error rate (TER), slot F1, and frame error rate (FER) to evaluate LU. We compare the metrics obtained for joint models with the ones through cascading (i.e. non-joint models). For ASR, we consider three baselines: 1-best, SLM and GPT based re-ranked hypothesis. For LU, we trained a separate Bi-LSTM CRF tagger with an extra head for Dialogue Act classification, which we run on top of the three baselines mentioned above to obtain LU baseline numbers. To better understand the upper-bound, we obtain the metrics for the oracle and ground truth transcription as well. Experiments and Results ::: Results and Discussion As shown in Table TABREF10, it can be observed that all models outperform the 1-best in ASR metrics. Even SLM trained and GPT-LM fine-tuned on 11k training utterances perform significantly better than the 1-best on ASR metrics. However this does not translate into improvement in the LU metrics. In fact, the output reranked using SLM does worse on the LU metrics. This indicates that just reducing WER and SER doesn't lead to improvement in LU. The Hier-CNN-RNN Ranker model achieves 14% lower WER while also improving the LU metrics (5.2% reduction in FER). The GPT based discriminatory ranker also improves both ASR (13% reduction in WER) and LU (10% reduction in FER). This indicates that training a discriminatory ranker which identifies the oracle would do better than training a task-specific Language Model. Some of the models even out-perform the oracle on DA-Acc ($>$2% absolute improvement) because the Dialogue Act prediction head uses an encoding of all hypotheses (including oracle). On the other hand, WCN models lead to the best LU slot tagging performance. WCN models out-perform the baseline with 2.2% absolute improvement in slot F1 score, 12% TER reduction and most importantly 8% FER reduction. The GPT joint models on the other hand improve the TER but their slot F1 is significantly lower compared to the GPT ranker. This is probably because there are a lot more `O' tags compared to `B' and `I'. We noticed that we were able to achieve even higher accuracy by running the baseline tagger on the corrected output of the joint models. Our lowest FER is achieved by running the baseline tagger model on the joint WCN model (with word generation head) output. While the WCN model's performance is improved by using the baseline tagger, the difference is much more profound for the GPT models (the frame error rate drops by almost 4%). We believe this is because the WCN models consume aligned n-best, which improves the model learning efficiency and they converge better when data size is relatively small. Furthermore, we observed that adding multihead attention layer and multiple heads helps the WCN models across all metrics. Conclusions We have presented a joint ASR reranker and LU model and showed experimental results with significant improvements on the DSTC-2 corpus. To the best of our knowledge this is the first deep learning based study to this end. We have also contrasted these models with cascaded approaches building state-of-the-art GPT based rankers. Our future work involves extending such end to end LU approaches towards tighter integration with a generic ASR model.
Unanswerable
e06e1b103483e1e58201075c03e610202968c877
e06e1b103483e1e58201075c03e610202968c877_0
Q: How many parameters does their proposed joint model have? Text: Introduction Goal-oriented dialogue systems aim to automatically identify the intent of the user as expressed in natural language, extract associated arguments or slots, and take actions accordingly to satisfy the user’s requests BIBREF0. In such systems, the speakers' utterances are typically recognized using an ASR system. Then the intent of the speaker and related slots are identified from the recognized word sequence using an LU component. Finally, a dialogue manager (DM) interacts with the user (not necessarily in natural language) and helps the user achieve the task that the system is designed to support. As a result, the quality of ASR systems has a direct impact on downstream tasks such as LU and DM. This becomes more evident in cases where a generic ASR is used, instead of a domain-specific one BIBREF1. A standard approach to improve ASR output is to use an SLM or a neural model to re-rank different ASR hypotheses and use the one with the highest score for downstream tasks. Moreover, neural language correction models can also be trained to recover from the errors introduced by the ASR system via mapping ASR outputs to the ground-truth text in end-to-end speech recognition BIBREF2. In this paper we experiment with training ASR reranking/correction models jointly with LU tasks in an effort to improve both tasks simultaneously, towards End-to-End Spoken Language Understanding (SLU). The major contributions of this work are as follows: Presented a cascaded approach to first select the best ASR output and then perform LU Presented a novel alignment scheme to create a word confusion network from ASR n-best transcriptions to ensure consistency between model training and inference Proposed a framework for using ASR n-best output to improve end-to-end SLU by multi-task learning, i.e. ASR correction, and LU (intent and slot detection). Proposed several novel architectures adopting GPT BIBREF3 and Pointer network BIBREF4 with a 2D attention mechanism Comprehensive experimentation to compare different model architectures, uncover their strengths and weaknesses and demonstrate the effectiveness of End-to-End learning of ASR ranking/correction and LU models. Related Work Word Confusion Networks: A compact and normalized class of word lattices, called word confusion networks (WCNs) were initially proposed for improving ASR performance BIBREF5. WCNs are much smaller than ASR lattices but have better or comparable word and oracle accuracy, and because of this they have been used for many tasks, including SLU BIBREF6. However, to the best of our knowledge they have not been used with Neural Semantic Parsers implemented by Recurrent Neural Networks (RNNs) or similar architectures. The closest work would be BIBREF7, who propose to traverse an input lattice in topological order and use the RNN hidden state of the lattice final state as the dense vector representing the entire lattice. However, word confusion networks provide a much better and more efficient solution thanks to token alignments. We use this idea to first infer WCNs from ASR n-best and then directly use them for ASR correction and LU in joint fashion. ASR Correction: Neural language correction models have been widely used to tackle a variety of tasks including grammar correction, text or spelling correction and completion of ASR systems. 
BIBREF2, BIBREF8 are highly relevant to our work as they performed spelling correction on top of ASR errors to improve the quality of speech recognition. However, our work differs significantly from existing work as we tackle neural language correction together with a downstream task (LU in this case) in a multi-task learning setting. In addition, we use the alignment information contained in the n-best list by an inferred word confusion network and input all n-best into a single neural network. Re-ranking and Joint Modeling: BIBREF9 showed that n-best re-ranking helps in reducing WER, while BIBREF1, BIBREF10 showed that using ranking or in-domain language models or semantic parsers over n-best hypotheses significantly improves LU accuracy. Moreover, BIBREF11, BIBREF12 showcased the importance of context in ASR performance. However, none of the above-mentioned works involved joint or contextual modeling with end-to-end comparison. BIBREF13 showcased that audio features can be directly used for LU, however, such systems are less robust for task completion, especially those which involve multi-turn state tracking. Moreover, another objective of our research is to evaluate if generalized language models such as GPT BIBREF3 can be useful for joint ASR re-ranking and LU tasks. SLU Background and Baselines ::: ASR Ranking and Error Correction To prevent the propagation of ASR errors to downstream applications such as NLU in a dialogue system, ASR error correction BIBREF14, BIBREF15 has been explored extensively using a variety of approaches such as language modeling and neural language correction. In the following, we cover the formulation of ASR error corrections using both approaches. Language Modeling: Significant research has been conducted around count-based and neural LMs BIBREF16, BIBREF17. Even though RNN-LMs have significantly advanced the state of the art (through re-ranking and Seq2Seq architectures), they still do not fully preserve the context, especially in ASR for Dialogue Systems, wherein context for a word might not correspond to words immediately observed before. Bidirectional and Attention based Neural LMs such as Embeddings for Language Models (ELMo) and Contextual Word Vectors (Cove) have shown some improvements BIBREF18, BIBREF19. More recently, Transformer Networks based LMs such as Bidirectional Encoder Representations from Transformers (BERT) BIBREF20 and GPT BIBREF21, BIBREF3 have significantly outperformed most baselines in a variety of tasks. Statistical and Neural LMs for Re-ranking/Re-scoring: We trained a variety of LMs on the DSTC2 training data, which are then used for re-ranking the ASR hypotheses based on perplexity. We trained the following LMs: (1) Count based word level Statistical Language Model (SLM) (experimented with several context sizes with backoff) (2) Transformer based OpenAI GPT LM BIBREF3, which uses Multi-headed Self-attention over the context followed by position-wise Feed-Forward layers to generate distribution over output sequence. While the GPT is trained with sub-word level LMs as proposed in the initial architecture. We start with a pre-trained GPT-LM released by OpenAI BIBREF3 and then fine-tune on DSTC-2 data along with passing contextual information (past system and user turns along with current system turn separated by a special token) as input to the model. We experimented with the number of previous turns provided as context to the language model and picked the best configuration based on the development data. 
These LMs are used for re-ranking and obtaining the best hypothesis, which is then fed into a Bi-LSTM CRF BIBREF22 for intent and slot detection, which are used as baselines. Neural Language Correction (NLC): Neural language correction BIBREF23 aims at using neural architectures to map an input sentence $X=(x_1, \dots , x_{T_X})$ containing errors, to a ground-truth output sentence $Y=(y_1, \dots , y_{T_Y})$. We use WCN (inferred from the n-best) to align the n-best list with the ground-truth. This way, the input $X$ and output $Y$ will have the same length and they are aligned at word-level: namely $x_i$ and $y_i$ are highly plausible pairs. As a result, we can use the same RNN decoder for slot tagging as described in Section SECREF9. Note that sequence tagging architectures can be used for multi-task learning with multiple prediction heads of word-correction and IOB tag prediction. SLU Background and Baselines ::: Language Understanding The state-of-the-art in SLU relies on RNN or Transformer based approaches and its variations, which have first been used for slot filling by BIBREF24 and BIBREF25 simultaneously. More formally, to estimate the sequence of tags $Y = y_1, ..., y_n$ in the form of IOB labels as in BIBREF26 (with 3 outputs corresponding to `B', `I' and `O'), and corresponding to an input sequence of tokens $X = x_1, ..., x_n$, the RNN architecture consists of an input layer, a number of hidden layers, and an output layer. Nowadays, state-of-the-art slot filling methods usually rely on sequence models like RNNs BIBREF27, BIBREF28. Extensions include encoder-decoder models BIBREF29, BIBREF30, transformers BIBREF31, or memory BIBREF32. Historically, intent determination has been seen as a classification problem and slot filling as sequence classification problem, and in the pre-deep-learning era these two tasks were typically modeled separately. To this end BIBREF27 proposed a single RNN architecture that integrates intent detection and slot filling. The input of this RNN is the input sequence of words (e.g., user queries) and the output is the full semantic frame (intent and slots). Joint ASR Correction and NLU Models ::: Word Confusion Network and N-best Alignment N-best output from out of box ASR systems are usually not aligned. So, for WCN based models (Section SECREF14), an extra step is needed to align the n-best. Here's our approach: Use the word level Levenshtein distance to align every ASR hypothesis with the one-best hypothesis (as we do not have the transcription during testing). To unify these n-references, we merge insertions across all hypotheses to create a global reference $R_{global}$, which is then used to expand all the original n-best to obtain hypotheses of same length as $R_{global}$. During training, we align transcriptions with $R_{global}$ for and NLU tasks such as tagging experiments. Joint ASR Correction and NLU Models ::: GPT based Joint SLU As described in Section SECREF3, GPT based LM is used for re-scoring the n-best hypothesis. We extend the GPT-LM with three additional heads (Figure FIGREF12): Discriminatory Ranking, Dialogue Act Classification, and Slot Tagging. In addition to the likelihood of the sequence obtained from the LM, we train a discriminatory ranker to select the oracle. The ranker takes the last state (or `clf' token embedding) as input for each hypothesis and outputs 1 if it is oracle or 0 otherwise. Similarly, we sum the last state for all the hypotheses and use it for Dialogue Act classification. 
For tagging, we use the transcription during training and hypothesis selected by the ranker during testing or validation. We add a Bi-LSTM layer on top of the embeddings obtained from GPT-LM to predict IOB tags. The model inputs are context (last system and user utterance, current system utterance) and n-best hypothesis, all separated by a delimiter used in the original GPT. Joint ASR Correction and NLU Models ::: Hierarchical CNN-RNN Neural Ranker Given the n-best as input, we built a multi-head hierarchical CNN-RNN (Hier-CNN-RNN) model to predict the index of the oracle directly. The nbest ASR hypothesis is first input to a 1D Convolutional Neural Network (CNN) to extract the n-gram information. The motivation to use CNN is to align the words in the n-best hypothesis since the convolutional filters are invariant to translation. The features extracted from CNN are then fed to a RNN to capture the sequential information in the sentences. The hidden states from RNN are concatenated together. The last hidden states from all n-best is averaged to predict the index of the oracle in n-best. For the joint model, the predicted oracle is fed into a LU head module to predict the intent and slots. The joint model did not perform well, so we have excluded it from the results in the interest of space. Joint ASR Correction and NLU Models ::: WCN Pointer Joint Neural Correction and NLU The WCN model, as illustrated in Figure FIGREF15, takes all the N-best in at the same time. Specifically, for a given n-best, a word confusion network alignment is constructed. Then, for each time step, the model concatenates the embeddings of all its n-best into a word bin and processes them through a multi-headed Bi-LSTM, where each hidden states is concatenated with embedding vectors as residual connection. Next, a multihead self attention layer is applied to all hidden states, which, in addition to predicting the IOB tags, generates the correct word based on vocabulary (word generation head) or predicts the index at which the correct word is found (pointer head) for each time step. If there is no correct words, we select first best. We append an EOS token in the last time step and use the last hidden state for intent prediction. The rationale behind this is that the correct word often exists in the WCN alignment but can be at different positions. Experiments and Results Data: We use DSTC-2 data BIBREF33, wherein the system provides information about restaurants that fit users' preferences (price, food type, and area) and for each user utterance the 10-best hypotheses are given. We modified the original labels for dialogue acts to a combination of dialogue act and slot type (e.g. dialogue act for “whats the price range" becomes “request_pricerange" instead of “request"), which gets us a total of 25 unique dialogue acts instead of initial the 14. Further, we address the slot detection problem as a slot tagging problem by using the slot annotations and converting them into IOB format. In our analysis, we ignore the cases that have empty n-best hypotheses or dialogue acts, and those with the following transcriptions: “noise", “unintelligible", “silence", “system", “inaudible", and “hello and welcome". This leads to 10,881 train, 9,159 test, and 3,560 development utterances. Our objective is not to out-perform the state-of-the-art approaches on DSTC-2 data, but to evaluate if we can leverage ASR n-best in a contextual manner for overall better LU through multi-task learning. 
We also plan to release the data for enabling future research. Baseline and Upper Bound: We obtain WER and sentence error rate (SER) to evaluate ASR and dialogue act accuracy (DA-Acc), tag error rate (TER), slot F1, and frame error rate (FER) to evaluate LU. We compare the metrics obtained for joint models with the ones through cascading (i.e. non-joint models). For ASR, we consider three baselines: 1-best, SLM and GPT based re-ranked hypothesis. For LU, we trained a separate Bi-LSTM CRF tagger with an extra head for Dialogue Act classification, which we run on top of the three baselines mentioned above to obtain LU baseline numbers. To better understand the upper-bound, we obtain the metrics for the oracle and ground truth transcription as well. Experiments and Results ::: Results and Discussion As shown in Table TABREF10, it can be observed that all models outperform the 1-best in ASR metrics. Even SLM trained and GPT-LM fine-tuned on 11k training utterances perform significantly better than the 1-best on ASR metrics. However this does not translate into improvement in the LU metrics. In fact, the output reranked using SLM does worse on the LU metrics. This indicates that just reducing WER and SER doesn't lead to improvement in LU. The Hier-CNN-RNN Ranker model achieves 14% lower WER while also improving the LU metrics (5.2% reduction in FER). The GPT based discriminatory ranker also improves both ASR (13% reduction in WER) and LU (10% reduction in FER). This indicates that training a discriminatory ranker which identifies the oracle would do better than training a task-specific Language Model. Some of the models even out-perform the oracle on DA-Acc ($>$2% absolute improvement) because the Dialogue Act prediction head uses an encoding of all hypotheses (including oracle). On the other hand, WCN models lead to the best LU slot tagging performance. WCN models out-perform the baseline with 2.2% absolute improvement in slot F1 score, 12% TER reduction and most importantly 8% FER reduction. The GPT joint models on the other hand improve the TER but their slot F1 is significantly lower compared to the GPT ranker. This is probably because there are a lot more `O' tags compared to `B' and `I'. We noticed that we were able to achieve even higher accuracy by running the baseline tagger on the corrected output of the joint models. Our lowest FER is achieved by running the baseline tagger model on the joint WCN model (with word generation head) output. While the WCN model's performance is improved by using the baseline tagger, the difference is much more profound for the GPT models (the frame error rate drops by almost 4%). We believe this is because the WCN models consume aligned n-best, which improves the model learning efficiency and they converge better when data size is relatively small. Furthermore, we observed that adding multihead attention layer and multiple heads helps the WCN models across all metrics. Conclusions We have presented a joint ASR reranker and LU model and showed experimental results with significant improvements on the DSTC-2 corpus. To the best of our knowledge this is the first deep learning based study to this end. We have also contrasted these models with cascaded approaches building state-of-the-art GPT based rankers. Our future work involves extending such end to end LU approaches towards tighter integration with a generic ASR model.
Unanswerable
b0fd686183b056ea3f63a7ab494620df1d598c24
b0fd686183b056ea3f63a7ab494620df1d598c24_0
Q: How does the model work if no treebank is available? Text: Introduction Developing tools for processing many languages has long been an important goal in NLP BIBREF0 , BIBREF1 , but it was only when statistical methods became standard that massively multilingual NLP became economical. The mainstream approach for multilingual NLP is to design language-specific models. For each language of interest, the resources necessary for training the model are obtained (or created), and separate parameters are fit for each language separately. This approach is simple and grants the flexibility of customizing the model and features to the needs of each language, but it is suboptimal for theoretical and practical reasons. Theoretically, the study of linguistic typology tells us that many languages share morphological, phonological, and syntactic phenomena BIBREF3 ; therefore, the mainstream approach misses an opportunity to exploit relevant supervision from typologically related languages. Practically, it is inconvenient to deploy or distribute NLP tools that are customized for many different languages because, for each language of interest, we need to configure, train, tune, monitor, and occasionally update the model. Furthermore, code-switching or code-mixing (mixing more than one language in the same discourse), which is pervasive in some genres, in particular social media, presents a challenge for monolingually-trained NLP models BIBREF4 . In parsing, the availability of homogeneous syntactic dependency annotations in many languages BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 has created an opportunity to develop a parser that is capable of parsing sentences in multiple languages, addressing these theoretical and practical concerns. A multilingual parser can potentially replace an array of language-specific monolingually-trained parsers (for languages with a large treebank). The same approach has been used in low-resource scenarios (with no treebank or a small treebank in the target language), where indirect supervision from auxiliary languages improves the parsing quality BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , but these models may sacrifice accuracy on source languages with a large treebank. In this paper, we describe a model that works well for both low-resource and high-resource scenarios. We propose a parsing architecture that takes as input sentences in several languages, optionally predicting the part-of-speech (POS) tags and input language. The parser is trained on the union of available universal dependency annotations in different languages. Our approach integrates and critically relies on several recent developments related to dependency parsing: universal POS tagsets BIBREF17 , cross-lingual word clusters BIBREF18 , selective sharing BIBREF19 , universal dependency annotations BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , advances in neural network architectures BIBREF20 , BIBREF21 , and multilingual word embeddings BIBREF22 , BIBREF16 , BIBREF23 . We show that our parser compares favorably to strong baselines trained on the same treebanks in three data scenarios: when the target language has a large treebank (Table 3 ), a small treebank (Table 7 ), or no treebank (Table 8 ). Our parser is publicly available. Overview Our goal is to train a dependency parser for a set of target languages ${L}^t$ , given universal dependency annotations in a set of source languages ${L}^s$ . 
Ideally, we would like to have training data in all target languages (i.e., $L^t \subseteq L^s$ ), but we are also interested in the case where the sets of source and target languages are disjoint (i.e., $L^t \cap L^s = \emptyset $ ). When all languages in $L^t$ have a large treebank, the mainstream approach has been to train one monolingual parser per target language and route sentences of a given language to the corresponding parser at test time. In contrast, our approach is to train one parsing model with the union of treebanks in $L^s$ , then use this single trained model to parse text in any language in $L^t$ , hence the name “Many Languages, One Parser” (MaLOPa). MaLOPa strikes a balance between: (1) enabling cross-lingual model transfer via language-invariant input representations; i.e., coarse POS tags, multilingual word embeddings and multilingual word clusters, and (2) tweaking the behavior of the parser depending on the current input language via language-specific representations; i.e., fine-grained POS tags and language embeddings. In addition to universal dependency annotations in source languages (see Table 1 ), we use the following data resources for each language in ${L} = {L}^t \cup {L}^s$ : Novel contributions of this paper include: (i) using one parser instead of an array of monolingually-trained parsers without sacrificing accuracy on languages with a large treebank, (ii) an effective neural network architecture for using language embeddings to improve multilingual parsing, and (iii) a study of how automatic language identification affects the performance of a multilingual dependency parser. While not the primary focus of this paper, we also show that a variant of our parser outperforms previous work on multi-source cross-lingual parsing in low resource scenarios, where languages in $L^t$ have a small treebank (see Table 7 ) or where $L^t \cap L^s = \emptyset $ (see Table 8 ). In the small treebank setup with 3,000 token annotations, we show that our parser consistently outperforms a strong monolingual baseline with 5.7 absolute LAS (labeled attachment score) points per language, on average. Parsing Model Recent advances suggest that recurrent neural networks, especially long short-term memory (LSTM) architectures, are capable of learning useful representations for modeling problems of sequential nature BIBREF24 , BIBREF25 . In this section, we describe our language-universal parser, which extends the stack LSTM (S-LSTM) parser of dyer:15. Transition-based Parsing with S-LSTMs This section briefly reviews Dyer et al.'s S-LSTM parser, which we modify in the following sections. The core parser can be understood as the sequential manipulation of three data structures: a buffer (from which we read the token sequence), a stack (which contains partially-built parse trees), and a list of actions previously taken by the parser. The parser uses the arc-standard transition system BIBREF26 . At each timestep $t$ , a transition action is applied that alters these data structures according to Table 2 . Along with the discrete transitions of the arc-standard system, the parser computes vector representations for the buffer, stack and list of actions at time step $t$ denoted $\mathbf {b}_t$ , $\mathbf {s}_t$ , and $\mathbf {a}_t$ , respectively. The parser state at time $t$ is given by: $$\mathbf {p}_t = \max \left\lbrace 0, \mathbf {W}[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t] + \mathbf {W}_{\text{bias}}\right\rbrace $$ (Eq. 
24) where the matrix $\mathbf {W}$ and the vector $\mathbf {W}_{\text{bias}}$ are learned parameters. The matrix $\mathbf {W}$ is multiplied by the vector $[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t]$ created by the concatenation of $\mathbf {s}_t, \mathbf {b}_t, \mathbf {a}_t$ . The parser state $\mathbf {p}_t$ is then used to define a categorical distribution over possible next actions $z$ : $$p(z \mid \mathbf {p}_t) = \frac{\exp \left( \mathbf {g}_{z}^{\top } \mathbf {p}_t + q_{z} \right)}{\sum _{z^{\prime }} \exp \left( \mathbf {g}_{z^{\prime }}^{\top } \mathbf {p}_t + q_{z^{\prime }} \right)}$$ (Eq. 26) where $\mathbf {g}_z$ and $q_z$ are parameters associated with action $z$ . The selected action is then used to update the buffer, stack and list of actions, and to compute $\mathbf {b}_{t+1}$ , $\mathbf {s}_{t+1}$ and $\mathbf {a}_{t+1}$ accordingly. The model is trained to maximize the log-likelihood of correct actions. At test time, the parser greedily chooses the most probable action in every time step until a complete parse tree is produced. The following sections describe our extensions of the core parser. More details about the core parser can be found in dyer:15. Token Representations The vector representations of input tokens feed into the stack-LSTM modules of the buffer and the stack. For monolingual parsing, we represent each token by concatenating the following vectors: a fixed, pretrained embedding of the word type, a learned embedding of the word type, a learned embedding of the Brown cluster, a learned embedding of the fine-grained POS tag, a learned embedding of the coarse POS tag. For multilingual parsing with MaLOPa, we start with a simple delexicalized model where the token representation only consists of learned embeddings of coarse POS tags, which are shared across all languages to enable model transfer. In the following subsections, we enhance the token representation in MaLOPa to include lexical embeddings, language embeddings, and fine-grained POS embeddings. Lexical Embeddings Previous work has shown that sacrificing lexical features amounts to a substantial decrease in the performance of a dependency parser BIBREF11 , BIBREF18 , BIBREF28 , BIBREF29 . Therefore, we extend the token representation in MaLOPa by concatenating learned embeddings of multilingual word clusters, and pretrained multilingual embeddings of word types. Before training the parser, we estimate Brown clusters of English words and project them via word alignments to words in other languages. This is similar to the `projected clusters' method in tackstrom:12. To go from Brown clusters to embeddings, we ignore the hierarchy within Brown clusters and assign a unique parameter vector to each cluster. We also use Guo et al.'s (2016) `robust projection' method to pretrain multilingual word embeddings. The first step in `robust projection' is to learn embeddings for English words using the skip-gram model BIBREF30 . Then, we compute an embedding of non-English words as the weighted average of English word embeddings, using word alignment probabilities as weights. The last step computes an embedding of non-English words which are not aligned to any English words by averaging the embeddings of all words within an edit distance of 1 in the same language. We experiment with two other methods—`multiCCA' and `multiCluster,' both proposed by ammar:16—for pretraining multilingual word embeddings in § "Target Languages with a Treebank (L t =L s L^t = L^s)" . 
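Before turning to those two alternative embedding methods (described next), here is a minimal numpy sketch, written by us rather than taken from Guo et al., of the central `robust projection' step just described: a non-English word embedding is the alignment-probability-weighted average of the English embeddings it aligns to. The toy vocabulary and alignment probabilities are invented for illustration, and the edit-distance fallback for unaligned words is only stubbed out.

```python
# Minimal sketch (not Guo et al.'s code) of the alignment-weighted averaging
# step in `robust projection': a non-English word embedding is the weighted
# average of the English embeddings it is aligned to.
import numpy as np

# Hypothetical inputs: pretrained English embeddings and alignment probabilities
# p(en_word | foreign_word) estimated from a parallel corpus.
en_emb = {"dog": np.array([0.9, 0.1]), "hound": np.array([0.8, 0.3])}
align_prob = {"hund": {"dog": 0.7, "hound": 0.3}}   # German "Hund"

def project(foreign_word, align_prob, en_emb, dim=2):
    """Weighted average of aligned English embeddings (step 2 of the method)."""
    weights = align_prob.get(foreign_word, {})
    if not weights:
        return None  # handled by the edit-distance fallback (step 3), omitted here
    vec = np.zeros(dim)
    total = 0.0
    for en_word, p in weights.items():
        if en_word in en_emb:
            vec += p * en_emb[en_word]
            total += p
    return vec / total if total > 0 else None

print(project("hund", align_prob, en_emb))  # [0.87 0.16]
```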
`MultiCCA' uses a linear operator to project pretrained monolingual embeddings in each language (except English) to the vector space of pretrained English word embeddings, while `multiCluster' uses the same embedding for translationally-equivalent words in different languages. The results in Table 6 illustrate that the three methods perform similarly on this task. Language Embeddings While many languages, especially ones that belong to the same family, exhibit some similar syntactic phenomena (e.g., all languages have subjects, verbs, and objects), substantial syntactic differences abound. Some of these differences are easy to characterize (e.g., subject-verb-object vs. verb-subject-object, prepositions vs. postpositions, adjective-noun vs. noun-adjective), while others are subtle (e.g., number and positions of negation morphemes). It is not at all clear how to translate descriptive facts about a language's syntax into features for a parser. Consequently, training a language-universal parser on treebanks in multiple source languages requires caution. While exposing the parser to a diverse set of syntactic patterns across many languages has the potential to improve its performance in each, dependency annotations in one language will, in some ways, contradict those in typologically different languages. For instance, consider a context where the next word on the buffer is a noun, and the top word on the stack is an adjective, followed by a noun. Treebanks of languages where postpositive adjectives are typical (e.g., French) will often teach the parser to predict reduce-left, while those of languages where prepositive adjectives are more typical (e.g., English) will teach the parser to predict shift. Inspired by naseem:12, we address this problem by informing the parser about the input language it is currently parsing. Let $\mathbf {l}$ be the input vector representation of a particular language. We consider three definitions for $\mathbf {l}$ : one-hot encoding of the language ID, one-hot encoding of individual word-order properties, and averaged one-hot encoding of WALS typological properties (including word-order properties). It is worth noting that the first definition (language ID) turns out to work best in our experiments. We use a hidden layer with $\tanh $ nonlinearity to compute the language embedding $\mathbf {l^{\prime }}$ as: $$\mathbf {l^{\prime }} = \tanh (\mathbf {L l + L_{\text{bias}}}) \nonumber $$ (Eq. 43) where the matrix $\mathbf {L}$ and the vector $\mathbf {L_{\text{bias}}}$ are additional model parameters. We modify the parsing architecture as follows: include $\mathbf {l^{\prime }}$ in the token representation (which feeds into the stack-LSTM modules of the buffer and the stack as described in § "Transition-based Parsing with S-LSTMs" ), include $\mathbf {l^{\prime }}$ in the action vector representation (which feeds into the stack-LSTM module that represents previous actions as described in § "Transition-based Parsing with S-LSTMs" ), and redefine the parser state at time $t$ as $\mathbf {p}_t = \max \left\lbrace 0, \mathbf {W}[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t; \mathbf {l^{\prime }}] + \mathbf {W}_{\text{bias}}\right\rbrace $ . Intuitively, the first two modifications allow the input language to influence the vector representation of the stack, the buffer and the list of actions. The third modification allows the input language to influence the parser state which in turn is used to predict the next action. 
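To make the language-embedding computation concrete, the following sketch (with placeholder dimensions and randomly initialized parameters, not the released parser) evaluates $\mathbf {l^{\prime }} = \tanh (\mathbf {L l + L_{\text{bias}}})$ for a one-hot language ID and concatenates it onto a token vector.

```python
# Minimal numpy sketch (ours, not the released parser) of the language
# embedding l' = tanh(L l + L_bias) for a one-hot language-ID input, and its
# concatenation into a token representation.
import numpy as np

rng = np.random.default_rng(0)
languages = ["de", "en", "es", "fr", "it", "pt", "sv"]
n_lang, lang_dim, tok_dim = len(languages), 12, 50   # dimensions are illustrative

L = rng.normal(scale=0.1, size=(lang_dim, n_lang))   # learned parameters
L_bias = np.zeros(lang_dim)

def language_embedding(lang: str) -> np.ndarray:
    l = np.zeros(n_lang)
    l[languages.index(lang)] = 1.0                   # one-hot language ID
    return np.tanh(L @ l + L_bias)                   # l' = tanh(L l + L_bias)

token_vec = rng.normal(size=tok_dim)                 # stands in for word/POS/cluster features
token_with_lang = np.concatenate([token_vec, language_embedding("fr")])
print(token_with_lang.shape)                         # (62,)
```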
In preliminary experiments, we found that adding the language embeddings at the token and action level is important. We also experimented with computing more complex functions of ( $\mathbf {s}_t, \mathbf {b}_t, \mathbf {a}_t, \mathbf {l^{\prime }}$ ) to define the parser state, but they did not help. Fine-grained POS Tag Embeddings tiedemann:15 shows that omitting fine-grained POS tags significantly hurts the performance of a dependency parser. However, those fine-grained POS tagsets are defined monolingually and are only available for a subset of the languages with universal dependency treebanks. We extend the token representation to include a fine-grained POS embedding (in addition to the coarse POS embedding). We stochastically dropout the fine-grained POS embedding for each token with 50% probability BIBREF31 so that the parser can make use of fine-grained POS tags when available but stay reliable when the fine-grained POS tags are missing. Predicting POS Tags The model discussed thus far conditions on the POS tags of words in the input sentence. However, gold POS tags may not be available in real applications (e.g., parsing the web). Here, we describe two modifications to (i) model both POS tagging and dependency parsing, and (ii) increase the robustness of the parser to incorrect POS predictions. Let $x_1, \ldots , x_n$ , $y_1,\ldots , y_n$ , $z_1, \ldots , z_{2n}$ be the sequence of words, POS tags, and parsing actions, respectively, for a sentence of length $n$ . We define the joint distribution of a POS tag sequence and parsing actions given a sequence of words as follows: $$p&(y_1,\ldots , y_n, z_1, \ldots ,z_{2n} \mid x_1,\ldots ,x_n) = \nonumber \\ &\prod _{i=1}^{n} p(y_i \mid x_1,\ldots ,x_n) \nonumber \\ \times & \prod _{j=1}^{2n} p(z_j \mid x_1, \ldots , x_n, y_1, \ldots , y_n, z_1, \ldots , z_{j-1}) \nonumber $$ (Eq. 50) where $p(z_j \mid \ldots )$ is defined in Eq. 26 , and $p(y_i \mid x_1, \ldots , x_n)$ uses a bidirectional LSTM BIBREF24 . huang:15 show that the performance of a bidirectional LSTM POS tagger is on par with a conditional random field tagger. We use slightly different token representations for tagging and parsing in the same model. For tagging, we construct the token representation by concatenating the embeddings of the word type (pretrained), the Brown cluster and the input language. This token representation feeds into the bidirectional LSTM, followed by a softmax layer (at each position) which defines a categorical distribution over possible POS tags. For parsing, we construct the token representation by further concatenating the embeddings of predicted POS tags. This token representation feeds into the stack-LSTM modules of the buffer and stack components of the transition-based parser. This multi-task learning setup enables us to predict both POS tags and dependency trees in the same model. We note that pretrained word embeddings, cluster embeddings and language embeddings are shared for tagging and parsing. We use an independently developed variant of word dropout BIBREF32 , which we call block dropout. The token representation used for parsing includes the embedding of predicted POS tags, which may be incorrect. We introduce another modification which makes the parser more robust to incorrect POS tag predictions, by stochastically zeroing out the entire embedding of the POS tag. 
While training the parser, we replace the POS embedding vector $\mathbf {e}$ with another vector (of the same dimensionality) stochastically computed as: $\mathbf {e^{\prime }} = (1-b)/\mu \times \mathbf {e}$ , where $b \in \lbrace 0,1\rbrace $ is a Bernoulli-distributed random variable with parameter $\mu $ which is initialized to 1.0 (i.e., always dropout, setting $b=1, \mathbf {e^{\prime }} = 0$ ), and is dynamically updated to match the error rate of the POS tagger on the development set. At test time, we never dropout the predicted POS embedding, i.e., $\mathbf {e^{\prime }}=\mathbf {e}$ . Intuitively, this method extends the dropout method BIBREF31 to address structured noise in the input layer. Experiments In this section, we evaluate the MaLOPa approach in three data scenarios: when the target language has a large treebank (Table 3 ), a small treebank (Table 7 ) or no treebank (Table 8 ). Target Languages with a Treebank (L t =L s L^t = L^s) Here, we evaluate our MaLOPa parser when the target language has a treebank. For each target language, the strong baseline we use is a monolingually-trained S-LSTM parser with a token representation which concatenates: pretrained word embeddings (50 dimensions), learned word embeddings (50 dimensions), coarse (universal) POS tag embeddings (12 dimensions), fine-grained (language-specific, when available) POS tag embeddings (12 dimensions), and embeddings of Brown clusters (12 dimensions), and uses a two-layer S-LSTM for each of the stack, the buffer and the list of actions. We independently train one baseline parser for each target language, and share no model parameters. This baseline, denoted `monolingual' in Tables 3 and 7 , achieves UAS score 93.0 and LAS score 91.5 when trained on the English Penn Treebank, which is comparable to dyer:15. We train MaLOPa on the concantenation of training sections of all seven languages. To balance the development set, we only concatenate the first 300 sentences of each language's development section. The first MaLOPa parser we evaluate uses only coarse POS embeddings to construct the token representation. As shown in Table 3 , this parser consistently underperforms the monolingual baselines, with a gap of 12.5 LAS points on average. Augmenting the token representation with lexical embeddings to the token representation (both multilingual word clusters and pretrained multilingual word embeddings, as described in § "Lexical Embeddings" ) substantially improves the performance of MaLOPa, recovering 83% of the gap in average performance. We experimented with three ways to include language information in the token representation, namely: `language ID', `word order' and `full typology' (see § "Language Embeddings" for details), and found all three to improve the performance of MaLOPa giving LAS scores 83.5, 83.2 and 82.5, respectively. It is noteworthy that the model benefits more from language ID than from typological properties. Using `language ID,' we recover another 12% of the original gap. Finally, the best configuration of MaLOPa adds fine-grained POS embeddings to the token representation. Surprisingly, adding fine-grained POS embeddings improves the performance even for some languages where fine-grained POS tags are not available (e.g., Spanish). This parser outperforms the monolingual baseline in five out of seven target languages, and wins on average by 0.3 LAS points. 
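Returning to the block dropout rule defined at the start of this passage, the sketch below implements the formula exactly as stated ($\mathbf {e^{\prime }} = (1-b)/\mu \times \mathbf {e}$ with $b$ Bernoulli-distributed with parameter $\mu $, and no dropout at test time). It is our own illustration, not the authors' code, and the value of $\mu $ is only an example.

```python
# Minimal sketch of the block dropout rule stated above: during training the
# POS embedding e is replaced by e' = (1 - b) / mu * e with b ~ Bernoulli(mu);
# at test time e' = e. This follows the formula as written; it is not the
# authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def block_dropout(e: np.ndarray, mu: float, training: bool) -> np.ndarray:
    if not training or mu <= 0.0:
        return e                                  # never drop at test time
    b = rng.binomial(1, mu)                       # b = 1 -> zero out the whole block
    return (1.0 - b) / mu * e

pos_embedding = np.ones(12)
mu = 0.067                                        # e.g., matched to a ~6.7% tagger error rate
print(block_dropout(pos_embedding, mu, training=True)[:3])
print(block_dropout(pos_embedding, mu, training=False)[:3])   # unchanged: [1. 1. 1.]
```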
We emphasize that this model is only trained once on all languages, and the same model is used to parse the test set of each language, which simplifies the distribution or deployment of multilingual parsing software. To gain a better understanding of the model behavior, we analyze certain classes of dependency attachments/relations in German, which has notably flexible word order, in Table 4 . We consider the recall of left attachments (where the head word precedes the dependent word in the sentence), right attachments, root attachments, short-attachments (with distance $=1$ ), long-attachments (with distance $>6$ ), as well as the following relation groups: nsubj* (nominal subjects: nsubj, nsubjpass), dobj (direct object: dobj), conj (conjunct: conj), *comp (clausal complements: ccomp, xcomp), case (clitics and adpositions: case), *mod (modifiers of a noun: nmod, nummod, amod, appos), neg (negation modifier: neg). We found that each of the three improvements (lexical embeddings, language embeddings and fine-grained POS embeddings) tends to improve recall for most classes. MaLOPa underperforms (compared to the monolingual baseline) in some classes: nominal subjects, direct objects and modifiers of a noun. Nevertheless, MaLOPa outperforms the baseline in some important classes such as: root, long attachments and conjunctions. In Table 3 , we assume that both gold language ID of the input language and gold POS tags are given at test time. However, this assumption is not realistic in practical applications. Here, we quantify the degradation in parsing accuracy when language ID and POS tags are only given at training time, but must be predicted at test time. We do not use fine-grained POS tags in these experiments because some languages use a very large fine-grained POS tag set (e.g., 866 unique tags in Portuguese). In order to predict language ID, we use the langid.py library BIBREF34 and classify individual sentences in the test sets to one of the seven languages of interest, using the default models included in the library. The macro average language ID prediction accuracy on the test set across sentences is 94.7%. In order to predict POS tags, we use the model described in § "Predicting POS Tags" with both input and hidden LSTM dimensions of 60, and with block dropout. The macro average accuracy of the POS tagger is 93.3%. Table 5 summarizes the four configurations: {gold language ID, predicted language ID} $\times $ {gold POS tags, predicted POS tags}. The performance of the parser suffers mildly (–0.8 LAS points) when using predicted language IDs, but more (–5.1 LAS points) when using predicted POS tags. As an alternative approach to predicting POS tags, we trained the Stanford POS tagger, for each target language, on the coarse POS tag annotations in the training section of the universal dependency treebanks, then replaced the gold POS tags in the test set of each language with predictions of the monolingual tagger. The resulting degradation in parsing performance between gold vs. predicted POS tags is –6.0 LAS points (on average, compared to a degradation of –5.1 LAS points in Table 5 ). The disparity in parsing results with gold vs. predicted POS tags is an important open problem, and has been previously discussed by tiedemann:15. The predicted POS results in Table 5 use block dropout. Without using block dropout, we lose an extra 0.2 LAS points in both configurations using predicted POS tags. Several methods have been proposed for pretraining multilingual word embeddings. 
We compare three of them: multiCCA BIBREF23 uses a linear operator to project pretrained monolingual embeddings in each language (except English) to the vector space of pretrained English word embeddings. multiCluster BIBREF23 uses the same embedding for translationally-equivalent words in different languages. robust projection BIBREF29 first pretrains monolingual English word embeddings, then defines the embedding of a non-English word as the weighted average embedding of English words aligned to the non-English words (in a parallel corpus). The embedding of a non-English word which is not aligned to any English words is defined as the average embedding of words with a unit edit distance in the same language (e.g., `playz' is the average of `plays' and `play'). All embeddings are trained on the same data and use the same number of dimensions (100). Table 6 illustrates that the three methods perform similarly on this task. Aside from Table 6 , in this paper, we exclusively use the robust projection multilingual embeddings trained in guo:16. The “robust projection” result in Table 6 (which uses 100 dimensions) is comparable to the last row in Table 3 (which uses 50 dimensions). duong:15 considered a setup where the target language has a small treebank of $\sim $ 3,000 tokens, and the source language (English) has a large treebank of $\sim $ 205,000 tokens. The parser proposed in duong:15 is a neural network parser based on chen:14, which shares most of the parameters between English and the target language, and uses an $\ell _2$ regularizer to tie the lexical embeddings of translationally-equivalent words. While not the primary focus of this paper, we compare our proposed method to that of duong:15 on five target languages for which multilingual Brown clusters are available from guo:16. For each target language, we train the parser on the English training data in the UD version 1.0 corpus BIBREF6 and a small treebank in the target language. Following duong:15, in this setup, we only use gold coarse POS tags, we do not use any development data in the target languages (we use the English development set instead), and we subsample the English training data in each epoch to the same number of sentences in the target language. We use the same hyperparameters specified before for the single MaLOPa parser and each of the monolingual baselines. Table 7 shows that our method outperforms duong:15 by 1.4 LAS points on average. Our method consistently outperforms the monolingual baselines in this setup, with an average improvement of 5.7 absolute LAS points. Target Languages without a Treebank (L t ∩L s =∅L^t \cap L^s = \emptyset ) mcdonald:11 established that, when no treebank annotations are available in the target language, training on multiple source languages outperforms training on one (i.e., multi-source model transfer outperforms single-source model transfer). In this section, we evaluate the performance of our parser in this setup. We use two strong baseline multi-source model transfer parsers with no supervision in the target language: zhang:15 is a graph-based arc-factored parsing model with a tensor-based scoring function. It takes typological properties of a language as input. We compare to the best reported configuration (i.e., the column titled “OURS” in Table 5 of Zhang and Barzilay, 2015). guo:16 is a transition-based neural-network parsing model based on chen:14. It uses a multilingual embeddings and Brown clusters as lexical features. 
We compare to the best reported configuration (i.e., the column titled “MULTI-PROJ” in Table 1 of Guo et al., 2016). Following guo:16, for each target language, we train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags. Our parser uses the same word embeddings and word clusters used in guo:16, and does not use any typology information. The results in Table 8 show that, on average, our parser outperforms both baselines by more than 1 point in LAS, and gives the best LAS results in four (out of six) languages. Related Work Our work builds on the model transfer approach, which was pioneered by zeman:08 who trained a parser on a source language treebank then applied it to parse sentences in a target language. cohen:11 and mcdonald:11 trained unlexicalized parsers on treebanks of multiple source languages and applied the parser to different languages. naseem:12, tackstrom:13, and zhang:15 used language typology to improve model transfer. To add lexical information, tackstrom:12 used multilingual word clusters, while xiao:14, guo:15, sogaard:15 and guo:16 used multilingual word embeddings. duong:15 used a neural network based model, sharing most of the parameters between two languages, and used an $\ell _2$ regularizer to tie the lexical embeddings of translationally-equivalent words. We incorporate these ideas in our framework, while proposing a novel neural architecture for embedding language typology (see § "Language Embeddings" ), and use a variant of word dropout BIBREF32 for consuming noisy structured inputs. We also show how to replace an array of monolingually trained parsers with one multilingually-trained parser without sacrificing accuracy, which is related to vilares:16. Neural network parsing models which preceded dyer:15 include henderson:03, titov:07, henderson:10 and chen:14. Related to lexical features in cross-lingual parsing is durrett:12 who defined lexico-syntactic features based on bilingual lexicons. Other related work include ostling:15, which may be used to induce more useful typological properties to inform multilingual parsing. Another popular approach for cross-lingual supervision is to project annotations from the source language to the target language via a parallel corpus BIBREF35 , BIBREF36 or via automatically-translated sentences BIBREF37 . ma:14 used entropy regularization to learn from both parallel data (with projected annotations) and unlabeled data in the target language. rasooli:15 trained an array of target-language parsers on fully annotated trees, by iteratively decoding sentences in the target language with incomplete annotations. One research direction worth pursuing is to find synergies between the model transfer approach and annotation projection approach. Conclusion We presented MaLOPa, a single parser trained on a multilingual set of treebanks. We showed that this parser, equipped with language embeddings and fine-grained POS embeddings, on average outperforms monolingually-trained parsers for target languages with a treebank. This pattern of results is quite encouraging. Although languages may share underlying syntactic properties, individual parsing models must behave quite differently, and our model allows this while sharing parameters across languages. 
The value of this sharing is more pronounced in scenarios where the target language's training treebank is small or non-existent, where our parser outperforms previous cross-lingual multi-source model transfer methods. Acknowledgments Waleed Ammar is supported by the Google fellowship in natural language processing. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA). Part of this material is based upon work supported by a subcontract with Raytheon BBN Technologies Corp. under DARPA Prime Contract No. HR0011-15-C-0013, and part of this research was supported by a Google research award to Noah Smith. We thank Jiang Guo for sharing the multilingual word embeddings and multilingual word clusters. We thank Lori Levin, Ryan McDonald, Jörg Tiedemann, Yulia Tsvetkov, and Yuan Zhang for helpful discussions. Last but not least, we thank the anonymous TACL reviewers for their valuable feedback.
train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags
7065e6140dbaffadebe62c9c9d3863ca0f829d52
7065e6140dbaffadebe62c9c9d3863ca0f829d52_0
Q: How many languages have this parser been tried on? Text: Introduction Developing tools for processing many languages has long been an important goal in NLP BIBREF0 , BIBREF1 , but it was only when statistical methods became standard that massively multilingual NLP became economical. The mainstream approach for multilingual NLP is to design language-specific models. For each language of interest, the resources necessary for training the model are obtained (or created), and separate parameters are fit for each language separately. This approach is simple and grants the flexibility of customizing the model and features to the needs of each language, but it is suboptimal for theoretical and practical reasons. Theoretically, the study of linguistic typology tells us that many languages share morphological, phonological, and syntactic phenomena BIBREF3 ; therefore, the mainstream approach misses an opportunity to exploit relevant supervision from typologically related languages. Practically, it is inconvenient to deploy or distribute NLP tools that are customized for many different languages because, for each language of interest, we need to configure, train, tune, monitor, and occasionally update the model. Furthermore, code-switching or code-mixing (mixing more than one language in the same discourse), which is pervasive in some genres, in particular social media, presents a challenge for monolingually-trained NLP models BIBREF4 . In parsing, the availability of homogeneous syntactic dependency annotations in many languages BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 has created an opportunity to develop a parser that is capable of parsing sentences in multiple languages, addressing these theoretical and practical concerns. A multilingual parser can potentially replace an array of language-specific monolingually-trained parsers (for languages with a large treebank). The same approach has been used in low-resource scenarios (with no treebank or a small treebank in the target language), where indirect supervision from auxiliary languages improves the parsing quality BIBREF11 , BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , but these models may sacrifice accuracy on source languages with a large treebank. In this paper, we describe a model that works well for both low-resource and high-resource scenarios. We propose a parsing architecture that takes as input sentences in several languages, optionally predicting the part-of-speech (POS) tags and input language. The parser is trained on the union of available universal dependency annotations in different languages. Our approach integrates and critically relies on several recent developments related to dependency parsing: universal POS tagsets BIBREF17 , cross-lingual word clusters BIBREF18 , selective sharing BIBREF19 , universal dependency annotations BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , advances in neural network architectures BIBREF20 , BIBREF21 , and multilingual word embeddings BIBREF22 , BIBREF16 , BIBREF23 . We show that our parser compares favorably to strong baselines trained on the same treebanks in three data scenarios: when the target language has a large treebank (Table 3 ), a small treebank (Table 7 ), or no treebank (Table 8 ). Our parser is publicly available. Overview Our goal is to train a dependency parser for a set of target languages ${L}^t$ , given universal dependency annotations in a set of source languages ${L}^s$ . 
Ideally, we would like to have training data in all target languages (i.e., $L^t \subseteq L^s$ ), but we are also interested in the case where the sets of source and target languages are disjoint (i.e., $L^t \cap L^s = \emptyset $ ). When all languages in $L^t$ have a large treebank, the mainstream approach has been to train one monolingual parser per target language and route sentences of a given language to the corresponding parser at test time. In contrast, our approach is to train one parsing model with the union of treebanks in $L^s$ , then use this single trained model to parse text in any language in $L^t$ , hence the name “Many Languages, One Parser” (MaLOPa). MaLOPa strikes a balance between: (1) enabling cross-lingual model transfer via language-invariant input representations; i.e., coarse POS tags, multilingual word embeddings and multilingual word clusters, and (2) tweaking the behavior of the parser depending on the current input language via language-specific representations; i.e., fine-grained POS tags and language embeddings. In addition to universal dependency annotations in source languages (see Table 1 ), we use the following data resources for each language in ${L} = {L}^t \cup {L}^s$ : Novel contributions of this paper include: (i) using one parser instead of an array of monolingually-trained parsers without sacrificing accuracy on languages with a large treebank, (ii) an effective neural network architecture for using language embeddings to improve multilingual parsing, and (iii) a study of how automatic language identification affects the performance of a multilingual dependency parser. While not the primary focus of this paper, we also show that a variant of our parser outperforms previous work on multi-source cross-lingual parsing in low resource scenarios, where languages in $L^t$ have a small treebank (see Table 7 ) or where $L^t \cap L^s = \emptyset $ (see Table 8 ). In the small treebank setup with 3,000 token annotations, we show that our parser consistently outperforms a strong monolingual baseline with 5.7 absolute LAS (labeled attachment score) points per language, on average. Parsing Model Recent advances suggest that recurrent neural networks, especially long short-term memory (LSTM) architectures, are capable of learning useful representations for modeling problems of sequential nature BIBREF24 , BIBREF25 . In this section, we describe our language-universal parser, which extends the stack LSTM (S-LSTM) parser of dyer:15. Transition-based Parsing with S-LSTMs This section briefly reviews Dyer et al.'s S-LSTM parser, which we modify in the following sections. The core parser can be understood as the sequential manipulation of three data structures: a buffer (from which we read the token sequence), a stack (which contains partially-built parse trees), and a list of actions previously taken by the parser. The parser uses the arc-standard transition system BIBREF26 . At each timestep $t$ , a transition action is applied that alters these data structures according to Table 2 . Along with the discrete transitions of the arc-standard system, the parser computes vector representations for the buffer, stack and list of actions at time step $t$ denoted $\mathbf {b}_t$ , $\mathbf {s}_t$ , and $\mathbf {a}_t$ , respectively. The parser state at time $t$ is given by: $$\mathbf {p}_t = \max \left\lbrace 0, \mathbf {W}[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t] + \mathbf {W}_{\text{bias}}\right\rbrace $$ (Eq. 
24) where the matrix $\mathbf {W}$ and the vector $\mathbf {W}_{\text{bias}}$ are learned parameters. The matrix $\mathbf {W}$ is multiplied by the vector $[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t]$ created by the concatenation of $\mathbf {s}_t, \mathbf {b}_t, \mathbf {a}_t$ . The parser state $\mathbf {p}_t$ is then used to define a categorical distribution over possible next actions $z$ : $$p(z \mid \mathbf {p}_t) = \frac{\exp \left( \mathbf {g}_{z}^{\top } \mathbf {p}_t + q_{z} \right)}{\sum _{z^{\prime }} \exp \left( \mathbf {g}_{z^{\prime }}^{\top } \mathbf {p}_t + q_{z^{\prime }} \right)}$$ (Eq. 26) where $\mathbf {g}_z$ and $q_z$ are parameters associated with action $z$ . The selected action is then used to update the buffer, stack and list of actions, and to compute $\mathbf {b}_{t+1}$ , $\mathbf {s}_{t+1}$ and $\mathbf {a}_{t+1}$ accordingly. The model is trained to maximize the log-likelihood of correct actions. At test time, the parser greedily chooses the most probable action in every time step until a complete parse tree is produced. The following sections describe our extensions of the core parser. More details about the core parser can be found in dyer:15. Token Representations The vector representations of input tokens feed into the stack-LSTM modules of the buffer and the stack. For monolingual parsing, we represent each token by concatenating the following vectors: a fixed, pretrained embedding of the word type, a learned embedding of the word type, a learned embedding of the Brown cluster, a learned embedding of the fine-grained POS tag, a learned embedding of the coarse POS tag. For multilingual parsing with MaLOPa, we start with a simple delexicalized model where the token representation only consists of learned embeddings of coarse POS tags, which are shared across all languages to enable model transfer. In the following subsections, we enhance the token representation in MaLOPa to include lexical embeddings, language embeddings, and fine-grained POS embeddings. Lexical Embeddings Previous work has shown that sacrificing lexical features amounts to a substantial decrease in the performance of a dependency parser BIBREF11 , BIBREF18 , BIBREF28 , BIBREF29 . Therefore, we extend the token representation in MaLOPa by concatenating learned embeddings of multilingual word clusters, and pretrained multilingual embeddings of word types. Before training the parser, we estimate Brown clusters of English words and project them via word alignments to words in other languages. This is similar to the `projected clusters' method in tackstrom:12. To go from Brown clusters to embeddings, we ignore the hierarchy within Brown clusters and assign a unique parameter vector to each cluster. We also use Guo et al.'s (2016) `robust projection' method to pretrain multilingual word embeddings. The first step in `robust projection' is to learn embeddings for English words using the skip-gram model BIBREF30 . Then, we compute an embedding of non-English words as the weighted average of English word embeddings, using word alignment probabilities as weights. The last step computes an embedding of non-English words which are not aligned to any English words by averaging the embeddings of all words within an edit distance of 1 in the same language. We experiment with two other methods—`multiCCA' and `multiCluster,' both proposed by ammar:16—for pretraining multilingual word embeddings in § "Target Languages with a Treebank (L t =L s L^t = L^s)" . 
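As an aside before the two alternative embedding methods are described, the sketch below shows one simple way the Brown-cluster projection mentioned above could be realized: each target-language word inherits the cluster of the English word it is most frequently aligned to. The cluster bit-strings and alignment counts are invented, and the actual projection of tackstrom:12 may differ, e.g., in how alignments are weighted.

```python
# Minimal sketch (ours; the actual projection may differ) of projecting English
# Brown clusters to another language via word alignments: each target word is
# assigned the cluster of the English word it is most frequently aligned to.
from collections import Counter, defaultdict

en_cluster = {"the": "0110", "dog": "1011", "cat": "1010"}      # hypothetical Brown clusters

# Hypothetical word-alignment counts from a parallel corpus: (foreign, english) -> count
align_counts = {("der", "the"): 90, ("hund", "dog"): 40, ("hund", "cat"): 3}

def project_clusters(align_counts, en_cluster):
    votes = defaultdict(Counter)
    for (foreign, english), count in align_counts.items():
        if english in en_cluster:
            votes[foreign][en_cluster[english]] += count
    return {foreign: counter.most_common(1)[0][0] for foreign, counter in votes.items()}

print(project_clusters(align_counts, en_cluster))
# {'der': '0110', 'hund': '1011'}
```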
`MultiCCA' uses a linear operator to project pretrained monolingual embeddings in each language (except English) to the vector space of pretrained English word embeddings, while `multiCluster' uses the same embedding for translationally-equivalent words in different languages. The results in Table 6 illustrate that the three methods perform similarly on this task. Language Embeddings While many languages, especially ones that belong to the same family, exhibit some similar syntactic phenomena (e.g., all languages have subjects, verbs, and objects), substantial syntactic differences abound. Some of these differences are easy to characterize (e.g., subject-verb-object vs. verb-subject-object, prepositions vs. postpositions, adjective-noun vs. noun-adjective), while others are subtle (e.g., number and positions of negation morphemes). It is not at all clear how to translate descriptive facts about a language's syntax into features for a parser. Consequently, training a language-universal parser on treebanks in multiple source languages requires caution. While exposing the parser to a diverse set of syntactic patterns across many languages has the potential to improve its performance in each, dependency annotations in one language will, in some ways, contradict those in typologically different languages. For instance, consider a context where the next word on the buffer is a noun, and the top word on the stack is an adjective, followed by a noun. Treebanks of languages where postpositive adjectives are typical (e.g., French) will often teach the parser to predict reduce-left, while those of languages where prepositive adjectives are more typical (e.g., English) will teach the parser to predict shift. Inspired by naseem:12, we address this problem by informing the parser about the input language it is currently parsing. Let $\mathbf {l}$ be the input vector representation of a particular language. We consider three definitions for $\mathbf {l}$ : one-hot encoding of the language ID, one-hot encoding of individual word-order properties, and averaged one-hot encoding of WALS typological properties (including word-order properties). It is worth noting that the first definition (language ID) turns out to work best in our experiments. We use a hidden layer with $\tanh $ nonlinearity to compute the language embedding $\mathbf {l^{\prime }}$ as: $$\mathbf {l^{\prime }} = \tanh (\mathbf {L l + L_{\text{bias}}}) \nonumber $$ (Eq. 43) where the matrix $\mathbf {L}$ and the vector $\mathbf {L_{\text{bias}}}$ are additional model parameters. We modify the parsing architecture as follows: include $\mathbf {l^{\prime }}$ in the token representation (which feeds into the stack-LSTM modules of the buffer and the stack as described in § "Transition-based Parsing with S-LSTMs" ), include $\mathbf {l^{\prime }}$ in the action vector representation (which feeds into the stack-LSTM module that represents previous actions as described in § "Transition-based Parsing with S-LSTMs" ), and redefine the parser state at time $t$ as $\mathbf {p}_t = \max \left\lbrace 0, \mathbf {W}[\mathbf {s}_t; \mathbf {b}_t; \mathbf {a}_t; \mathbf {l^{\prime }}] + \mathbf {W}_{\text{bias}}\right\rbrace $ . Intuitively, the first two modifications allow the input language to influence the vector representation of the stack, the buffer and the list of actions. The third modification allows the input language to influence the parser state which in turn is used to predict the next action. 
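The redefined parser state and the action distribution of Eq. 26 can be combined in a few lines. The numpy sketch below uses random placeholder parameters and illustrative dimensions, so it shows the computation rather than the trained model.

```python
# Minimal numpy sketch (placeholder parameters, not the trained parser) of the
# redefined parser state p_t = max(0, W [s_t; b_t; a_t; l'] + W_bias) and the
# action distribution p(z | p_t) = softmax(g_z^T p_t + q_z).
import numpy as np

rng = np.random.default_rng(1)
ds = db = da = 100               # stack / buffer / action S-LSTM output sizes (illustrative)
dl, dp, n_actions = 12, 100, 3   # language emb., parser state, {shift, reduce-left, reduce-right}

W = rng.normal(scale=0.05, size=(dp, ds + db + da + dl))
W_bias = np.zeros(dp)
G = rng.normal(scale=0.05, size=(n_actions, dp))   # rows are g_z
q = np.zeros(n_actions)

def parser_state(s_t, b_t, a_t, l_prime):
    x = np.concatenate([s_t, b_t, a_t, l_prime])
    return np.maximum(0.0, W @ x + W_bias)         # component-wise ReLU

def action_distribution(p_t):
    logits = G @ p_t + q
    logits -= logits.max()                         # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()

s_t, b_t, a_t = (rng.normal(size=d) for d in (ds, db, da))
l_prime = np.tanh(rng.normal(size=dl))             # stands in for tanh(L l + L_bias)
probs = action_distribution(parser_state(s_t, b_t, a_t, l_prime))
print(probs, probs.sum())                          # a valid distribution over 3 actions
```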
In preliminary experiments, we found that adding the language embeddings at the token and action level is important. We also experimented with computing more complex functions of ( $\mathbf {s}_t, \mathbf {b}_t, \mathbf {a}_t, \mathbf {l^{\prime }}$ ) to define the parser state, but they did not help. Fine-grained POS Tag Embeddings tiedemann:15 shows that omitting fine-grained POS tags significantly hurts the performance of a dependency parser. However, those fine-grained POS tagsets are defined monolingually and are only available for a subset of the languages with universal dependency treebanks. We extend the token representation to include a fine-grained POS embedding (in addition to the coarse POS embedding). We stochastically dropout the fine-grained POS embedding for each token with 50% probability BIBREF31 so that the parser can make use of fine-grained POS tags when available but stay reliable when the fine-grained POS tags are missing. Predicting POS Tags The model discussed thus far conditions on the POS tags of words in the input sentence. However, gold POS tags may not be available in real applications (e.g., parsing the web). Here, we describe two modifications to (i) model both POS tagging and dependency parsing, and (ii) increase the robustness of the parser to incorrect POS predictions. Let $x_1, \ldots , x_n$ , $y_1,\ldots , y_n$ , $z_1, \ldots , z_{2n}$ be the sequence of words, POS tags, and parsing actions, respectively, for a sentence of length $n$ . We define the joint distribution of a POS tag sequence and parsing actions given a sequence of words as follows: $$p&(y_1,\ldots , y_n, z_1, \ldots ,z_{2n} \mid x_1,\ldots ,x_n) = \nonumber \\ &\prod _{i=1}^{n} p(y_i \mid x_1,\ldots ,x_n) \nonumber \\ \times & \prod _{j=1}^{2n} p(z_j \mid x_1, \ldots , x_n, y_1, \ldots , y_n, z_1, \ldots , z_{j-1}) \nonumber $$ (Eq. 50) where $p(z_j \mid \ldots )$ is defined in Eq. 26 , and $p(y_i \mid x_1, \ldots , x_n)$ uses a bidirectional LSTM BIBREF24 . huang:15 show that the performance of a bidirectional LSTM POS tagger is on par with a conditional random field tagger. We use slightly different token representations for tagging and parsing in the same model. For tagging, we construct the token representation by concatenating the embeddings of the word type (pretrained), the Brown cluster and the input language. This token representation feeds into the bidirectional LSTM, followed by a softmax layer (at each position) which defines a categorical distribution over possible POS tags. For parsing, we construct the token representation by further concatenating the embeddings of predicted POS tags. This token representation feeds into the stack-LSTM modules of the buffer and stack components of the transition-based parser. This multi-task learning setup enables us to predict both POS tags and dependency trees in the same model. We note that pretrained word embeddings, cluster embeddings and language embeddings are shared for tagging and parsing. We use an independently developed variant of word dropout BIBREF32 , which we call block dropout. The token representation used for parsing includes the embedding of predicted POS tags, which may be incorrect. We introduce another modification which makes the parser more robust to incorrect POS tag predictions, by stochastically zeroing out the entire embedding of the POS tag. 
While training the parser, we replace the POS embedding vector $\mathbf {e}$ with another vector (of the same dimensionality) stochastically computed as: $\mathbf {e^{\prime }} = (1-b)/\mu \times \mathbf {e}$ , where $b \in \lbrace 0,1\rbrace $ is a Bernoulli-distributed random variable with parameter $\mu $ which is initialized to 1.0 (i.e., always dropout, setting $b=1, \mathbf {e^{\prime }} = 0$ ), and is dynamically updated to match the error rate of the POS tagger on the development set. At test time, we never dropout the predicted POS embedding, i.e., $\mathbf {e^{\prime }}=\mathbf {e}$ . Intuitively, this method extends the dropout method BIBREF31 to address structured noise in the input layer. Experiments In this section, we evaluate the MaLOPa approach in three data scenarios: when the target language has a large treebank (Table 3 ), a small treebank (Table 7 ) or no treebank (Table 8 ). Target Languages with a Treebank (L t =L s L^t = L^s) Here, we evaluate our MaLOPa parser when the target language has a treebank. For each target language, the strong baseline we use is a monolingually-trained S-LSTM parser with a token representation which concatenates: pretrained word embeddings (50 dimensions), learned word embeddings (50 dimensions), coarse (universal) POS tag embeddings (12 dimensions), fine-grained (language-specific, when available) POS tag embeddings (12 dimensions), and embeddings of Brown clusters (12 dimensions), and uses a two-layer S-LSTM for each of the stack, the buffer and the list of actions. We independently train one baseline parser for each target language, and share no model parameters. This baseline, denoted `monolingual' in Tables 3 and 7 , achieves UAS score 93.0 and LAS score 91.5 when trained on the English Penn Treebank, which is comparable to dyer:15. We train MaLOPa on the concantenation of training sections of all seven languages. To balance the development set, we only concatenate the first 300 sentences of each language's development section. The first MaLOPa parser we evaluate uses only coarse POS embeddings to construct the token representation. As shown in Table 3 , this parser consistently underperforms the monolingual baselines, with a gap of 12.5 LAS points on average. Augmenting the token representation with lexical embeddings to the token representation (both multilingual word clusters and pretrained multilingual word embeddings, as described in § "Lexical Embeddings" ) substantially improves the performance of MaLOPa, recovering 83% of the gap in average performance. We experimented with three ways to include language information in the token representation, namely: `language ID', `word order' and `full typology' (see § "Language Embeddings" for details), and found all three to improve the performance of MaLOPa giving LAS scores 83.5, 83.2 and 82.5, respectively. It is noteworthy that the model benefits more from language ID than from typological properties. Using `language ID,' we recover another 12% of the original gap. Finally, the best configuration of MaLOPa adds fine-grained POS embeddings to the token representation. Surprisingly, adding fine-grained POS embeddings improves the performance even for some languages where fine-grained POS tags are not available (e.g., Spanish). This parser outperforms the monolingual baseline in five out of seven target languages, and wins on average by 0.3 LAS points. 
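As an implementation aside, the sketch below (ours, using the illustrative dimensions listed above) assembles a token representation by concatenating the component embeddings, dropping the fine-grained POS embedding with 50% probability during training as described earlier; a full MaLOPa token representation would additionally concatenate the language embedding and multilingual lexical features.

```python
# Minimal sketch (ours, with illustrative dimensions) of assembling a token
# representation by concatenating the embeddings described above; the
# fine-grained POS embedding is dropped with 50% probability during training.
import numpy as np

rng = np.random.default_rng(2)
DIMS = {"word_pretrained": 50, "word_learned": 50, "coarse_pos": 12,
        "fine_pos": 12, "brown_cluster": 12}

def token_representation(embs: dict, training: bool, fine_pos_dropout: float = 0.5):
    parts = []
    for name, dim in DIMS.items():
        vec = embs.get(name, np.zeros(dim))        # missing feature -> zero vector
        if name == "fine_pos" and training and rng.random() < fine_pos_dropout:
            vec = np.zeros(dim)                    # stochastically drop fine-grained POS
        parts.append(vec)
    return np.concatenate(parts)

embs = {name: rng.normal(size=dim) for name, dim in DIMS.items()}
print(token_representation(embs, training=True).shape)   # (136,)
```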
We emphasize that this model is only trained once on all languages, and the same model is used to parse the test set of each language, which simplifies the distribution or deployment of multilingual parsing software. To gain a better understanding of the model behavior, we analyze certain classes of dependency attachments/relations in German, which has notably flexible word order, in Table 4 . We consider the recall of left attachments (where the head word precedes the dependent word in the sentence), right attachments, root attachments, short-attachments (with distance $=1$ ), long-attachments (with distance $>6$ ), as well as the following relation groups: nsubj* (nominal subjects: nsubj, nsubjpass), dobj (direct object: dobj), conj (conjunct: conj), *comp (clausal complements: ccomp, xcomp), case (clitics and adpositions: case), *mod (modifiers of a noun: nmod, nummod, amod, appos), neg (negation modifier: neg). We found that each of the three improvements (lexical embeddings, language embeddings and fine-grained POS embeddings) tends to improve recall for most classes. MaLOPa underperforms (compared to the monolingual baseline) in some classes: nominal subjects, direct objects and modifiers of a noun. Nevertheless, MaLOPa outperforms the baseline in some important classes such as: root, long attachments and conjunctions. In Table 3 , we assume that both gold language ID of the input language and gold POS tags are given at test time. However, this assumption is not realistic in practical applications. Here, we quantify the degradation in parsing accuracy when language ID and POS tags are only given at training time, but must be predicted at test time. We do not use fine-grained POS tags in these experiments because some languages use a very large fine-grained POS tag set (e.g., 866 unique tags in Portuguese). In order to predict language ID, we use the langid.py library BIBREF34 and classify individual sentences in the test sets to one of the seven languages of interest, using the default models included in the library. The macro average language ID prediction accuracy on the test set across sentences is 94.7%. In order to predict POS tags, we use the model described in § "Predicting POS Tags" with both input and hidden LSTM dimensions of 60, and with block dropout. The macro average accuracy of the POS tagger is 93.3%. Table 5 summarizes the four configurations: {gold language ID, predicted language ID} $\times $ {gold POS tags, predicted POS tags}. The performance of the parser suffers mildly (–0.8 LAS points) when using predicted language IDs, but more (–5.1 LAS points) when using predicted POS tags. As an alternative approach to predicting POS tags, we trained the Stanford POS tagger, for each target language, on the coarse POS tag annotations in the training section of the universal dependency treebanks, then replaced the gold POS tags in the test set of each language with predictions of the monolingual tagger. The resulting degradation in parsing performance between gold vs. predicted POS tags is –6.0 LAS points (on average, compared to a degradation of –5.1 LAS points in Table 5 ). The disparity in parsing results with gold vs. predicted POS tags is an important open problem, and has been previously discussed by tiedemann:15. The predicted POS results in Table 5 use block dropout. Without using block dropout, we lose an extra 0.2 LAS points in both configurations using predicted POS tags. Several methods have been proposed for pretraining multilingual word embeddings. 
We compare three of them: multiCCA BIBREF23 uses a linear operator to project pretrained monolingual embeddings in each language (except English) to the vector space of pretrained English word embeddings. multiCluster BIBREF23 uses the same embedding for translationally-equivalent words in different languages. robust projection BIBREF29 first pretrains monolingual English word embeddings, then defines the embedding of a non-English word as the weighted average embedding of English words aligned to the non-English words (in a parallel corpus). The embedding of a non-English word which is not aligned to any English words is defined as the average embedding of words with a unit edit distance in the same language (e.g., `playz' is the average of `plays' and `play'). All embeddings are trained on the same data and use the same number of dimensions (100). Table 6 illustrates that the three methods perform similarly on this task. Aside from Table 6 , in this paper, we exclusively use the robust projection multilingual embeddings trained in guo:16. The “robust projection” result in Table 6 (which uses 100 dimensions) is comparable to the last row in Table 3 (which uses 50 dimensions). duong:15 considered a setup where the target language has a small treebank of $\sim $ 3,000 tokens, and the source language (English) has a large treebank of $\sim $ 205,000 tokens. The parser proposed in duong:15 is a neural network parser based on chen:14, which shares most of the parameters between English and the target language, and uses an $\ell _2$ regularizer to tie the lexical embeddings of translationally-equivalent words. While not the primary focus of this paper, we compare our proposed method to that of duong:15 on five target languages for which multilingual Brown clusters are available from guo:16. For each target language, we train the parser on the English training data in the UD version 1.0 corpus BIBREF6 and a small treebank in the target language. Following duong:15, in this setup, we only use gold coarse POS tags, we do not use any development data in the target languages (we use the English development set instead), and we subsample the English training data in each epoch to the same number of sentences in the target language. We use the same hyperparameters specified before for the single MaLOPa parser and each of the monolingual baselines. Table 7 shows that our method outperforms duong:15 by 1.4 LAS points on average. Our method consistently outperforms the monolingual baselines in this setup, with an average improvement of 5.7 absolute LAS points. Target Languages without a Treebank (L t ∩L s =∅L^t \cap L^s = \emptyset ) mcdonald:11 established that, when no treebank annotations are available in the target language, training on multiple source languages outperforms training on one (i.e., multi-source model transfer outperforms single-source model transfer). In this section, we evaluate the performance of our parser in this setup. We use two strong baseline multi-source model transfer parsers with no supervision in the target language: zhang:15 is a graph-based arc-factored parsing model with a tensor-based scoring function. It takes typological properties of a language as input. We compare to the best reported configuration (i.e., the column titled “OURS” in Table 5 of Zhang and Barzilay, 2015). guo:16 is a transition-based neural-network parsing model based on chen:14. It uses a multilingual embeddings and Brown clusters as lexical features. 
We compare to the best reported configuration (i.e., the column titled “MULTI-PROJ” in Table 1 of Guo et al., 2016). Following guo:16, for each target language, we train the parser on six other languages in the Google universal dependency treebanks version 2.0 (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse POS tags. Our parser uses the same word embeddings and word clusters used in guo:16, and does not use any typology information. The results in Table 8 show that, on average, our parser outperforms both baselines by more than 1 point in LAS, and gives the best LAS results in four (out of six) languages. Related Work Our work builds on the model transfer approach, which was pioneered by zeman:08 who trained a parser on a source language treebank then applied it to parse sentences in a target language. cohen:11 and mcdonald:11 trained unlexicalized parsers on treebanks of multiple source languages and applied the parser to different languages. naseem:12, tackstrom:13, and zhang:15 used language typology to improve model transfer. To add lexical information, tackstrom:12 used multilingual word clusters, while xiao:14, guo:15, sogaard:15 and guo:16 used multilingual word embeddings. duong:15 used a neural network based model, sharing most of the parameters between two languages, and used an $\ell _2$ regularizer to tie the lexical embeddings of translationally-equivalent words. We incorporate these ideas in our framework, while proposing a novel neural architecture for embedding language typology (see § "Language Embeddings" ), and use a variant of word dropout BIBREF32 for consuming noisy structured inputs. We also show how to replace an array of monolingually trained parsers with one multilingually-trained parser without sacrificing accuracy, which is related to vilares:16. Neural network parsing models which preceded dyer:15 include henderson:03, titov:07, henderson:10 and chen:14. Related to lexical features in cross-lingual parsing is durrett:12 who defined lexico-syntactic features based on bilingual lexicons. Other related work include ostling:15, which may be used to induce more useful typological properties to inform multilingual parsing. Another popular approach for cross-lingual supervision is to project annotations from the source language to the target language via a parallel corpus BIBREF35 , BIBREF36 or via automatically-translated sentences BIBREF37 . ma:14 used entropy regularization to learn from both parallel data (with projected annotations) and unlabeled data in the target language. rasooli:15 trained an array of target-language parsers on fully annotated trees, by iteratively decoding sentences in the target language with incomplete annotations. One research direction worth pursuing is to find synergies between the model transfer approach and annotation projection approach. Conclusion We presented MaLOPa, a single parser trained on a multilingual set of treebanks. We showed that this parser, equipped with language embeddings and fine-grained POS embeddings, on average outperforms monolingually-trained parsers for target languages with a treebank. This pattern of results is quite encouraging. Although languages may share underlying syntactic properties, individual parsing models must behave quite differently, and our model allows this while sharing parameters across languages. 
The value of this sharing is more pronounced in scenarios where the target language's training treebank is small or non-existent, where our parser outperforms previous cross-lingual multi-source model transfer methods. Acknowledgments Waleed Ammar is supported by the Google fellowship in natural language processing. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA). Part of this material is based upon work supported by a subcontract with Raytheon BBN Technologies Corp. under DARPA Prime Contract No. HR0011-15-C-0013, and part of this research was supported by a Google research award to Noah Smith. We thank Jiang Guo for sharing the multilingual word embeddings and multilingual word clusters. We thank Lori Levin, Ryan McDonald, Jörg Tiedemann, Yulia Tsvetkov, and Yuan Zhang for helpful discussions. Last but not least, we thank the anonymous TACL reviewers for their valuable feedback.
seven
9508e9ec675b6512854e830fa89fa6a747b520c5
9508e9ec675b6512854e830fa89fa6a747b520c5_0
Q: Do they use attention? Text: Introduction Natural Language Generation (NLG) is an NLP task that consists in generating a sequence of natural language sentences from non-linguistic data. Traditional approaches of NLG consist in creating specific algorithms in the consensual NLG pipeline BIBREF0, but there has been recently a strong interest in End-to-End (E2E) NLG systems which are able to jointly learn sentence planning and surface realization BIBREF1, BIBREF2, BIBREF3, BIBREF4. Probably the most well known effort of this trend is the E2E NLG challenge BIBREF5 whose task was to perform sentence planing and realization from dialogue act-based Meaning Representation (MR) on unaligned data. For instance, Figure FIGREF1 presents, on the upper part, a meaning representation and on the lower part, one possible textual realization to convey this meaning. Although the challenge was a great success, the data used in the challenge contained a lot of redundancy of structure and a limited amount of concepts and several reference texts per MR input (8.1 in average). This is an ideal case for machine learning but is it the one that is encountered in all E2E NLG real-world applications? In this work, we are interested in learning E2E models for real world applications in which there is a low amount of annotated data. Indeed, it is well known that neural approaches need a large amount of carefully annotated data to be able to induce NLP models. For the NLG task, that means that MR and (possibly many) reference texts must be paired together so that supervised learning is made possible. In NLG, such paired datasets are rare and remains tedious to acquire BIBREF5, BIBREF6, BIBREF7. On the contrary, large amount of unpaired meaning representations and texts can be available but cannot be exploited for supervised learning. In order to tackle this problem, we propose a semi-supervised learning approach which is able to benefit from unpaired (non-annotated) dataset which are much easier to acquire in real life applications. In an unpaired dataset, only the input data is assumed to be representative of the task. In such case, autoencoders can be used to learn an (often more compact) internal representation of the data. Monolingual word embeddings learning also benefit from unpaired data. However, none of these techniques are fit for the task of generating from a constrained MR representation. Hence, we extend the idea of autoencoder which is to regenerate the input sequence by using an NLG and an NLU models. To learn the NLG model, the input text is fed to the NLU model which in turn feeds the NLG model. The output of the NLG model is compared to the input and a loss can be computed. A similar strategy is applied for NLU. This approach brings several advantages: 1) the learning is performed from a large unpaired (non-annotated) dataset and a small amount of paired data to constrain the inner representation of the models to respect the format of the task (here MR and abstract text); 2) the architecture is completely differentiable which enables a fully joint learning; and 3) the two NLG and NLU models remain independent and can thus be applied to different tasks separately. The remaining of this paper gives some background about seq2seq models (Sec SECREF2) before introducing the joint learning approach (Sec SECREF3). Two benchmarks, described in Sec SECREF4, have been used to evaluate the method and whose results are presented in Sec SECREF5. 
The method is then positioned with respect to the state-of-the-art in Sec SECREF6 before providing some concluding remarks in Sec SECREF7. Background: E2E systems E2E Natural Language Generation systems are typically based on the Recurrent Neural Network (RNN) architecture consisting of an encoder and a decoder also known as seq2seq BIBREF8. The encoder takes a sequence of source words $\mathbf {x}~=~\lbrace {x_1},{x_2}, ..., {x_{T_x}}\rbrace $ and encodes it to a fixed length vector. The decoder then decodes this vector into a sequence of target words $\mathbf {y}~=~\lbrace {y_1},{y_2}, ..., {y_{T_y}}\rbrace $. Seq2seq models are able to treat variable sized source and target sequences making them a great choice for NLG and NLU tasks. More formally, in a seq2seq model, the recurrent unit of the encoder, at each time step $t$ receives an input word $x_t$ (in practice the embedding vector of the word) and a previous hidden state ${h_t-1}$ then generates a new hidden state $h_t$ using: where the function $f$ is an RNN unit such as Long Short-Term Memory (LSTM) BIBREF9 or Gated Recurrent Unit (GRU) BIBREF10. Once the encoder has treated the entire source sequence, the last hidden state ${h_{T_x}}$ is passed to the decoder. To generate the sequence of target words, the decoder also uses an RNN and computes, at each time step, a new hidden state $s_t$ from its previous hidden state $s_{t-1}$ and the previously generated word $y_{t-1}$. At training time, $y_{t-1}$ is the previous word in the target sequence (teacher-forcing). Lastly, the conditional probability of each target word $y_t$ is computed as follows: where $W$ and $b$ are a trainable parameters used to map the output to the same size as the target vocabulary and $c_t$ is the context vector obtained using the sum of hidden states in the encoder, weighted by its attention BIBREF11, BIBREF12. The context is computed as follow: Attention weights $\alpha _{i}^{t}$ are computed by applying a softmax function over a score calculated using the encoder and decoder hidden states: The choice of the score adopted in this papers is based on the dot attention mechanism introduced in BIBREF12. The attention mechanism helps the decoder to find relevant information on the encoder side based on the current decoder hidden state. Joint NLG/NLU learning scheme The joint NLG/NLU learning scheme is shown in Figure FIGREF7. It consists of two seq2seq models for NLG and NLU tasks. Both models can be trained separately on paired data. In that case, the NLG task is to predict the text $\hat{y}$ from the input MR $x$ while the NLU task is to predict the MR $\hat{x}$ from the input text $y$. On unpaired data, the two models are connected through two different loops. In the first case, when the unpaired input source is text, $y$ is provided to the NLU models which feeds the NLG model to produce $\hat{y}$. A loss is computed between $y$ and $\hat{y}$ (but not between $\hat{x}$ and $x$ since $x$ is unknown). In the second case, when the input is only MR, $x$ is provided to the NLG model which then feeds the NLU model and finally predicts $\hat{x}$. Similarly, a loss is computed between $x$ and $\hat{x}$ (but not between $\hat{y}$ and $y$ since $y$ is unknown). This section details these four steps and how the loss is backpropagated through the loops. Learning with Paired Data: The NLG model is a seq2seq model with attention as described in section SECREF2. It takes as input a MR and generates a natural language text. 
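Before turning to the training objective, the dot-attention step described in the background section above can be made concrete with a minimal PyTorch-style sketch. This is not the authors' code: tensor names and shapes are assumptions, and only the context-vector computation is shown.

```python
import torch
import torch.nn.functional as F

def dot_attention(decoder_state, encoder_states):
    """Dot-product attention over the encoder hidden states.

    decoder_state:  (batch, hidden)          current decoder hidden state s_t
    encoder_states: (batch, src_len, hidden) encoder hidden states h_1 .. h_Tx
    Returns the context vector c_t and the attention weights alpha_t.
    """
    # Score each encoder state by its dot product with the decoder state.
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    # Normalize the scores into attention weights with a softmax.
    alpha = F.softmax(scores, dim=1)
    # Context vector: attention-weighted sum of the encoder states.
    context = torch.bmm(alpha.unsqueeze(1), encoder_states).squeeze(1)         # (batch, hidden)
    return context, alpha
```

The resulting context vector $c_t$ is then combined with the decoder hidden state to produce the output distribution over the target vocabulary, as described above.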
The objective is to find the model parameters $\theta ^{nlg}$ such that they minimize the loss which is defined as follows: The NLU model is based on the same architecture but takes a natural language text and outputs a MR and its loss can be formulated as: Learning with Unpaired Data: When data are unpaired, there is also a loop connection between the two seq2seq models. This is achieved by feeding MR to the NLG model in order to generate a sequence of natural language text $\hat{y}$ by applying an argmax over the probability distribution at each time step ($\hat{y}_t = \mbox{argmax}P(y_t|\mathbf {x};\theta ^{nlg})$). This text is then fed back into the NLU model which in turn generates an MR. Finally, we compute the loss between the original MR and the reconstructed MR: The same can be applied in the opposite direction where we feed text to the NLU model and then the NLG model reconstructs back the text. This loss is given by: To perform joint learning, all four losses are summed together to provide the uniq loss $\mathcal {L}$ as follows: The weights $\alpha , \beta , \delta $ and $\gamma \in [0,1]$ are defined to fine tune the contribution of each task and data to the learning or to bias the learning towards one specific task. We show in the experiment section the impact of different settings. Since the loss functions in Equation DISPLAY_FORM8 and DISPLAY_FORM9 force the model to generate a sequence of words based on the target and the losses in Equation DISPLAY_FORM11 and DISPLAY_FORM10 force the model to reconstruct back the input sequence, this way the model is encouraged to generate text that is supported by the facts found in the input sequence. It is important to note that the gradients based on $\mathcal {L}_{p}^{nlg}$ and $\mathcal {L}_{p}^{nlu}$ can only backpropagate through their respective model (i.e., NLG and NLU), while $\mathcal {L}_{u}^{nlg}$ and $\mathcal {L}_{u}^{nlu}$ gradients should backpropagate through both models. Straight-Through Gumbel-Softmax: A major problem with the proposed joint learning architecture in the unpaired case is that the model is not fully differentiable. Indeed, given the input $x$ and the intermediate output $\hat{y}$, the $\mathcal {L}_{u}^{nlu}$ and the NLG parameter $\theta _{nlg}$, the gradient is computed as: At each time step $t$, the output probability $p_{y_t}$ is computed trough the softmax layer and $\hat{y}_t$ is obtained using $\hat{y}_t = onehot(argmax_w p_{y_t}[w])$ that is the word index $w$ with maximum probability at time step $t$. To address this problem, one solution is to replace this operation by the identity matrix $\frac{\partial \hat{y}_t}{\partial p_{y_t}} \approx \mathbb {1}$. This approach is called the Straight-Through (ST) estimator, which simply consists of backpropagating through the argmax function as if it had been the identity function BIBREF13, BIBREF14. A more principled way of dealing with the non-differential nature of argmax, is to use the Gumbel-Softmax which proposes a continuous approximation to sampling from a categorical distribution BIBREF15. Hence, the discontinuous argmax is replaced by a differentiable and smooth function. More formally, consider a $k$-dimensional categorical distribution $u$ with probabilities $\pi _1, \pi _2, ..., \pi _k$. Samples from $u$ can be approximated using: where $g_i$ is the Gumbel noise drawn from a uniform distribution and $\tau $ is a temperature parameter. 
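A minimal sketch of this sampling step is given below (a hypothetical helper, not the authors' implementation; it assumes the inputs are unnormalized log-probabilities of the categorical distribution).

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=1.0, eps=1e-20):
    """Differentiable approximation of a categorical sample.

    logits: (batch, k) unnormalized log-probabilities of a k-way categorical.
    tau:    temperature; the smaller it is, the closer the sample is to one-hot.
    """
    # Gumbel(0, 1) noise obtained from uniform noise: g = -log(-log(u)).
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + eps) + eps)
    # Softmax over the noise-perturbed logits, scaled by the temperature.
    return F.softmax((logits + gumbel_noise) / tau, dim=-1)
```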
The sample distribution from the Gumbel-Softmax resembles the argmax operation as $\tau \rightarrow 0$, and it becomes uniform when $\tau \rightarrow \infty $. Although Gumbel-Softmax is differentiable, the samples drawn from it are not adequate input to the subsequent models which expect a discrete values in order to retrieve the embedding matrix of the input words. So, instead, we use the Straight-Through (ST) Gumbel-Softmax which is basically the discrete version of the Gumbel-Softmax. During the forward phase, ST Gumbel-Softmax discretizes $y$ in Equation DISPLAY_FORM14 but it uses the continuous approximation in the backward pass. Although the Gumbel-Softmax estimator is biased due to the sample mismatch between the backward and forward phases, many studies have shown that ST Gumbel-Softmax can lead to significant improvements in several tasks BIBREF16, BIBREF17, BIBREF18. Dataset The models developed were evaluated on two datasets. The first one is the E2E NLG challenge dataset BIBREF5 which contains 51k of annotated samples. The second one is the Wikipedia Company Dataset BIBREF7 which consists of around 51K of noisy MR-abstract pairs of company descriptions. Dataset ::: E2E NLG challenge Dataset The E2E NLG challenge Dataset has become one of the benchmarks of reference for end-to-end sentence-planning NLG systems. It is still one of the largest dataset available for this task. The dataset was collected via crowd-sourcing using pictorial representations in the domain of restaurant recommendation. Although the E2E challenge dataset contains more than 50k samples, each MR is associated on average with 8.1 different reference utterances leading to around 6K unique MRs. Each MR consists of 3 to 8 slots, such as name, food or area, and their values and slot types are fairly equally distributed. The majority of MRs consist of 5 or 6 slots while human utterances consist mainly of one or two sentences only. The vocabulary size of the dataset is of 2780 distinct tokens. Dataset ::: The Wikipedia Company Dataset The wikipedia company dataset BIBREF7, is composed of a set of company data from English Wikipedia. The dataset contains 51k samples where each sample is composed of up to 3 components: the Wikipedia article abstract, the Wikipedia article body, and the infobox which is a set of attribute–value pairs containing primary information about the company (founder, creation date etc.). The infobox part was taken as MR where each attribute–value pair was represented as a sequence of string attribute [value]. The MR representation is composed of 41 attributes with 4.5 attributes per article and 2 words per value in average. The abstract length is between 1 to 5 sentences. The vocabulary size is of 158464 words. The Wikipedia company dataset contains much more lexical variation and semantic information than the E2E challenge dataset. Furthermore, company texts have been written by humans within the Wikipedia ecosystem and not during a controlled experiment whose human engagement was unknown. Hence, the Wikipedia dataset seems an ecological target for research in NLG. However, as pointed out by the authors, the Wikipedia dataset is not ideal for machine learning. First, the data is not controlled and each article contains only one reference (vs. 8.1 for the E2E challenge dataset). Second the abstract, the body and the infobox are only loosely correlated. Indeed, the meaning representation coverage is poor since, for some MR, none of the information is found in the text and vice-versa. 
To give a rough estimate of this coverage, we performed an analysis of 100 articles randomly selected in the test set. Over 868 total slot instances, 28% of the slots in the infobox cannot be found in their respective abstract text, while 13% are missing in the infobox. Despite these problems, we believe the E2E and the Wikipedia company datasets can provide contrasted evaluation, the first being well controlled and lexically focused, the latter representing the kind of data that can be found in real situations and that E2E systems must deal with in order to percolate in the society. Experiments The performance of the joint learning architecture was evaluated on the two datasets described in the previous section. The joint learning model requires a paired and an unpaired dataset, so each of the two datasets was split into several parts. E2E NLG challenge Dataset: The training set of the E2E challenge dataset which consists of 42K samples was partitioned into a 10K paired and 32K unpaired datasets by a random process. The unpaired database was composed of two sets, one containing MRs only and the other containing natural texts only. This process resulted in 3 training sets: paired set, unpaired text set and unpaired MR set. The original development set (4.7K) and test set (4.7K) of the E2E dataset have been kept. The Wikipedia Company Dataset: The Wikipedia company dataset presented in Section SECREF18 was filtered to contain only companies having abstracts of at least 7 words and at most 105 words. As a result of this process, 43K companies were retained. The dataset was then divided into: a training set (35K), a development set (4.3K) and a test set (4.3K). Of course, there was no intersection between these sets. The training set was also partitioned in order to obtain the paired and unpaired datasets. Because of the loose correlation between the MRs and their corresponding text, the paired dataset was selected such that it contained the infobox values with the highest similarity with its reference text. The similarity was computed using “difflib” library, which is an extension of the Ratcliff and Obershelp algorithm BIBREF19. The paired set was selected in this way (rather than randomly) to get samples as close as possible to a carefully annotated set. At the end of partitioning, the following training sets were obtained: paired set (10.5K), unpaired text set (24.5K) and unpaired MR set (24.5K). The way the datasets are split into paired and unpaired sets is artificial and might be biased particularly for the E2E dataset as it is a rather easy dataset. This is why we included the Wikipedia dataset in our study since the possibility of having such bias is low because 1) each company summary/infobox was written by different authors at different time within the wikipedia eco-system making this data far more natural than in the E2E challenge case, 2) there is a large amount of variation in the dataset, and 3) the dataset was split in such a way that the paired set contains perfect matches between the MR and the text, while reserving the least matching samples for the the unpaired set (i.e., the more representative of real-life Wikipedia articles). As a result, the paired and unpaired sets of the Wikipedia dataset are different from each other and the text and MR unpaired samples are only loosely correlated. 
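As an illustration of the similarity-based selection of the Wikipedia paired subset described above, the following is a minimal sketch; the infobox linearization helper and field names are assumptions, and the authors' exact procedure may differ.

```python
import difflib

def mr_to_string(infobox):
    """Linearize an infobox into the 'attribute [value]' MR format."""
    return " ".join(f"{attr} [{value}]" for attr, value in infobox.items())

def select_paired_subset(samples, n_paired):
    """Keep the n_paired samples whose MR is most similar to the abstract.

    samples: list of dicts with keys 'infobox' (dict) and 'abstract' (str).
    The remaining samples are reserved for the unpaired sets.
    """
    def similarity(sample):
        mr = mr_to_string(sample["infobox"]).lower()
        text = sample["abstract"].lower()
        # Ratcliff-Obershelp similarity ratio, as implemented in difflib.
        return difflib.SequenceMatcher(None, mr, text).ratio()

    ranked = sorted(samples, key=similarity, reverse=True)
    return ranked[:n_paired], ranked[n_paired:]
```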
Experiments ::: Evaluation with Automatic Metrics For the experiments, each seq2seq model was composed of 2 layers of Bi-LSTM in the encoder and two layers of LSTM in the decoder with 256 hidden units and dot attention trained using Adam optimization with learning rate of 0.001. The embeddings had 500 dimensions and the vocabulary was limited to 50K words. The Gumbel-Softmax temperature $\tau $ was set to 1. Hyper-parameters tuning was performed on the development set and models were trained until the loss on the development set stops decreasing for several consecutive iterations. All models were implemented with PyTorch library. Results of the experiment on the E2E challenge data are summarized Table TABREF21 for both the NLG and the NLU tasks. BLEU, Rouge-L and Meteor were computed using the E2E challenge metrics script with default settings. NLU performances were computed at the slot level. The model learned using paired+unpaired methods shows significant superior performances than the paired version. Among the paired+unpaired methods, the one of last row exhibits the highest balanced score between NLG and NLU. This is achieved when the weights $\alpha $ and $\gamma $ favor the NLG task against NLU ($\beta =\delta =0.1$). This setting has been chosen since the NLU task converged much quicker than the NLG task. Hence lower weight for NLU during the learning avoided over-fitting. This best system exhibits similar performances than the E2E challenge winner for ROUGE-L and METEOR whereas it did not use any pre-processing (delexicalisation, slot alignment, data augmentation) or re-scoring and was trained on far less annotated data. Results of the experiment on Wikipedia company dataset are summarized Table TABREF22 for both the NLG and the NLU tasks. Due to noise in the dataset and the fact that only one reference is available for each sample, the automatic metrics show very low scores. This is in line with BIBREF7 for which the best system obtained BLEU$=0.0413$, ROUGE-L$=0.266$ and METEOR$=0.1076$. Contrary to the previous results, the paired method brings one of the best performance. However, the best performing system is the one of the last row which again put more emphasis on the NLG task than on the NLU one. Once again, this system obtained performances comparable to the best system of BIBREF7 but without using any pointer generator or coverage mechanisms. In order to further analyze the results, in Table TABREF24 we show samples of the generated text by different models alongside the reference texts. The first two examples are from the model trained on the E2E NLG dataset and the last two are from the Wikipedia dataset. Although on the E2E dataset the outputs of paired and paired+unpaired models seem very similar, the latter resembles the reference slightly more and because of this it achieves a higher score in the automatic metrics. This resemblance to the reference could be attributed to the fact that we use a reconstruction loss which forces the model to generate text that is only supported by facts found in the input. As for the Wikipedia dataset examples, we can see that the model with paired+unpaired data is less noisy and the outputs are generally shorter. The model with only paired data generates unnecessarily longer text with lots of unsupported facts and repetitions. Needless to say that both models are doing lots of mistakes and this is because of all the noise contained in the training data. 
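For concreteness, the weighting of the four losses used in the configurations above can be sketched as follows. This is not the authors' code; the defaults reflect the best reported setting ($\alpha =\gamma =1$, $\beta =\delta =0.1$), and since $\beta $ and $\delta $ are always set to the same value in the paper, their exact pairing with the paired and unpaired NLU losses is an assumption here.

```python
def joint_loss(l_p_nlg, l_p_nlu, l_u_nlg, l_u_nlu,
               alpha=1.0, beta=0.1, gamma=1.0, delta=0.1):
    """Weighted sum of the four losses for joint NLG/NLU training.

    l_p_nlg, l_p_nlu: supervised losses on paired MR-text data.
    l_u_nlg, l_u_nlu: reconstruction losses on unpaired text-only / MR-only data.
    gamma weights the unpaired NLG loss, which the ablation identifies as the
    most important term; the defaults favor NLG over NLU.
    """
    return (alpha * l_p_nlg + beta * l_p_nlu
            + gamma * l_u_nlg + delta * l_u_nlu)
```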
Experiments ::: Human Evaluation It is well know that automatic metrics in NLG are poorly predictive of human ratings although they are useful for system analysis and development BIBREF20, BIBREF0. Hence, to gain more insight about the generation properties of each model, a human evaluation with 16 human subjects was performed on the Wikipedia dataset models. We set up a web-based experiment and used the same 4 questions as in BIBREF7 which were asked on a 5-point Lickert scale: How do you judge the Information Coverage of the company summary? How do you judge the Non-Redundancy of Information in the company summary? How do you judge the Semantic Adequacy of the company summary? How do you judge the Grammatical Correctness of the company summary? For this experiment, 40 company summaries were selected randomly from the test set. Each participant had to treat 10 summaries by first reading the summary and the infobox, then answering the aforementioned four questions. Results of the human experiment are reported in Table TABREF26. The first line reports the results of the reference (i.e., the Wikipedia abstract) for comparison, while the second line is the model with paired data, and the last line is the model trained on paired+unpaired data with parameters reported in the last row of Table TABREF22, i.e., $\alpha =\gamma =1$ and $\beta =\delta =0.1$ . It is clear from the coverage metric that no system nor the reference was seen as doing a good job at conveying the information present in the infobox. This is in line with the corpus analysis of section SECREF4. However, between the automatic methods, the unpaired models exhibit a clear superiority in coverage and in semantic adequacy, two measures that are linked. On the other side, the model learned with paired data is slightly more performing in term of non-redundancy and grammaticality. The results of the unpaired model with coverage and grammaticality are equivalent to best models of BIBREF7 but for non-redundancy and semantic adequacy the result are slightly below. This is probably because the authors have used a pointer generator mechanism BIBREF21, a trick we avoided and which is subject of further work. These results express the difference between the learning methods: on the one hand, the unpaired learning relaxes the intermediate labels which are noisy so that the model learns to express what is really in the input (this explain the higher result for coverage) while, on the other hand, the paired learning is only constrained by the output text (not also with the NLU loss as in the unpaired case) which results in slightly more grammatical sentence to the expense of semantic coverage. Experiments ::: Ablation Study In this section, we further discuss different aspects of the proposed joint learning approach. In particular we are interested in studying the impact of: 1) having different amounts of paired data and 2) the weight of each loss function on the overall performance. Since only the E2E dataset is non-noisy and hence provide meaningful automatic metrics, the ablation study was performed only on this dataset. To evaluate the dependence on the amount of paired data, the best model was re-trained by changing the size of the paired data ranging from 3% of the training data (i.e., 1K) up to 24% (i.e., 10K). The results are shown in Figure FIGREF27. 
The figure reveals that regardless of the amount of paired data, the joint learning approach: 1) always improves over the model trained with only paired data and 2) is always able to benefit from supplementary paired data. This is particularly true when the amount of paired data is very small, and the difference seems to get smaller as the percentage of paired data increases. Next, to evaluate which of the four losses contributes most to the overall performance, the best model was re-trained in different settings. In short, in each setting, one of the weights was set to zero while the other three weights were kept the same as in the best case. The results are presented in Table TABREF29 and Table TABREF30 for the NLG and NLU tasks respectively. In these tables, the first line is the best model as reported in Table TABREF21. It can be seen that all four losses are important, since setting any of the weights to zero leads to a decrease in performance. However, the results of both tables show that the most important loss is the NLG unpaired loss $\mathcal {L}_{u}^{nlg}$, since setting $\gamma $ to zero leads to a significant reduction in performance for both NLU and NLG. Related Work The approach of joint learning has been tested in the literature in domains other than NLG/NLU, for tasks such as machine translation BIBREF22, BIBREF23, BIBREF24 and speech processing BIBREF25, BIBREF18, BIBREF26. In BIBREF24, an encoder-decoder-reconstructor for MT is proposed. The reconstructor, integrated into the NMT model, rebuilds the source sentence from the hidden layer of the output target sentence, to ensure that the information on the source side is carried over to the target side as much as possible. In BIBREF18, a joint learning architecture for Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) is proposed which leverages unannotated data. In the unannotated case, during learning, the ASR output is fed to the TTS and the TTS output is compared with the original ASR signal input to compute a loss which is back-propagated through both modules. Regarding NLU, joint learning of NLU with other tasks remains scarce. In BIBREF27, an NLU model is jointly learned with a system action prediction (SAP) model on supervised dialogue data. The NLU model is integrated into the sequence-to-sequence SAP model so that three losses (intent prediction, slot prediction and action prediction) are used to backpropagate through both models. The paper shows that this approach is competitive with the baselines. To the best of our knowledge, the idea of joint NLG/NLU learning has not been tested previously in NLG. In NLG E2E models BIBREF1, BIBREF3, some approaches have learned a concept extractor (which is close to, but simpler than, an NLU model), but this was not integrated into the NLG learning scheme and was only used for output re-scoring. Probably the closest work to ours is BIBREF28, in which a seq2seq auto-encoder was used to generate biographies from MR. In that work, the generated text of the `forward' seq2seq model was constrained by a `backward' seq2seq model, which shared parameters. However, that work differs from ours since their model was not completely differentiable. Furthermore, their NLU backward model was only used as a support for the forward NLG. Finally, the shared parameters, although in line with the definition of an auto-encoder, make each model impossible to specialize. 
Conclusion and Further Work In this paper, we describe a learning scheme which makes it possible to jointly learn two models, one for NLG and one for NLU, using a large amount of unannotated data and a small amount of annotated data. The results obtained with this method on the E2E challenge benchmark show that the method can achieve a score similar to that of the challenge winner BIBREF3, but with far less annotated data and without using any pre-processing (delexicalisation, data augmentation) or re-scoring tricks. Results on the challenging Wikipedia company dataset show that the highest scores can be achieved by mixing paired and unpaired datasets. These results are at the state-of-the-art level BIBREF7, but without using any pointer generator or coverage mechanisms. These findings open the way to the exploitation of unannotated data, since the lack of large annotated data sources is the current bottleneck in the development of E2E NLG systems for new applications. Next steps of this research include replacing the ST Gumbel-Softmax with reinforcement learning techniques such as policy gradient. This is particularly interesting because with policy gradient we will be able to design reward functions that better suit the problem we are trying to solve. Furthermore, it would be interesting to evaluate how the pointer generator mechanism BIBREF21 and the coverage mechanism BIBREF29 can be integrated into the learning scheme to increase the non-redundancy and coverage performance of the generation. Acknowledgments This project was partly funded by the IDEX Université Grenoble Alpes innovation grant (AI4I-2018-2019) and the Région Auvergne-Rhône-Alpes (AISUA-2018-2019).
Yes
a65e5c97ade6e697ec10bcf3c3190dc6604a0cd5
a65e5c97ade6e697ec10bcf3c3190dc6604a0cd5_0
Q: What non-annotated datasets are considered? Text: Introduction Natural Language Generation (NLG) is an NLP task that consists in generating a sequence of natural language sentences from non-linguistic data. Traditional approaches of NLG consist in creating specific algorithms in the consensual NLG pipeline BIBREF0, but there has been recently a strong interest in End-to-End (E2E) NLG systems which are able to jointly learn sentence planning and surface realization BIBREF1, BIBREF2, BIBREF3, BIBREF4. Probably the most well known effort of this trend is the E2E NLG challenge BIBREF5 whose task was to perform sentence planing and realization from dialogue act-based Meaning Representation (MR) on unaligned data. For instance, Figure FIGREF1 presents, on the upper part, a meaning representation and on the lower part, one possible textual realization to convey this meaning. Although the challenge was a great success, the data used in the challenge contained a lot of redundancy of structure and a limited amount of concepts and several reference texts per MR input (8.1 in average). This is an ideal case for machine learning but is it the one that is encountered in all E2E NLG real-world applications? In this work, we are interested in learning E2E models for real world applications in which there is a low amount of annotated data. Indeed, it is well known that neural approaches need a large amount of carefully annotated data to be able to induce NLP models. For the NLG task, that means that MR and (possibly many) reference texts must be paired together so that supervised learning is made possible. In NLG, such paired datasets are rare and remains tedious to acquire BIBREF5, BIBREF6, BIBREF7. On the contrary, large amount of unpaired meaning representations and texts can be available but cannot be exploited for supervised learning. In order to tackle this problem, we propose a semi-supervised learning approach which is able to benefit from unpaired (non-annotated) dataset which are much easier to acquire in real life applications. In an unpaired dataset, only the input data is assumed to be representative of the task. In such case, autoencoders can be used to learn an (often more compact) internal representation of the data. Monolingual word embeddings learning also benefit from unpaired data. However, none of these techniques are fit for the task of generating from a constrained MR representation. Hence, we extend the idea of autoencoder which is to regenerate the input sequence by using an NLG and an NLU models. To learn the NLG model, the input text is fed to the NLU model which in turn feeds the NLG model. The output of the NLG model is compared to the input and a loss can be computed. A similar strategy is applied for NLU. This approach brings several advantages: 1) the learning is performed from a large unpaired (non-annotated) dataset and a small amount of paired data to constrain the inner representation of the models to respect the format of the task (here MR and abstract text); 2) the architecture is completely differentiable which enables a fully joint learning; and 3) the two NLG and NLU models remain independent and can thus be applied to different tasks separately. The remaining of this paper gives some background about seq2seq models (Sec SECREF2) before introducing the joint learning approach (Sec SECREF3). Two benchmarks, described in Sec SECREF4, have been used to evaluate the method and whose results are presented in Sec SECREF5. 
The method is then positioned with respect to the state-of-the-art in Sec SECREF6 before providing some concluding remarks in Sec SECREF7. Background: E2E systems E2E Natural Language Generation systems are typically based on the Recurrent Neural Network (RNN) architecture consisting of an encoder and a decoder also known as seq2seq BIBREF8. The encoder takes a sequence of source words $\mathbf {x}~=~\lbrace {x_1},{x_2}, ..., {x_{T_x}}\rbrace $ and encodes it to a fixed length vector. The decoder then decodes this vector into a sequence of target words $\mathbf {y}~=~\lbrace {y_1},{y_2}, ..., {y_{T_y}}\rbrace $. Seq2seq models are able to treat variable sized source and target sequences making them a great choice for NLG and NLU tasks. More formally, in a seq2seq model, the recurrent unit of the encoder, at each time step $t$ receives an input word $x_t$ (in practice the embedding vector of the word) and a previous hidden state ${h_t-1}$ then generates a new hidden state $h_t$ using: where the function $f$ is an RNN unit such as Long Short-Term Memory (LSTM) BIBREF9 or Gated Recurrent Unit (GRU) BIBREF10. Once the encoder has treated the entire source sequence, the last hidden state ${h_{T_x}}$ is passed to the decoder. To generate the sequence of target words, the decoder also uses an RNN and computes, at each time step, a new hidden state $s_t$ from its previous hidden state $s_{t-1}$ and the previously generated word $y_{t-1}$. At training time, $y_{t-1}$ is the previous word in the target sequence (teacher-forcing). Lastly, the conditional probability of each target word $y_t$ is computed as follows: where $W$ and $b$ are a trainable parameters used to map the output to the same size as the target vocabulary and $c_t$ is the context vector obtained using the sum of hidden states in the encoder, weighted by its attention BIBREF11, BIBREF12. The context is computed as follow: Attention weights $\alpha _{i}^{t}$ are computed by applying a softmax function over a score calculated using the encoder and decoder hidden states: The choice of the score adopted in this papers is based on the dot attention mechanism introduced in BIBREF12. The attention mechanism helps the decoder to find relevant information on the encoder side based on the current decoder hidden state. Joint NLG/NLU learning scheme The joint NLG/NLU learning scheme is shown in Figure FIGREF7. It consists of two seq2seq models for NLG and NLU tasks. Both models can be trained separately on paired data. In that case, the NLG task is to predict the text $\hat{y}$ from the input MR $x$ while the NLU task is to predict the MR $\hat{x}$ from the input text $y$. On unpaired data, the two models are connected through two different loops. In the first case, when the unpaired input source is text, $y$ is provided to the NLU models which feeds the NLG model to produce $\hat{y}$. A loss is computed between $y$ and $\hat{y}$ (but not between $\hat{x}$ and $x$ since $x$ is unknown). In the second case, when the input is only MR, $x$ is provided to the NLG model which then feeds the NLU model and finally predicts $\hat{x}$. Similarly, a loss is computed between $x$ and $\hat{x}$ (but not between $\hat{y}$ and $y$ since $y$ is unknown). This section details these four steps and how the loss is backpropagated through the loops. Learning with Paired Data: The NLG model is a seq2seq model with attention as described in section SECREF2. It takes as input a MR and generates a natural language text. 
The objective is to find the model parameters $\theta ^{nlg}$ such that they minimize the loss which is defined as follows: The NLU model is based on the same architecture but takes a natural language text and outputs a MR and its loss can be formulated as: Learning with Unpaired Data: When data are unpaired, there is also a loop connection between the two seq2seq models. This is achieved by feeding MR to the NLG model in order to generate a sequence of natural language text $\hat{y}$ by applying an argmax over the probability distribution at each time step ($\hat{y}_t = \mbox{argmax}P(y_t|\mathbf {x};\theta ^{nlg})$). This text is then fed back into the NLU model which in turn generates an MR. Finally, we compute the loss between the original MR and the reconstructed MR: The same can be applied in the opposite direction where we feed text to the NLU model and then the NLG model reconstructs back the text. This loss is given by: To perform joint learning, all four losses are summed together to provide the uniq loss $\mathcal {L}$ as follows: The weights $\alpha , \beta , \delta $ and $\gamma \in [0,1]$ are defined to fine tune the contribution of each task and data to the learning or to bias the learning towards one specific task. We show in the experiment section the impact of different settings. Since the loss functions in Equation DISPLAY_FORM8 and DISPLAY_FORM9 force the model to generate a sequence of words based on the target and the losses in Equation DISPLAY_FORM11 and DISPLAY_FORM10 force the model to reconstruct back the input sequence, this way the model is encouraged to generate text that is supported by the facts found in the input sequence. It is important to note that the gradients based on $\mathcal {L}_{p}^{nlg}$ and $\mathcal {L}_{p}^{nlu}$ can only backpropagate through their respective model (i.e., NLG and NLU), while $\mathcal {L}_{u}^{nlg}$ and $\mathcal {L}_{u}^{nlu}$ gradients should backpropagate through both models. Straight-Through Gumbel-Softmax: A major problem with the proposed joint learning architecture in the unpaired case is that the model is not fully differentiable. Indeed, given the input $x$ and the intermediate output $\hat{y}$, the $\mathcal {L}_{u}^{nlu}$ and the NLG parameter $\theta _{nlg}$, the gradient is computed as: At each time step $t$, the output probability $p_{y_t}$ is computed trough the softmax layer and $\hat{y}_t$ is obtained using $\hat{y}_t = onehot(argmax_w p_{y_t}[w])$ that is the word index $w$ with maximum probability at time step $t$. To address this problem, one solution is to replace this operation by the identity matrix $\frac{\partial \hat{y}_t}{\partial p_{y_t}} \approx \mathbb {1}$. This approach is called the Straight-Through (ST) estimator, which simply consists of backpropagating through the argmax function as if it had been the identity function BIBREF13, BIBREF14. A more principled way of dealing with the non-differential nature of argmax, is to use the Gumbel-Softmax which proposes a continuous approximation to sampling from a categorical distribution BIBREF15. Hence, the discontinuous argmax is replaced by a differentiable and smooth function. More formally, consider a $k$-dimensional categorical distribution $u$ with probabilities $\pi _1, \pi _2, ..., \pi _k$. Samples from $u$ can be approximated using: where $g_i$ is the Gumbel noise drawn from a uniform distribution and $\tau $ is a temperature parameter. 
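Combined with a hard discretization in the forward pass, the straight-through variant discussed in the next paragraph, this sampling step can be sketched as follows (a minimal sketch, not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0, eps=1e-20):
    """Straight-Through Gumbel-Softmax.

    Forward pass: returns a one-hot (discrete) sample.
    Backward pass: gradients flow through the soft Gumbel-Softmax sample,
    as if the discretization were the identity function.
    """
    # Soft, differentiable sample from the Gumbel-Softmax distribution.
    u = torch.rand_like(logits)
    gumbel_noise = -torch.log(-torch.log(u + eps) + eps)
    y_soft = F.softmax((logits + gumbel_noise) / tau, dim=-1)
    # Hard one-hot version of the same sample (its argmax).
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # Straight-through trick: forward value is y_hard, gradient is that of y_soft.
    return y_hard + (y_soft - y_soft.detach())
```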
The sample distribution from the Gumbel-Softmax resembles the argmax operation as $\tau \rightarrow 0$, and it becomes uniform when $\tau \rightarrow \infty $. Although Gumbel-Softmax is differentiable, the samples drawn from it are not adequate input to the subsequent models which expect a discrete values in order to retrieve the embedding matrix of the input words. So, instead, we use the Straight-Through (ST) Gumbel-Softmax which is basically the discrete version of the Gumbel-Softmax. During the forward phase, ST Gumbel-Softmax discretizes $y$ in Equation DISPLAY_FORM14 but it uses the continuous approximation in the backward pass. Although the Gumbel-Softmax estimator is biased due to the sample mismatch between the backward and forward phases, many studies have shown that ST Gumbel-Softmax can lead to significant improvements in several tasks BIBREF16, BIBREF17, BIBREF18. Dataset The models developed were evaluated on two datasets. The first one is the E2E NLG challenge dataset BIBREF5 which contains 51k of annotated samples. The second one is the Wikipedia Company Dataset BIBREF7 which consists of around 51K of noisy MR-abstract pairs of company descriptions. Dataset ::: E2E NLG challenge Dataset The E2E NLG challenge Dataset has become one of the benchmarks of reference for end-to-end sentence-planning NLG systems. It is still one of the largest dataset available for this task. The dataset was collected via crowd-sourcing using pictorial representations in the domain of restaurant recommendation. Although the E2E challenge dataset contains more than 50k samples, each MR is associated on average with 8.1 different reference utterances leading to around 6K unique MRs. Each MR consists of 3 to 8 slots, such as name, food or area, and their values and slot types are fairly equally distributed. The majority of MRs consist of 5 or 6 slots while human utterances consist mainly of one or two sentences only. The vocabulary size of the dataset is of 2780 distinct tokens. Dataset ::: The Wikipedia Company Dataset The wikipedia company dataset BIBREF7, is composed of a set of company data from English Wikipedia. The dataset contains 51k samples where each sample is composed of up to 3 components: the Wikipedia article abstract, the Wikipedia article body, and the infobox which is a set of attribute–value pairs containing primary information about the company (founder, creation date etc.). The infobox part was taken as MR where each attribute–value pair was represented as a sequence of string attribute [value]. The MR representation is composed of 41 attributes with 4.5 attributes per article and 2 words per value in average. The abstract length is between 1 to 5 sentences. The vocabulary size is of 158464 words. The Wikipedia company dataset contains much more lexical variation and semantic information than the E2E challenge dataset. Furthermore, company texts have been written by humans within the Wikipedia ecosystem and not during a controlled experiment whose human engagement was unknown. Hence, the Wikipedia dataset seems an ecological target for research in NLG. However, as pointed out by the authors, the Wikipedia dataset is not ideal for machine learning. First, the data is not controlled and each article contains only one reference (vs. 8.1 for the E2E challenge dataset). Second the abstract, the body and the infobox are only loosely correlated. Indeed, the meaning representation coverage is poor since, for some MR, none of the information is found in the text and vice-versa. 
To give a rough estimate of this coverage, we performed an analysis of 100 articles randomly selected in the test set. Over 868 total slot instances, 28% of the slots in the infobox cannot be found in their respective abstract text, while 13% are missing in the infobox. Despite these problems, we believe the E2E and the Wikipedia company datasets can provide contrasted evaluation, the first being well controlled and lexically focused, the latter representing the kind of data that can be found in real situations and that E2E systems must deal with in order to percolate in the society. Experiments The performance of the joint learning architecture was evaluated on the two datasets described in the previous section. The joint learning model requires a paired and an unpaired dataset, so each of the two datasets was split into several parts. E2E NLG challenge Dataset: The training set of the E2E challenge dataset which consists of 42K samples was partitioned into a 10K paired and 32K unpaired datasets by a random process. The unpaired database was composed of two sets, one containing MRs only and the other containing natural texts only. This process resulted in 3 training sets: paired set, unpaired text set and unpaired MR set. The original development set (4.7K) and test set (4.7K) of the E2E dataset have been kept. The Wikipedia Company Dataset: The Wikipedia company dataset presented in Section SECREF18 was filtered to contain only companies having abstracts of at least 7 words and at most 105 words. As a result of this process, 43K companies were retained. The dataset was then divided into: a training set (35K), a development set (4.3K) and a test set (4.3K). Of course, there was no intersection between these sets. The training set was also partitioned in order to obtain the paired and unpaired datasets. Because of the loose correlation between the MRs and their corresponding text, the paired dataset was selected such that it contained the infobox values with the highest similarity with its reference text. The similarity was computed using “difflib” library, which is an extension of the Ratcliff and Obershelp algorithm BIBREF19. The paired set was selected in this way (rather than randomly) to get samples as close as possible to a carefully annotated set. At the end of partitioning, the following training sets were obtained: paired set (10.5K), unpaired text set (24.5K) and unpaired MR set (24.5K). The way the datasets are split into paired and unpaired sets is artificial and might be biased particularly for the E2E dataset as it is a rather easy dataset. This is why we included the Wikipedia dataset in our study since the possibility of having such bias is low because 1) each company summary/infobox was written by different authors at different time within the wikipedia eco-system making this data far more natural than in the E2E challenge case, 2) there is a large amount of variation in the dataset, and 3) the dataset was split in such a way that the paired set contains perfect matches between the MR and the text, while reserving the least matching samples for the the unpaired set (i.e., the more representative of real-life Wikipedia articles). As a result, the paired and unpaired sets of the Wikipedia dataset are different from each other and the text and MR unpaired samples are only loosely correlated. 
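A minimal sketch of how a training set can be partitioned into the three subsets used here, paired, unpaired text-only and unpaired MR-only, is given below (for the E2E data the selection is random). The function and field layout are assumptions, and whether the MR-only and text-only sets come from the same or from disjoint remaining samples is not made explicit above, so this sketch simply keeps both sides of the remainder separately.

```python
import random

def split_paired_unpaired(samples, n_paired, seed=0):
    """Partition (MR, text) samples into paired and unpaired training sets.

    samples:  list of (mr, text) tuples.
    n_paired: number of samples kept with both sides for supervised training.
    The rest is stripped to one side each, giving an unpaired MR-only set
    and an unpaired text-only set.
    """
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)

    paired = shuffled[:n_paired]
    rest = shuffled[n_paired:]
    unpaired_mr = [mr for mr, _ in rest]        # MR side only, text discarded
    unpaired_text = [text for _, text in rest]  # text side only, MR discarded
    return paired, unpaired_mr, unpaired_text
```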
Experiments ::: Evaluation with Automatic Metrics For the experiments, each seq2seq model was composed of 2 layers of Bi-LSTM in the encoder and two layers of LSTM in the decoder with 256 hidden units and dot attention trained using Adam optimization with learning rate of 0.001. The embeddings had 500 dimensions and the vocabulary was limited to 50K words. The Gumbel-Softmax temperature $\tau $ was set to 1. Hyper-parameters tuning was performed on the development set and models were trained until the loss on the development set stops decreasing for several consecutive iterations. All models were implemented with PyTorch library. Results of the experiment on the E2E challenge data are summarized Table TABREF21 for both the NLG and the NLU tasks. BLEU, Rouge-L and Meteor were computed using the E2E challenge metrics script with default settings. NLU performances were computed at the slot level. The model learned using paired+unpaired methods shows significant superior performances than the paired version. Among the paired+unpaired methods, the one of last row exhibits the highest balanced score between NLG and NLU. This is achieved when the weights $\alpha $ and $\gamma $ favor the NLG task against NLU ($\beta =\delta =0.1$). This setting has been chosen since the NLU task converged much quicker than the NLG task. Hence lower weight for NLU during the learning avoided over-fitting. This best system exhibits similar performances than the E2E challenge winner for ROUGE-L and METEOR whereas it did not use any pre-processing (delexicalisation, slot alignment, data augmentation) or re-scoring and was trained on far less annotated data. Results of the experiment on Wikipedia company dataset are summarized Table TABREF22 for both the NLG and the NLU tasks. Due to noise in the dataset and the fact that only one reference is available for each sample, the automatic metrics show very low scores. This is in line with BIBREF7 for which the best system obtained BLEU$=0.0413$, ROUGE-L$=0.266$ and METEOR$=0.1076$. Contrary to the previous results, the paired method brings one of the best performance. However, the best performing system is the one of the last row which again put more emphasis on the NLG task than on the NLU one. Once again, this system obtained performances comparable to the best system of BIBREF7 but without using any pointer generator or coverage mechanisms. In order to further analyze the results, in Table TABREF24 we show samples of the generated text by different models alongside the reference texts. The first two examples are from the model trained on the E2E NLG dataset and the last two are from the Wikipedia dataset. Although on the E2E dataset the outputs of paired and paired+unpaired models seem very similar, the latter resembles the reference slightly more and because of this it achieves a higher score in the automatic metrics. This resemblance to the reference could be attributed to the fact that we use a reconstruction loss which forces the model to generate text that is only supported by facts found in the input. As for the Wikipedia dataset examples, we can see that the model with paired+unpaired data is less noisy and the outputs are generally shorter. The model with only paired data generates unnecessarily longer text with lots of unsupported facts and repetitions. Needless to say that both models are doing lots of mistakes and this is because of all the noise contained in the training data. 
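For concreteness, the unpaired part of the training described earlier (MR to generated text to reconstructed MR, and symmetrically for text) can be sketched as below. The `generate` and `loss` methods are a hypothetical interface, not the authors' API: `generate` is assumed to return a straight-through (differentiable) output sequence and `loss` a sequence cross-entropy against the given targets.

```python
def unpaired_losses(nlg_model, nlu_model, mr_batch, text_batch):
    """Reconstruction losses on unpaired data (hypothetical model interface).

    mr_batch:   batch of MRs with no reference text (unpaired MR set).
    text_batch: batch of texts with no reference MR (unpaired text set).
    """
    # MR-only data: MR -> generated text -> reconstructed MR.
    generated_text = nlg_model.generate(mr_batch)
    l_u_nlu = nlu_model.loss(generated_text, targets=mr_batch)

    # Text-only data: text -> generated MR -> reconstructed text.
    generated_mr = nlu_model.generate(text_batch)
    l_u_nlg = nlg_model.loss(generated_mr, targets=text_batch)

    return l_u_nlg, l_u_nlu
```

Because the intermediate sequences are produced with the straight-through Gumbel-Softmax, both reconstruction losses can backpropagate through the two chained models, as stated above.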
Experiments ::: Human Evaluation It is well know that automatic metrics in NLG are poorly predictive of human ratings although they are useful for system analysis and development BIBREF20, BIBREF0. Hence, to gain more insight about the generation properties of each model, a human evaluation with 16 human subjects was performed on the Wikipedia dataset models. We set up a web-based experiment and used the same 4 questions as in BIBREF7 which were asked on a 5-point Lickert scale: How do you judge the Information Coverage of the company summary? How do you judge the Non-Redundancy of Information in the company summary? How do you judge the Semantic Adequacy of the company summary? How do you judge the Grammatical Correctness of the company summary? For this experiment, 40 company summaries were selected randomly from the test set. Each participant had to treat 10 summaries by first reading the summary and the infobox, then answering the aforementioned four questions. Results of the human experiment are reported in Table TABREF26. The first line reports the results of the reference (i.e., the Wikipedia abstract) for comparison, while the second line is the model with paired data, and the last line is the model trained on paired+unpaired data with parameters reported in the last row of Table TABREF22, i.e., $\alpha =\gamma =1$ and $\beta =\delta =0.1$ . It is clear from the coverage metric that no system nor the reference was seen as doing a good job at conveying the information present in the infobox. This is in line with the corpus analysis of section SECREF4. However, between the automatic methods, the unpaired models exhibit a clear superiority in coverage and in semantic adequacy, two measures that are linked. On the other side, the model learned with paired data is slightly more performing in term of non-redundancy and grammaticality. The results of the unpaired model with coverage and grammaticality are equivalent to best models of BIBREF7 but for non-redundancy and semantic adequacy the result are slightly below. This is probably because the authors have used a pointer generator mechanism BIBREF21, a trick we avoided and which is subject of further work. These results express the difference between the learning methods: on the one hand, the unpaired learning relaxes the intermediate labels which are noisy so that the model learns to express what is really in the input (this explain the higher result for coverage) while, on the other hand, the paired learning is only constrained by the output text (not also with the NLU loss as in the unpaired case) which results in slightly more grammatical sentence to the expense of semantic coverage. Experiments ::: Ablation Study In this section, we further discuss different aspects of the proposed joint learning approach. In particular we are interested in studying the impact of: 1) having different amounts of paired data and 2) the weight of each loss function on the overall performance. Since only the E2E dataset is non-noisy and hence provide meaningful automatic metrics, the ablation study was performed only on this dataset. To evaluate the dependence on the amount of paired data, the best model was re-trained by changing the size of the paired data ranging from 3% of the training data (i.e., 1K) up to 24% (i.e., 10K). The results are shown in Figure FIGREF27. 
The figure reveals that regardless of the amount of paired data, the joint learning approach: 1) always improves over the model trained with only paired data and 2) is always able to benefit from supplementary paired data. This is particularly true when the amount of paired data is very small, and the difference seems to get smaller as the percentage of paired data increases. Next, to evaluate which of the four losses contributes most to the overall performance, the best model was re-trained in different settings. In short, in each setting, one of the weights was set to zero while the other three weights were kept the same as in the best case. The results are presented in Table TABREF29 and Table TABREF30 for the NLG and NLU tasks respectively. In these tables, the first line is the best model as reported in Table TABREF21. It can be seen that all four losses are important, since setting any of the weights to zero leads to a decrease in performance. However, the results of both tables show that the most important loss is the NLG unpaired loss $\mathcal {L}_{u}^{nlg}$, since setting $\gamma $ to zero leads to a significant reduction in performance for both NLU and NLG. Related Work The approach of joint learning has been tested in the literature in domains other than NLG/NLU, for tasks such as machine translation BIBREF22, BIBREF23, BIBREF24 and speech processing BIBREF25, BIBREF18, BIBREF26. In BIBREF24, an encoder-decoder-reconstructor for MT is proposed. The reconstructor, integrated into the NMT model, rebuilds the source sentence from the hidden layer of the output target sentence, to ensure that the information on the source side is carried over to the target side as much as possible. In BIBREF18, a joint learning architecture for Automatic Speech Recognition (ASR) and Text-To-Speech (TTS) is proposed which leverages unannotated data. In the unannotated case, during learning, the ASR output is fed to the TTS and the TTS output is compared with the original ASR signal input to compute a loss which is back-propagated through both modules. Regarding NLU, joint learning of NLU with other tasks remains scarce. In BIBREF27, an NLU model is jointly learned with a system action prediction (SAP) model on supervised dialogue data. The NLU model is integrated into the sequence-to-sequence SAP model so that three losses (intent prediction, slot prediction and action prediction) are used to backpropagate through both models. The paper shows that this approach is competitive with the baselines. To the best of our knowledge, the idea of joint NLG/NLU learning has not been tested previously in NLG. In NLG E2E models BIBREF1, BIBREF3, some approaches have learned a concept extractor (which is close to, but simpler than, an NLU model), but this was not integrated into the NLG learning scheme and was only used for output re-scoring. Probably the closest work to ours is BIBREF28, in which a seq2seq auto-encoder was used to generate biographies from MR. In that work, the generated text of the `forward' seq2seq model was constrained by a `backward' seq2seq model, which shared parameters. However, that work differs from ours since their model was not completely differentiable. Furthermore, their NLU backward model was only used as a support for the forward NLG. Finally, the shared parameters, although in line with the definition of an auto-encoder, make each model impossible to specialize. 
Conclusion and Further Work In this paper, we describe a learning scheme which provides the ability to jointly learn two models, one for NLG and one for NLU, using a large amount of unannotated data and a small amount of annotated data. The results obtained with this method on the E2E challenge benchmark show that the method can achieve a score similar to that of the challenge winner BIBREF3 but with far less annotated data and without using any pre-processing (delexicalisation, data augmentation) or re-scoring tricks. Results on the challenging Wikipedia company dataset show that the highest score can be achieved by mixing paired and unpaired datasets. These results are at the state-of-the-art level BIBREF7 but without using any pointer generator or coverage mechanisms. These findings open the way to the exploitation of unannotated data, since the lack of large annotated data sources is the current bottleneck in the development of E2E NLG systems for new applications. Next steps of this research include replacing the ST Gumbel-Softmax with reinforcement learning techniques such as policy gradient. This is particularly interesting as, with policy gradient, we will be able to design reward functions that better suit the problem we are trying to solve. Furthermore, it would be interesting to evaluate how the pointer generator mechanism BIBREF21 and the coverage mechanism BIBREF29 can be integrated into the learning scheme to increase the non-redundancy and coverage performance of the generation. Acknowledgments This project was partly funded by the IDEX Université Grenoble Alpes innovation grant (AI4I-2018-2019) and the Région Auvergne-Rhône-Alpes (AISUA-2018-2019).
E2E NLG challenge Dataset, The Wikipedia Company Dataset
e28a6e3d8f3aa303e1e0daff26b659a842aba97b
e28a6e3d8f3aa303e1e0daff26b659a842aba97b_0
Q: Did they compare to Transformer based large language models? Text: Introduction Story generation is an important but challenging task because it requires to deal with logic and implicit knowledge BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Story ending generation aims at concluding a story and completing the plot given a story context. We argue that solving this task involves addressing the following issues: 1) Representing the context clues which contain key information for planning a reasonable ending; and 2) Using implicit knowledge (e.g., commonsense knowledge) to facilitate understanding of the story and better predict what will happen next. Comparing to textual entailment or reading comprehension BIBREF6 , BIBREF7 story ending generation requires more to deal with the logic and causality information that may span multiple sentences in a story context. The logic information in story can be captured by the appropriate sequence of events or entities occurring in a sequence of sentences, and the chronological order or causal relationship between events or entities. The ending should be generated from the whole context clue rather than merely inferred from a single entity or the last sentence. It is thus important for story ending generation to represent the context clues for predicting what will happen in an ending. However, deciding a reasonable ending not only depends on representing the context clues properly, but also on the ability of language understanding with implicit knowledge that is beyond the text surface. Humans use their own experiences and implicit knowledge to help understand a story. As shown in Figure 1 , the ending talks about candy which can be viewed as commonsense knowledge about Halloween. Such knowledge can be crucial for story ending generation. Figure 1 shows an example of a typical story in the ROCStories corpus BIBREF8 . In this example, the events or entities in the story context constitute the context clues which reveal the logical or causal relationships between events or entities. These concepts, including Halloween, trick or treat, and monster, are connected as a graph structure. A reasonable ending should consider all the connected concepts rather than just some individual one. Furthermore, with the help of commonsense knowledge retrieved from ConceptNet BIBREF9 , it is easier to infer a reasonable ending with the knowledge that candy is highly related to Halloween. To address the two issues in story ending generation, we devise a model that is equipped with an incremental encoding scheme to encode context clues effectively, and a multi-source attention mechanism to use commonsense knowledge. The representation of the context clues is built through incremental reading (or encoding) of the sentences in the story context one by one. When encoding a current sentence in a story context, the model can attend not only to the words in the preceding sentence, but also the knowledge graphs which are retrieved from ConceptNet for each word. In this manner, commonsense knowledge can be encoded in the model through graph representation techniques, and therefore, be used to facilitate understanding story context and inferring coherent endings. Integrating the context clues and commonsense knowledge, the model can generate more reasonable endings than state-of-the-art baselines. 
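As a minimal illustration of the one-hop retrieval described above, the sketch below looks up triples for each query word in a toy in-memory triple store. The triples and helper names are made up for the example, and the cap of at most 10 triples per word follows the dataset description later in the paper; the actual system queries ConceptNet.

```python
# Toy illustration of one-hop commonsense retrieval: for each query
# word, collect the triples whose head matches the word. The triple
# store below is a hand-made stand-in for ConceptNet.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (head, relation, tail)

TOY_TRIPLES: List[Triple] = [
    ("halloween", "RelatedTo", "candy"),
    ("halloween", "HasSubevent", "trick_or_treat"),
    ("monster", "RelatedTo", "scary"),
    ("candy", "IsA", "food"),
]

def one_hop_graph(word: str, triples: List[Triple],
                  max_triples: int = 10) -> List[Triple]:
    """Return up to max_triples triples whose head is the query word."""
    return [t for t in triples if t[0] == word][:max_triples]

for w in ["halloween", "monster", "pumpkin"]:
    print(w, "->", one_hop_graph(w, TOY_TRIPLES))
# "pumpkin" has no triples in the toy store, so it gets an empty graph.
```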
Our contributions are as follows: Related Work The corpus we used in this paper was first designed for the Story Cloze Test (SCT) BIBREF10 , which requires selecting a correct ending from two candidates given a story context. Feature-based BIBREF11 , BIBREF12 or neural BIBREF8 , BIBREF13 classification models are proposed to measure the coherence between a candidate ending and a story context from various aspects such as event, sentiment, and topic. However, story ending generation BIBREF14 , BIBREF15 , BIBREF16 is more challenging in that the task requires modeling context clues and implicit knowledge to produce reasonable endings. Story generation, moving forward to complete story comprehension, is approached as selecting a sequence of events to form a story by satisfying a set of criteria BIBREF0 . Previous studies can be roughly categorized into two lines: rule-based methods and neural models. Most of the traditional rule-based methods for story generation BIBREF0 , BIBREF1 retrieve events from a knowledge base with some pre-specified semantic relations. Neural models for story generation have been widely studied with sequence-to-sequence (seq2seq) learning BIBREF17 . Various contents such as photos and independent descriptions are largely used to inspire the story BIBREF3 . To capture the deep meaning of key entities and events, BIBREF2 ( BIBREF2 ) and BIBREF5 ( BIBREF5 ) explicitly modeled the entities mentioned in the story with dynamic representations, and BIBREF4 ( BIBREF4 ) decomposed the problem into planning successive events and generating sentences from some given events. BIBREF18 ( BIBREF18 ) adopted a hierarchical architecture to generate the whole story from some given keywords. Commonsense knowledge is beneficial for many natural language tasks such as semantic reasoning and text entailment, and it is particularly important for story generation. BIBREF19 ( BIBREF19 ) characterized the types of commonsense knowledge mostly involved in recognizing textual entailment. Afterwards, commonsense knowledge was used in natural language inference BIBREF20 , BIBREF21 and language generation BIBREF22 . BIBREF23 ( BIBREF23 ) incorporated external commonsense knowledge into a neural cloze-style reading comprehension model. BIBREF24 ( BIBREF24 ) performed commonsense inference on the intents and reactions of an event's participants given a short text. Similarly, BIBREF25 ( BIBREF25 ) introduced a new annotation framework to explain the psychology of story characters with commonsense knowledge. Commonsense knowledge has also been shown useful for choosing a correct story ending from two candidate endings BIBREF12 , BIBREF26 . Overview The task of story ending generation can be stated as follows: given a story context consisting of a sentence sequence $X=\lbrace X_1, X_2, \cdots , X_K\rbrace $ , where $X_i=x_1^{(i)}x_2^{(i)}\cdots x_{l_i}^{(i)}$ represents the $i$ -th sentence containing $l_i$ words, the model should generate a one-sentence ending $Y=y_1y_2...y_l$ which is reasonable in logic, formally as $${Y^*} = \mathop {argmax}\limits _{Y} \mathcal {P}(Y|X).$$ (Eq. 9) As aforementioned, context clues and commonsense knowledge are important for modeling the logic and causal information in story ending generation. To this end, we devise an incremental encoding scheme based on the general encoder-decoder framework BIBREF27 . 
As shown in Figure 2 , the scheme encodes the sentences in a story context incrementally with a multi-source attention mechanism: when encoding sentence $X_{i}$ , the encoder obtains a context vector which is an attentive read of the hidden states, and the graph vectors of the preceding sentence $X_{i-1}$ . In this manner, the relationship between words (some are entities or events) in sentence $X_{i-1}$ and those in $X_{i}$ is built incrementally, and therefore, the chronological order or causal relationship between entities (or events) in adjacent sentences can be captured implicitly. To leverage commonsense knowledge which is important for generating a reasonable ending, a one-hop knowledge graph for each word in a sentence is retrieved from ConceptNet, and each graph can be represented by a vector in two ways. The incremental encoder not only attends to the hidden states of $X_{i-1}$ , but also to the graph vectors at each position of $X_{i-1}$ . By this means, our model can generate more reasonable endings by representing context clues and encoding commonsense knowledge. Background: Encoder-Decoder Framework The encoder-decoder framework is a general framework widely used in text generation. Formally, the model encodes the input sequence $X=x_1x_2\cdots x_m$ into a sequence of hidden states, as follows, $$\textbf {h}_{t} &= \mathbf {LSTM}(\textbf {h}_{t-1}, \mathbf {e}(x_t)), $$ (Eq. 11) where $\textbf {h}_{t}$ denotes the hidden state at step $t$ and $\mathbf {e}(x)$ is the word vector of $x$ . At each decoding position, the framework will generate a word by sampling from the word distribution $\mathcal {P}(y_t|y_{<t},X)$ ( $y_{<t}=y_1y_2\cdots y_{t-1}$ denotes the sequences that has been generated before step $t$ ), which is computed as follows: $$&\mathcal {P}(y_t|y_{<t}, X) = \mathbf {softmax}(\textbf {W}_{0}\mathbf {s}_{t}+\textbf {b}_0), \\ &\textbf {s}_{ t} = \mathbf {LSTM}(\textbf {s}_{ t-1}, \mathbf {e}(y_{t-1}), \textbf {c}_{t-1}), $$ (Eq. 12) where $\textbf {s}_t$ denotes the decoder state at step $t$ . When an attention mechanism is applied, $\textbf {c}_{t-1}$ is an attentive read of the context, which is a weighted sum of the encoder's hidden states as $\textbf {c}_{t-1}=\sum _{i=1}^m\alpha _{(t-1)i}\textbf {h}_i$ , and $\alpha _{(t-1)i}$ measures the association between the decoder state $\textbf {s}_{t-1}$ and the encoder state $\textbf {h}_i$ . Refer to BIBREF28 for more details. Incremental Encoding Scheme Straightforward solutions for encoding the story context can be: 1) Concatenating the $K$ sentences to a long sentence and encoding it with an LSTM ; or 2) Using a hierarchical LSTM with hierarchical attention BIBREF29 , which firstly attends to the hidden states of a sentence-level LSTM, and then to the states of a word-level LSTM. However, these solutions are not effective to represent the context clues which may capture the key logic information. Such information revealed by the chronological order or causal relationship between events or entities in adjacent sentences. To better represent the context clues, we propose an incremental encoding scheme: when encoding the current sentence $X_i$ , it obtains a context vector which is an attentive read of the preceding sentence $X_{i-1}$ . In this manner, the order/relationship between the words in adjacent sentences can be captured implicitly. 
This process can be stated formally as follows: $$\textbf {h}_{j}^{(i)} = \mathbf {LSTM}(\textbf {h}_{j-1}^{(i)}, \mathbf {e}(x_j^{(i)}), \textbf {c}_{\textbf {l}j}^{(i)}), ~i\ge 2. $$ (Eq. 14) where $\textbf {h}^{(i)}_{j}$ denotes the hidden state at the $j$ -th position of the $i$ -th sentence, $\mathbf {e}(x_j^{(i)})$ denotes the word vector of the $j$ -th word $x_j^{(i)}$ . $\textbf {c}_{\textbf {l},j}^{(i)}$ is the context vector which is an attentive read of the preceding sentence $X_{i-1}$ , conditioned on $\textbf {h}^{(i)}_{j-1}$ . We will describe the context vector in the next section. During the decoding process, the decoder obtains a context vector from the last sentence $X_{K}$ in the context to utilize the context clues. The hidden state is obtained as below: $$&\textbf {s}_{t} = \mathbf {LSTM}(\textbf {s}_{t-1}, \mathbf {e}(y_{t-1}), \textbf {c}_{\textbf {l}t}), $$ (Eq. 15) where $\textbf {c}_{\textbf {l}t}$ is the context vector which is the attentive read of the last sentence $X_K$ , conditioned on $\textbf {s}_{t-1}$ . More details of the context vector will be presented in the next section. Multi-Source Attention (MSA) The context vector ( $\textbf {c}_{\textbf {l}}$ ) plays a key role in representing the context clues because it captures the relationship between words (or states) in the current sentence and those in the preceding sentence. As aforementioned, story comprehension sometime requires the access of implicit knowledge that is beyond the text. Therefore, the context vector consists of two parts, computed with multi-source attention. The first one $\textbf {c}_{\textbf {h}j}^{(i)}$ is derived by attending to the hidden states of the preceding sentence, and the second one $\textbf {c}_{\textbf {x}j}^{(i)}$ by attending to the knowledge graph vectors which represent the one-hop graphs in the preceding sentence. The MSA context vector is computed as follows: $$\textbf {c}_{\textbf {l}j}^{(i)} = \textbf {W}_\textbf {l}([\textbf {c}_{\textbf {h}j}^{(i)}; \textbf {c}_{\textbf {x}j}^{(i)}])+\textbf {b}_\textbf {l},$$ (Eq. 17) where $\oplus $ indicates vector concatenation. Hereafter, $\textbf {c}_{\textbf {h}j}^{(i)}$ is called state context vector, and $\textbf {c}_{\textbf {x}j}^{(i)}$ is called knowledge context vector. The state context vector is a weighted sum of the hidden states of the preceding sentence $X_{i-1}$ and can be computed as follows: $$\textbf {c}_{\textbf {h}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{h_k,j}^{(i)}\textbf {h}_{k}^{(i-1)}, \\ \alpha _{h_k,j}^{(i)} &= \frac{e^{\beta _{h_k,j}^{(i)}}}{\;\sum \limits _{m=1}^{l_{i-1}}e^{\beta _{h_m,j}^{(i)}}\;},\\ \beta _{h_k,j}^{(i)} &= \textbf {h}_{j-1}^{(i)\rm T}\textbf {W}_\textbf {s} \textbf {h}_k^{(i-1)},$$ (Eq. 18) where $\beta _{h_k,j}^{(i)}$ can be viewed as a weight between hidden state $\textbf {h}_{j-1}^{(i)}$ in sentence $X_i$ and hidden state $\textbf {h}_k^{(i-1)}$ in the preceding sentence $X_{i-1}$ . Similarly, the knowledge context vector is a weighted sum of the graph vectors for the preceding sentence. Each word in a sentence will be used as a query to retrieve a one-hop commonsense knowledge graph from ConceptNet, and then, each graph will be represented by a graph vector. 
After obtaining the graph vectors, the knowledge context vector can be computed by: $$\textbf {c}_{\textbf {x}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{x_k,j}^{(i)}\textbf {g}(x_{k}^{(i-1)}), \\ \alpha _{x_k,j}^{(i)} &= \frac{e^{\beta _{x_k,j}^{(i)}}}{\;\sum \limits _{m=1}^{l_{i-1}}e^{\beta _{x_m,j}^{(i)}}\;},\\ \beta _{x_k,j}^{(i)} &= \textbf {h}_{j-1}^{(i)\rm T}\textbf {W} _\textbf {k}\textbf {g}(x_k^{(i-1)}),$$ (Eq. 19) where $\textbf {g}(x_k^{(i-1)})$ is the graph vector for the graph which is retrieved for word $x_k^{(i-1)}$ . Different from $\mathbf {e}(x_k^{(i-1)})$ which is the word vector, $\textbf {g}(x_k^{(i-1)})$ encodes commonsense knowledge and extends the semantic representation of a word through neighboring entities and relations. During the decoding process, the knowledge context vectors are similarly computed by attending to the last input sentence $X_K$ . There is no need to attend to all the context sentences because the context clues have been propagated within the incremental encoding scheme. Knowledge Graph Representation Commonsense knowledge can facilitate language understanding and generation. To retrieve commonsense knowledge for story comprehension, we resort to ConceptNet BIBREF9 . ConceptNet is a semantic network which consists of triples $R=(h, r, t)$ meaning that head concept $h$ has the relation $r$ with tail concept $t$ . Each word in a sentence is used as a query to retrieve a one-hop graph from ConceptNet. The knowledge graph for a word extends (encodes) its meaning by representing the graph from neighboring concepts and relations. There have been a few approaches to represent commonsense knowledge. Since our focus in this paper is on using knowledge to benefit story ending generation, instead of devising new methods for representing knowledge, we adopt two existing methods: 1) graph attention BIBREF30 , BIBREF22 , and 2) contextual attention BIBREF23 . We compared the two means of knowledge representation in the experiment. Formally, the knowledge graph of word (or concept) $x$ is represented by a set of triples, $\mathbf {G}(x)=\lbrace R_1, R_2, \cdots , R_{N_x}\rbrace $ (where each triple $R_i$ has the same head concept $x$ ), and the graph vector $\mathbf {g}(x)$ for word $x$ can be computed via graph attention, as below: $$\textbf {g}(x) &= \sum _{i = 1}^{N_x}\alpha _{R_i}[\textbf {h}_i ; \textbf {t}_i],\\ \alpha _{R_i} &= \frac{e^{\beta _{R_i}}}{\;\sum \limits _{j=1}^{N_x}e^{\beta _{R_j}}\;},\\ \beta _{R_i} = (\textbf {W}_{\textbf {r}}&\textbf {r}_i)^{\rm T}\mathop {tanh}(\textbf {W}_{\textbf {h}}\textbf {h}_i+\textbf {W}_{\textbf {t}}\textbf {t}_i),$$ (Eq. 23) where $(h_i, r_i, t_i) = R_i \in \mathbf {G}(x)$ is the $i$ -th triple in the graph. We use word vectors to represent concepts, i.e. $\textbf {h}_i = \mathbf {e}(h_i), \textbf {t}_i = \mathbf {e}(t_i)$ , and learn trainable vector $\textbf {r}_i$ for relation $r_i$ , which is randomly initialized. Intuitively, the above formulation assumes that the knowledge meaning of a word can be represented by its neighboring concepts (and corresponding relations) in the knowledge base. Note that entities in ConceptNet are common words (such as tree, leaf, animal), we thus use word vectors to represent h/r/t directly, instead of using geometric embedding methods (e.g., TransE) to learn entity and relation embeddings. In this way, there is no need to bridge the representation gap between geometric embeddings and text-contextual embeddings (i.e., word vectors). 
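The graph-attention computation above can be sketched in a few lines of NumPy. The toy dimensions, random initialisation, and variable names below are illustrative assumptions rather than the authors' implementation; the code only mirrors the equations.

```python
# NumPy sketch of graph attention (Eq. 23): each triple (h_i, r_i, t_i)
# is scored by beta_i = (W_r r_i)^T tanh(W_h h_i + W_t t_i), the scores
# are softmax-normalised, and the graph vector is the attention-weighted
# sum of the concatenations [h_i ; t_i].

import numpy as np

rng = np.random.default_rng(0)
d = 4          # toy embedding size
n_triples = 3  # number of triples in the one-hop graph

# Toy head/relation/tail vectors (word vectors in the paper).
H = rng.normal(size=(n_triples, d))
R = rng.normal(size=(n_triples, d))
T = rng.normal(size=(n_triples, d))

# Trainable projections (randomly initialised here).
W_r = rng.normal(size=(d, d))
W_h = rng.normal(size=(d, d))
W_t = rng.normal(size=(d, d))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Attention scores over the triples.
beta = np.array([(W_r @ R[i]) @ np.tanh(W_h @ H[i] + W_t @ T[i])
                 for i in range(n_triples)])
alpha = softmax(beta)

# Graph vector: weighted sum of [h_i ; t_i], so it has size 2 * d.
g = sum(alpha[i] * np.concatenate([H[i], T[i]]) for i in range(n_triples))
print(g.shape)  # (8,)
```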
When using contextual attention, the graph vector $\textbf {g}(x)$ can be computed as follows: $$\textbf {g}(x)&=\sum _{i=1}^{N_x}\alpha _{R_i}\textbf {M}_{R_i},\\ \textbf {M}_{R_i}&=BiGRU(\textbf {h}_i,\textbf {r}_i,\textbf {t}_i),\\ \alpha _{R_i} &= \frac{e^{\beta _{R_i}}}{\;\sum \limits _{j=1}^{N_x}e^{\beta _{R_j}}\;},\\ \beta _{R_i}&= \textbf {h}_{(x)}^{\rm T}\textbf {W}_\textbf {c}\textbf {M}_{R_i},$$ (Eq. 25) where $\textbf {M}_{R_i}$ is the final state of a BiGRU connecting the elements of triple $R_i$ , which can be seen as the knowledge memory of the $i$ -th triple, while $\textbf {h}_{(x)}$ denotes the hidden state at the encoding position of word $x$ . Loss Function As aforementioned, the incremental encoding scheme is central for story ending generation. To better model the chronological order and causal relationship between adjacent sentences, we impose supervision on the encoding network. At each encoding step, we also generate a distribution over the vocabulary, very similar to the decoding process: $$\mathcal {P}(y_t|y_{<t}, X) =\mathbf {softmax}(\textbf {W}_{0}\textbf {h}_{j}^{(i)}+\textbf {b}_0),$$ (Eq. 27) Then, we calculate the negative data likelihood as loss function: $$\Phi &= \Phi _{en} + \Phi _{de}\\ \Phi _{en} &= \sum _{i=2}^K\sum _{j=1}^{l_i} - \log \mathcal {P}(x_j^{(i)}=\widetilde{x}_j^{(i)}|x_{<j}^{(i)}, X_{<i}),\\ \Phi _{de} &= \sum _t - \log \mathcal {P}(y_t=\tilde{y}_t|y_{<t}, X),$$ (Eq. 28) where $\widetilde{x}_j^{(i)}$ means the reference word used for encoding at the $j$ -th position in sentence $i$ , and $\tilde{y}_t$ represents the $j$ -th word in the reference ending. Such an approach does not mean that at each step there is only one correct next sentence, exactly as many other generation tasks. Experiments show that it is better in logic than merely imposing supervision on the decoding network. Dataset We evaluated our model on the ROCStories corpus BIBREF10 . The corpus contains 98,162 five-sentence stories for evaluating story understanding and script learning. The original task is designed to select a correct story ending from two candidates, while our task is to generate a reasonable ending given a four-sentence story context. We randomly selected 90,000 stories for training and the left 8,162 for evaluation. The average number of words in $X_1/X_2/X_3/X_4/Y$ is 8.9/9.9/10.1/10.0/10.5 respectively. The training data contains 43,095 unique words, and 11,192 words appear more than 10 times. For each word, we retrieved a set of triples from ConceptNet and stored those whose head entity and tail entity are noun or verb, meanwhile both occurring in SCT. Moreover, we retained at most 10 triples if there are too many. The average number of triples for each query word is 3.4. Baselines We compared our models with the following state-of-the-art baselines: Sequence to Sequence (Seq2Seq): A simple encoder-decoder model which concatenates four sentences to a long sentence with an attention mechanism BIBREF31 . Hierarchical LSTM (HLSTM): The story context is represented by a hierarchical LSTM: a word-level LSTM for each sentence and a sentence-level LSTM connecting the four sentences BIBREF29 . A hierarchical attention mechanism is applied, which attends to the states of the two LSTMs respectively. HLSTM+Copy: The copy mechanism BIBREF32 is applied to hierarchical states to copy the words in the story context for generation. HLSTM+Graph Attention(GA): We applied multi-source attention HLSTM where commonsense knowledge is encoded by graph attention. 
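Before turning to the experiments, the following NumPy sketch recaps how the multi-source attention of Eqs. (17)-(19) combines the state context vector and the knowledge context vector. The toy dimensions, random initialisation, and variable names are illustrative assumptions, not the released model.

```python
# NumPy sketch of multi-source attention (Eqs. 17-19): the current
# hidden state attends over (a) the hidden states and (b) the graph
# vectors of the preceding sentence, and the two context vectors are
# merged with a linear layer.

import numpy as np

rng = np.random.default_rng(1)
d, d_g, L = 6, 8, 5      # hidden size, graph-vector size, length of X_{i-1}

h_prev = rng.normal(size=(L, d))    # hidden states of sentence X_{i-1}
g_prev = rng.normal(size=(L, d_g))  # graph vectors g(x_k) of X_{i-1}
h_cur = rng.normal(size=d)          # h_{j-1}^{(i)}: current state

W_s = rng.normal(size=(d, d))        # bilinear weights for state attention
W_k = rng.normal(size=(d, d_g))      # bilinear weights for knowledge attention
W_l = rng.normal(size=(d, d + d_g))  # merge layer
b_l = np.zeros(d)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# State context vector c_h (Eq. 18): beta_k = h_cur^T W_s h_k.
beta_h = h_prev @ (W_s.T @ h_cur)
c_h = softmax(beta_h) @ h_prev

# Knowledge context vector c_x (Eq. 19): beta_k = h_cur^T W_k g(x_k).
beta_x = g_prev @ (W_k.T @ h_cur)
c_x = softmax(beta_x) @ g_prev

# Multi-source context vector c_l (Eq. 17).
c_l = W_l @ np.concatenate([c_h, c_x]) + b_l
print(c_l.shape)  # (6,)
```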
HLSTM+Contextual Attention(CA): Contextual attention is applied to represent commonsense knowledge. Experiment Settings The parameters are set as follows: GloVe.6B BIBREF33 is used as word vectors, the vocabulary size is set to 10,000, and the word vector dimension to 200. We applied 2-layer LSTM units with 512-dimension hidden states. These settings were applied to all the baselines. The parameters of the LSTMs (Eq. 14 and 15 ) are shared by the encoder and the decoder. Automatic Evaluation We conducted the automatic evaluation on the 8,162 stories (the entire test set). We generated endings from all the models for each story context. We adopted perplexity (PPL) and BLEU BIBREF34 to evaluate the generation performance. Smaller perplexity scores indicate better performance. BLEU evaluates $n$ -gram overlap between a generated ending and a reference ending. However, since there is only one reference ending for each story context, BLEU scores will become extremely low for larger $n$ . We thus experimented with $n=1,2$ . Note also that there may exist multiple reasonable endings for the same story context. The results of the automatic evaluation are shown in Table 1 , where IE means a simple incremental encoding framework that ablates the knowledge context vector from $\textbf {c}_{\textbf {l}}$ in Eq. ( 17 ). Our models have lower perplexity and higher BLEU scores than the baselines. IE and IE+MSA have remarkably lower perplexity than other models. As for BLEU, IE+MSA(CA) obtained the highest BLEU-1 and BLEU-2 scores. This indicates that multi-source attention leads to generating story endings that have more overlap with the reference endings. Manual Evaluation Manual evaluations are indispensable for evaluating the coherence and logic of generated endings. For manual evaluation, we randomly sampled 200 stories from the test set and obtained 1,600 endings from the eight models. Then, we resorted to Amazon Mechanical Turk (MTurk) for annotation. Each ending was scored by three annotators, and majority voting was used to select the final label. We defined two metrics, grammar and logicality, for manual evaluation. Score 0/1/2 is applied to each metric during annotation. Grammar: whether an ending is natural and fluent. Score 2 is for endings without any grammar errors, 1 for endings with a few errors but still understandable, and 0 for endings with severe errors that are incomprehensible. Logicality: whether an ending is reasonable and coherent with the story context in logic. Score 2 is for reasonable endings that are coherent in logic, 1 for relevant endings but with some discrepancy between an ending and a given context, and 0 for totally incompatible endings. Note that the two metrics are scored independently. To produce high-quality annotations, we prepared guidelines and typical examples for each metric score. The results of the manual evaluation are also shown in Table 1 . Note that the difference between IE and IE+MSA is that IE does not attend to knowledge graph vectors in a preceding sentence, and thus it does not use any commonsense knowledge. The incremental encoding scheme without MSA obtained the best grammar score, and our full model IE+MSA(GA) has the best logicality score. All the models have fairly good grammar scores (maximum is 2.0), while the logicality scores differ remarkably and are much lower than the maximum score, indicating the challenges of this task. 
More specifically, incremental encoding is effective due to the facts: 1) IE is significantly better than Seq2Seq and HLSTM in grammar (Sign Test, 1.84 vs. $1.74/1.57$ , p-value= $0.046/0.037$ , respectively), and in logicality (1.10 vs. 0.70/0.84, p-value $<0.001/0.001$ ). 2) IE+MSA is significantly better than HLSTM+MSA in logicality (1.26 vs. 1.06, p-value= $0.014$ for GA; 1.24 vs. 1.02, p-value= $0.022$ for CA). This indicates that incremental encoding is more powerful than traditional (Seq2Seq) and hierarchical (HLSTM) encoding/attention in utilizing context clues. Furthermore, using commonsense knowledge leads to significant improvements in logicality. The comparison in logicality between IE+MSA and IE (1.26/1.24 vs. 1.10, p-value= $0.028/0.042$ for GA/CA, respectively), HLSTM+MSA and HLSTM (1.06/1.02 vs. 0.84, p-value $<0.001/0.001$ for GA/CA, respectively), and HLSTM+MSA and HLSTM+Copy (1.06/1.02 vs. 0.90, p-value= $0.044/0.048$ , respectively) all approve this claim. In addition, similar results between GA and CA show that commonsense knowledge is useful but multi-source attention is not sensitive to the knowledge representation scheme. More detailed results are listed in Table 2 . Comparing to other models, IE+MSA has a much larger proportion of endings that are good both in grammar and logicality (2-2). The proportion of good logicality (score=2.0) from IE+MSA is much larger than that from IE (45.0%+5.0%/41.0%+4.0% vs. 36.0%+2.0% for GA/CA, respectively), and also remarkable larger than those from other baselines. Further, HLSTM equipped with MSA is better than those without MSA, indicating that commonsense knowledge is helpful. And the kappa measuring inter-rater agreement is 0.29 for three annotators, which implies a fair agreement. Examples and Attention Visualization We presented an example of generated story endings in Table 3 . Our model generates more natural and reasonable endings than the baselines. In this example, the baselines predicted wrong events in the ending. Baselines (Seq2Seq, HLSTM, and HLSTM+Copy) have predicted improper entities (cake), generated repetitive contents (her family), or copied wrong words (eat). The models equipped with incremental encoding or knowledge through MSA(GA/CA) perform better in this example. The ending by IE+MSA is more coherent in logic, and fluent in grammar. We can see that there may exist multiple reasonable endings for the same story context. In order to verify the ability of our model to utilize the context clues and implicit knowledge when planning the story plot, we visualized the attention weights of this example, as shown in Figure 3 . Note that this example is produced from graph attention. In Figure 3 , phrases in the box are key events of the sentences that are manually highlighted. Words in blue or purple are entities that can be retrieved from ConceptNet, respectively in story context or in ending. An arrow indicates that the words in the current box (e.g., they eat in $X_2$ ) all have largest attention weights to some words in the box of the preceding sentence (e.g., cooking a special meal in $X_1$ ). Black arrows are for state context vector (see Eq. 18 ) and blue for knowledge context vector (see Eq. 19 ). For instance, eat has the largest knowledge attention to meal through the knowledge graph ( $<$ meal, AtLocation, dinner $>$ , $<$ meal, RelatedTo, eat $>$ ). Similarly, eat also has the second largest attention weight to cooking through the knowledge graph. 
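The significance claims above rely on sign tests over paired annotation scores. The paper does not spell out the exact test details (tie handling, sidedness), so the sketch below shows one common two-sided formulation as an assumption, using made-up scores.

```python
# Pure-Python sketch of a two-sided sign test on paired scores from two
# systems: count strict wins per side (ties dropped) and compute a
# two-sided binomial p-value under the null of equal win probability.

from math import comb

def sign_test(scores_a, scores_b):
    """Return (wins_a, wins_b, two-sided p-value) for paired scores."""
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
    n = wins_a + wins_b                      # ties are discarded
    k = max(wins_a, wins_b)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return wins_a, wins_b, min(1.0, 2 * tail)

# Tiny made-up example: 0/1/2 logicality scores from two systems.
a = [2, 1, 2, 2, 1, 2, 0, 2, 1, 2]
b = [1, 1, 0, 2, 1, 1, 0, 1, 1, 1]
print(sign_test(a, b))  # (5, 0, 0.0625)
```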
For attention weights of state context vector, both words in perfects everything has the largest weight to some of everything to be just right (everything $\rightarrow $ everything, perfect $\rightarrow $ right). The example illustrates how the connection between context clues are built through incremental encoding and use of commonsense knowledge. The chain of context clues, such as ${be\_cooking}\rightarrow {want\_everything\_be\_right}\rightarrow {perfect\_everything}\rightarrow {lay\_down}\rightarrow {get\_back}$ , and the commonsense knowledge, such as $<$ cook, AtLocation, kitchen $>$ and $<$ oven, UsedFor, burn $>$ , are useful for generating reasonable story endings. Conclusion and Future Work We present a story ending generation model that builds context clues via incremental encoding and leverages commonsense knowledge with multi-source attention. It encodes a story context incrementally with a multi-source attention mechanism to utilize not only context clues but also commonsense knowledge: when encoding a sentence, the model obtains a multi-source context vector which is an attentive read of the words and the corresponding knowledge graphs of the preceding sentence in the story context. Experiments show that our models can generate more coherent and reasonable story endings. As future work, our incremental encoding and multi-source attention for using commonsense knowledge may be applicable to other language generation tasks. Refer to the Appendix for more details. Acknowledgements This work was jointly supported by the National Science Foundation of China (Grant No.61876096/61332007), and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank Prof. Xiaoyan Zhu for her generous support. Appendix A: Annotation Statistics We presented the statistics of annotation agreement in Table 4 . The proportion of the annotations in which at least two annotators (3/3+2/3) assigned the same score to an ending is 96% for grammar and 94% for logicality. We can also see that the 3/3 agreement for logicality is much lower than that for grammar, indicating that logicality is more complicated for annotation than grammar. Appendix B: Error Analysis We analyzed error types by manually checking all 46 bad endings generated by our model, where bad means the average score in terms of at least one metric is not greater than 1. There are 3 typical error types: bad grammar (BG), bad logicality (BL), and other errors. The distribution of types is shown in Table 5 . We also presented some typical cases for each error type in Table 6 . Note that we only took graph attention as example. The first case shows an instance of bad grammar for repetitive generation. The second case shows that our model predicted a wrong entity at the last position where car is obviously more appropriate than daughter. It happens when the attention focuses on the wrong position, but in more cases it happens due to the noise of the commonsense knowledge base. The ending of the third case contains a relevant event work on his own but the event is not consistent to the previous word relieved. Other cases show that our model is not good at dealing with rare words. However, this can be further improved by applying copy mechanism, as our future work. These errors also indicate that story ending generation is challenging, and logic and implicit knowledge plays a central role in this task. 
Appendix C: Attention Visualization The multi-source attention mechanism computes the state context vectors and knowledge context vectors respectively as follows: $$\textbf {c}_{\textbf {h}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{h_k,j}^{(i)}\textbf {h}_{k}^{(i-1)}, \\ \textbf {c}_{\textbf {x}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{x_k,j}^{(i)}\textbf {g}(x_{k}^{(i-1)}), $$ (Eq. 53) The visualization analysis in Section 4.6 “Generated Ending Examples and Attention Visualization" is based on the attention weights ( $\alpha _{h_{k,j}}^{(i)}$ and $\alpha _{x_{k,j}}^{(i)}$ ), as presented in Figure 4 . Similarly we take as example the graph attention method to represent commonsense knowledge. The figure illustrates how the incremental encoding scheme with the multi-source attention utilizes context clues and implicit knowledge. 1) The left column: for utilizing context clues, when the model encodes $X_2$ , cooking in $X_1$ obtains the largest state attention weight ( $\alpha _{h_{k,j}}^{(i)}$ ), which illustrates cooking is an important word (or event) for the context clue. Similarly, the key events in each sentence have largest attention weights to some entities or events in the preceding sentence, which forms the context clue (e.g., perfects in $X_3$ to right in $X_2$ , lay/down in $X_4$ to perfect/everything in $X_3$ , get/back in $Y$ to lay/down in $X_4$ , etc.). 2) The right column: for the use of commonsense knowledge, each sentence has attention weights ( $\alpha _{x_{k,j}}^{(i)}$ ) to the knowledge graphs of the preceding sentence (e.g. eat in $X_2$ to meal in $X_1$ , dinner in $X_3$ to eat in $X_2$ , etc.). In this manner, the knowledge information is added into the encoding process of each sentence, which helps story comprehension for better ending generation (e.g., kitchen in $Y$ to oven in $X_2$ , etc.).
No
0fce128b8aaa327ac0d58ec30cd2ecbea2019baa
0fce128b8aaa327ac0d58ec30cd2ecbea2019baa_0
Q: Which baselines are they using? Text: Introduction Story generation is an important but challenging task because it requires to deal with logic and implicit knowledge BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Story ending generation aims at concluding a story and completing the plot given a story context. We argue that solving this task involves addressing the following issues: 1) Representing the context clues which contain key information for planning a reasonable ending; and 2) Using implicit knowledge (e.g., commonsense knowledge) to facilitate understanding of the story and better predict what will happen next. Comparing to textual entailment or reading comprehension BIBREF6 , BIBREF7 story ending generation requires more to deal with the logic and causality information that may span multiple sentences in a story context. The logic information in story can be captured by the appropriate sequence of events or entities occurring in a sequence of sentences, and the chronological order or causal relationship between events or entities. The ending should be generated from the whole context clue rather than merely inferred from a single entity or the last sentence. It is thus important for story ending generation to represent the context clues for predicting what will happen in an ending. However, deciding a reasonable ending not only depends on representing the context clues properly, but also on the ability of language understanding with implicit knowledge that is beyond the text surface. Humans use their own experiences and implicit knowledge to help understand a story. As shown in Figure 1 , the ending talks about candy which can be viewed as commonsense knowledge about Halloween. Such knowledge can be crucial for story ending generation. Figure 1 shows an example of a typical story in the ROCStories corpus BIBREF8 . In this example, the events or entities in the story context constitute the context clues which reveal the logical or causal relationships between events or entities. These concepts, including Halloween, trick or treat, and monster, are connected as a graph structure. A reasonable ending should consider all the connected concepts rather than just some individual one. Furthermore, with the help of commonsense knowledge retrieved from ConceptNet BIBREF9 , it is easier to infer a reasonable ending with the knowledge that candy is highly related to Halloween. To address the two issues in story ending generation, we devise a model that is equipped with an incremental encoding scheme to encode context clues effectively, and a multi-source attention mechanism to use commonsense knowledge. The representation of the context clues is built through incremental reading (or encoding) of the sentences in the story context one by one. When encoding a current sentence in a story context, the model can attend not only to the words in the preceding sentence, but also the knowledge graphs which are retrieved from ConceptNet for each word. In this manner, commonsense knowledge can be encoded in the model through graph representation techniques, and therefore, be used to facilitate understanding story context and inferring coherent endings. Integrating the context clues and commonsense knowledge, the model can generate more reasonable endings than state-of-the-art baselines. 
Our contributions are as follows: Related Work The corpus we used in this paper was first designed for the Story Cloze Test (SCT) BIBREF10 , which requires selecting a correct ending from two candidates given a story context. Feature-based BIBREF11 , BIBREF12 or neural BIBREF8 , BIBREF13 classification models are proposed to measure the coherence between a candidate ending and a story context from various aspects such as event, sentiment, and topic. However, story ending generation BIBREF14 , BIBREF15 , BIBREF16 is more challenging in that the task requires modeling context clues and implicit knowledge to produce reasonable endings. Story generation, moving forward to complete story comprehension, is approached as selecting a sequence of events to form a story by satisfying a set of criteria BIBREF0 . Previous studies can be roughly categorized into two lines: rule-based methods and neural models. Most of the traditional rule-based methods for story generation BIBREF0 , BIBREF1 retrieve events from a knowledge base with some pre-specified semantic relations. Neural models for story generation have been widely studied with sequence-to-sequence (seq2seq) learning BIBREF17 . Various contents such as photos and independent descriptions are largely used to inspire the story BIBREF3 . To capture the deep meaning of key entities and events, BIBREF2 ( BIBREF2 ) and BIBREF5 ( BIBREF5 ) explicitly modeled the entities mentioned in the story with dynamic representations, and BIBREF4 ( BIBREF4 ) decomposed the problem into planning successive events and generating sentences from some given events. BIBREF18 ( BIBREF18 ) adopted a hierarchical architecture to generate the whole story from some given keywords. Commonsense knowledge is beneficial for many natural language tasks such as semantic reasoning and text entailment, and it is particularly important for story generation. BIBREF19 ( BIBREF19 ) characterized the types of commonsense knowledge mostly involved in recognizing textual entailment. Afterwards, commonsense knowledge was used in natural language inference BIBREF20 , BIBREF21 and language generation BIBREF22 . BIBREF23 ( BIBREF23 ) incorporated external commonsense knowledge into a neural cloze-style reading comprehension model. BIBREF24 ( BIBREF24 ) performed commonsense inference on the intents and reactions of an event's participants given a short text. Similarly, BIBREF25 ( BIBREF25 ) introduced a new annotation framework to explain the psychology of story characters with commonsense knowledge. Commonsense knowledge has also been shown useful for choosing a correct story ending from two candidate endings BIBREF12 , BIBREF26 . Overview The task of story ending generation can be stated as follows: given a story context consisting of a sentence sequence $X=\lbrace X_1, X_2, \cdots , X_K\rbrace $ , where $X_i=x_1^{(i)}x_2^{(i)}\cdots x_{l_i}^{(i)}$ represents the $i$ -th sentence containing $l_i$ words, the model should generate a one-sentence ending $Y=y_1y_2...y_l$ which is reasonable in logic, formally as $${Y^*} = \mathop {argmax}\limits _{Y} \mathcal {P}(Y|X).$$ (Eq. 9) As aforementioned, context clues and commonsense knowledge are important for modeling the logic and causal information in story ending generation. To this end, we devise an incremental encoding scheme based on the general encoder-decoder framework BIBREF27 . 
As shown in Figure 2 , the scheme encodes the sentences in a story context incrementally with a multi-source attention mechanism: when encoding sentence $X_{i}$ , the encoder obtains a context vector which is an attentive read of the hidden states, and the graph vectors of the preceding sentence $X_{i-1}$ . In this manner, the relationship between words (some are entities or events) in sentence $X_{i-1}$ and those in $X_{i}$ is built incrementally, and therefore, the chronological order or causal relationship between entities (or events) in adjacent sentences can be captured implicitly. To leverage commonsense knowledge which is important for generating a reasonable ending, a one-hop knowledge graph for each word in a sentence is retrieved from ConceptNet, and each graph can be represented by a vector in two ways. The incremental encoder not only attends to the hidden states of $X_{i-1}$ , but also to the graph vectors at each position of $X_{i-1}$ . By this means, our model can generate more reasonable endings by representing context clues and encoding commonsense knowledge. Background: Encoder-Decoder Framework The encoder-decoder framework is a general framework widely used in text generation. Formally, the model encodes the input sequence $X=x_1x_2\cdots x_m$ into a sequence of hidden states, as follows, $$\textbf {h}_{t} &= \mathbf {LSTM}(\textbf {h}_{t-1}, \mathbf {e}(x_t)), $$ (Eq. 11) where $\textbf {h}_{t}$ denotes the hidden state at step $t$ and $\mathbf {e}(x)$ is the word vector of $x$ . At each decoding position, the framework will generate a word by sampling from the word distribution $\mathcal {P}(y_t|y_{<t},X)$ ( $y_{<t}=y_1y_2\cdots y_{t-1}$ denotes the sequences that has been generated before step $t$ ), which is computed as follows: $$&\mathcal {P}(y_t|y_{<t}, X) = \mathbf {softmax}(\textbf {W}_{0}\mathbf {s}_{t}+\textbf {b}_0), \\ &\textbf {s}_{ t} = \mathbf {LSTM}(\textbf {s}_{ t-1}, \mathbf {e}(y_{t-1}), \textbf {c}_{t-1}), $$ (Eq. 12) where $\textbf {s}_t$ denotes the decoder state at step $t$ . When an attention mechanism is applied, $\textbf {c}_{t-1}$ is an attentive read of the context, which is a weighted sum of the encoder's hidden states as $\textbf {c}_{t-1}=\sum _{i=1}^m\alpha _{(t-1)i}\textbf {h}_i$ , and $\alpha _{(t-1)i}$ measures the association between the decoder state $\textbf {s}_{t-1}$ and the encoder state $\textbf {h}_i$ . Refer to BIBREF28 for more details. Incremental Encoding Scheme Straightforward solutions for encoding the story context can be: 1) Concatenating the $K$ sentences to a long sentence and encoding it with an LSTM ; or 2) Using a hierarchical LSTM with hierarchical attention BIBREF29 , which firstly attends to the hidden states of a sentence-level LSTM, and then to the states of a word-level LSTM. However, these solutions are not effective to represent the context clues which may capture the key logic information. Such information revealed by the chronological order or causal relationship between events or entities in adjacent sentences. To better represent the context clues, we propose an incremental encoding scheme: when encoding the current sentence $X_i$ , it obtains a context vector which is an attentive read of the preceding sentence $X_{i-1}$ . In this manner, the order/relationship between the words in adjacent sentences can be captured implicitly. 
This process can be stated formally as follows: $$\textbf {h}_{j}^{(i)} = \mathbf {LSTM}(\textbf {h}_{j-1}^{(i)}, \mathbf {e}(x_j^{(i)}), \textbf {c}_{\textbf {l}j}^{(i)}), ~i\ge 2. $$ (Eq. 14) where $\textbf {h}^{(i)}_{j}$ denotes the hidden state at the $j$ -th position of the $i$ -th sentence, $\mathbf {e}(x_j^{(i)})$ denotes the word vector of the $j$ -th word $x_j^{(i)}$ . $\textbf {c}_{\textbf {l},j}^{(i)}$ is the context vector which is an attentive read of the preceding sentence $X_{i-1}$ , conditioned on $\textbf {h}^{(i)}_{j-1}$ . We will describe the context vector in the next section. During the decoding process, the decoder obtains a context vector from the last sentence $X_{K}$ in the context to utilize the context clues. The hidden state is obtained as below: $$&\textbf {s}_{t} = \mathbf {LSTM}(\textbf {s}_{t-1}, \mathbf {e}(y_{t-1}), \textbf {c}_{\textbf {l}t}), $$ (Eq. 15) where $\textbf {c}_{\textbf {l}t}$ is the context vector which is the attentive read of the last sentence $X_K$ , conditioned on $\textbf {s}_{t-1}$ . More details of the context vector will be presented in the next section. Multi-Source Attention (MSA) The context vector ( $\textbf {c}_{\textbf {l}}$ ) plays a key role in representing the context clues because it captures the relationship between words (or states) in the current sentence and those in the preceding sentence. As aforementioned, story comprehension sometime requires the access of implicit knowledge that is beyond the text. Therefore, the context vector consists of two parts, computed with multi-source attention. The first one $\textbf {c}_{\textbf {h}j}^{(i)}$ is derived by attending to the hidden states of the preceding sentence, and the second one $\textbf {c}_{\textbf {x}j}^{(i)}$ by attending to the knowledge graph vectors which represent the one-hop graphs in the preceding sentence. The MSA context vector is computed as follows: $$\textbf {c}_{\textbf {l}j}^{(i)} = \textbf {W}_\textbf {l}([\textbf {c}_{\textbf {h}j}^{(i)}; \textbf {c}_{\textbf {x}j}^{(i)}])+\textbf {b}_\textbf {l},$$ (Eq. 17) where $\oplus $ indicates vector concatenation. Hereafter, $\textbf {c}_{\textbf {h}j}^{(i)}$ is called state context vector, and $\textbf {c}_{\textbf {x}j}^{(i)}$ is called knowledge context vector. The state context vector is a weighted sum of the hidden states of the preceding sentence $X_{i-1}$ and can be computed as follows: $$\textbf {c}_{\textbf {h}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{h_k,j}^{(i)}\textbf {h}_{k}^{(i-1)}, \\ \alpha _{h_k,j}^{(i)} &= \frac{e^{\beta _{h_k,j}^{(i)}}}{\;\sum \limits _{m=1}^{l_{i-1}}e^{\beta _{h_m,j}^{(i)}}\;},\\ \beta _{h_k,j}^{(i)} &= \textbf {h}_{j-1}^{(i)\rm T}\textbf {W}_\textbf {s} \textbf {h}_k^{(i-1)},$$ (Eq. 18) where $\beta _{h_k,j}^{(i)}$ can be viewed as a weight between hidden state $\textbf {h}_{j-1}^{(i)}$ in sentence $X_i$ and hidden state $\textbf {h}_k^{(i-1)}$ in the preceding sentence $X_{i-1}$ . Similarly, the knowledge context vector is a weighted sum of the graph vectors for the preceding sentence. Each word in a sentence will be used as a query to retrieve a one-hop commonsense knowledge graph from ConceptNet, and then, each graph will be represented by a graph vector. 
After obtaining the graph vectors, the knowledge context vector can be computed by: $$\textbf {c}_{\textbf {x}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{x_k,j}^{(i)}\textbf {g}(x_{k}^{(i-1)}), \\ \alpha _{x_k,j}^{(i)} &= \frac{e^{\beta _{x_k,j}^{(i)}}}{\;\sum \limits _{m=1}^{l_{i-1}}e^{\beta _{x_m,j}^{(i)}}\;},\\ \beta _{x_k,j}^{(i)} &= \textbf {h}_{j-1}^{(i)\rm T}\textbf {W} _\textbf {k}\textbf {g}(x_k^{(i-1)}),$$ (Eq. 19) where $\textbf {g}(x_k^{(i-1)})$ is the graph vector for the graph which is retrieved for word $x_k^{(i-1)}$ . Different from $\mathbf {e}(x_k^{(i-1)})$ which is the word vector, $\textbf {g}(x_k^{(i-1)})$ encodes commonsense knowledge and extends the semantic representation of a word through neighboring entities and relations. During the decoding process, the knowledge context vectors are similarly computed by attending to the last input sentence $X_K$ . There is no need to attend to all the context sentences because the context clues have been propagated within the incremental encoding scheme. Knowledge Graph Representation Commonsense knowledge can facilitate language understanding and generation. To retrieve commonsense knowledge for story comprehension, we resort to ConceptNet BIBREF9 . ConceptNet is a semantic network which consists of triples $R=(h, r, t)$ meaning that head concept $h$ has the relation $r$ with tail concept $t$ . Each word in a sentence is used as a query to retrieve a one-hop graph from ConceptNet. The knowledge graph for a word extends (encodes) its meaning by representing the graph from neighboring concepts and relations. There have been a few approaches to represent commonsense knowledge. Since our focus in this paper is on using knowledge to benefit story ending generation, instead of devising new methods for representing knowledge, we adopt two existing methods: 1) graph attention BIBREF30 , BIBREF22 , and 2) contextual attention BIBREF23 . We compared the two means of knowledge representation in the experiment. Formally, the knowledge graph of word (or concept) $x$ is represented by a set of triples, $\mathbf {G}(x)=\lbrace R_1, R_2, \cdots , R_{N_x}\rbrace $ (where each triple $R_i$ has the same head concept $x$ ), and the graph vector $\mathbf {g}(x)$ for word $x$ can be computed via graph attention, as below: $$\textbf {g}(x) &= \sum _{i = 1}^{N_x}\alpha _{R_i}[\textbf {h}_i ; \textbf {t}_i],\\ \alpha _{R_i} &= \frac{e^{\beta _{R_i}}}{\;\sum \limits _{j=1}^{N_x}e^{\beta _{R_j}}\;},\\ \beta _{R_i} = (\textbf {W}_{\textbf {r}}&\textbf {r}_i)^{\rm T}\mathop {tanh}(\textbf {W}_{\textbf {h}}\textbf {h}_i+\textbf {W}_{\textbf {t}}\textbf {t}_i),$$ (Eq. 23) where $(h_i, r_i, t_i) = R_i \in \mathbf {G}(x)$ is the $i$ -th triple in the graph. We use word vectors to represent concepts, i.e. $\textbf {h}_i = \mathbf {e}(h_i), \textbf {t}_i = \mathbf {e}(t_i)$ , and learn trainable vector $\textbf {r}_i$ for relation $r_i$ , which is randomly initialized. Intuitively, the above formulation assumes that the knowledge meaning of a word can be represented by its neighboring concepts (and corresponding relations) in the knowledge base. Note that entities in ConceptNet are common words (such as tree, leaf, animal), we thus use word vectors to represent h/r/t directly, instead of using geometric embedding methods (e.g., TransE) to learn entity and relation embeddings. In this way, there is no need to bridge the representation gap between geometric embeddings and text-contextual embeddings (i.e., word vectors). 
When using contextual attention, the graph vector $\textbf {g}(x)$ can be computed as follows: $$\textbf {g}(x)&=\sum _{i=1}^{N_x}\alpha _{R_i}\textbf {M}_{R_i},\\ \textbf {M}_{R_i}&=BiGRU(\textbf {h}_i,\textbf {r}_i,\textbf {t}_i),\\ \alpha _{R_i} &= \frac{e^{\beta _{R_i}}}{\;\sum \limits _{j=1}^{N_x}e^{\beta _{R_j}}\;},\\ \beta _{R_i}&= \textbf {h}_{(x)}^{\rm T}\textbf {W}_\textbf {c}\textbf {M}_{R_i},$$ (Eq. 25) where $\textbf {M}_{R_i}$ is the final state of a BiGRU connecting the elements of triple $R_i$ , which can be seen as the knowledge memory of the $i$ -th triple, while $\textbf {h}_{(x)}$ denotes the hidden state at the encoding position of word $x$ . Loss Function As aforementioned, the incremental encoding scheme is central for story ending generation. To better model the chronological order and causal relationship between adjacent sentences, we impose supervision on the encoding network. At each encoding step, we also generate a distribution over the vocabulary, very similar to the decoding process: $$\mathcal {P}(y_t|y_{<t}, X) =\mathbf {softmax}(\textbf {W}_{0}\textbf {h}_{j}^{(i)}+\textbf {b}_0),$$ (Eq. 27) Then, we calculate the negative data likelihood as loss function: $$\Phi &= \Phi _{en} + \Phi _{de}\\ \Phi _{en} &= \sum _{i=2}^K\sum _{j=1}^{l_i} - \log \mathcal {P}(x_j^{(i)}=\widetilde{x}_j^{(i)}|x_{<j}^{(i)}, X_{<i}),\\ \Phi _{de} &= \sum _t - \log \mathcal {P}(y_t=\tilde{y}_t|y_{<t}, X),$$ (Eq. 28) where $\widetilde{x}_j^{(i)}$ means the reference word used for encoding at the $j$ -th position in sentence $i$ , and $\tilde{y}_t$ represents the $j$ -th word in the reference ending. Such an approach does not mean that at each step there is only one correct next sentence, exactly as many other generation tasks. Experiments show that it is better in logic than merely imposing supervision on the decoding network. Dataset We evaluated our model on the ROCStories corpus BIBREF10 . The corpus contains 98,162 five-sentence stories for evaluating story understanding and script learning. The original task is designed to select a correct story ending from two candidates, while our task is to generate a reasonable ending given a four-sentence story context. We randomly selected 90,000 stories for training and the left 8,162 for evaluation. The average number of words in $X_1/X_2/X_3/X_4/Y$ is 8.9/9.9/10.1/10.0/10.5 respectively. The training data contains 43,095 unique words, and 11,192 words appear more than 10 times. For each word, we retrieved a set of triples from ConceptNet and stored those whose head entity and tail entity are noun or verb, meanwhile both occurring in SCT. Moreover, we retained at most 10 triples if there are too many. The average number of triples for each query word is 3.4. Baselines We compared our models with the following state-of-the-art baselines: Sequence to Sequence (Seq2Seq): A simple encoder-decoder model which concatenates four sentences to a long sentence with an attention mechanism BIBREF31 . Hierarchical LSTM (HLSTM): The story context is represented by a hierarchical LSTM: a word-level LSTM for each sentence and a sentence-level LSTM connecting the four sentences BIBREF29 . A hierarchical attention mechanism is applied, which attends to the states of the two LSTMs respectively. HLSTM+Copy: The copy mechanism BIBREF32 is applied to hierarchical states to copy the words in the story context for generation. HLSTM+Graph Attention(GA): We applied multi-source attention HLSTM where commonsense knowledge is encoded by graph attention. 
HLSTM+Contextual Attention(CA): Contextual attention is applied to represent commonsense knowledge. Experiment Settings The parameters are set as follows: GloVe.6B BIBREF33 is used as word vectors, the vocabulary size is set to 10,000, and the word vector dimension to 200. We applied 2-layer LSTM units with 512-dimension hidden states. These settings were applied to all the baselines. The parameters of the LSTMs (Eq. 14 and 15 ) are shared by the encoder and the decoder. Automatic Evaluation We conducted the automatic evaluation on the 8,162 stories (the entire test set). We generated endings from all the models for each story context. We adopted perplexity (PPL) and BLEU BIBREF34 to evaluate the generation performance. Smaller perplexity scores indicate better performance. BLEU evaluates $n$ -gram overlap between a generated ending and a reference ending. However, since there is only one reference ending for each story context, BLEU scores will become extremely low for larger $n$ . We thus experimented with $n=1,2$ . Note also that there may exist multiple reasonable endings for the same story context. The results of the automatic evaluation are shown in Table 1 , where IE means a simple incremental encoding framework that ablates the knowledge context vector from $\textbf {c}_{\textbf {l}}$ in Eq. ( 17 ). Our models have lower perplexity and higher BLEU scores than the baselines. IE and IE+MSA have remarkably lower perplexity than other models. As for BLEU, IE+MSA(CA) obtained the highest BLEU-1 and BLEU-2 scores. This indicates that multi-source attention leads to generating story endings that have more overlap with the reference endings. Manual Evaluation Manual evaluations are indispensable for evaluating the coherence and logic of generated endings. For manual evaluation, we randomly sampled 200 stories from the test set and obtained 1,600 endings from the eight models. Then, we resorted to Amazon Mechanical Turk (MTurk) for annotation. Each ending was scored by three annotators, and majority voting was used to select the final label. We defined two metrics, grammar and logicality, for manual evaluation. Score 0/1/2 is applied to each metric during annotation. Grammar: whether an ending is natural and fluent. Score 2 is for endings without any grammar errors, 1 for endings with a few errors but still understandable, and 0 for endings with severe errors that are incomprehensible. Logicality: whether an ending is reasonable and coherent with the story context in logic. Score 2 is for reasonable endings that are coherent in logic, 1 for relevant endings but with some discrepancy between an ending and a given context, and 0 for totally incompatible endings. Note that the two metrics are scored independently. To produce high-quality annotations, we prepared guidelines and typical examples for each metric score. The results of the manual evaluation are also shown in Table 1 . Note that the difference between IE and IE+MSA is that IE does not attend to knowledge graph vectors in a preceding sentence, and thus it does not use any commonsense knowledge. The incremental encoding scheme without MSA obtained the best grammar score, and our full model IE+MSA(GA) has the best logicality score. All the models have fairly good grammar scores (maximum is 2.0), while the logicality scores differ remarkably and are much lower than the maximum score, indicating the challenges of this task. 
More specifically, incremental encoding is effective due to the facts: 1) IE is significantly better than Seq2Seq and HLSTM in grammar (Sign Test, 1.84 vs. $1.74/1.57$ , p-value= $0.046/0.037$ , respectively), and in logicality (1.10 vs. 0.70/0.84, p-value $<0.001/0.001$ ). 2) IE+MSA is significantly better than HLSTM+MSA in logicality (1.26 vs. 1.06, p-value= $0.014$ for GA; 1.24 vs. 1.02, p-value= $0.022$ for CA). This indicates that incremental encoding is more powerful than traditional (Seq2Seq) and hierarchical (HLSTM) encoding/attention in utilizing context clues. Furthermore, using commonsense knowledge leads to significant improvements in logicality. The comparison in logicality between IE+MSA and IE (1.26/1.24 vs. 1.10, p-value= $0.028/0.042$ for GA/CA, respectively), HLSTM+MSA and HLSTM (1.06/1.02 vs. 0.84, p-value $<0.001/0.001$ for GA/CA, respectively), and HLSTM+MSA and HLSTM+Copy (1.06/1.02 vs. 0.90, p-value= $0.044/0.048$ , respectively) all approve this claim. In addition, similar results between GA and CA show that commonsense knowledge is useful but multi-source attention is not sensitive to the knowledge representation scheme. More detailed results are listed in Table 2 . Comparing to other models, IE+MSA has a much larger proportion of endings that are good both in grammar and logicality (2-2). The proportion of good logicality (score=2.0) from IE+MSA is much larger than that from IE (45.0%+5.0%/41.0%+4.0% vs. 36.0%+2.0% for GA/CA, respectively), and also remarkable larger than those from other baselines. Further, HLSTM equipped with MSA is better than those without MSA, indicating that commonsense knowledge is helpful. And the kappa measuring inter-rater agreement is 0.29 for three annotators, which implies a fair agreement. Examples and Attention Visualization We presented an example of generated story endings in Table 3 . Our model generates more natural and reasonable endings than the baselines. In this example, the baselines predicted wrong events in the ending. Baselines (Seq2Seq, HLSTM, and HLSTM+Copy) have predicted improper entities (cake), generated repetitive contents (her family), or copied wrong words (eat). The models equipped with incremental encoding or knowledge through MSA(GA/CA) perform better in this example. The ending by IE+MSA is more coherent in logic, and fluent in grammar. We can see that there may exist multiple reasonable endings for the same story context. In order to verify the ability of our model to utilize the context clues and implicit knowledge when planning the story plot, we visualized the attention weights of this example, as shown in Figure 3 . Note that this example is produced from graph attention. In Figure 3 , phrases in the box are key events of the sentences that are manually highlighted. Words in blue or purple are entities that can be retrieved from ConceptNet, respectively in story context or in ending. An arrow indicates that the words in the current box (e.g., they eat in $X_2$ ) all have largest attention weights to some words in the box of the preceding sentence (e.g., cooking a special meal in $X_1$ ). Black arrows are for state context vector (see Eq. 18 ) and blue for knowledge context vector (see Eq. 19 ). For instance, eat has the largest knowledge attention to meal through the knowledge graph ( $<$ meal, AtLocation, dinner $>$ , $<$ meal, RelatedTo, eat $>$ ). Similarly, eat also has the second largest attention weight to cooking through the knowledge graph. 
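The significance values above come from a Sign Test over paired manual scores. A minimal sketch of an exact two-sided sign test is given below (ties are dropped, as is standard); the score lists are hypothetical and this is not the authors' evaluation script.

from math import comb

def sign_test(scores_a, scores_b):
    # Paired comparison: count items where model A beats model B and vice versa.
    wins = sum(1 for a, b in zip(scores_a, scores_b) if a > b)
    losses = sum(1 for a, b in zip(scores_a, scores_b) if a < b)
    n = wins + losses                       # ties are ignored
    k = min(wins, losses)
    # Exact two-sided binomial p-value under H0: P(win) = 0.5
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical logicality scores (0/1/2) for ten annotated endings.
ie_scores    = [2, 1, 2, 1, 1, 2, 0, 1, 2, 1]
hlstm_scores = [1, 1, 1, 0, 1, 2, 0, 1, 1, 1]
print("p-value:", sign_test(ie_scores, hlstm_scores))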
For attention weights of state context vector, both words in perfects everything has the largest weight to some of everything to be just right (everything $\rightarrow $ everything, perfect $\rightarrow $ right). The example illustrates how the connection between context clues are built through incremental encoding and use of commonsense knowledge. The chain of context clues, such as ${be\_cooking}\rightarrow {want\_everything\_be\_right}\rightarrow {perfect\_everything}\rightarrow {lay\_down}\rightarrow {get\_back}$ , and the commonsense knowledge, such as $<$ cook, AtLocation, kitchen $>$ and $<$ oven, UsedFor, burn $>$ , are useful for generating reasonable story endings. Conclusion and Future Work We present a story ending generation model that builds context clues via incremental encoding and leverages commonsense knowledge with multi-source attention. It encodes a story context incrementally with a multi-source attention mechanism to utilize not only context clues but also commonsense knowledge: when encoding a sentence, the model obtains a multi-source context vector which is an attentive read of the words and the corresponding knowledge graphs of the preceding sentence in the story context. Experiments show that our models can generate more coherent and reasonable story endings. As future work, our incremental encoding and multi-source attention for using commonsense knowledge may be applicable to other language generation tasks. Refer to the Appendix for more details. Acknowledgements This work was jointly supported by the National Science Foundation of China (Grant No.61876096/61332007), and the National Key R&D Program of China (Grant No. 2018YFC0830200). We would like to thank Prof. Xiaoyan Zhu for her generous support. Appendix A: Annotation Statistics We presented the statistics of annotation agreement in Table 4 . The proportion of the annotations in which at least two annotators (3/3+2/3) assigned the same score to an ending is 96% for grammar and 94% for logicality. We can also see that the 3/3 agreement for logicality is much lower than that for grammar, indicating that logicality is more complicated for annotation than grammar. Appendix B: Error Analysis We analyzed error types by manually checking all 46 bad endings generated by our model, where bad means the average score in terms of at least one metric is not greater than 1. There are 3 typical error types: bad grammar (BG), bad logicality (BL), and other errors. The distribution of types is shown in Table 5 . We also presented some typical cases for each error type in Table 6 . Note that we only took graph attention as example. The first case shows an instance of bad grammar for repetitive generation. The second case shows that our model predicted a wrong entity at the last position where car is obviously more appropriate than daughter. It happens when the attention focuses on the wrong position, but in more cases it happens due to the noise of the commonsense knowledge base. The ending of the third case contains a relevant event work on his own but the event is not consistent to the previous word relieved. Other cases show that our model is not good at dealing with rare words. However, this can be further improved by applying copy mechanism, as our future work. These errors also indicate that story ending generation is challenging, and logic and implicit knowledge plays a central role in this task. 
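Appendix A reports how often all three annotators (3/3) or a majority of them (2/3) assigned the same score. A small sketch of computing these agreement proportions, together with the majority-voted label used in the manual evaluation, is shown below; the sample annotations are hypothetical.

from collections import Counter

def agreement_stats(annotations):
    # annotations: list of per-ending score triples from three annotators
    full, majority, labels = 0, 0, []
    for scores in annotations:
        label, freq = Counter(scores).most_common(1)[0]
        labels.append(label)                 # majority-voted label (arbitrary on a 3-way split)
        if freq == 3:
            full += 1                        # 3/3 agreement
        elif freq == 2:
            majority += 1                    # 2/3 agreement; 3-way splits count to neither
    n = len(annotations)
    return {"3/3": full / n, "2/3": majority / n, "labels": labels}

# Hypothetical grammar scores (0/1/2) from three annotators for four endings.
print(agreement_stats([(2, 2, 2), (2, 1, 2), (1, 0, 2), (2, 2, 1)]))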
Appendix C: Attention Visualization The multi-source attention mechanism computes the state context vectors and knowledge context vectors respectively as follows: $$\textbf {c}_{\textbf {h}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{h_k,j}^{(i)}\textbf {h}_{k}^{(i-1)}, \\ \textbf {c}_{\textbf {x}j}^{(i)} &= \sum _{k = 1}^{l_{i-1}}\alpha _{x_k,j}^{(i)}\textbf {g}(x_{k}^{(i-1)}), $$ (Eq. 53) The visualization analysis in Section 4.6 “Generated Ending Examples and Attention Visualization" is based on the attention weights ( $\alpha _{h_{k,j}}^{(i)}$ and $\alpha _{x_{k,j}}^{(i)}$ ), as presented in Figure 4 . Similarly we take as example the graph attention method to represent commonsense knowledge. The figure illustrates how the incremental encoding scheme with the multi-source attention utilizes context clues and implicit knowledge. 1) The left column: for utilizing context clues, when the model encodes $X_2$ , cooking in $X_1$ obtains the largest state attention weight ( $\alpha _{h_{k,j}}^{(i)}$ ), which illustrates cooking is an important word (or event) for the context clue. Similarly, the key events in each sentence have largest attention weights to some entities or events in the preceding sentence, which forms the context clue (e.g., perfects in $X_3$ to right in $X_2$ , lay/down in $X_4$ to perfect/everything in $X_3$ , get/back in $Y$ to lay/down in $X_4$ , etc.). 2) The right column: for the use of commonsense knowledge, each sentence has attention weights ( $\alpha _{x_{k,j}}^{(i)}$ ) to the knowledge graphs of the preceding sentence (e.g. eat in $X_2$ to meal in $X_1$ , dinner in $X_3$ to eat in $X_2$ , etc.). In this manner, the knowledge information is added into the encoding process of each sentence, which helps story comprehension for better ending generation (e.g., kitchen in $Y$ to oven in $X_2$ , etc.).
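To make Eq. (53) concrete, the sketch below computes the state and knowledge context vectors as attention-weighted sums over the hidden states and graph vectors of the preceding sentence. It is a NumPy illustration of the formula only; the bilinear scoring used to produce the attention weights is an assumption, and in the actual model all parameters are learned end to end.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_source_context(h_query, H_prev, G_prev, W_state, W_know):
    # h_query: (d,) hidden state at the current encoding position j of sentence i
    # H_prev:  (L, d) hidden states of the preceding sentence
    # G_prev:  (L, d) graph vectors g(x) of the words in the preceding sentence
    alpha_h = softmax(H_prev @ W_state @ h_query)   # state attention weights
    alpha_x = softmax(G_prev @ W_know @ h_query)    # knowledge attention weights
    c_h = alpha_h @ H_prev                          # state context vector
    c_x = alpha_x @ G_prev                          # knowledge context vector
    return c_h, c_x, alpha_h, alpha_x

d, L = 8, 5
rng = np.random.default_rng(0)
c_h, c_x, a_h, a_x = multi_source_context(
    rng.normal(size=d), rng.normal(size=(L, d)), rng.normal(size=(L, d)),
    rng.normal(size=(d, d)), rng.normal(size=(d, d)))
print(c_h.shape, c_x.shape, a_h.round(2))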
Seq2Seq, HLSTM, HLSTM+Copy, HLSTM+Graph Attention, HLSTM+Contextual Attention
7a7e279170e7a2f3bc953c37ee393de8ea7bd82f
7a7e279170e7a2f3bc953c37ee393de8ea7bd82f_0
Q: What two types the Chinese reading comprehension dataset consists of? Text: Introduction Machine Reading Comprehension (MRC) has become enormously popular in recent research, which aims to teach the machine to comprehend human languages and answer the questions based on the reading materials. Among various reading comprehension tasks, cloze-style reading comprehension is relatively easy to follow due to its simplicity in definition, which requires the model to fill an exact word into the query to form a coherent sentence according to the document material. Several cloze-style reading comprehension datasets are publicly available, such as CNN/Daily Mail BIBREF0 , Children's Book Test BIBREF1 , People Daily and Children's Fairy Tale BIBREF2 . In this paper, we provide a new Chinese reading comprehension dataset, which has the following features. We also host the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC2017), which has attracted over 30 participants; finally, 17 participants submitted their evaluation systems for testing their reading comprehension models on our newly developed dataset, suggesting its potential impact. We hope the release of the dataset to the public will accelerate the progress of the Chinese research community in the machine reading comprehension field. We also provide four official baselines for the evaluations, including two traditional baselines and two neural baselines. In this paper, we adopt two widely used neural reading comprehension models: AS Reader BIBREF3 and AoA Reader BIBREF4 . The rest of the paper will be organized as follows. In Section 2, we will introduce the related works on reading comprehension datasets, and then the proposed dataset as well as our competitions will be illustrated in Section 3. The baseline and participant system results will be given in Section 4 and we will make a brief conclusion at the end of this paper. Related Works In this section, we will introduce several public cloze-style reading comprehension datasets. CNN/Daily Mail Some news articles often come along with a short summary or brief introduction. Inspired by this, Hermann et al. hermann-etal-2015 released the first cloze-style reading comprehension dataset, called CNN/Daily Mail. Firstly, they obtained large-scale CNN and Daily Mail news data from online websites, including the main body and its summary. Then they regard the main body of the news as the Document. The Query is generated by replacing a named entity word from the summary by a placeholder, and the replaced named entity word becomes the Answer. Along with the techniques illustrated above, after the initial data generation, they also propose to anonymize all named entity tokens in the data to prevent the model from exploiting world knowledge of specific entities, increasing the difficulties in this dataset. However, world knowledge is very important when we do reading comprehension in reality, which makes this dataset more artificial than the real situation. Chen et al. chen-etal-2016 also showed that the proposed anonymization in the CNN/Daily Mail dataset is less useful, and the current models BIBREF3 , BIBREF5 are nearly reaching ceiling performance with the automatically generated dataset, which contains many errors, such as coreference errors, ambiguous questions, etc. Children's Book Test Another popular cloze-style reading comprehension dataset is the Children's Book Test (CBT) proposed by Hill et al. hill-etal-2015 which was built from the children's book stories. 
Though the CBT dataset also use an automatic way for data generation, there are several differences to the CNN/Daily Mail dataset. They regard the first 20 consecutive sentences in a story as the Document and the following 21st sentence as the Query where one token is replaced by a placeholder to indicate the blank to fill in. Unlike the CNN/Daily Mail dataset, in CBT, the replaced word are chosen from various types: Name Entity (NE), Common Nouns (CN), Verbs (V) and Prepositions (P). The experimental results showed that, the verb and preposition answers are not sensitive to the changes of document, so the following works are mainly focusing on solving the NE and CN genres. People Daily & Children's Fairy Tale The previously mentioned datasets are all in English. To add diversities to the reading comprehension datasets, Cui et al. cui-etal-2016 proposed the first Chinese cloze-style reading comprehension dataset: People Daily & Children's Fairy Tale, including People Daily news datasets and Children's Fairy Tale datasets. They also generate the data in an automatic manner, which is similar to the previous datasets. They choose short articles (several hundreds of words) as Document and remove a word from it, whose type is mostly named entities and common nouns. Then the sentence that contains the removed word will be regarded as Query. To add difficulties to the dataset, along with the automatically generated evaluation sets (validation/test), they also release a human-annotated evaluation set. The experimental results show that the human-annotated evaluation set is significantly harder than the automatically generated questions. The reason would be that the automatically generated data is accordance with the training data which is also automatically generated and they share many similar characteristics, which is not the case when it comes to human-annotated data. The Proposed Dataset In this section, we will briefly introduce the evaluation tracks and then the generation method of our dataset will be illustrated in detail. The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017) The proposed dataset is typically used for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), which aims to provide a communication platform to the Chinese communities in the related fields. In this evaluation, we provide two tracks. We provide a shared training data for both tracks and separated evaluation data. Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where training and test set are exactly the same type. User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e. training and testing is fairly different. Following Rajpurkar et al. rajpurkar-etal-2016, we preserve the test set only visible to ourselves and require the participants submit their system in order to provide a fair comparison among participants and avoid tuning performance on the test set. The examples of Cloze and User Query Track are given in Figure 1 . Definition of Cloze Task The cloze-style reading comprehension can be described as a triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ , where $\mathcal {D}$ represents Document, $\mathcal {Q}$ represents Query and the $\mathcal {A}$ represents Answer. 
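As a concrete illustration of the ⟨D, Q, A⟩ formulation above, the sketch below removes a chosen answer word from one sentence of the document and turns that sentence into a query with a placeholder, mirroring the automatic construction used by the datasets discussed in this section. The whitespace tokenization, the English toy text, and the placeholder string are simplifications for illustration.

def make_cloze_sample(sentences, answer):
    # sentences: list of token lists forming the document; answer: a token occurring in one sentence
    document = [tok for sent in sentences for tok in sent]
    assert answer in document, "the answer must appear in the document"
    for sent in sentences:
        if answer in sent:
            query = ["XXXXX" if tok == answer else tok for tok in sent]
            return {"document": document, "query": query, "answer": answer}

doc = [["the", "little", "girl", "walked", "to", "the", "library"],
       ["she", "borrowed", "a", "book", "about", "dinosaurs"]]
print(make_cloze_sample(doc, "book"))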
There is a restriction that the answer should be a single word and should appear in the document, which was also adopted in BIBREF1 , BIBREF2 . In our dataset, we mainly focus on answering common nouns and named entities which require further comprehension of the document. Automatic Generation Following Cui et al. BIBREF2 , we also use similar way to generate our training data automatically. Firstly we roughly collected 20,000 passages from children's reading materials which were crawled in-house. Briefly, we choose an answer word in the document and treat the sentence containing answer word as the query, where the answer is replaced by a placeholder “XXXXX”. The detailed procedures can be illustrated as follows. Pre-processing: For each sentence in the document, we do word segmentation, POS tagging and dependency parsing using LTP toolkit BIBREF6 . Dependency Extraction: Extract following dependencies: COO, SBV, VOB, HED, FOB, IOB, POB, and only preserve the parts that have dependencies. Further Filtering: Only preserve SBV, VOB and restrict the related words not to be pronouns and verbs. Frequency Restriction: After calculating word frequencies, only word frequency that greater than 2 is valid for generating question. Question Restriction: Only five questions can be extracted within one passage. Human Annotation Apart from the automatically generated large-scale training data, we also provide human-annotated validation and test data to improve the estimation quality. The annotation procedure costs one month with 5 annotators and each question is cross-validated by another annotator. The detailed procedure for each type of dataset can be illustrated as follows. For the validation and test set in cloze data, we first randomly choose 5,000 paragraphs each for automatically generating questions using the techniques mentioned above. Then we invite our resource team to manually select 2,000 questions based on the following rules. Whether the question is appropriate and correct Whether the question is hard for LMs to answer Only select one question for each paragraph Unlike the cloze dataset, we have no automatic question generation procedure in this type. In the user query dataset, we asked our annotator to directly raise questions according to the passage, which is much difficult and time-consuming than just selecting automatically generated questions. We also assign 5,000 paragraphs for question annotations in both validation and test data. Following rules are applied in asking questions. The paragraph should be read carefully and judged whether appropriate for asking questions No more than 5 questions for each passage The answer should be better in the type of nouns, named entities to be fully evaluated Too long or too short paragraphs should be skipped Experiments In this section, we will give several baseline systems for evaluating our datasets as well as presenting several top-ranked systems in the competition. Baseline Systems We set several baseline systems for testing basic performance of our datasets and provide meaningful comparisons to the participant systems. In this paper, we provide four baseline systems, including two simple ones and two neural network models. The details of the baseline systems are illustrated as follows. Random Guess: In this baseline, we randomly choose one word in the document as the answer. Top Frequency: We choose the most frequent word in the document as the answer. 
AS Reader: We implemented Attention Sum Reader (AS Reader) BIBREF3 for modeling document and query and predicting the answer with the Pointer Network BIBREF7 , which is a popular framework for cloze-style reading comprehension. Apart from setting embedding and hidden layer size as 256, we did not change other hyper-parameters and experimental setups as used in Kadlec et al. kadlec-etal-2016, nor we tuned the system for further improvements. AoA Reader: We also implemented Attention-over-Attention Reader (AoA Reader) BIBREF4 which is the state-of-the-art model for cloze-style reading comprehension. We follow hyper-parameter settings in AS Reader baseline without further tuning. In the User Query Track, as there is a gap between training and validation, we follow BIBREF8 and regard this task as domain adaptation or transfer learning problem. The neural baselines are built by the following steps. We first use the shared training data to build a general systems, and choose the best performing model (in terms of cloze validation set) as baseline. Use User Query validation data for further tuning the systems with 10-fold cross-validations. Increase dropout rate BIBREF9 to 0.5 for preventing over-fitting issue. All baseline systems are chosen according to the performance of the validation set. Participant Systems The participant system results are given in Table 2 and 3 . As we can see that two neural baselines are competitive among participant systems and AoA Reader successfully outperform AS Reader and all participant systems in single model condition, which proves that it is a strong baseline system even without further fine-tuning procedure. Also, the best performing single model among participant systems failed to win in the ensemble condition, which suggest that choosing right ensemble method is essential in most of the competitions and should be carefully studied for further performance improvements. Not surprisingly, we only received three participant systems in User Query Track, as it is much difficult than Cloze Track. As shown in Table 3 , the test set performance is significantly lower than that of Cloze Track, due to the mismatch between training and test data. The baseline results give competitive performance among three participants, while failed to outperform the best single model by ECNU, which suggest that there is much room for tuning and using more complex methods for domain adaptation. Conclusion In this paper, we propose a new Chinese reading comprehension dataset for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), consisting large-scale automatically generated training set and human-annotated validation and test set. Many participants have verified their algorithms on this dataset and tested on the hidden test set for final evaluation. The experimental results show that the neural baselines are tough to beat and there is still much room for using complicated transfer learning method to better solve the User Query Task. We hope the release of the full dataset (including hidden test set) could help the participants have a better knowledge of their systems and encourage more researchers to do experiments on. Acknowledgements We would like to thank the anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. We thank the Sixteenth China National Conference on Computational Linguistics (CCL 2017) and Nanjing Normal University for providing space for evaluation workshop. 
Also we want to thank our resource team for annotating and verifying evaluation data. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409.
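The AS Reader baseline described in the Experiments section predicts the answer by summing the attention assigned to every occurrence of each candidate word in the document, in a pointer-style fashion. The sketch below shows only this aggregation step over precomputed attention weights; the bidirectional encoder that produces those weights is omitted, and the toy tokens and weights are invented.

from collections import defaultdict

def attention_sum_predict(doc_tokens, attention):
    # doc_tokens: document as a token list; attention: one weight per position (sums to 1)
    scores = defaultdict(float)
    for token, weight in zip(doc_tokens, attention):
        scores[token] += weight          # aggregate attention over repeated occurrences
    return max(scores, key=scores.get)

doc = ["小明", "住", "在", "北京", "，", "北京", "是", "首都"]
attn = [0.05, 0.05, 0.05, 0.30, 0.05, 0.25, 0.05, 0.20]
print(attention_sum_predict(doc, attn))   # "北京" wins with 0.30 + 0.25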
cloze-style reading comprehension and user query reading comprehension questions
e3981a11d3d6a8ab31e1b0aa2de96f253653cfb2
e3981a11d3d6a8ab31e1b0aa2de96f253653cfb2_0
Q: For which languages most of the existing MRC datasets are created? Text: Introduction Machine Reading Comprehension (MRC) has become enormously popular in recent research, which aims to teach the machine to comprehend human languages and answer the questions based on the reading materials. Among various reading comprehension tasks, the cloze-style reaing comprehension is relatively easy to follow due to its simplicity in definition, which requires the model to fill an exact word into the query to form a coherent sentence according to the document material. Several cloze-style reading comprehension datasets are publicly available, such as CNN/Daily Mail BIBREF0 , Children's Book Test BIBREF1 , People Daily and Children's Fairy Tale BIBREF2 . In this paper, we provide a new Chinese reading comprehension dataset, which has the following features We also host the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC2017), which has attracted over 30 participants and finally there were 17 participants submitted their evaluation systems for testing their reading comprehension models on our newly developed dataset, suggesting its potential impact. We hope the release of the dataset to the public will accelerate the progress of Chinese research community on machine reading comprehension field. We also provide four official baselines for the evaluations, including two traditional baselines and two neural baselines. In this paper, we adopt two widely used neural reading comprehension model: AS Reader BIBREF3 and AoA Reader BIBREF4 . The rest of the paper will be organized as follows. In Section 2, we will introduce the related works on the reading comprehension dataset, and then the proposed dataset as well as our competitions will be illustrated in Section 3. The baseline and participant system results will be given in Section 4 and we will made a brief conclusion at the end of this paper. Related Works In this section, we will introduce several public cloze-style reading comprehension dataset. CNN/Daily Mail Some news articles often come along with a short summary or brief introduction. Inspired by this, Hermann et al. hermann-etal-2015 release the first cloze-style reading comprehension dataset, called CNN/Daily Mail. Firstly, they obtained large-scale CNN and Daily Mail news data from online websites, including main body and its summary. Then they regard the main body of the news as the Document. The Query is generated by replacing a name entity word from the summary by a placeholder, and the replaced named entity word becomes the Answer. Along with the techniques illustrated above, after the initial data generation, they also propose to anonymize all named entity tokens in the data to avoid the model exploit world knowledge of specific entities, increasing the difficulties in this dataset. However, as we have known that world knowledge is very important when we do reading comprehension in reality, which makes this dataset much artificial than real situation. Chen et al. chen-etal-2016 also showed that the proposed anonymization in CNN/Daily Mail dataset is less useful, and the current models BIBREF3 , BIBREF5 are nearly reaching ceiling performance with the automatically generated dataset which contains much errors, such as coreference errors, ambiguous questions etc. Children's Book Test Another popular cloze-style reading comprehension dataset is the Children's Book Test (CBT) proposed by Hill et al. hill-etal-2015 which was built from the children's book stories. 
Though the CBT dataset also use an automatic way for data generation, there are several differences to the CNN/Daily Mail dataset. They regard the first 20 consecutive sentences in a story as the Document and the following 21st sentence as the Query where one token is replaced by a placeholder to indicate the blank to fill in. Unlike the CNN/Daily Mail dataset, in CBT, the replaced word are chosen from various types: Name Entity (NE), Common Nouns (CN), Verbs (V) and Prepositions (P). The experimental results showed that, the verb and preposition answers are not sensitive to the changes of document, so the following works are mainly focusing on solving the NE and CN genres. People Daily & Children's Fairy Tale The previously mentioned datasets are all in English. To add diversities to the reading comprehension datasets, Cui et al. cui-etal-2016 proposed the first Chinese cloze-style reading comprehension dataset: People Daily & Children's Fairy Tale, including People Daily news datasets and Children's Fairy Tale datasets. They also generate the data in an automatic manner, which is similar to the previous datasets. They choose short articles (several hundreds of words) as Document and remove a word from it, whose type is mostly named entities and common nouns. Then the sentence that contains the removed word will be regarded as Query. To add difficulties to the dataset, along with the automatically generated evaluation sets (validation/test), they also release a human-annotated evaluation set. The experimental results show that the human-annotated evaluation set is significantly harder than the automatically generated questions. The reason would be that the automatically generated data is accordance with the training data which is also automatically generated and they share many similar characteristics, which is not the case when it comes to human-annotated data. The Proposed Dataset In this section, we will briefly introduce the evaluation tracks and then the generation method of our dataset will be illustrated in detail. The 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017) The proposed dataset is typically used for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), which aims to provide a communication platform to the Chinese communities in the related fields. In this evaluation, we provide two tracks. We provide a shared training data for both tracks and separated evaluation data. Cloze Track: In this track, the participants are required to use the large-scale training data to train their cloze system and evaluate on the cloze evaluation track, where training and test set are exactly the same type. User Query Track: This track is designed for using transfer learning or domain adaptation to minimize the gap between cloze training data and user query evaluation data, i.e. training and testing is fairly different. Following Rajpurkar et al. rajpurkar-etal-2016, we preserve the test set only visible to ourselves and require the participants submit their system in order to provide a fair comparison among participants and avoid tuning performance on the test set. The examples of Cloze and User Query Track are given in Figure 1 . Definition of Cloze Task The cloze-style reading comprehension can be described as a triple $\langle \mathcal {D}, \mathcal {Q}, \mathcal {A} \rangle $ , where $\mathcal {D}$ represents Document, $\mathcal {Q}$ represents Query and the $\mathcal {A}$ represents Answer. 
There is a restriction that the answer should be a single word and should appear in the document, which was also adopted in BIBREF1 , BIBREF2 . In our dataset, we mainly focus on answering common nouns and named entities which require further comprehension of the document. Automatic Generation Following Cui et al. BIBREF2 , we also use similar way to generate our training data automatically. Firstly we roughly collected 20,000 passages from children's reading materials which were crawled in-house. Briefly, we choose an answer word in the document and treat the sentence containing answer word as the query, where the answer is replaced by a placeholder “XXXXX”. The detailed procedures can be illustrated as follows. Pre-processing: For each sentence in the document, we do word segmentation, POS tagging and dependency parsing using LTP toolkit BIBREF6 . Dependency Extraction: Extract following dependencies: COO, SBV, VOB, HED, FOB, IOB, POB, and only preserve the parts that have dependencies. Further Filtering: Only preserve SBV, VOB and restrict the related words not to be pronouns and verbs. Frequency Restriction: After calculating word frequencies, only word frequency that greater than 2 is valid for generating question. Question Restriction: Only five questions can be extracted within one passage. Human Annotation Apart from the automatically generated large-scale training data, we also provide human-annotated validation and test data to improve the estimation quality. The annotation procedure costs one month with 5 annotators and each question is cross-validated by another annotator. The detailed procedure for each type of dataset can be illustrated as follows. For the validation and test set in cloze data, we first randomly choose 5,000 paragraphs each for automatically generating questions using the techniques mentioned above. Then we invite our resource team to manually select 2,000 questions based on the following rules. Whether the question is appropriate and correct Whether the question is hard for LMs to answer Only select one question for each paragraph Unlike the cloze dataset, we have no automatic question generation procedure in this type. In the user query dataset, we asked our annotator to directly raise questions according to the passage, which is much difficult and time-consuming than just selecting automatically generated questions. We also assign 5,000 paragraphs for question annotations in both validation and test data. Following rules are applied in asking questions. The paragraph should be read carefully and judged whether appropriate for asking questions No more than 5 questions for each passage The answer should be better in the type of nouns, named entities to be fully evaluated Too long or too short paragraphs should be skipped Experiments In this section, we will give several baseline systems for evaluating our datasets as well as presenting several top-ranked systems in the competition. Baseline Systems We set several baseline systems for testing basic performance of our datasets and provide meaningful comparisons to the participant systems. In this paper, we provide four baseline systems, including two simple ones and two neural network models. The details of the baseline systems are illustrated as follows. Random Guess: In this baseline, we randomly choose one word in the document as the answer. Top Frequency: We choose the most frequent word in the document as the answer. 
AS Reader: We implemented Attention Sum Reader (AS Reader) BIBREF3 for modeling document and query and predicting the answer with the Pointer Network BIBREF7 , which is a popular framework for cloze-style reading comprehension. Apart from setting embedding and hidden layer size as 256, we did not change other hyper-parameters and experimental setups as used in Kadlec et al. kadlec-etal-2016, nor we tuned the system for further improvements. AoA Reader: We also implemented Attention-over-Attention Reader (AoA Reader) BIBREF4 which is the state-of-the-art model for cloze-style reading comprehension. We follow hyper-parameter settings in AS Reader baseline without further tuning. In the User Query Track, as there is a gap between training and validation, we follow BIBREF8 and regard this task as domain adaptation or transfer learning problem. The neural baselines are built by the following steps. We first use the shared training data to build a general systems, and choose the best performing model (in terms of cloze validation set) as baseline. Use User Query validation data for further tuning the systems with 10-fold cross-validations. Increase dropout rate BIBREF9 to 0.5 for preventing over-fitting issue. All baseline systems are chosen according to the performance of the validation set. Participant Systems The participant system results are given in Table 2 and 3 . As we can see that two neural baselines are competitive among participant systems and AoA Reader successfully outperform AS Reader and all participant systems in single model condition, which proves that it is a strong baseline system even without further fine-tuning procedure. Also, the best performing single model among participant systems failed to win in the ensemble condition, which suggest that choosing right ensemble method is essential in most of the competitions and should be carefully studied for further performance improvements. Not surprisingly, we only received three participant systems in User Query Track, as it is much difficult than Cloze Track. As shown in Table 3 , the test set performance is significantly lower than that of Cloze Track, due to the mismatch between training and test data. The baseline results give competitive performance among three participants, while failed to outperform the best single model by ECNU, which suggest that there is much room for tuning and using more complex methods for domain adaptation. Conclusion In this paper, we propose a new Chinese reading comprehension dataset for the 1st Evaluation on Chinese Machine Reading Comprehension (CMRC-2017), consisting large-scale automatically generated training set and human-annotated validation and test set. Many participants have verified their algorithms on this dataset and tested on the hidden test set for final evaluation. The experimental results show that the neural baselines are tough to beat and there is still much room for using complicated transfer learning method to better solve the User Query Task. We hope the release of the full dataset (including hidden test set) could help the participants have a better knowledge of their systems and encourage more researchers to do experiments on. Acknowledgements We would like to thank the anonymous reviewers for their thorough reviewing and providing thoughtful comments to improve our paper. We thank the Sixteenth China National Conference on Computational Linguistics (CCL 2017) and Nanjing Normal University for providing space for evaluation workshop. 
Also we want to thank our resource team for annotating and verifying evaluation data. This work was supported by the National 863 Leading Technology Research Project via grant 2015AA015409.
English
74b0d3ee0cc9b0a3d9b264aba9901ff97048a897
74b0d3ee0cc9b0a3d9b264aba9901ff97048a897_0
Q: How did they induce the CFG? Text: Introduction One of the ultimate goals of Natural Language Processing (NLP) is machine reading BIBREF0 , the automatic, unsupervised understanding of text. One way of pursuing machine reading is by semantic parsing, which transforms text into its meaning representation. However, capturing the meaning is not the final goal, the meaning representation needs to be predefined and structured in a way that supports reasoning. Ontologies provide a common vocabulary for meaning representations and support reasoning, which is vital for understanding the text. To enable flexibility when encountering new concepts and relations in text, in machine reading we want to be able to learn and extend the ontology while reading. Traditional methods for ontology learning BIBREF1 , BIBREF2 are only concerned with discovering the salient concepts from text. Thus, they work in a macro-reading fashion BIBREF3 , where the goal is to extract facts from a large collection of texts, but not necessarily all of them, as opposed to a micro-reading fashion, where the goal is to extract every fact from the input text. Semantic parsers operate in a micro-reading fashion. Consequently, the ontologies with only the salient concepts are not enough for semantic parsing. Furthermore, the traditional methods learn an ontology for a particular domain, where the text is used just as a tool. On the other hand, ontologies are used just as tool to represent meaning in the semantic parsing setting. When developing a semantic parser it is not trivial to get the best meaning representation for the observed text, especially if the content is not known yet. Semantic parsing datasets have been created by either selecting texts that can be expressed with a given meaning representation, like Free917 dataset BIBREF4 , or by manually deriving the meaning representation given the text, like Atis dataset BIBREF5 . In both datasets, each unit of text has its corresponding meaning representation. While Free917 uses Freebase BIBREF6 , which is a very big multi-domain ontology, it is not possible to represent an arbitrary sentence with Freebase or any other existing ontology. In this paper, we propose a novel approach to joint learning of ontology and semantic parsing, which is designed for homogeneous collections of text, where each fact is usually stated only once, therefore we cannot rely on data redundancy. Our approach is text-driven, semi-automatic and based on grammar induction. It is presented in Figure 1 .The input is a seed ontology together with text annotated with concepts from the seed ontology. The result of the process is an ontology with extended instances, classes, taxonomic and non-taxonomic relations, and a semantic parser, which transform basic units of text, i.e sentences, into semantic trees. Compared to trees that structure sentences based on syntactic information, nodes of semantic trees contain semantic classes, like location, profession, color, etc. Our approach does not rely on any syntactic analysis of text, like part-of-speech tagging or dependency parsing. The grammar induction method works on the premise of curriculum learning BIBREF7 , where the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns. A context-free grammar (CFG) is induced from the text, which is represented by several layers of semantic annotations. 
The motivation to use CFG is that it is very suitable for the proposed alternating usage of top-down and bottom-up parsing, where new rules are induced from previously unparsable parts. Furthermore, it has been shown by BIBREF8 that CFGs are expressive enough to model almost every language phenomena. The induction is based on a greedy iterative procedure that involves minor human involvement, which is needed for seed rule definition and rule categorization. Our experiments show that although the grammar is ambiguous, it is scalable enough to parse a large dataset of sentences. The grammar and semantic trees serve as an input for the new ontology. Classes, instances and taxonomic relations are constructed from the grammar. We also propose a method for discovering less frequent instances and their classes, and a supervised method to learn relations between instances. Both methods work on semantic trees. For experimentation, first sentences of Wikipedia pages describing people are taken as a dataset. These sentences are already annotated with links to other pages, which are also instances of DBpedia knowledge base BIBREF9 . Using relations from DBpedia as a training set, several models to predict relations have been trained and evaluated. The rest of the paper is organized in the following way. The grammar induction approach is presented in Section "Grammar induction" . The ontology induction approach follows in Section "Ontology induction" . In Section "Experiments" we present the conducted experiments with grammar induction, and instance and relation extraction. We examine the related work in Section "Related Work" , and conclude with the discussion in Section "Discussion" . Grammar induction In this section, we propose a semi-automatic bootstrapping procedure for grammar induction, which searches for the most frequent patterns and constructs new production rules from them. One of the main challenges is to make the induction in a way that minimizes human involvement and maximizes the quality of semantic trees. The input to the process, which is illustrated in Figure 2 , is a set of predefined seed grammar rules (see Section "Seed rules" ) and a sample of sentences in a layered representation (see Section "Experiments" ) from the dataset. The output of the process is a larger set of rules forming the induced grammar. One rule is added to the grammar on each iteration. At the beginning of each iteration all the sentences are parsed with a top-down parser. The output of parsing a single sentence is a semantic tree – a set of semantic nodes connected into a tree. Here we distinguish two possible outcomes of the parsing: 1) the sentence was completely parsed, which is the final goal and 2) there is at least one part of the sentence that cannot be parsed. From the perspective of a parser the second scenario happens when there is a node that cannot be parsed by any of the rules. We name these nodes – null nodes – and they serve as the input for the next step, the rule induction. In this step several rules are constructed by generalization of null nodes. The generalization (see Section "Rule induction" ) is based on utilization of semantic annotations and bottom-up composition of the existing rules. Out of the induced rules, a rule with the highest frequency (the one that was generalized from the highest number of null nodes) is added to the grammar. 
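The bootstrapping iteration described above ends by promoting the most frequent candidate rule and asking a human user to assign its property. A minimal sketch of that promotion step is given below; the candidate right-hand sides, the left-hand side non-terminal and the assigned property are invented for illustration, and the parsing and generalization steps that produce the candidates are assumed to have run already.

from collections import Counter

# Candidate rules induced in one iteration, one per null node
# (each candidate is shown as an already-generalized right-hand side).
candidates = [
    "<Person> is a <Profession>",
    "<Person> is a <Profession>",
    "<Person> was born in <Location>",
    "<Person> is a <Profession>",
    "<Person> was born in <Location>",
]

best_rhs, freq = Counter(candidates).most_common(1)[0]
# The human user inspects the rule plus a few null nodes it was generalized from
# and assigns a property: positive, neutral, negative, or non-inducible.
new_rule = {"lhs": "<Relation>", "rhs": best_rhs, "property": "positive", "frequency": freq}
print(new_rule)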
To improve quality of the grammar, the rules are marked by so called property, which instructs the parser how to use the rule (eg., us it in parsing but not in induction). The property vitally affects result of the parsing in the following iterations potentially causing a huge semantic drift for the rest of process. Consequently, a human user needs to mark the property of each rule. The iterative process runs until a predefined stopping criteria is met. The criteria is either connected to the quality of the grammar or time limitation. For the sake of transparency of the experiments, the human is involved in the beginning, when the seed rules are created and later when the rule properties are specified. However, in another setting the user could also define new rules in the middle of the bootstrapping procedure. In the following sections, we describe each component of the process in more details. Our goal was to develop a semi-automatic method that induces a grammar suitable for our scenario, in which an ontology is extracted, and text is parsed into semantic trees. A survey by BIBREF27 compares several papers on grammar induction. According to their classification, our method falls into unsupervised, text-based (no negative examples of sentences) methods. Many such methods induce context-free grammars. However, their focus is more on learning syntactic structures rather than semantic. This is evident in evaluation strategies, where their parse trees are compared against golden parse trees in treebanks, like Penn treebank BIBREF28 , which are annotated according to syntactic policies. Furthermore, our grammar should not limited to a specific form, like for instance Chomsky normal form or Greibach normal form, instead it may contain arbitrary context-free rules. Several algorithms, like ours, employ the greedy strategy of grammar induction, where the grammar is updated with the best decision at each step. Whereas our method adds a rule after all sentences are parsed, The Incremental Parsing algorithm BIBREF29 updates the grammar after each sentence. This is also done in ADIOS method BIBREF30 , where it has been shown that order of sentences affects the grammar. Our method employs frequency analysis and human supervision to control the grammar construction, while others use Minimum Description Length principle BIBREF31 , clustering of sequences BIBREF32 , or significance of word co-occurrences BIBREF33 . Textual data representation The input textual data needs to be properly structured in order to work best with the proposed algorithms. Shallow NLP tools, like sentence splitting, word tokenization, named entity recognition, might help obtaining this structure. The basic unit is a sentence, represented by several layers. An example is presented in Table 1 . Each layer consists of several tokens, which span over one or more words. The basic layer is the lexical layer, where each token represents a single word. All other layers are created from the annotations. Some annotations, like named-entities, may span over several words; some of the words may not have an annotation, thus they are given a null token. It is crucial that all algorithms are aware how to deal with a particular layer. For instance, the parser must not break apart a multi-word annotation. Some layers may be derived from others using the seed ontology. 
For example, instance layer contains annotations to instances of the ontology and the derived class layer represents the classes of these annotations, which are also from the ontology. Annotation layers are valuable if they provide good means for generalization or connection with the ontology. A term is a subpart of the sentence, defined by the starting and ending position in the sentence. It has different interpretation in each layer. If the interpretation breaks any of the tokens, it is not valid. For instance, term representing Madeira is not valid in named-entity layer in Table 1 because it breaks Person. Grammar Definition Our context-free grammar $G$ is defined by the 5-tuple: $G = (V, \sigma , P, S, R)$ , where $V$ is a set of non-terminals. Each non-terminal represents a semantic class, e.g. $\langle \text{Person} \rangle $ , $\langle \text{Color} \rangle $ , $\langle \text{Organization} \rangle $ . There is also a universal non-terminal $\langle * \rangle $ , which can be replaced by any other non-terminal. The same non-terminal replaces all occurrences in a rule. It is used to represent several rules, with a notation. The grammar is still context-free. See seed rule examples in Section "Seed rules" . $\sigma $ is a set of terminals. Terminal is any existing non-null token from any sentence layer. We denote a terminal by value{layer}. For instance, [location]{named-entity}, Phil_Madeira{instance}. If the terminal is from the lexical layer, the layer is skipped in the denotation. $P$ is a set of production rules that represents a relation from $V \rightarrow (V \cup E)^*$ . For example, $S$ is the starting non-terminal symbol. Since non-terminals represent semantic classes, the starting symbol is chosen based on the semantic class of the input examples. If the input examples are sentences, then the appropriate category may be $\langle \text{Relation} \rangle $ . While if the input examples are noun phrases, the starting symbol may be a more specific category, like $\langle \text{Job Title} \rangle $ . $R$ is a set of properties: positive, neutral, negative, non-inducible. The property controls the usage of the rule in the parsing and in the rule induction phase. More details are given in the following subsections. Parser For parsing, a recursive descent parser with backtracking was developed. This is a top-down parser, which first looks at the higher level sentence structure and then proceeds down the parse tree to identify low level details of the sentence. The advantage of top-down parsing is the ability to partially parse sentences and to detect unparsable parts of sentences. The parser takes a layered sentence as an input and returns a semantic tree as an output (see Figure 3 ). The recursive structure of the program closely follows the structure of the parse tree. The recursive function Parse (see Algorithm "Parser" ) takes a term and a non-terminal as input and returns a parse node as an output. The parse node contains the class of node (non-terminal), the rule that parsed the node, the term, and the list of children nodes. In order for the rule to parse the node, the left-hand side must match the input non-terminal and the right-hand side must match the layered input. In the pattern matching function Match (line "Parser" ), the right hand side of a rule is treated like a regular expression; non-terminals present the ( $+$ ) wildcard characters, which match at least one word. 
The terminals are treated as literal characters, which are matched against the layer that defines them. The result of a successfully matched pattern is a list of terms, where each term represents a non-terminal of the pattern. Due to ambiguity of pattern matching there might be several matches. For each of the term – non-terminal pairs in every list the parse function is recursively called (line "Parser" ). [Algorithm "Parser" – pseudocode of the main function Parse of the top-down parser. Input: a phrase $p$ and a non-terminal $n$ ; output: a parse node. The function retrieves the eligible rules for $n$ (GetEligibleRules), matches the right-hand side of each rule against $p$ (Match), recursively calls Parse on every term–non-terminal pair of each ambiguous match, creates a candidate node for each alternative (CreateNode), and selects the final node as $\operatornamewithlimits{arg\,max}_{n \in nodes} r(n)$ (SelectBestNode); if no rule matches, a null node is created, and nodes that are not fully parsed are added to the induction list.] Since the grammar is ambiguous, a term can be parsed in multiple ways. There are two types of ambiguity: two or more rules can expand the same term, and one rule can expand the term in more than one way. For each ambiguity one node is created, and the best node according to the reliability measure is selected to be the result (line "Parser" ). The reliability measure $r(n)$ is $$r(n)= {\left\lbrace \begin{array}{ll} 1, & \text{if node is fully parsed} \\ \beta \cdot (1 -tp(n)) + (1 - \beta )\frac{\displaystyle \sum \limits _{c \in C(n)} |c|\cdot r(c)}{\displaystyle \sum \limits _{c \in C(n)} |c|} ,& \text{if node is partially parsed} \\ 0, & \text{if node is null} \\ \end{array}\right.}$$ (Eq. 14) where $tp(n)$ is the trigger probability of the rule that parsed the node $n$ , $\beta $ is a predefined weight, $C(n)$ is the set of children of $n$ , and $|c|$ is the length of the term of node $c$ . The trigger probability of a rule is the probability that the right-hand side of the rule pattern matches a random term in the dataset, and it is estimated after the rule is induced. The range of the measure is between 0 and 1. The measure was defined in such a way that the more text the node parses, the higher the reliability (the second summand in the middle row of Eq. 14 ). On the other hand, nodes with rules that are more frequently matched have lower reliability; this penalizes rules that are very loosely defined (the first summand in the middle row of Eq. 14 ). The $\beta $ parameter was set to 0.05, using grid search, with the average F1 score from the relation extraction experiment in Section "Relation extraction" as a performance measure. If none of the rules match the term, a null node is created and added to the list of nodes, which will be later used for grammar induction (line "Parser" ). Note that even if a null node is discarded because it is not the most reliable, it will still be used in the grammar induction step. A node is fully parsed if the node itself and all of its descendants are parsed. If a node is parsed and at least one of its descendants is not parsed, then the node is partially parsed. All nodes that are not fully parsed are added to the list for induction. 
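The reliability measure in Eq. (14) can be implemented as a simple recursion over parse nodes, as sketched below. The Node class is a minimal stand-in for the parser's data structure, and the example values are invented; beta = 0.05 follows the grid-search value reported above.

from dataclasses import dataclass, field
from typing import List

BETA = 0.05  # weight selected by grid search in the paper

@dataclass
class Node:
    length: int                      # |c|: number of tokens covered by the node's term
    trigger_prob: float = 0.0        # tp(n) of the rule that parsed the node
    is_null: bool = False            # no rule matched this node
    children: List["Node"] = field(default_factory=list)

def fully_parsed(node: Node) -> bool:
    return not node.is_null and all(fully_parsed(c) for c in node.children)

def reliability(node: Node) -> float:
    if node.is_null:
        return 0.0
    if fully_parsed(node):
        return 1.0
    weighted = sum(c.length * reliability(c) for c in node.children)
    total = sum(c.length for c in node.children)
    return BETA * (1.0 - node.trigger_prob) + (1.0 - BETA) * weighted / total

# A partially parsed node: one fully parsed child and one null child.
root = Node(length=6, trigger_prob=0.2,
            children=[Node(length=4, trigger_prob=0.1),
                      Node(length=2, is_null=True)])
print(round(reliability(root), 3))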
Since the ambiguity of the grammar may make parsing computationally infeasible, several optimization techniques are used. Memoization BIBREF10 is used to reduce the complexity from exponential time to $\mathcal {O}(n^3)$ BIBREF11 , where $n$ is the length of the sentence. The parser does not support $\epsilon $ productions mainly because the grammar induction will not produce them. The patterns that do not contain terminals are the most ambiguous. At most two non-terminals are allowed, and the maximal length of the term that corresponds to the first non-terminal is three tokens. We argue that this is not a huge limitation, since the way human languages are structured, usually two longer terms are connected with a word, like comma or a verb. Furthermore, the way how our induction works, these connectors do not get generalized and become a terminal in the rule. There was an attempt to introduce rules with negative property. Whenever such rule fully parses a node, that indicates that the current parsing path is incorrect. This allows the parser to backtrack sooner and also prevents adding null sister nodes (null sister nodes are in this case usually wrong) to the rule induction. However, it turned out that negative rules actually slow down the parsing, since the grammar gets bigger. It is better to mark these rules as neutral, therefore they are not added to the grammar. Rule induction The goal of the rule induction step is to convert the null nodes from the parsing step into rules. Out of these rules, the most frequent one is promoted. The term from the null node is generalized to form the right side of the rule. The class non-terminal of the null node will present the left side of the rule. Recently induced rule will parse all the nodes, from which it was induced, in the following iterations. Additionally, some rules may parse the children of those nodes. Generalization is done in two steps. First, terms are generalized on the layer level. The output of this process is a sequence of tokens, which might be from different layers. For each position in the term a single layer is selected, according to predefined layer order. In the beginning, term is generalized with the first layer. All the non-null tokens from this layer are taken to be part of the generalized term. All the positions of the term that have not been generalized are attempted to be generalized with the next layer, etc. The last layer is without null-tokens, therefore each position of the term is assigned a layer. Usually, this is the lexical layer. For example, top part of Table 2 shows generalization of term from Table 1 . The layer list is constructed manually. Good layers for generalization are typically those that express semantic classes of individual terms. Preferably, these types are not too general (loss of information) and not too specific (larger grammar). In the next step of generalization, tokens are further generalized using a greedy bottom-up parser using the rules from the grammar. The right sides of all the rules are matched against the input token term. If there is a match, the matched sub-term is replaced with the left side of the rule. Actually, in each iteration all the disjunct matches are replaced. To get only the disjunct matches, overlapping matches are discarded greedily, where longer matches have the priority. This process is repeated until no more rules match the term. An example is presented in the lower part of Table 2 . 
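The layer-level generalization step can be sketched as follows: for each position of the term, the first layer in a predefined order that offers a non-null token is chosen, with the lexical layer as the final fallback. Multi-word annotations are ignored here for brevity, and the example layers and tokens are invented.

def generalize_layers(term_layers, layer_order):
    # term_layers: {layer name: list of tokens, with None where the layer has no annotation}
    length = len(term_layers[layer_order[-1]])   # the last layer has no null tokens
    generalized = []
    for pos in range(length):
        for layer in layer_order:
            token = term_layers[layer][pos]
            if token is not None:
                generalized.append((token, layer))
                break
    return generalized

term = {
    "named-entity": ["[person]", None, None, None],
    "class":        [None, None, None, "<Profession>"],
    "lexical":      ["Phil_Madeira", "is", "a", "musician"],
}
print(generalize_layers(term, ["named-entity", "class", "lexical"]))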
The bottom-up parsing algorithm needs to be fast because the number of unexpanded nodes can be very high due to ambiguities in the top-down parsing. Consequently, the algorithm is greedy, instead of exhaustive, and yields only one result. The Aho-Corasick string matching algorithm BIBREF12 is selected for matching because of its ability to match all the rules simultaneously. Like the top-down parser, this parser generates partial parses; the bottom-up parser will never fully parse a term – that is, reduce it to the same non-terminal type as the unexpanded node – since this would require a cyclical rule, i.e. $<$ Class $>$ :== $<$ Class $>$ . However, this never happens because the top-down parser would already have expanded such a null node. The last step of the iteration is assigning the property to the newly induced rule. The property controls the role of the rule in the parsing and induction. The default property is positive, which defines the default behavior of the rule in all procedures. Rules with the neutral property are not used in any procedure. They also cannot be re-induced. Some rules are good for parsing, but may introduce errors in the induction. These rules should be given the non-inducible property. For instance, the rule $<$ Date $>$ :== $<$ Number $>$ is a candidate for the non-inducible property, since years are represented by a single number. On the contrary, not every number is a date. In our experiments, the assignment was done manually. The human user sees the induced rule and a few examples of the null nodes from which it was induced. This should provide enough information for the user to decide in a few seconds which property to assign. After the stopping criterion is met, the iterative procedure can continue automatically by assigning the positive property to each rule. Initial experiments showed that just a single mistake in the assignment can cause a huge drift, making all further rules wrong. Seed rules Before the start, a list of seed rules may be needed in order for grammar induction to be successful. Since this step is done manually, it is reasonable to keep the list of seed rules short and efficient. Seed rules can be divided into three groups: domain independent linguistic rules, class rules, and top-level domain rules. Domain independent linguistic rules parse the top and mid-level nodes; they can be applied to many different datasets. Class rules connect class tokens, like named-entity tokens, with non-terminals; they parse the leaf nodes of the trees. Top-level domain rules, on the other hand, define the basic structure of the sentence; as the name suggests, they parse nodes close to the root. Altogether, these rule groups parse on all levels of the tree, and may already be enough to parse the most basic sentences, but more importantly, they provide the basis for learning to parse more complex sentences. The decision on which and how many seed rules should be defined relies on human judgment of whether the current set of seed rules is powerful enough to ignite the bootstrapping procedure. This judgment may be supported by running one iteration and inspecting the top induced rules. Ontology induction This section describes how to utilize the grammar and manipulate semantic trees to discover ontology components in the textual data. Ontology induction from grammar We propose a procedure for mapping grammar components to ontology components. In particular, classes, instances and taxonomic relations are extracted. First, we distinguish between instances and classes in the grammar.
Classes are represented by all non-terminals and by terminals that come from a layer populated with classes, for example, the named-entity layer and the class layer from Table 1 . Instances might already exist in the instance layer, or they are created from rules whose right hand side contains only tokens from the lexical layer. These tokens represent the label of the new instance. For instance, the rule $<$ Profession $>$ ::= software engineer is a candidate for instance extraction. Furthermore, we distinguish between class and instance rules. Class rules have a single symbol representing a class on the right-hand side. Class rules map to subClassOf relations in the ontology. If the rule is positive, then the class on the right side is a subclass of the class on the left side. For instance, the rule $<$ Organization $>$ ::= $<$ Company $>$ yields the relation (subClassOf Company Organization). On the other hand, instance rules have one or more symbols representing an instance on the right side, and they define the isa relation. If the rule is positive, then the instance on the right side is a member of the class on the left side. For instance, the rule $<$ Profession $>$ ::= software engineer yields the relation (isa SoftwareEngineer Profession). If a class or instance rule is neutral, then the relation can be treated as false. Note that many other relations may be inferred by combining newly induced relations and relations from the seed ontology. For instance, the induced relation (subClassOf new-class seed-class) and the seed relation (isa seed-class seed-instance) are used to infer a new relation (isa new-class seed-instance). In this section, we described how to discover relations on the taxonomic level. In the next section, we describe how to discover relations between instances. Relation extraction from semantic trees We propose a method for learning relations from semantic trees, which tries to solve the same problem as classical relation extraction methods. Given a dataset of positive relation examples that represent one relation type, e.g. birthPlace, the goal is to discover new unseen relations. The method is based on the assumption that a relation between entities is expressed in the shortest path between them in the semantic tree BIBREF13 . The input for training consists of sentences in the layered representation, the corresponding parse trees, and relation examples. Given a relation from the training set, we first try to identify the sentence containing each entity of the relation. The relation can have one, two, or even more entities. Each entity is matched to the layer that corresponds to the entity type. For example, strings are matched to the lexical layer; ontology entities are matched to the layer containing such entities. The result of a successfully matched entity is a sub-term of the sentence. In the next step, the corresponding semantic tree is searched for a node that contains the sub-term. At this point, each entity has a corresponding entity node; if any entity cannot be matched to a node, the relation is discarded from the learning process. Given the entity nodes, a minimum spanning tree containing all of them is extracted. If there is only one entity node, then the resulting subtree is the path between this node and the root node. The extracted sub-tree is converted to a variable tree, so that different semantic trees can have the same variable sub-trees; for an example see Figure 4 .
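As a rough illustration of the sub-tree extraction step, the sketch below collects, for each entity node, the path to the root and keeps the union of the path segments below the lowest common ancestor, falling back to the full path to the root when there is a single entity node. The SemNode class is a simplified assumption; the real semantic nodes also carry the rule, the term and the class.

from dataclasses import dataclass
from typing import List, Optional

@dataclass(eq=False)
class SemNode:
    label: str
    parent: Optional["SemNode"] = None

def path_from_root(node: SemNode) -> List[SemNode]:
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return list(reversed(path))  # root first, entity node last

def spanning_subtree(entity_nodes: List[SemNode]) -> List[SemNode]:
    paths = [path_from_root(n) for n in entity_nodes]
    if len(paths) == 1:
        return paths[0]  # single entity: path between the node and the root
    # lowest common ancestor = deepest node shared by all root paths
    lca_depth = 0
    for depth, nodes_at_depth in enumerate(zip(*paths)):
        if all(n is nodes_at_depth[0] for n in nodes_at_depth):
            lca_depth = depth
        else:
            break
    subtree = []
    for p in paths:
        for n in p[lca_depth:]:
            if n not in subtree:
                subtree.append(n)
    return subtree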
The semantic nodes of the sub-tree are converted into variable nodes by retaining the class and the rule of the node, as well as the places of the children in the original tree. For entity nodes, the position in the relation is also memorized. A variable tree extracted from a relation is a positive example in the training process. For negative examples, all other sub-trees that do not represent any relation are converted to variable trees. Each variable node represents one feature. Therefore, a classification algorithm, such as logistic regression, can be used for training. When predicting, all possible sub-trees of the semantic tree are classified. If a sub-tree is predicted as positive, then the terms in the leaf nodes represent the arguments of the relation. Experiments In this section, we present experiments evaluating the proposed approach. We have conducted experimentation on the Wikipedia–DBpedia dataset (Section "Datasets" ). First, we have induced a grammar on the Wikipedia dataset (Section "Grammar Induction Experiments" ) to present its characteristics and the scalability of the approach. In the next experiment, we present a method for discovering less prominent instances (Section "Instance extraction" ). The last experiment demonstrates one application of semantic parsing – the supervised learning of DBpedia relations (Section "Relation extraction" ). Datasets The datasets for experiments were constructed from the English Wikipedia and the knowledge bases DBpedia BIBREF9 and Freebase BIBREF6 . DBpedia provides structured information about Wikipedia articles that was scraped out of their infoboxes. First sentences of Wikipedia pages describing people were taken as the textual dataset, while DBpedia relations expressing facts about the same people were taken as the dataset for supervised relation learning. Note that each DBpedia instance has a Wikipedia page. A set of person instances was identified by querying DBpedia for instances that have a person class. For the textual dataset, Wikipedia pages representing these entities were parsed by the in-house Wikipedia markup parser to convert the markup into plain text. Furthermore, the links to other Wikipedia pages were retained. Here is an example of a sentence in plain text: Victor Francis Hess (24 June 1883 – 17 December 1964) was an Austrian-American physicist, and Nobel laureate in physics, who discovered cosmic rays. Using the Stanford OpenNLP BIBREF14 on the plain texts we obtained sentence and token splits, and named-entity annotations. Notice that only the first sentence of each page was retained and converted to the proposed layered representation (see Section "Experiments" ). The layered representation contains five layers: lexical (plain text), named-entity (named entity recognizer), wiki-link (Wikipedia page in link – DBpedia instance), dbpedia-class (class of the Wikipedia page in DBpedia) and freebase-class (class of the Wikipedia page in Freebase). Freebase also contains its own classes of Wikipedia pages. For the last two layers, there might be several classes per Wikipedia page. Only one was selected using a short priority list of classes. If none of the categories is on the list, then the category is chosen at random. After comparing the dbpedia-class and freebase-class layers, only freebase-class was utilized in the experiments because more wiki-link tokens have a class in the freebase-class layer than in the dbpedia-class layer. There are almost 1.1 million sentences in the collection.
The average length of a sentence is 18.3 words, while the median length is 13.8 words. There are 2.3 links per sentence. The dataset for supervised relation learning contains all relations where a person instance appears as the subject of a DBpedia relation. For example, dbpedia:Victor_Francis_Hess dbpedia-owl:birthDate 1883-06-24 There are 119 different relation types (unique predicates), having from just a few relations to a few million relations. Since DBpedia and Freebase are available in RDF format, we used an RDF store for querying and for storage of existing and new relations. Grammar Induction Experiments The grammar was induced on 10,000 random sentences taken from the dataset described in Section "Datasets" . First, a list of 45 seed rules was constructed. There were 22 domain independent linguistic rules, 17 category rules and 6 top-level rules. The property assignment was done by the authors. In every iteration, the best rule is shown together with the number of nodes it was induced from, and ten of those nodes together with the sentences they appear in. The goal was set to stop the iterative process after two hours. We believe this is the right amount of time to still expect quality feedback from a human user. There were 689 new rules created. A sample of them is presented in Table 3 . Table 4 presents the distribution of properties. Around $36 \%$ of the rules were used for parsing (non-neutral rules). Together with the seed rules there are 297 rules used for parsing. Different properties are very evenly dispersed across the iterations. Using the procedure for conversion of grammar rules into a taxonomy presented in Section "Ontology induction" , 33 classes and subClassOf relations, and 95 instances and isa relations were generated. The grammar was also tested by parsing a sample of 100,000 test sentences. A few statistics are presented in Table 4 . More than a quarter of the sentences were fully parsed, meaning that they do not have any null leaf nodes. Coverage represents the fraction of words in a sentence that were parsed (words that are not in null nodes). The number of operations shows how many times the Parse function was called during the parsing of a sentence. It is highly correlated with the time spent for parsing a sentence, which is on average 0.16 ms. This measurement was done on a single CPU core. Consequently, it is feasible to parse a collection of a million sentences, like our dataset. The same statistics were also calculated on the training set; the numbers are very similar to the test set. The fully parsed % and coverage are even slightly lower than on the test set. Some of the statistics were calculated after each iteration, but only when a non-neutral rule was created. The graphs in Figure 5 show how the statistics changed over the course of the grammar induction. Graph 5 shows that coverage and the fraction of fully parsed sentences are correlated and that they grow very rapidly at the beginning; then the growth starts to slow down, which indicates that there is a long tail of unparsed nodes/sentences. In the following section, we present a concept learning method which deals with the long tail. Furthermore, the number of operations per sentence also slows down (see Graph 5 ) with the number of rules, which gives a positive sign of retaining computational feasibility with the growth of the grammar. Graph 5 somewhat elaborates the dynamics of the grammar induction. In the earlier phase of induction many rules that define the upper structure of the tree are induced.
These rules can rapidly increase the depth and the number of null nodes, like rule 1 and rule 2 . They also explain the spikes on Graph 5 . Their addition to the grammar causes some rules to emerge at the top of the list with a significantly higher frequency. After these rules are induced, the frequency returns to the previous values and slowly decreases over the long run. Instance extraction In this section, we present an experiment with a method for discovering new instances, which appear in the long tail of null nodes. Note that the majority of the instances were already placed in the ontology by the method in Section "Ontology induction from grammar" . Here, less prominent instances are extracted to increase the coverage of semantic parsing. The term and the class of the null node will form an isa relation. The class of the node represents the class of the relation. The terms are converted to instances. They are first generalized on the layer level (see Section "Experiments" ). The goal is to exclude non-atomic terms, which do not represent instances. Therefore, only terms consisting of one wiki-link token or exclusively of lexical tokens are retained. The relations were sorted according to their frequency. We observe that the accuracy of the relations drops with the frequency. Therefore, relations that occurred less than three times were excluded. The number and accuracy for six classes are reported in Table 5 . Other classes were less accurate. For each class, the accuracy was manually evaluated on a random sample of 100 instance relations. Taking into account the estimated accuracy, there were more than 13,000 correct isa relations. Relation extraction In this section, we present an experiment with the relation extraction methods presented in Section "Relation extraction from semantic trees" . The input for the supervision is the DBpedia relation dataset from Section "Datasets" . The subject (first argument) of every relation is a person DBpedia instance – a person Wikipedia page. In the beginning, the first sentence of that Wikipedia page is identified in the textual dataset. If the object (last argument) of the relation matches a sub-term of this sentence, then the relation is eligible for the experiments. We distinguish three types of values in objects. DBpedia resources are matched against the wiki-link layer. Dates are converted to the format used in the English Wikipedia; they are matched against the lexical layer, and so are the string objects. Only relation types that have 200 or more eligible relations have been retained; this is 74 out of 119 relations. The macro-averaged fraction of eligible relations per relation type is 17.7%, while the micro average is 23.8%, meaning that roughly a quarter of all DBpedia person relations are expressed in the first sentence of the person's Wikipedia page. For the rest of this section, all stated averages are micro-averages. The prediction problem is designed in the following way. Given the predicate (relation type) and the first argument of the relation (person), the model predicts the second argument of the relation (object). Because not all relations are functional, like for instance the child relation, there can be several values per predicate–person pair; on average there are 1.1. Since only one argument of the relation is predicted, the variable trees presented in Section "Relation extraction from semantic trees" will be paths from the root to a single node.
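To illustrate the eligibility check described above, here is a small Python sketch that decides whether a DBpedia object value can be located in the layered first sentence. The layer names and the date conversion follow the description in the text, but the helper functions and the data layout are simplified assumptions rather than the actual implementation.

from datetime import date

# A sentence is assumed to be a dict of layers, e.g. {"lexical": [...], "wiki-link": [...]}
def find_subterm(tokens, target_tokens):
    n = len(target_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == target_tokens:
            return (i, i + n)  # span of the matched sub-term
    return None

def to_wikipedia_date(iso_value):
    # e.g. "1883-06-24" -> "24 June 1883", the format used in the English Wikipedia
    d = date.fromisoformat(iso_value)
    return f"{d.day} {d.strftime('%B')} {d.year}"

def is_eligible(sentence, obj_value, obj_type):
    if obj_type == "resource":
        # DBpedia resources are matched against the wiki-link layer
        return find_subterm(sentence["wiki-link"], [obj_value]) is not None
    if obj_type == "date":
        obj_value = to_wikipedia_date(obj_value)
    # dates (after conversion) and strings are matched against the lexical layer
    return find_subterm(sentence["lexical"], obj_value.split()) is not None

# Example with the sentence about Victor Francis Hess quoted earlier:
sentence = {
    "lexical": "Victor Francis Hess ( 24 June 1883 – 17 December 1964 ) was an Austrian-American physicist".split(),
    "wiki-link": [None] * 16,
}
print(is_eligible(sentence, "1883-06-24", "date"))  # True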
Analysis of the variable tree extraction shows that on average 60.8% of eligible relations were successfully converted to variable trees (the object term exactly matches the term in the node). The others were not converted because 8.2% of the terms were split between nodes and 30.9% of the terms are sub-terms of nodes rather than complete terms. Measuring the diversity of variable trees shows that a distinct variable tree appeared 2.7 times on average. Several models based on variable trees were trained for solving this classification problem: Basic (Basic model) – The model contains the positive trained variable trees. In the prediction, if the test variable tree matches one of the trees in the model, then the example is predicted positive. Net (Automaton model) – All positive variable trees are paths with start and end points. In this model they are merged into a net, which acts as a deterministic automaton. If the automaton accepts the test variable tree, then it is predicted positive. An example of the automaton model is presented in Figure 6 . LR (Logistic regression) – A logistic regression model is trained with positive and negative examples, where nodes in variable trees represent features. LRC (Logistic regression + Context nodes) – All leaf nodes that are siblings of any of the nodes in the variable tree are added to the LR model. LRCL (Logistic regression + Context nodes + Lexical Tokens) – Tokens from the lexical layer of the entity nodes are added to the LRC as features. For training, all eligible relations, or at most 10,000, were taken for each of the 74 relation types. A 10-fold cross validation was performed for evaluation. The results are presented in Table 6 . The converted recall and converted F1 score present the recall and F1 on the converted examples, which are the ones where relations were successfully converted into variable trees. The performance increases with each model, however the interpretability decreases. We also compared our method to conditional random fields (CRF). In the CRF method, tokens from all layers with a window size of 7 were taken as features for sequence prediction. On the converted examples CRF achieved an F1 score of 80.8, which is comparable to our best model's (LRCL) F1 score of 80.0. Related Work There are many known approaches to ontology learning and semantic parsing, however, to the best of our knowledge, this is the first work to jointly learn an ontology and a semantic parser. In the following sections, we make comparisons to other work on semantic parsing, ontology learning, grammar induction and others. Semantic parsing The goal of semantic parsing is to map text to meaning representations. Several approaches have used Combinatory categorial grammar (CCG) and lambda calculus as a meaning representation BIBREF15 , BIBREF16 . CCG grammar closely connects syntax and semantics with a lexicon, where each entry consists of a term, a syntactic category and a lambda statement. Similarly, our context-free grammar contains production rules. Some of these rules do not contain lexical tokens (the grammar is not lexicalized), which gives the ability to express some relations with a single rule. For instance, to parse jazz drummer, the rule $<$ Musician_Type $>$ ::= $<$ Musical_Genre $>$ $<$ Musician_Type $>$ is used to directly express the relation, which determines the genre of the musician. Lambda calculus may provide a more formal meaning representation than semantic trees, but the lexicon of CCG requires mappings to lambda statements.
Other approaches use dependency-based compositional semantics BIBREF17 , ungrounded graphs BIBREF18 , etc. as meaning representations. Early semantic parsers were trained on datasets, such as Geoquery BIBREF19 and Atis BIBREF5 , that map sentences to domain-specific databases. Later on, datasets for question answering based on Freebase were created – Free917 BIBREF4 and WebQuestions BIBREF20 . These datasets contain short questions from multiple domains, and since the meaning representations are formed of Freebase concepts, they allow reasoning over Freebase's ontology, which is much richer than the databases in GeoQuery and Atis. All those datasets were constructed by either forming sentences given the meaning representation or vice-versa. Consequently, systems that were trained and evaluated on these datasets might not work on sentences that cannot be represented by the underlying ontology. To overcome this limitation, BIBREF16 developed an open vocabulary semantic parser. Their approach uses a CCG parser on questions to form lambda statements, which besides the Freebase vocabulary contain underspecified predicates. These lambda statements, together with answers – Freebase entities – are used to learn a low-dimensional probabilistic database, which is then used to answer fill-in-the-blank natural language questions. In a very similar fashion, BIBREF21 defines underspecified entities, types and relations when the corresponding concept does not exist in Freebase. In contrast, the purpose of our method is to identify new concepts and ground them in the ontology. Ontology Learning Many ontology learning approaches address the same ontology components as our approach. However, their goal is to learn only the salient concepts for a particular domain, while our goal is to learn all the concepts (including instances, like particular organizations), so that they can be used in the meaning representation. As the survey by BIBREF22 summarizes, the learning mechanisms are based either on statistics, linguistics, or logic. Our approach is unique because part of our ontology is constructed from the grammar. Many approaches use lexico-syntactic patterns for ontology learning. These are often based on dependency parses, like in BIBREF2 , BIBREF23 . Our approach does not rely on linguistic preprocessing, which makes it suitable for non-standard texts and poorly resourced languages. Our approach also builds patterns, however in the form of grammar rules. Instead of lexico-syntactic patterns, which contain linguistic classes, our approach models semantic patterns, which contain semantic classes, like Person and Color. These patterns are constructed in advance, which is sometimes difficult because the constructor is not always aware of all the phenomena that are expressed in the input text. Our approach allows creating a small number of seed patterns in advance and then exploring other patterns through the process of grammar learning. A similar bootstrapping semi-automatic approach to ontology learning was developed in BIBREF24 , where the user validates lexicalizations of a particular relation to learn new instances, and in BIBREF25 , where the user validates newly identified terms, while in our approach the user validates grammar rules to learn the composition of whole sentences. A similar approach combining DBpedia with Wikipedia for supervised learning has been taken in BIBREF26 , however their focus is more on the lexicalization of relations and classes.
Other Approaches Related work on linking short terms to ontology concepts BIBREF34 is designed similarly to our approach in terms of the bootstrapping procedure used to induce patterns. However, instead of inducing context-free grammar production rules, suggestions for rewrite rules that transform text directly into the ontology language are provided. Another bootstrapping semi-automatic approach was developed for knowledge base population BIBREF35 . The task of knowledge base population is concerned only with extracting instances and relations given the ontology. In our work we also extract the backbone of the ontology – classes and taxonomic relations. Also, many other approaches focus only on one aspect of knowledge extraction, like taxonomy extraction BIBREF36 , BIBREF37 or relation extraction BIBREF13 , BIBREF38 . Combining these approaches can lead to cumbersome concept matching problems. This problem was also observed by BIBREF39 . Their system OntoUSP tries to overcome this by unsupervised induction and population of a probabilistic grammar to solve the question answering problem. However, the results are logical-form clusters connected in an isa hierarchy, not concepts grounded in an existing ontology. Discussion We have presented an approach for joint ontology learning and semantic parsing. The approach was evaluated by building an ontology representing biographies of people. The first sentences of person Wikipedia pages and the combination of DBpedia and Freebase were used as a dataset. This dataset was suitable for our approach because the text is equipped with human tagged annotations, which are already linked to the ontology. In other cases a named entity disambiguation would be needed to obtain the annotations. Another trait of the dataset that suits our approach is the homogeneous style of writing. If the style were more heterogeneous, the users would have to participate in more iterations to achieve the same level of coverage. The participation of the users may be seen as a cost, but on the other hand it allows them to learn about the dataset without reading all of it. The users do not learn so much about specific facts as they learn about second-order information, like what types of relations are expressed and their distribution. Semantic trees offer a compact tree-structured meaning representation, which could be exploited for scenarios not covered by this paper, like relation type discovery and question answering. Furthermore, they can be used for a more interpretable representation of meaning, like the automaton representation in Figure 6 , compared to some other methods, like the one based on neural networks BIBREF40 . Our approach may not be superior on any one specific part of ontology learning, but it rather provides an integrated approach for learning on several levels of the ontology. Also, our approach does not use syntactic analysis, like part-of-speech tags or dependency parsing, which makes our approach more language independent and useful for non-standard texts, where such analysis is not available. On the other hand, we are looking into integrating syntactic analysis in future work. One scenario is to automatically detect the property of a rule. Another idea for future work is to integrate some ideas from other grammar induction methods to detect meaningful patterns without relying on the annotation of text.
This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (FP7-ICT-288342-STREP) and XLime (FP7-ICT-611346).
the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns
9eb5b336b3dcb7ab63f673ba9ab1818573cce6c3
9eb5b336b3dcb7ab63f673ba9ab1818573cce6c3_0
Q: How big is their dataset? Text: Introduction One of the ultimate goals of Natural Language Processing (NLP) is machine reading BIBREF0 , the automatic, unsupervised understanding of text. One way of pursuing machine reading is by semantic parsing, which transforms text into its meaning representation. However, capturing the meaning is not the final goal, the meaning representation needs to be predefined and structured in a way that supports reasoning. Ontologies provide a common vocabulary for meaning representations and support reasoning, which is vital for understanding the text. To enable flexibility when encountering new concepts and relations in text, in machine reading we want to be able to learn and extend the ontology while reading. Traditional methods for ontology learning BIBREF1 , BIBREF2 are only concerned with discovering the salient concepts from text. Thus, they work in a macro-reading fashion BIBREF3 , where the goal is to extract facts from a large collection of texts, but not necessarily all of them, as opposed to a micro-reading fashion, where the goal is to extract every fact from the input text. Semantic parsers operate in a micro-reading fashion. Consequently, the ontologies with only the salient concepts are not enough for semantic parsing. Furthermore, the traditional methods learn an ontology for a particular domain, where the text is used just as a tool. On the other hand, ontologies are used just as tool to represent meaning in the semantic parsing setting. When developing a semantic parser it is not trivial to get the best meaning representation for the observed text, especially if the content is not known yet. Semantic parsing datasets have been created by either selecting texts that can be expressed with a given meaning representation, like Free917 dataset BIBREF4 , or by manually deriving the meaning representation given the text, like Atis dataset BIBREF5 . In both datasets, each unit of text has its corresponding meaning representation. While Free917 uses Freebase BIBREF6 , which is a very big multi-domain ontology, it is not possible to represent an arbitrary sentence with Freebase or any other existing ontology. In this paper, we propose a novel approach to joint learning of ontology and semantic parsing, which is designed for homogeneous collections of text, where each fact is usually stated only once, therefore we cannot rely on data redundancy. Our approach is text-driven, semi-automatic and based on grammar induction. It is presented in Figure 1 .The input is a seed ontology together with text annotated with concepts from the seed ontology. The result of the process is an ontology with extended instances, classes, taxonomic and non-taxonomic relations, and a semantic parser, which transform basic units of text, i.e sentences, into semantic trees. Compared to trees that structure sentences based on syntactic information, nodes of semantic trees contain semantic classes, like location, profession, color, etc. Our approach does not rely on any syntactic analysis of text, like part-of-speech tagging or dependency parsing. The grammar induction method works on the premise of curriculum learning BIBREF7 , where the parser first learns to parse simple sentences, then proceeds to learn more complex ones. The induction method is iterative, semi-automatic and based on frequent patterns. A context-free grammar (CFG) is induced from the text, which is represented by several layers of semantic annotations. 
The motivation to use CFG is that it is very suitable for the proposed alternating usage of top-down and bottom-up parsing, where new rules are induced from previously unparsable parts. Furthermore, it has been shown by BIBREF8 that CFGs are expressive enough to model almost every language phenomena. The induction is based on a greedy iterative procedure that involves minor human involvement, which is needed for seed rule definition and rule categorization. Our experiments show that although the grammar is ambiguous, it is scalable enough to parse a large dataset of sentences. The grammar and semantic trees serve as an input for the new ontology. Classes, instances and taxonomic relations are constructed from the grammar. We also propose a method for discovering less frequent instances and their classes, and a supervised method to learn relations between instances. Both methods work on semantic trees. For experimentation, first sentences of Wikipedia pages describing people are taken as a dataset. These sentences are already annotated with links to other pages, which are also instances of DBpedia knowledge base BIBREF9 . Using relations from DBpedia as a training set, several models to predict relations have been trained and evaluated. The rest of the paper is organized in the following way. The grammar induction approach is presented in Section "Grammar induction" . The ontology induction approach follows in Section "Ontology induction" . In Section "Experiments" we present the conducted experiments with grammar induction, and instance and relation extraction. We examine the related work in Section "Related Work" , and conclude with the discussion in Section "Discussion" . Grammar induction In this section, we propose a semi-automatic bootstrapping procedure for grammar induction, which searches for the most frequent patterns and constructs new production rules from them. One of the main challenges is to make the induction in a way that minimizes human involvement and maximizes the quality of semantic trees. The input to the process, which is illustrated in Figure 2 , is a set of predefined seed grammar rules (see Section "Seed rules" ) and a sample of sentences in a layered representation (see Section "Experiments" ) from the dataset. The output of the process is a larger set of rules forming the induced grammar. One rule is added to the grammar on each iteration. At the beginning of each iteration all the sentences are parsed with a top-down parser. The output of parsing a single sentence is a semantic tree – a set of semantic nodes connected into a tree. Here we distinguish two possible outcomes of the parsing: 1) the sentence was completely parsed, which is the final goal and 2) there is at least one part of the sentence that cannot be parsed. From the perspective of a parser the second scenario happens when there is a node that cannot be parsed by any of the rules. We name these nodes – null nodes – and they serve as the input for the next step, the rule induction. In this step several rules are constructed by generalization of null nodes. The generalization (see Section "Rule induction" ) is based on utilization of semantic annotations and bottom-up composition of the existing rules. Out of the induced rules, a rule with the highest frequency (the one that was generalized from the highest number of null nodes) is added to the grammar. 
To improve the quality of the grammar, the rules are marked by a so-called property, which instructs the parser how to use the rule (e.g., use it in parsing but not in induction). The property vitally affects the result of parsing in the following iterations, potentially causing a huge semantic drift for the rest of the process. Consequently, a human user needs to mark the property of each rule. The iterative process runs until a predefined stopping criterion is met. The criterion is either connected to the quality of the grammar or to a time limitation. For the sake of transparency of the experiments, the human is involved in the beginning, when the seed rules are created, and later when the rule properties are specified. However, in another setting the user could also define new rules in the middle of the bootstrapping procedure. In the following sections, we describe each component of the process in more detail. Our goal was to develop a semi-automatic method that induces a grammar suitable for our scenario, in which an ontology is extracted and text is parsed into semantic trees. A survey by BIBREF27 compares several papers on grammar induction. According to their classification, our method falls into unsupervised, text-based (no negative examples of sentences) methods. Many such methods induce context-free grammars. However, their focus is more on learning syntactic structures rather than semantic ones. This is evident in the evaluation strategies, where their parse trees are compared against golden parse trees in treebanks, like the Penn treebank BIBREF28 , which are annotated according to syntactic policies. Furthermore, our grammar should not be limited to a specific form, like for instance Chomsky normal form or Greibach normal form; instead it may contain arbitrary context-free rules. Several algorithms, like ours, employ a greedy strategy of grammar induction, where the grammar is updated with the best decision at each step. Whereas our method adds a rule after all sentences are parsed, the Incremental Parsing algorithm BIBREF29 updates the grammar after each sentence. This is also done in the ADIOS method BIBREF30 , where it has been shown that the order of sentences affects the grammar. Our method employs frequency analysis and human supervision to control the grammar construction, while others use the Minimum Description Length principle BIBREF31 , clustering of sequences BIBREF32 , or significance of word co-occurrences BIBREF33 . Textual data representation The input textual data needs to be properly structured in order to work best with the proposed algorithms. Shallow NLP tools, like sentence splitting, word tokenization and named entity recognition, might help in obtaining this structure. The basic unit is a sentence, represented by several layers. An example is presented in Table 1 . Each layer consists of several tokens, which span over one or more words. The basic layer is the lexical layer, where each token represents a single word. All other layers are created from the annotations. Some annotations, like named entities, may span over several words; some of the words may not have an annotation, thus they are given a null token. It is crucial that all algorithms are aware of how to deal with a particular layer. For instance, the parser must not break apart a multi-word annotation. Some layers may be derived from others using the seed ontology.
For example, instance layer contains annotations to instances of the ontology and the derived class layer represents the classes of these annotations, which are also from the ontology. Annotation layers are valuable if they provide good means for generalization or connection with the ontology. A term is a subpart of the sentence, defined by the starting and ending position in the sentence. It has different interpretation in each layer. If the interpretation breaks any of the tokens, it is not valid. For instance, term representing Madeira is not valid in named-entity layer in Table 1 because it breaks Person. Grammar Definition Our context-free grammar $G$ is defined by the 5-tuple: $G = (V, \sigma , P, S, R)$ , where $V$ is a set of non-terminals. Each non-terminal represents a semantic class, e.g. $\langle \text{Person} \rangle $ , $\langle \text{Color} \rangle $ , $\langle \text{Organization} \rangle $ . There is also a universal non-terminal $\langle * \rangle $ , which can be replaced by any other non-terminal. The same non-terminal replaces all occurrences in a rule. It is used to represent several rules, with a notation. The grammar is still context-free. See seed rule examples in Section "Seed rules" . $\sigma $ is a set of terminals. Terminal is any existing non-null token from any sentence layer. We denote a terminal by value{layer}. For instance, [location]{named-entity}, Phil_Madeira{instance}. If the terminal is from the lexical layer, the layer is skipped in the denotation. $P$ is a set of production rules that represents a relation from $V \rightarrow (V \cup E)^*$ . For example, $S$ is the starting non-terminal symbol. Since non-terminals represent semantic classes, the starting symbol is chosen based on the semantic class of the input examples. If the input examples are sentences, then the appropriate category may be $\langle \text{Relation} \rangle $ . While if the input examples are noun phrases, the starting symbol may be a more specific category, like $\langle \text{Job Title} \rangle $ . $R$ is a set of properties: positive, neutral, negative, non-inducible. The property controls the usage of the rule in the parsing and in the rule induction phase. More details are given in the following subsections. Parser For parsing, a recursive descent parser with backtracking was developed. This is a top-down parser, which first looks at the higher level sentence structure and then proceeds down the parse tree to identify low level details of the sentence. The advantage of top-down parsing is the ability to partially parse sentences and to detect unparsable parts of sentences. The parser takes a layered sentence as an input and returns a semantic tree as an output (see Figure 3 ). The recursive structure of the program closely follows the structure of the parse tree. The recursive function Parse (see Algorithm "Parser" ) takes a term and a non-terminal as input and returns a parse node as an output. The parse node contains the class of node (non-terminal), the rule that parsed the node, the term, and the list of children nodes. In order for the rule to parse the node, the left-hand side must match the input non-terminal and the right-hand side must match the layered input. In the pattern matching function Match (line "Parser" ), the right hand side of a rule is treated like a regular expression; non-terminals present the ( $+$ ) wildcard characters, which match at least one word. 
The terminals are treated as literal characters, which are matched against the layer that defines them. The result of a successfully matched pattern is a list of terms, where each term represents a non-terminal of the pattern. Due to the ambiguity of pattern matching there might be several matches. For each term – non-terminal pair in every list the parse function is recursively called (line "Parser" ). [Algorithm "Parser": pseudocode of the main function Parse of the top-down parser; it relies on the helpers GetEligibleRules, Match, Node and SelectBestNode.] Since the grammar is ambiguous, a term can be parsed in multiple ways. There are two types of ambiguity. Two or more rules can expand the same term, and one rule can expand the term in more than one way. For each ambiguity one node is created, and the best node according to the reliability measure is selected to be the result (line "Parser" ). The reliability measure $r(n)$ is $$r(n)= {\left\lbrace \begin{array}{ll} 1, & \text{if node is fully parsed} \\ \beta \cdot (1 -tp(n)) + (1 - \beta )\frac{\displaystyle \sum \limits _{c \in C(n)} |c|\cdot r(c)}{\displaystyle \sum \limits _{c \in C(n)} |c|} ,& \text{if node is partially parsed} \\ 0, & \text{if node is null} \\ \end{array}\right.}$$ (Eq. 14) where $tp(n)$ is the trigger probability of the rule that parsed the node $n$ , $\beta $ is a predefined weight, $C(n)$ is the set of children of $n$ , and $|c|$ is the length of the term of node $c$ . The trigger probability of a rule is the probability that the right-hand side of the rule matches a random term in the dataset; it is estimated after the rule is induced. The range of the measure is between 0 and 1. The measure was defined in such a way that the more text the node parses, the higher the reliability (the second summand in the middle row of Eq. 14 ). On the other hand, nodes with rules that are matched more frequently have lower reliability; this penalizes rules that are very loosely defined (the first summand in the middle row of Eq. 14 ). The $\beta $ parameter was set to 0.05, using grid search, with the average F1 score from the relation extraction experiment in Section "Relation extraction" as the performance measure. If none of the rules match the term, a null node is created and added to the list of nodes, which will later be used for grammar induction (line "Parser" ). Note that even if a null node is discarded because it is not the most reliable, it will still be used in the grammar induction step. A node is fully parsed if the node itself and all of its descendants are parsed. If a node is parsed but at least one of its descendants is not parsed, then the node is partially parsed. All nodes that are not fully parsed are added to the list for induction.
Since the ambiguity of the grammar may make parsing computationally infeasible, several optimization techniques are used. Memoization BIBREF10 is used to reduce the complexity from exponential time to $\mathcal {O}(n^3)$ BIBREF11 , where $n$ is the length of the sentence. The parser does not support $\epsilon $ productions mainly because the grammar induction will not produce them. The patterns that do not contain terminals are the most ambiguous. At most two non-terminals are allowed, and the maximal length of the term that corresponds to the first non-terminal is three tokens. We argue that this is not a huge limitation, since the way human languages are structured, usually two longer terms are connected with a word, like comma or a verb. Furthermore, the way how our induction works, these connectors do not get generalized and become a terminal in the rule. There was an attempt to introduce rules with negative property. Whenever such rule fully parses a node, that indicates that the current parsing path is incorrect. This allows the parser to backtrack sooner and also prevents adding null sister nodes (null sister nodes are in this case usually wrong) to the rule induction. However, it turned out that negative rules actually slow down the parsing, since the grammar gets bigger. It is better to mark these rules as neutral, therefore they are not added to the grammar. Rule induction The goal of the rule induction step is to convert the null nodes from the parsing step into rules. Out of these rules, the most frequent one is promoted. The term from the null node is generalized to form the right side of the rule. The class non-terminal of the null node will present the left side of the rule. Recently induced rule will parse all the nodes, from which it was induced, in the following iterations. Additionally, some rules may parse the children of those nodes. Generalization is done in two steps. First, terms are generalized on the layer level. The output of this process is a sequence of tokens, which might be from different layers. For each position in the term a single layer is selected, according to predefined layer order. In the beginning, term is generalized with the first layer. All the non-null tokens from this layer are taken to be part of the generalized term. All the positions of the term that have not been generalized are attempted to be generalized with the next layer, etc. The last layer is without null-tokens, therefore each position of the term is assigned a layer. Usually, this is the lexical layer. For example, top part of Table 2 shows generalization of term from Table 1 . The layer list is constructed manually. Good layers for generalization are typically those that express semantic classes of individual terms. Preferably, these types are not too general (loss of information) and not too specific (larger grammar). In the next step of generalization, tokens are further generalized using a greedy bottom-up parser using the rules from the grammar. The right sides of all the rules are matched against the input token term. If there is a match, the matched sub-term is replaced with the left side of the rule. Actually, in each iteration all the disjunct matches are replaced. To get only the disjunct matches, overlapping matches are discarded greedily, where longer matches have the priority. This process is repeated until no more rules match the term. An example is presented in the lower part of Table 2 . 
The bottom-up parsing algorithm needs to be fast because the number of unexpanded nodes can be very high due to ambiguities in the top-down parsing. Consequently, the algorithm is greedy, instead of exhaustive, and yields only one result. Aho-Corasick string matching algorithm BIBREF12 is selected for matching for its ability to match all the rules simultaneously. Like the top-down parser, this parser generates partial parses because the bottom-up parser will never fully parse – the output is the same as the non-terminal type in the unexpanded node. This would generate a cyclical rule, i.e. $<$ Class $>$ :== $<$ Class $>$ . However, this never happens because the top-down parser would already expand the null node. The last step of the iteration is assigning the property to the newly induced rule. Property controls the role of the rule in the parsing and induction. The default property is positive, which defines the default behavior of the rule in all procedures. Rules with neutral property are not used in any procedure. They also cannot be re-induced. Some rules are good for parsing, but may introduce errors in the induction. These rules should be given non-inducible property. For instance, rule $<$ Date $>$ :== $<$ Number $>$ is a candidate for the non-inducible property, since years are represented by a single number. On the contrary, not every number is a date. In our experiments, the assignment was done manually. The human user sees the induced rule and few examples of the null nodes, from which it was induced. This should provide enough information for the user to decide in a few seconds, which property to assign. After the stopping criteria is met, the iterative procedure can continue automatically by assigning positive property to each rule. Initial experimenting showed that just a single mistake in the assignment can cause a huge drift, making all further rules wrong. Seed rules Before the start, a list of seed rules may be needed in order for grammar induction to be successful. Since this step is done manually, it is reasonable to have a list of seed rules short and efficient. Seed rules can be divided in three groups: domain independent linguistic rules, class rules, top-level domain rules. Domain independent linguistic rules, such as parse the top and mid-level nodes. They can be applied on many different datasets. Class rules connect class tokens, like named-entity tokens with non-terminals. For example, They parse the leaf nodes of the trees. On the other hand, top-level domain rules, define the basic structure of the sentence. For example, As the name suggests, they parse nodes close to the root. Altogether, these rule groups parse on all levels of the tree, and may already be enough to parse the most basic sentences, but more importantly, they provide the basis for learning to parse more complex sentences. The decision on which and how many seed rules should be defined relies on human judgment whether the current set of seed rules is powerful enough to ignite the bootstrapping procedure. This judgment may be supported by running one iteration and inspecting the top induced rules. Ontology induction This section describes how to utilize the grammar and manipulate semantic trees to discover ontology components in the textual data. Ontology induction from grammar We propose a procedure for mapping grammar components to ontology components. In particular, classes, instances and taxonomic relations are extracted. First, we distinguish between instances and classes in the grammar. 
Classes are represented by all non-terminals and terminals that come from a layer populated with classes, for example, named-entity layer and class layer from Table 1 . Instances might already exist in the instance layer, or they are created from rules, whose right hand side contains only tokens from the lexical layer. These tokens represent the label of the new instance. For instance rule $<$ Profession $>$ ::= software engineer is a candidate for instance extraction. Furthermore, we distinguish between class and instance rules. Class rules have a single symbol representing a class on the right-hand side. Class rules map to subClassOf relations in the ontology. If the rule is positive, then the class on the right side is the subclass of the class on the left side. For instance, rule $<$ Organization $>$ ::= $<$ Company $>$ yields relation (subClassOf Company Organization). On the other hand, instance rules have one or more symbols representing an instance on the right side, and define the isa relation. If the rule is positive, then the instance on the right side is a member of a class on the left side. For instance, rule $<$ Profession $>$ ::= software engineer yields relation (isa SoftwareEngineer Profession). If class or instance rule is neutral then the relation can be treated as false. Note that many other relations may be inferred by combing newly induced relations and relations from the seed ontology. For instance, induced relation (subClassOf new-class seed-class) and seed relation (isa seed-class seed-instance) are used to infer a new relation (isa new-class seed-instance). In this section, we described how to discover relations on the taxonomic level. In the next section, we describe how to discover relations between instances. Relation extraction from semantic trees We propose a method for learning relations from semantic trees, which tries to solve the same problem as the classical relation extraction methods. Given a dataset of positive relation examples that represent one relation type, e.g. birthPlace, the goal is to discover new unseen relations. The method is based on the assumption that a relation between entities is expressed in the shortest path between them in the semantic tree BIBREF13 . The input for training are sentences in layered representation, corresponding parse trees, and relation examples. Given a relation from the training set, we first try to identify the sentence containing each entity of the relation. The relation can have one, two, or even more entities. Each entity is matched to the layer that corresponds to the entity type. For example, strings are matched to the lexical layer; ontology entities are matched to the layer containing such entities. The result of a successfully matched entity is a sub-term of the sentence. In the next step, the corresponding semantic tree is searched for a node that contains the sub-term. At this point, each entity has a corresponding entity node. Otherwise, the relation is discarded from the learning process. Given the entity nodes, a minimum spanning tree containing all off them is extracted. If there is only one entity node, then the resulting subtree is the path between this node and the root node. The extracted sub-tree is converted to a variable tree, so that different semantic trees can have the same variable sub-trees, for example see Figure 4 . 
The semantic nodes of the sub-tree are converted into variable nodes, by retaining the class and the rule of the node, as well as the places of the children in the original tree. For entity nodes also the position in the relation is memorized. Variable tree extracted from a relation is a positive example in the training process. For negative examples all other sub-trees that do not present any relations are converted to variable trees. Each variable node represents one feature. Therefore, a classification algorithm, such as logistic regression can be used for training. When predicting, all possible sub-trees of the semantic tree are predicted. If a sub-tree is predicted as positive, then the terms in the leaf nodes represent the arguments of the relation. Experiments In this section, we present experiments evaluating the proposed approach. We have conducted experimentation on Wikipedia–DBpedia dataset (Section "Datasets" ). First, we have induced a grammar on the Wikipedia dataset (Section "Grammar Induction Experiments" ) to present its characteristics, and the scalability of the approach. In the next experiment, we present a method for discovering less prominent instances (Section "Instance extraction" ). The last experiment demonstrates one application of semantic parsing – the supervised learning of DBpedia relations(Section "Relation extraction" ). Datasets The datasets for experiments were constructed from English Wikipedia and knowledge bases DBpedia BIBREF9 and Freebase BIBREF6 . DBpedia provides structured information about Wikipedia articles that was scraped out of their infoboxes. First sentences of Wikipedia pages describing people were taken as the textual dataset, while DBpedia relations expressing facts about the same people were taken as the dataset for supervised relation learning. Note that each DBpedia instance has a Wikipedia page. A set of person instances was identified by querying DBpedia for instances that have a person class. For the textual dataset, Wikipedia pages representing these entities were parsed by the in-house Wikipedia markup parser to convert the markup into plain text. Furthermore, the links to other Wikipedia pages were retained. Here is an example of a sentence in plain text: Victor Francis Hess (24 June 1883 – 17 December 1964) was an Austrian-American physicist, and Nobel laureate in physics, who discovered cosmic rays. Using the Standford OpenNLP BIBREF14 on plain texts we obtained sentence and token splits, and named-entity annotation. Notice, that only the first sentence of each page was retained and converted to the proposed layered representation (see Section "Experiments" ). The layered representation contains five layers: lexical (plain text), named-entity (named entity recognizer), wiki-link (Wikipedia page in link – DBpedia instance), dbpedia-class (class of Wikipedia page in Dbpedia) and freebase-class (class of Wikipedia page in Freebase). Freebase also contains its own classes of Wikipedia pages. For the last two layers, there might be several classes per Wikipedia page. Only one was selected using a short priority list of classes. If none of the categories is on the list, then the category is chosen at random. After comparing the dbpedia-class and freebase-class layers, only freebase-class was utilized in the experiments because more wiki-link tokens has a class in freebase-class layer than in dbpedia-class layer. There are almost 1.1 million sentences in the collection. 
The average length of a sentence is 18.3 words, while the median length is 13.8 words. There are 2.3 links per sentence. The dataset for supervised relation learning contains all relations where a person instance appears as the subject in a DBpedia relation. For example, dbpedia:Victor_Francis_Hess dbpedia-owl:birthDate 1883-06-24. There are 119 different relation types (unique predicates), ranging from just a few relations to a few million relations. Since DBpedia and Freebase are available in RDF format, we used an RDF store for querying and for storage of existing and new relations. Grammar Induction Experiments The grammar was induced on 10,000 random sentences taken from the dataset described in Section "Datasets". First, a list of 45 seed rules was constructed. There were 22 domain-independent linguistic rules, 17 category rules and 6 top-level rules. The property assignment was done by the authors. In every iteration, the best rule is shown together with the number of nodes it was induced from, and ten of those nodes together with the sentences they appear in. The goal was set to stop the iterative process after two hours. We believe this is the right amount of time to still expect quality feedback from a human user. There were 689 new rules created. A sample of them is presented in Table 3. Table 4 presents the distributions of properties. Around 36% of the rules were used for parsing (non-neutral rules). Together with the seed rules, there are 297 rules used for parsing. Different properties are very evenly dispersed across the iterations. Using the procedure for conversion of grammar rules into a taxonomy presented in Section "Ontology induction", 33 classes and subClassOf relations, and 95 instances and isa relations were generated. The grammar was also tested by parsing a sample of 100,000 test sentences. A few statistics are presented in Table 4. More than a quarter of the sentences were fully parsed, meaning that they do not have any null leaf nodes. Coverage represents the fraction of words in a sentence that were parsed (words that are not in null nodes). The number of operations shows how many times the Parse function was called during the parsing of a sentence. It is highly correlated with the time spent parsing a sentence, which is on average 0.16 ms. This measurement was done on a single CPU core. Consequently, it is feasible to parse a collection of a million sentences, like our dataset. The same statistics were also calculated on the training set; the numbers are very similar to those on the test set. The fully parsed percentage and coverage are even slightly lower than on the test set. Some of the statistics were calculated after each iteration, but only when a non-neutral rule was created. The graphs in Figure 5 show how the statistics changed over the course of the grammar induction. Graph 5 shows that coverage and the fraction of fully parsed sentences are correlated and that they grow very rapidly at the beginning; then the growth starts to slow down, which indicates that there is a long tail of unparsed nodes/sentences. In the following section, we present a concept learning method which deals with this long tail. Furthermore, the number of operations per sentence also slows down (see Graph 5) with the number of rules, which is a positive sign that computational feasibility is retained as the grammar grows. Graph 5 further illustrates the dynamics of the grammar induction. In the earlier phase of induction, many rules that define the upper structure of the tree are induced.
These rules can rapidly increase the depth and the number of null nodes, like rule 1 and rule 2. They also explain the spikes on Graph 5. Their addition to the grammar causes some rules to emerge at the top of the list with a significantly higher frequency. After these rules are induced, the frequency returns to the previous values and slowly decreases over the long run. Instance extraction In this section, we present an experiment with a method for discovering new instances, which appear in the long tail of null nodes. Note that the majority of the instances were already placed in the ontology by the method in Section "Ontology induction from grammar". Here, less prominent instances are extracted to increase the coverage of semantic parsing. The term and the class of a null node will form an isa relation. The class of the node represents the class of the relation. The terms are converted to instances. They are first generalized on the layer level (see Section "Experiments"). The goal is to exclude non-atomic terms, which do not represent instances. Therefore, only terms consisting of one wiki-link token or exclusively of lexical tokens are retained. The relations were sorted according to their frequency. We observe that the accuracy of the relations drops as their frequency decreases. Therefore, relations that occurred less than three times were excluded. The number and accuracy for six classes are reported in Table 5. Other classes were less accurate. For each class, the accuracy was manually evaluated on a random sample of 100 instance relations. Taking into account the estimated accuracy, there were more than 13,000 correct isa relations. Relation extraction In this section, we present an experiment with the relation extraction method presented in Section "Relation extraction from semantic trees". The input for the supervision is the DBpedia relation dataset from Section "Datasets". The subject (first argument) of every relation is a person DBpedia instance – a person Wikipedia page. To begin, the first sentence of that Wikipedia page is identified in the textual dataset. If the object (last argument) of the relation matches a sub-term of this sentence, then the relation is eligible for the experiments. We distinguish three types of values in objects. DBpedia resources are matched against the wiki-link layer. Dates are converted to the format used in English Wikipedia; they are matched against the lexical layer, and so are the string objects. Only relation types that have 200 or more eligible relations were retained. This leaves 74 out of 119 relation types. The macro average fraction of eligible relations per relation type is 17.7%, while the micro average is 23.8%, meaning that roughly a quarter of all DBpedia person relations are expressed in the first sentence of their Wikipedia page. For the rest of this section, all stated averages are micro-averages. The prediction problem is designed in the following way. Given the predicate (relation type) and the first argument of the relation (person), the model predicts the second argument of the relation (object). Because not all relations are functional, such as the child relation, there can be several values per predicate–person pair; on average there are 1.1. Since only one argument of the relation is predicted, the variable trees presented in Section "Relation extraction from semantic trees" will be paths from the root to a single node.
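A minimal sketch of the instance-extraction filter described above (keep only atomic terms and relations seen at least three times); the token format – a list of (layer, value) pairs per term – and the helper names are our own assumptions.

```python
from collections import Counter

def is_atomic(term):
    """A term is an instance candidate if it is a single wiki-link token or
    consists exclusively of lexical tokens. Assumed format: list of (layer, value)."""
    if not term:
        return False
    layers = [layer for layer, _ in term]
    return (len(term) == 1 and layers[0] == "wiki-link") or all(l == "lexical" for l in layers)

def extract_instance_relations(null_nodes, min_count=3):
    """null_nodes: iterable of (class_name, term) pairs taken from null leaf nodes."""
    candidates = [(cls, tuple(term)) for cls, term in null_nodes if is_atomic(term)]
    counts = Counter(candidates)
    # Keep only relations that occurred at least min_count times, most frequent first.
    kept = [key for key, n in counts.most_common() if n >= min_count]
    return [("isa", " ".join(value for _, value in term), cls) for cls, term in kept]

null_nodes = [("Profession", [("lexical", "software"), ("lexical", "engineer")])] * 3
print(extract_instance_relations(null_nodes))
# [('isa', 'software engineer', 'Profession')]
```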
Analysis of the variable tree extraction shows that on average 60.8% of eligible relations were successfully converted to variable trees (the object term exactly matches the term in the node). The others were not converted because 8.2% of the terms were split between nodes and 30.9% of the terms are sub-terms of node terms rather than complete terms. Measuring the diversity of variable trees shows that a distinct variable tree appeared 2.7 times on average. Several models based on variable trees were trained for solving this classification problem: Basic (Basic model) – The model contains the positive training variable trees. In the prediction, if the test variable tree matches one of the trees in the model, then the example is predicted positive. Net (Automaton model) – All positive variable trees are paths with start and end points. In this model they are merged into a net, which acts as a deterministic automaton. If the automaton accepts the test variable tree, then it is predicted positive. An example of the automaton model is presented in Figure 6. LR (Logistic regression) – A logistic regression model is trained with positive and negative examples, where nodes in variable trees represent features. LRC (Logistic regression + Context nodes) – All leaf nodes that are siblings of any of the nodes in the variable tree are added to the LR model. LRCL (Logistic regression + Context nodes + Lexical Tokens) – Tokens from the lexical layer of the entity nodes are added to the LRC model as features. For training, all eligible relations, or a maximum of 10,000, were taken for each of the 74 relation types. A 10-fold cross-validation was performed for evaluation. The results are presented in Table 6. The converted recall and converted F1 score present recall and F1 on converted examples, i.e., those where relations were successfully converted into variable trees. The performance increases with each model; however, the interpretability decreases. We also compared our method to conditional random fields (CRF). In the CRF method, tokens from all layers with a window size of 7 were taken as features for sequence prediction. On the converted examples, the CRF achieved an F1 score of 80.8, which is comparable to our best model's (LRCL) F1 score of 80.0. Related Work There are many known approaches to ontology learning and semantic parsing; however, to the best of our knowledge, this is the first work to jointly learn an ontology and a semantic parser. In the following sections, we make comparisons to other work on semantic parsing, ontology learning, grammar induction and others. Semantic parsing The goal of semantic parsing is to map text to meaning representations. Several approaches have used Combinatory categorial grammar (CCG) and lambda calculus as a meaning representation BIBREF15, BIBREF16. CCG grammar closely connects syntax and semantics with a lexicon, where each entry consists of a term, a syntactic category and a lambda statement. Similarly, our context-free grammar contains production rules. Some of these rules do not contain lexical tokens (the grammar is not lexicalized), which gives the ability to express some relations with a single rule. For instance, to parse jazz drummer, the rule $<$ Musician_Type $>$ ::= $<$ Musical_Genre $>$ $<$ Musician_Type $>$ is used to directly express the relation, which determines the genre of the musician. Lambda calculus may provide a more formal meaning representation than semantic trees, but the lexicon of CCG requires mappings to lambda statements.
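A rough sketch of the LR variant described earlier in this passage, which treats each variable node as a binary feature of a candidate sub-tree; the node encoding, the toy examples, and the use of scikit-learn are our own choices, not part of the original system.

```python
# Each variable node (class, rule, child position) becomes one binary feature;
# positive examples are variable trees extracted from known relations, negatives
# are all other sub-trees. Requires scikit-learn.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def tree_features(variable_tree):
    # variable_tree: iterable of variable nodes, each a (class, rule, position) triple.
    return {f"{cls}|{rule}|{pos}": 1 for cls, rule, pos in variable_tree}

positive_trees = [[("Person", "r12", 0), ("Date", "r7", 1)]]
negative_trees = [[("Person", "r12", 0), ("Color", "r3", 1)]]

X_dicts = [tree_features(t) for t in positive_trees + negative_trees]
y = [1] * len(positive_trees) + [0] * len(negative_trees)

vectorizer = DictVectorizer()
X = vectorizer.fit_transform(X_dicts)
model = LogisticRegression().fit(X, y)

# At prediction time every candidate sub-tree of a semantic tree is scored;
# sub-trees classified as positive yield the arguments of the relation.
print(model.predict(vectorizer.transform([tree_features(positive_trees[0])])))
```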
Other approaches use dependency-based compositional semantics BIBREF17, ungrounded graphs BIBREF18, etc. as meaning representations. Early semantic parsers were trained on datasets, such as Geoquery BIBREF19 and Atis BIBREF5, that map sentences to domain-specific databases. Later on, datasets for question answering based on Freebase were created – Free917 BIBREF4 and WebQuestions BIBREF20. These datasets contain short questions from multiple domains, and since the meaning representations are formed of Freebase concepts, they allow reasoning over Freebase's ontology, which is much richer than the databases in Geoquery and Atis. All those datasets were constructed by either forming sentences given the meaning representation or vice versa. Consequently, systems that were trained and evaluated on these datasets might not work on sentences that cannot be represented by the underlying ontology. To overcome this limitation, BIBREF16 developed an open-vocabulary semantic parser. Their approach uses a CCG parser on questions to form lambda statements, which, besides the Freebase vocabulary, contain underspecified predicates. These lambda statements, together with answers – Freebase entities – are used to learn a low-dimensional probabilistic database, which is then used to answer fill-in-the-blank natural language questions. In a very similar fashion, BIBREF21 defines underspecified entities, types and relations when the corresponding concept does not exist in Freebase. In contrast, the purpose of our method is to identify new concepts and ground them in the ontology. Ontology Learning Many ontology learning approaches address the same ontology components as our approach. However, their goal is to learn only the salient concepts for a particular domain, while our goal is to learn all the concepts (including instances, like particular organizations), so that they can be used in the meaning representation. As the survey by BIBREF22 summarizes, the learning mechanisms are based either on statistics, linguistics, or logic. Our approach is unique because part of our ontology is constructed from the grammar. Many approaches use lexico-syntactic patterns for ontology learning. These are often based on dependency parses, as in BIBREF2, BIBREF23. Our approach does not rely on linguistic preprocessing, which makes it suitable for non-standard texts and poorly resourced languages. Our approach also builds patterns, however in the form of grammar rules. Instead of lexico-syntactic patterns, which contain linguistic classes, our approach models semantic patterns, which contain semantic classes, like Person and Color. These patterns are constructed in advance, which is sometimes difficult because the constructor is not always aware of all the phenomena that are expressed in the input text. Our approach allows the user to create a small number of seed patterns in advance and then explore other patterns through the process of grammar learning. A similar bootstrapping semi-automatic approach to ontology learning was developed in BIBREF24, where the user validates lexicalizations of a particular relation to learn new instances, and in BIBREF25, where the user validates newly identified terms, while in our approach the user validates grammar rules to learn the composition of whole sentences. A similar approach combining DBpedia with Wikipedia for supervised learning was taken in BIBREF26; however, their focus is more on the lexicalization of relations and classes.
Other Approaches Related work linking short terms to ontology concepts BIBREF34 is designed similarly to our approach in terms of the bootstrapping procedure used to induce patterns. However, instead of inducing context-free grammar production rules, it provides suggestions for rewrite rules that transform text directly into the ontology language. Another bootstrapping semi-automatic approach was developed for knowledge base population BIBREF35. The task of knowledge base population is concerned only with extracting instances and relations given the ontology. In our work, we also extract the backbone of the ontology – classes and taxonomic relations. Also, many other approaches focus only on one aspect of knowledge extraction, like taxonomy extraction BIBREF36, BIBREF37 or relation extraction BIBREF13, BIBREF38. Combining these approaches can lead to cumbersome concept matching problems. This problem was also observed by BIBREF39. Their system OntoUSP tries to overcome this by inducing and populating a probabilistic grammar in an unsupervised way to solve the question answering problem. However, the result is logical-form clusters connected in an isa hierarchy, rather than grounded concepts connected with an existing ontology. Discussion We have presented an approach for joint ontology learning and semantic parsing. The approach was evaluated by building an ontology representing biographies of people. The first sentences of person Wikipedia pages and the combination of DBpedia and Freebase were used as the dataset. This dataset was suitable for our approach because the text is equipped with human-tagged annotations, which are already linked to the ontology. In other cases, named entity disambiguation would be needed to obtain the annotations. Another trait of the dataset that makes it suitable for our approach is the homogeneous style of writing. If the style were more heterogeneous, the users would have to participate in more iterations to achieve the same level of coverage. The participation of the users may be seen as a cost, but on the other hand it allows them to learn about the dataset without reading it all. The users do not learn so much about specific facts as about second-order information, such as what types of relations are expressed and how they are distributed. Semantic trees offer a compact tree-structured meaning representation, which could be exploited for scenarios not covered by this paper, like relation type discovery and question answering. Furthermore, they can be used for a more interpretable representation of meaning, like the automaton representation in Figure 6, compared to some other methods, such as the one based on neural networks BIBREF40. Our approach may not be superior on any one specific part of ontology learning, but rather provides an integrated approach for learning on several levels of the ontology. Also, our approach does not use syntactic analysis, such as part-of-speech tags or dependency parsing, which makes it more language independent and useful for non-standard texts, where such analysis is not available. On the other hand, we are looking into integrating syntactic analysis in future work. One scenario is to automatically detect the property of a rule. Another idea for future work is to integrate some ideas from other grammar induction methods to detect meaningful patterns without relying on the annotation of the text.
This work was supported by the Slovenian Research Agency and the ICT Programme of the EC under XLike (FP7-ICT-288342-STREP) and XLime (FP7-ICT-611346).
1.1 million sentences, 119 different relation types (unique predicates)
0a92352839b549d07ac3f4cb997b8dc83f64ba6f
0a92352839b549d07ac3f4cb997b8dc83f64ba6f_0
Q: By how much do they outperform basic greedy and cross-entropy beam decoding? Text: Introduction [t] Standard Beam Search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 t = 0 to T i = 1 to k INLINEFORM4 INLINEFORM5 INLINEFORM6 is the local output scoring function INLINEFORM7 top-k-max INLINEFORM8 Top k values of the input matrix INLINEFORM9 top-k-argmax INLINEFORM10 Top INLINEFORM11 argmax index pairs of the input matrix i = 1 to k INLINEFORM12 embedding( INLINEFORM13 ) INLINEFORM14 INLINEFORM15 is a nonlinear recurrent function that returns state at next step INLINEFORM16 INLINEFORM17 follow-backpointer( INLINEFORM18 ) INLINEFORM19 Sequence-to-sequence (seq2seq) models have been successfully used for many sequential decision tasks such as machine translation BIBREF0 , BIBREF1 , parsing BIBREF2 , BIBREF3 , summarization BIBREF4 , dialog generation BIBREF5 , and image captioning BIBREF6 . Beam search is a desirable choice of test-time decoding algorithm for such models because it potentially avoids search errors made by simpler greedy methods. However, the typical approach to training neural sequence models is to use a locally normalized maximum likelihood objective (cross-entropy training) BIBREF0 . This objective does not directly reason about the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding BIBREF7 , BIBREF8 , BIBREF9 . These negative results are not unexpected. The training procedure was not search-aware: it was not able to consider the effect that changing the model's scores might have on the ease of search while using a beam decoding, greedy decoding, or otherwise. We hypothesize that the under-performance of beam search in certain scenarios can be resolved by using a better designed training objective. Because beam search potentially offers more accurate search when compared to greedy decoding, we hope that appropriately trained models should be able to leverage beam search to improve performance. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined and a valid training criterion, this “direct loss” objective is discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines. Several related methods, including reinforcement learning BIBREF10 , BIBREF11 , imitation learning BIBREF12 , BIBREF13 , BIBREF14 , and discrete search based methods BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , have also been proposed to make training search-aware. These methods include approaches that forgo direct optimization of a global training objective, instead incorporating credit assignment for search errors by using methods like early updates BIBREF19 that explicitly track the reachability of the gold target sequence during the search procedure. 
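Since the listing of Algorithm 1 is only partially legible above, here is a generic Python sketch of standard beam search for an auto-regressive decoder; the scoring function, state update, and toy vocabulary are placeholders of our own, not the paper's actual components.

```python
import math

def beam_search(step_scores, initial_state, advance, k, max_len):
    """Generic beam search sketch.
    step_scores(state) -> dict mapping token -> log-probability (local output scores)
    advance(state, token) -> next decoder state
    Returns the highest-scoring token sequence of length max_len.
    """
    # Each beam element: (cumulative score, token sequence, decoder state).
    beam = [(0.0, [], initial_state)]
    for _ in range(max_len):
        candidates = []
        for score, seq, state in beam:
            for token, logp in step_scores(state).items():
                candidates.append((score + logp, seq + [token], advance(state, token)))
        # Keep the k best successor candidates (the top-k-argmax of Algorithm 1).
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]
    return max(beam, key=lambda b: b[0])[1]

# Toy usage: a "model" whose state is just the last emitted token.
vocab_logp = {"a": math.log(0.6), "b": math.log(0.4)}
print(beam_search(lambda s: vocab_logp, None, lambda s, t: t, k=2, max_len=3))
```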
While addressing a related problem – credit assignment for search errors during training – in this paper, we propose an approach with a novel property: we directly optimize a continuous and global training objective using backpropagation. As a result, in our approach, credit assignment is handled directly via gradient optimization in an end-to-end computation graph. The most closely related work to our own approach was proposed by Goyal et al. BIBREF20 . They do not consider beam search, but develop a continuous approximation of greedy decoding for scheduled sampling objectives. Other related work involves training a generator with a Gumbel reparamterized sampling module to more reliably find the MAP sequences at decode-time BIBREF21 , and constructing surrogate loss functions BIBREF22 that are close to task losses. Model We denote the seq2seq model parameterized by INLINEFORM0 as INLINEFORM1 . We denote the input sequence as INLINEFORM2 , the gold output sequence as INLINEFORM3 and the result of beam search over INLINEFORM4 as INLINEFORM5 . Ideally, we would like to directly minimize a final evaluation loss, INLINEFORM6 , evaluated on the result of running beam search with input INLINEFORM7 and model INLINEFORM8 . Throughout this paper we assume that the evaluation loss decomposes over time steps INLINEFORM9 as: INLINEFORM10 . We refer to this idealized training objective that directly evaluates prediction loss as the “direct loss” objective and define it as: DISPLAYFORM0 Unfortunately, optimizing this objective using gradient methods is difficult because the objective is discontinuous. The two sources of discontinuity are: We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2 Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0 Specified in more detail in Section SECREF9 , our surrogate objective in Equation 2 will additionally take a hyperparameter INLINEFORM0 that trades approximation quality for smoothness of the objective. Under certain conditions, Equation 2 converges to the objective in Equation 1 as INLINEFORM1 is increased. We first describe the standard discontinuous beam search procedure and then our training approach (Equation 2) involving a continuous relaxation of beam search. Discontinuity in Beam Search [t] continuous-top-k-argmax [1] INLINEFORM0 INLINEFORM1 , s.t. INLINEFORM2 INLINEFORM3 INLINEFORM4 = 1 to k peaked-softmax will be dominated by scores closer to INLINEFORM5 INLINEFORM6 The square operation is element-wise Formally, beam search is a procedure with hyperparameter INLINEFORM7 that maintains a beam of INLINEFORM8 elements at each time step and expands each of the INLINEFORM9 elements to find the INLINEFORM10 -best candidates for the next time step. The procedure finds an approximate argmax of a scoring function defined on output sequences. We describe beam search in the context of seq2seq models in Algorithm SECREF1 – more specifically, for an encoder-decoder BIBREF0 model with a nonlinear auto-regressive decoder (e.g. an LSTM BIBREF23 ). 
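The display equations referenced above are not reproduced in this text, so the following is one plausible way to write the direct loss objective (Equation 1) and its continuous surrogate (Equation 2) from the definitions given; the dataset symbol, the name Beam for the beam search decoder, and the per-step cost d are our own notation, not the paper's.

```latex
% Direct loss objective (Equation 1): evaluate the decomposable loss L on the
% output of beam search and minimize it over the training set D.
\theta^{*} \;=\; \arg\min_{\theta} \sum_{(x,\, y^{*}) \in \mathcal{D}}
    L\big(\mathrm{Beam}(x;\theta),\, y^{*}\big),
\qquad
L(\hat{y}, y^{*}) \;=\; \sum_{t} d\big(\hat{y}_{t},\, y^{*}_{t}\big).

% Surrogate objective (Equation 2): replace the composition of L with the beam
% search decoder by the continuous approximation softLB, whose hyperparameter
% \alpha trades approximation quality for smoothness.
\theta^{*} \;=\; \arg\min_{\theta} \sum_{(x,\, y^{*}) \in \mathcal{D}}
    \mathrm{softLB}\big(x,\, y^{*};\, \theta,\, \alpha\big).
```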
We define the global model score of a sequence INLINEFORM0 with length INLINEFORM1 to be the sum of local output scores at each time step of the seq2seq model: INLINEFORM2 . In neural models, the function INLINEFORM3 is implemented as a differentiable mapping, INLINEFORM4 , which yields scores for vocabulary elements using the recurrent hidden states at corresponding time steps. In our notation, INLINEFORM5 is the hidden state of the decoder at time step INLINEFORM6 for beam element INLINEFORM7 , INLINEFORM8 is the embedding of the output symbol at time-step INLINEFORM9 for beam element INLINEFORM10 , and INLINEFORM11 is the cumulative model score at step INLINEFORM12 for beam element INLINEFORM13 . In Algorithm SECREF1 , we denote by INLINEFORM14 the cumulative candidate score matrix which represents the model score of each successor candidate in the vocabulary for each beam element. This score is obtained by adding the local output score (computed as INLINEFORM15 ) to the running total of the score for the candidate. The function INLINEFORM16 in Algorithms SECREF1 and SECREF7 yields successive hidden states in recurrent neural models like RNNs, LSTMs etc. The INLINEFORM17 operation maps a word in the vocabulary INLINEFORM18 , to a continuous embedding vector. Finally, backpointers at each time step to the beam elements at the previous time step are also stored for identifying the best sequence INLINEFORM19 , at the conclusion of the search procedure. A backpointer at time step INLINEFORM20 for a beam element INLINEFORM21 is denoted by INLINEFORM22 which points to one of the INLINEFORM23 elements at the previous beam. We denote a vector of backpointers for all the beam elements by INLINEFORM24 . The INLINEFORM25 operation takes as input backpointers ( INLINEFORM26 ) and candidates ( INLINEFORM27 ) for all the beam elements at each time step and traverses the sequence in reverse (from time-step INLINEFORM28 through 1) following backpointers at each time step and identifying candidate words associated with each backpointer that results in a sequence INLINEFORM29 , of length INLINEFORM30 . The procedure described in Algorithm SECREF1 is discontinuous because of the top-k-argmax procedure that returns a pair of vectors corresponding to the INLINEFORM0 highest-scoring indices for backpointers and vocabulary items from the score matrix INLINEFORM1 . This index selection results in hard backpointers at each time step which restrict the gradient flow during backpropagation. In the next section, we describe a continuous relaxation to the top-k-argmax procedure which forms the crux of our approach. Continuous Approximation to top-k-argmax [t] Continuous relaxation to beam search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 t = 0 to T INLINEFORM5 i=1 to k INLINEFORM6 INLINEFORM7 is a local output scoring function INLINEFORM8 INLINEFORM9 is used to compute INLINEFORM10 INLINEFORM11 Call Algorithm 2 i = 1 to k INLINEFORM12 Soft back pointer computation INLINEFORM13 Contribution from vocabulary items INLINEFORM14 Peaked distribution over the candidates to compute INLINEFORM15 INLINEFORM16 INLINEFORM17 INLINEFORM18 j = 1 to k Get contributions from soft backpointers for each beam element INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 is a nonlinear recurrent function that returns state at next step INLINEFORM23 Pick the loss for the sequence with highest model score on the beam in a soft manner. 
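To make the cumulative candidate score matrix concrete, here is a small NumPy sketch of how the k-by-|V| matrix of successor scores could be formed from the running beam scores and the local output scores, followed by the hard top-k selection of standard beam search; the array names and toy values are our own.

```python
import numpy as np

# k running beam scores and a k x |V| matrix of local output scores
# (one row per beam element, one column per vocabulary item).
beam_scores = np.array([-1.2, -1.5, -2.0])           # shape (k,)
local_scores = np.log(np.array([                      # shape (k, V), toy values
    [0.7, 0.2, 0.1],
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],
]))

# Cumulative candidate score matrix M: add each beam element's running score to
# the local scores of all of its successor candidates.
M = beam_scores[:, None] + local_scores

# Hard top-k over the flattened matrix gives the backpointers (rows) and the
# successor vocabulary items (columns) used by standard beam search.
k = 3
flat = np.argsort(M.ravel())[::-1][:k]
backpointers, successors = np.unravel_index(flat, M.shape)
print(M.round(2), backpointers, successors)
```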
The key property that we use in our approximation is that for a real valued vector INLINEFORM0 , the argmax with respect to a vector of scores, INLINEFORM1 , can be approximated by a temperature controlled softmax operation. The argmax operation can be represented as: INLINEFORM2 which can be relaxed by replacing the indicator function with a peaked-softmax operation with hyperparameter INLINEFORM0 : INLINEFORM1 As INLINEFORM0 , INLINEFORM1 so long as there is only one maximum value in the vector INLINEFORM2 . This peaked-softmax operation has been shown to be effective in recent work BIBREF24 , BIBREF25 , BIBREF20 involving continuous relaxation to the argmax operation, although to our knowledge, this is the first work to apply it to approximate the beam search procedure. Using this peaked-softmax operation, we propose an iterative algorithm for computing a continuous relaxation to the top-k-argmax procedure in Algorithm SECREF6 which takes as input a score matrix of size INLINEFORM0 and returns INLINEFORM1 peaked matrices INLINEFORM2 of size INLINEFORM3 . Each matrix INLINEFORM4 represents the index of INLINEFORM5 -th max. For example, INLINEFORM6 will have most of its mass concentrated on the index in the matrix that corresponds to the argmax, while INLINEFORM7 will have most of its mass concentrated on the index of the 2nd-highest scoring element. Specifically, we obtain matrix INLINEFORM8 by computing the squared difference between the INLINEFORM9 -highest score and all the scores in the matrix and then using the peaked-softmax operation over the negative squared differences. This results in scores closer to the INLINEFORM10 -highest score to have a higher mass than scores far away from the INLINEFORM11 -highest score. Hence, the continuous relaxation to top-k-argmax operation can be simply implemented by iteratively using the max operation which is continuous and allows for gradient flow during backpropagation. As INLINEFORM0 , each INLINEFORM1 vector converges to hard index pairs representing hard backpointers and successor candidates described in Algorithm SECREF1 . For finite INLINEFORM2 , we introduce a notion of a soft backpointer, represented as a vector INLINEFORM3 in the INLINEFORM4 -probability simplex, which represents the contribution of each beam element from the previous time step to a beam element at current time step. This is obtained by a row-wise sum over INLINEFORM5 to get INLINEFORM6 values representing soft backpointers. Training with Continuous Relaxation of Beam Search We describe our approach in detail in Algorithm 3 and illustrate the soft beam recurrence step in Figure 1. For composing the loss function and the beam search function for our optimization as proposed in Equation 2, we make use of decomposability of the loss function across time-steps. Thus for a sequence y, the total loss is: INLINEFORM0 . In our experiments, INLINEFORM1 is the Hamming loss which can be easily computed at each time-step by simply comparing gold INLINEFORM2 with INLINEFORM3 . While exact computation of INLINEFORM4 will vary according to the loss, our proposed procedure will be applicable as long as the total loss is decomposable across time-steps. While decomposability of loss is a strong assumption, existing literature on structured prediction BIBREF26 , BIBREF27 has made due with this assumption, often using decomposable losses as surrogates for non-decomposable ones. 
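A small NumPy sketch of the peaked-softmax relaxation and the soft top-k described above; it follows the text (a temperature-controlled softmax over negative squared distances to the m-th highest score), except that the top scores are obtained here by sorting rather than by the iterative max of Algorithm 2, and the variable names are our own.

```python
import numpy as np

def peaked_softmax(x, alpha):
    """Temperature-controlled softmax; as alpha grows it approaches a one-hot argmax."""
    z = alpha * (x - x.max())          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_top_k(M, k, alpha):
    """Continuous relaxation of top-k-argmax over a k x |V| score matrix M.
    Returns k peaked matrices; the m-th one concentrates its mass near the index
    of the m-th highest score."""
    scores = M.ravel()
    top_values = np.sort(scores)[::-1][:k]          # the k highest scores
    peaked = []
    for m in range(k):
        # Peaked softmax over the negative squared distance to the m-th highest score.
        d = peaked_softmax(-(scores - top_values[m]) ** 2, alpha)
        peaked.append(d.reshape(M.shape))
    return peaked

M = np.array([[2.0, 0.5, -1.0],
              [1.5, 0.0, -0.5]])
D = soft_top_k(M, k=2, alpha=50.0)
# Soft backpointers: row-wise sums of each peaked matrix.
print([d.sum(axis=1).round(3) for d in D])
```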
We detail the continuous relaxation to beam search in Algorithm SECREF7 with INLINEFORM5 being the cumulative loss of beam element INLINEFORM6 at time step INLINEFORM7 and INLINEFORM8 being the embedding matrix of the target vocabulary which is of size INLINEFORM9 where INLINEFORM10 is the size of the embedding vector. In Algorithm SECREF7 , all the discrete selection functions have been replaced by their soft, continuous counterparts which can be backpropagated through. This results in all the operations being matrix and vector operations which is ideal for a GPU implementation. An important aspect of this algorithm is that we no longer rely on exactly identifying a discrete search prediction INLINEFORM0 since we are only interested in a continuous approximation to the direct loss INLINEFORM1 (line 18 of Algorithm SECREF7 ), and all the computation is expressed via the soft beam search formulation which eliminates all the sources of discontinuities associated with the training objective in Equation 1. The computational complexity of our approach for training scales linearly with the beam size and hence is roughly INLINEFORM2 times slower than standard CE training for beam size INLINEFORM3 . Since we have established the pointwise convergence of peaked-softmax to argmax as INLINEFORM4 for all vectors that have a unique maximum value, we can establish pointwise convergence of objective in Equation 2 to objective in Equation 1 as INLINEFORM5 , as long as there are no ties among the top-k scores of the beam expansion candidates at any time step. We posit that absolute ties are unlikely due to random initialization of weights and the domain of the scores being INLINEFORM6 . Empirically, we did not observe any noticeable impact of potential ties on the training procedure and our approach performed well on the tasks as discussed in Section SECREF4 . DISPLAYFORM0 We experimented with different annealing schedules for INLINEFORM0 starting with non-peaked softmax moving toward peaked-softmax across epochs so that learning is stable with informative gradients. This is important because cost functions like Hamming distance with very high INLINEFORM1 tend to be non-smooth and are generally flat in regions far away from changepoints and have a very large gradient near the changepoints which makes optimization difficult. Decoding The motivation behind our approach is to make the optimization aware of beam search decoding while maintaining the continuity of the objective. However, since our approach doesn't introduce any new model parameters and optimization is agnostic to the architecture of the seq2seq model, we were able to experiment with various decoding schemes like locally normalized greedy decoding, and hard beam search, once the model has been trained. However, to reduce the gap between the training procedure and test procedure, we also experimented with soft beam search decoding. This decoding approach closely follows Algorithm SECREF7 , but along with soft back pointers, we also compute hard back pointers at each time step. After computing all the relevant quantities like model score, loss etc., we follow the hard backpointers to obtain the best sequence INLINEFORM0 . This is very different from hard beam decoding because at each time step, the selection decisions are made via our soft continuous relaxation which influences the scores, LSTM hidden states and input embeddings at subsequent time-steps. The hard backpointers are essentially the MAP estimate of the soft backpointers at each step. 
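To illustrate the soft beam recurrence and the MAP reading of the soft backpointers mentioned above, this sketch mixes the previous hidden states and candidate embeddings according to a soft backpointer instead of selecting a single predecessor; the tensor shapes and values are our own assumptions.

```python
import numpy as np

k, hidden, emb = 3, 4, 2
prev_hidden = np.random.randn(k, hidden)     # hidden states of the k beam elements
prev_embed = np.random.randn(k, emb)         # embeddings of their candidate tokens

# Soft backpointer for one beam element at the current step: a distribution over
# the k previous beam elements (row-wise sums of the peaked matrix, as in Algorithm 3).
soft_bp = np.array([0.85, 0.10, 0.05])

# Soft beam recurrence: mix previous states/embeddings instead of selecting one,
# so gradients can flow through the selection.
mixed_hidden = soft_bp @ prev_hidden         # convex combination, shape (hidden,)
mixed_embed = soft_bp @ prev_embed           # shape (emb,)

# Hard backpointer used by the soft decoding scheme: the MAP estimate of soft_bp.
hard_bp = int(np.argmax(soft_bp))
print(mixed_hidden.shape, mixed_embed.shape, hard_bp)
```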
With small, finite INLINEFORM1 , we observe differences between soft beam search and hard beam search decoding in our experiments. Comparison with Max-Margin Objectives Max-margin based objectives are typically motivated as another kind of surrogate training objective which avoid the discontinuities associated with direct loss optimization. Hinge loss for structured prediction typically takes the form: INLINEFORM0 where INLINEFORM0 is the input sequence, INLINEFORM1 is the gold target sequence, INLINEFORM2 is the output search space and INLINEFORM3 is the discontinuous cost function which we assume is decomposable across the time-steps of a sequence. Finding the cost augmented maximum score is generally difficult in large structured models and often involves searching over the output space and computing the approximate cost augmented maximal output sequence and the score associated with it via beam search. This procedure introduces discontinuities in the training procedure of structured max-margin objectives and renders it non amenable to training via backpropagation. Related work BIBREF15 on incorporating beam search into the training of neural sequence models does involve cost-augmented max-margin loss but it relies on discontinuous beam search forward passes and an explicit mechanism to ensure that the gold sequence stays in the beam during training, and hence does not involve back propagation through the beam search procedure itself. Our continuous approximation to beam search can very easily be modified to compute an approximation to the structured hinge loss so that it can be trained via backpropagation if the cost function is decomposable across time-steps. In Algorithm SECREF7 , we only need to modify line 5 as: INLINEFORM0 and instead of computing INLINEFORM0 in Algorithm SECREF7 , we first compute the cost augmented maximum score as: INLINEFORM1 and also compute the target score INLINEFORM0 by simply running the forward pass of the LSTM decoder over the gold target sequence. The continuous approximation to the hinge loss to be optimized is then: INLINEFORM1 . We empirically compare this approach with the proposed approach to optimize direct loss in experiments. Experimental Setup Since our goal is to investigate the efficacy of our approach for training generic seq2seq models, we perform experiments on two NLP tagging tasks with very different characteristics and output search spaces: Named Entity Recognition (NER) and CCG supertagging. While seq2seq models are appropriate for CCG supertagging task because of the long-range correlations between the sequential output elements and a large search space, they are not ideal for NER which has a considerably smaller search space and weaker correlations between predictions at subsequent time steps. In our experiments, we observe improvements from our approach on both of the tasks. We use a seq2seq model with a bi-directional LSTM encoder (1 layer with tanh activation function) for the input sequence INLINEFORM0 , and an LSTM decoder (1 layer with tanh activation function) with a fixed attention mechanism that deterministically attends to the INLINEFORM1 -th input token when decoding the INLINEFORM2 -th output, and hence does not involve learning of any attention parameters. Since, computational complexity of our approach for optimization scales linearly with beam size for each instance, it is impractical to use very large beam sizes for training. 
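A minimal sketch of the cost-augmented scoring behind the approximate max-margin variant described above: a decomposable cost is added to the candidate scores before the (soft) beam step, and a hinge compares the resulting best score with the score of the gold sequence; all names and toy values here are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

def hinge_loss(cost_augmented_best_score, gold_score):
    # Structured hinge: penalize when the cost-augmented best hypothesis
    # outscores the gold sequence.
    return max(0.0, cost_augmented_best_score - gold_score)

# Toy step: candidate scores for k beam elements over a small vocabulary, plus a
# decomposable cost (e.g. Hamming) for predicting each candidate at this step.
candidate_scores = np.array([[1.0, 0.2], [0.8, 0.9]])
step_costs = np.array([[0.0, 1.0], [1.0, 1.0]])      # 0 where the candidate is correct

# Cost augmentation: add the cost before taking the (soft) beam step.
augmented = candidate_scores + step_costs
best_augmented = augmented.max()                      # stand-in for the beam's best score
gold_score = 1.0                                      # score of the gold sequence

print(hinge_loss(best_augmented, gold_score))         # 0.9
```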
Hence, beam size for all the beam search based experiments was set to 3 which resulted in improvements on both the tasks as discussed in the results. For both tasks, the direct loss function was the Hamming distance cost which aims to maximize word level accuracy. Named Entity Recognition For named entity recognition, we use the CONLL 2003 shared task data BIBREF28 for German language and use the provided data splits. We perform no preprocessing on the data. The output vocabulary length (label space) is 10. A peculiar characteristic of this problem is that the training data is naturally skewed toward one default label (`O') because sentences typically do not contain many named entities and the evaluation focuses on the performance recognizing entities. Therefore, we modify the Hamming cost such that incorrect prediction of `O' is doubly penalized compared to other incorrect predictions. We use the hidden layers of size 64 and label embeddings of size 8. As mentioned earlier, seq2seq models are not an ideal choice for NER (tag-level correlations are short-ranged in NER – the unnecessary expressivity of full seq2seq models over simple encoder-classifier neural models makes training harder). However, we wanted to evaluate the effectiveness of our approach on different instantiations of seq2seq models. CCG Supertagging We used the standard splits of CCG bank BIBREF29 for training, development, and testing. The label space of supertags is 1,284 which is much larger than NER. The distribution of supertags in the training data exhibits a long tail because these supertags encode specific syntactic information about the words' usage. The supertag labels are correlated with each other and many tags encode similar information about the syntax. Moreover, this task is sensitive to the long range sequential decisions and search effects because of how it holistically encodes the syntax of the entire sentence. We perform minor preprocessing on the data similar to the preprocessing in BIBREF30 . For this task, we used hidden layers of size 512 and the supertag label embeddings were also of size 512. The standard evaluation metric for this task is the word level label accuracy which directly corresponds to Hamming loss. Hyperparameter tuning For tuning all the hyperparameters related to optimization we trained our models for 50 epochs and picked the models with the best performance on the development set. We also ran multiple random restarts for all the systems evaluated to account for performance variance across randomly started runs. We pretrained all our models with standard cross entropy training which was important for stable optimization of the non convex neural objective with a large parameter search space. This warm starting is a common practice in prior work on complex neural models BIBREF10 , BIBREF4 , BIBREF14 . Comparison We report performance on validation and test sets for both the tasks in Tables 1 and 2. The baseline model is a cross entropy trained seq2seq model (Baseline CE) which is also used to warm start the the proposed optimization procedures in this paper. This baseline has been compared against the approximate direct loss training objective (Section SECREF9 ), referred to as INLINEFORM0 in the tables, and the approximate max-margin training objective (Section SECREF12 ), referred to as INLINEFORM1 in the tables. 
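As a concrete example of the modified Hamming cost for NER described earlier in this section, here is a small sketch; we read "incorrect prediction of `O'" as predicting the default O tag where the gold label is an entity, which is an assumption on our part – the opposite reading would swap the condition in the code.

```python
def ner_hamming_cost(gold_tags, pred_tags, default="O"):
    """Hamming cost where incorrectly predicting the default 'O' tag is doubly
    penalized compared to other incorrect predictions (our reading of the text)."""
    cost = 0.0
    for gold, pred in zip(gold_tags, pred_tags):
        if pred == gold:
            continue
        cost += 2.0 if pred == default else 1.0
    return cost

print(ner_hamming_cost(["B-PER", "O", "B-LOC"], ["O", "O", "B-ORG"]))  # 2.0 + 1.0 = 3.0
```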
Results are reported for models when trained with annealing INLINEFORM2 , and also with a constant setting of INLINEFORM3 which is a very smooth but inaccurate approximation of the original direct loss that we aim to optimize. Comparisons have been made on the basis of performance of the models under different decoding paradigms (represented as different column in the tables): locally normalized decoding (CE greedy), hard beam search decoding and soft beam search decoding described in Section SECREF11 . Results As shown in Tables 1 and 2, our approach INLINEFORM0 shows significant improvements over the locally normalized CE baseline with greedy decoding for both the tasks (+5.5 accuracy points gain for supertagging and +1.5 F1 points for NER). The improvement is more pronounced on the supertagging task, which is not surprising because: (i) the evaluation metric is tag-level accuracy which is congruent with the Hamming loss that INLINEFORM1 directly optimizes and (ii) the supertagging task itself is very sensitive to the search procedure because tags across time-steps tend to exhibit long range dependencies as they encode specialized syntactic information about word usage in the sentence. Another common trend to observe is that annealing INLINEFORM0 always results in better performance than training with a constant INLINEFORM1 for both INLINEFORM2 (Section SECREF9 ) and INLINEFORM3 (Section SECREF12 ). This shows that a stable training scheme that smoothly approaches minimizing the actual direct loss is important for our proposed approach. Additionally, we did not observe a large difference when our soft approximation is used for decoding (Section SECREF11 ) compared to hard beam search decoding, which suggests that our approximation to the hard beam search is as effective as its discrete counterpart. For supertagging, we observe that the baseline cross entropy trained model improves its predictions with beam search decoding compared to greedy decoding by 2 accuracy points, which suggests that beam search is already helpful for this task, even without search-aware training. Both the optimization schemes proposed in this paper improve upon the baseline with soft direct loss optimization ( INLINEFORM0 ), performing better than the approximate max-margin approach. For NER, we observe that optimizing INLINEFORM0 outperforms all the other approaches but we also observe interesting behaviour of beam search decoding and the approximate max-margin objective for this task. The pretrained CE baseline model yields worse performance when beam search is done instead of greedy locally normalized decoding. This is because the training data is heavily skewed toward the `O' label and hence the absolute score resolution between different tags at each time-step during decoding isn't enough to avoid leading beam search toward a wrong hypothesis path. We observed in our experiments that hard beam search resulted in predicting more `O's which also hurt the prediction of tags at future time steps and hurt precision as well as recall. Encouragingly, INLINEFORM1 optimization, even though warm started with a CE trained model that performs worse with beam search, led to the NER model becoming more search aware, which resulted in superior performance. However, we also observe that the approximate max-margin approach ( INLINEFORM2 ) performs poorly here. 
We attribute this to a deficiency in the max-margin objective when coupled with approximate search methods like beam search that do not provide guarantees on finding the supremum: one way to drive this objective down is to learn model scores such that the search for the best hypothesis is difficult, so that the value of the loss augmented decode is low, while the gold sequence maintains higher model score. Because we also warm started with a pre-trained model that results in a worse performance with beam search decode than with greedy decode, we observe the adverse effect of this deficiency. The result is a model that scores the gold hypothesis highly, but yields poor decoding outputs. This observation indicates that using max-margin based objectives with beam search during training actually may achieve the opposite of our original intent: the objective can be driven down by introducing search errors. The observation that our optimization method led to improvements on both the tasks–even on NER for which hard beam search during decoding on a CE trained model hurt the performance–by making the optimization more search aware, indicates the effectiveness of our approach for training seq2seq models. Conclusion While beam search is a method of choice for performing search in neural sequence models, as our experiments confirm, it is not necessarily guaranteed to improve accuracy when applied to cross-entropy-trained models. In this paper, we propose a novel method for optimizing model parameters that directly takes into account the process of beam search itself through a continuous, end-to-end sub-differentiable relaxation of beam search composed with the final evaluation loss. Experiments demonstrate that our method is able to improve overall test-time results for models using beam search as a test-time inference method, leading to substantial improvements in accuracy.
2 accuracy points
242f96142116cf9ff763e97aecd54e22cb1c8b5a
242f96142116cf9ff763e97aecd54e22cb1c8b5a_0
Q: Do they provide a framework for building a sub-differentiable for any final loss metric? Text: Introduction [t] Standard Beam Search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 t = 0 to T i = 1 to k INLINEFORM4 INLINEFORM5 INLINEFORM6 is the local output scoring function INLINEFORM7 top-k-max INLINEFORM8 Top k values of the input matrix INLINEFORM9 top-k-argmax INLINEFORM10 Top INLINEFORM11 argmax index pairs of the input matrix i = 1 to k INLINEFORM12 embedding( INLINEFORM13 ) INLINEFORM14 INLINEFORM15 is a nonlinear recurrent function that returns state at next step INLINEFORM16 INLINEFORM17 follow-backpointer( INLINEFORM18 ) INLINEFORM19 Sequence-to-sequence (seq2seq) models have been successfully used for many sequential decision tasks such as machine translation BIBREF0 , BIBREF1 , parsing BIBREF2 , BIBREF3 , summarization BIBREF4 , dialog generation BIBREF5 , and image captioning BIBREF6 . Beam search is a desirable choice of test-time decoding algorithm for such models because it potentially avoids search errors made by simpler greedy methods. However, the typical approach to training neural sequence models is to use a locally normalized maximum likelihood objective (cross-entropy training) BIBREF0 . This objective does not directly reason about the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding BIBREF7 , BIBREF8 , BIBREF9 . These negative results are not unexpected. The training procedure was not search-aware: it was not able to consider the effect that changing the model's scores might have on the ease of search while using a beam decoding, greedy decoding, or otherwise. We hypothesize that the under-performance of beam search in certain scenarios can be resolved by using a better designed training objective. Because beam search potentially offers more accurate search when compared to greedy decoding, we hope that appropriately trained models should be able to leverage beam search to improve performance. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined and a valid training criterion, this “direct loss” objective is discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines. Several related methods, including reinforcement learning BIBREF10 , BIBREF11 , imitation learning BIBREF12 , BIBREF13 , BIBREF14 , and discrete search based methods BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , have also been proposed to make training search-aware. These methods include approaches that forgo direct optimization of a global training objective, instead incorporating credit assignment for search errors by using methods like early updates BIBREF19 that explicitly track the reachability of the gold target sequence during the search procedure. 
While addressing a related problem – credit assignment for search errors during training – in this paper, we propose an approach with a novel property: we directly optimize a continuous and global training objective using backpropagation. As a result, in our approach, credit assignment is handled directly via gradient optimization in an end-to-end computation graph. The most closely related work to our own approach was proposed by Goyal et al. BIBREF20 . They do not consider beam search, but develop a continuous approximation of greedy decoding for scheduled sampling objectives. Other related work involves training a generator with a Gumbel reparamterized sampling module to more reliably find the MAP sequences at decode-time BIBREF21 , and constructing surrogate loss functions BIBREF22 that are close to task losses. Model We denote the seq2seq model parameterized by INLINEFORM0 as INLINEFORM1 . We denote the input sequence as INLINEFORM2 , the gold output sequence as INLINEFORM3 and the result of beam search over INLINEFORM4 as INLINEFORM5 . Ideally, we would like to directly minimize a final evaluation loss, INLINEFORM6 , evaluated on the result of running beam search with input INLINEFORM7 and model INLINEFORM8 . Throughout this paper we assume that the evaluation loss decomposes over time steps INLINEFORM9 as: INLINEFORM10 . We refer to this idealized training objective that directly evaluates prediction loss as the “direct loss” objective and define it as: DISPLAYFORM0 Unfortunately, optimizing this objective using gradient methods is difficult because the objective is discontinuous. The two sources of discontinuity are: We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2 Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0 Specified in more detail in Section SECREF9 , our surrogate objective in Equation 2 will additionally take a hyperparameter INLINEFORM0 that trades approximation quality for smoothness of the objective. Under certain conditions, Equation 2 converges to the objective in Equation 1 as INLINEFORM1 is increased. We first describe the standard discontinuous beam search procedure and then our training approach (Equation 2) involving a continuous relaxation of beam search. Discontinuity in Beam Search [t] continuous-top-k-argmax [1] INLINEFORM0 INLINEFORM1 , s.t. INLINEFORM2 INLINEFORM3 INLINEFORM4 = 1 to k peaked-softmax will be dominated by scores closer to INLINEFORM5 INLINEFORM6 The square operation is element-wise Formally, beam search is a procedure with hyperparameter INLINEFORM7 that maintains a beam of INLINEFORM8 elements at each time step and expands each of the INLINEFORM9 elements to find the INLINEFORM10 -best candidates for the next time step. The procedure finds an approximate argmax of a scoring function defined on output sequences. We describe beam search in the context of seq2seq models in Algorithm SECREF1 – more specifically, for an encoder-decoder BIBREF0 model with a nonlinear auto-regressive decoder (e.g. an LSTM BIBREF23 ). 
We define the global model score of a sequence INLINEFORM0 with length INLINEFORM1 to be the sum of local output scores at each time step of the seq2seq model: INLINEFORM2 . In neural models, the function INLINEFORM3 is implemented as a differentiable mapping, INLINEFORM4 , which yields scores for vocabulary elements using the recurrent hidden states at corresponding time steps. In our notation, INLINEFORM5 is the hidden state of the decoder at time step INLINEFORM6 for beam element INLINEFORM7 , INLINEFORM8 is the embedding of the output symbol at time-step INLINEFORM9 for beam element INLINEFORM10 , and INLINEFORM11 is the cumulative model score at step INLINEFORM12 for beam element INLINEFORM13 . In Algorithm SECREF1 , we denote by INLINEFORM14 the cumulative candidate score matrix which represents the model score of each successor candidate in the vocabulary for each beam element. This score is obtained by adding the local output score (computed as INLINEFORM15 ) to the running total of the score for the candidate. The function INLINEFORM16 in Algorithms SECREF1 and SECREF7 yields successive hidden states in recurrent neural models like RNNs, LSTMs etc. The INLINEFORM17 operation maps a word in the vocabulary INLINEFORM18 , to a continuous embedding vector. Finally, backpointers at each time step to the beam elements at the previous time step are also stored for identifying the best sequence INLINEFORM19 , at the conclusion of the search procedure. A backpointer at time step INLINEFORM20 for a beam element INLINEFORM21 is denoted by INLINEFORM22 which points to one of the INLINEFORM23 elements at the previous beam. We denote a vector of backpointers for all the beam elements by INLINEFORM24 . The INLINEFORM25 operation takes as input backpointers ( INLINEFORM26 ) and candidates ( INLINEFORM27 ) for all the beam elements at each time step and traverses the sequence in reverse (from time-step INLINEFORM28 through 1) following backpointers at each time step and identifying candidate words associated with each backpointer that results in a sequence INLINEFORM29 , of length INLINEFORM30 . The procedure described in Algorithm SECREF1 is discontinuous because of the top-k-argmax procedure that returns a pair of vectors corresponding to the INLINEFORM0 highest-scoring indices for backpointers and vocabulary items from the score matrix INLINEFORM1 . This index selection results in hard backpointers at each time step which restrict the gradient flow during backpropagation. In the next section, we describe a continuous relaxation to the top-k-argmax procedure which forms the crux of our approach. Continuous Approximation to top-k-argmax [t] Continuous relaxation to beam search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 t = 0 to T INLINEFORM5 i=1 to k INLINEFORM6 INLINEFORM7 is a local output scoring function INLINEFORM8 INLINEFORM9 is used to compute INLINEFORM10 INLINEFORM11 Call Algorithm 2 i = 1 to k INLINEFORM12 Soft back pointer computation INLINEFORM13 Contribution from vocabulary items INLINEFORM14 Peaked distribution over the candidates to compute INLINEFORM15 INLINEFORM16 INLINEFORM17 INLINEFORM18 j = 1 to k Get contributions from soft backpointers for each beam element INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 is a nonlinear recurrent function that returns state at next step INLINEFORM23 Pick the loss for the sequence with highest model score on the beam in a soft manner. 
The key property that we use in our approximation is that for a real-valued vector INLINEFORM0 , the argmax with respect to a vector of scores, INLINEFORM1 , can be approximated by a temperature-controlled softmax operation. The argmax operation can be represented as: INLINEFORM2 which can be relaxed by replacing the indicator function with a peaked-softmax operation with hyperparameter INLINEFORM0 : INLINEFORM1 As INLINEFORM0 , INLINEFORM1 so long as there is only one maximum value in the vector INLINEFORM2 . This peaked-softmax operation has been shown to be effective in recent work BIBREF24 , BIBREF25 , BIBREF20 involving continuous relaxation to the argmax operation, although, to our knowledge, this is the first work to apply it to approximate the beam search procedure. Using this peaked-softmax operation, we propose an iterative algorithm for computing a continuous relaxation to the top-k-argmax procedure in Algorithm SECREF6 , which takes as input a score matrix of size INLINEFORM0 and returns INLINEFORM1 peaked matrices INLINEFORM2 of size INLINEFORM3 . Each matrix INLINEFORM4 represents the index of the INLINEFORM5 -th max. For example, INLINEFORM6 will have most of its mass concentrated on the index in the matrix that corresponds to the argmax, while INLINEFORM7 will have most of its mass concentrated on the index of the 2nd-highest scoring element. Specifically, we obtain matrix INLINEFORM8 by computing the squared difference between the INLINEFORM9 -highest score and all the scores in the matrix and then using the peaked-softmax operation over the negative squared differences. As a result, scores closer to the INLINEFORM10 -highest score receive more mass than scores far away from the INLINEFORM11 -highest score. Hence, the continuous relaxation to the top-k-argmax operation can be simply implemented by iteratively using the max operation, which is continuous and allows for gradient flow during backpropagation. As INLINEFORM0 , each INLINEFORM1 vector converges to hard index pairs representing hard backpointers and successor candidates described in Algorithm SECREF1 . For finite INLINEFORM2 , we introduce a notion of a soft backpointer, represented as a vector INLINEFORM3 in the INLINEFORM4 -probability simplex, which represents the contribution of each beam element from the previous time step to a beam element at the current time step. This is obtained by a row-wise sum over INLINEFORM5 to get INLINEFORM6 values representing soft backpointers.

Training with Continuous Relaxation of Beam Search
We describe our approach in detail in Algorithm 3 and illustrate the soft beam recurrence step in Figure 1. For composing the loss function and the beam search function for our optimization as proposed in Equation 2, we make use of decomposability of the loss function across time-steps. Thus, for a sequence y, the total loss is INLINEFORM0 . In our experiments, INLINEFORM1 is the Hamming loss which can be easily computed at each time-step by simply comparing gold INLINEFORM2 with INLINEFORM3 . While exact computation of INLINEFORM4 will vary according to the loss, our proposed procedure will be applicable as long as the total loss is decomposable across time-steps. While decomposability of loss is a strong assumption, existing literature on structured prediction BIBREF26 , BIBREF27 has made do with this assumption, often using decomposable losses as surrogates for non-decomposable ones.
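The continuous top-k-argmax just described can be sketched compactly as below. It assumes a k-by-V score matrix and a temperature-like hyperparameter alpha, and uses NumPy for readability; in practice these operations would be written in an automatic-differentiation framework so that gradients flow through them, and the iterative max of Algorithm SECREF6 is replaced here by a sort purely for brevity.

```python
import numpy as np

def peaked_softmax(x, alpha):
    """Temperature-controlled softmax; approaches a one-hot argmax as alpha grows."""
    z = alpha * (x - x.max())          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def continuous_top_k_argmax(scores, k, alpha):
    """Return k peaked matrices D_1..D_k, each of shape (k, V), softly marking
    the index of the i-th highest score, plus soft backpointers of shape (k, k)."""
    flat = scores.ravel()
    top_vals = np.sort(flat)[::-1][:k]           # the k highest score values
    D = []
    for m in top_vals:
        # Mass concentrates on entries whose score is close to the m-th maximum.
        d = peaked_softmax(-(flat - m) ** 2, alpha).reshape(scores.shape)
        D.append(d)
    # Soft backpointer for the i-th successor: row-wise sum of D_i over the vocabulary.
    soft_backptrs = np.stack([d.sum(axis=1) for d in D])   # shape (k, k)
    return D, soft_backptrs
```

As alpha grows, each peaked matrix collapses onto a single index, recovering the hard backpointer and successor candidate; with a finite alpha the soft backpointers distribute mass over the previous beam elements.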
We detail the continuous relaxation to beam search in Algorithm SECREF7 with INLINEFORM5 being the cumulative loss of beam element INLINEFORM6 at time step INLINEFORM7 and INLINEFORM8 being the embedding matrix of the target vocabulary which is of size INLINEFORM9 where INLINEFORM10 is the size of the embedding vector. In Algorithm SECREF7 , all the discrete selection functions have been replaced by their soft, continuous counterparts which can be backpropagated through. This results in all the operations being matrix and vector operations which is ideal for a GPU implementation. An important aspect of this algorithm is that we no longer rely on exactly identifying a discrete search prediction INLINEFORM0 since we are only interested in a continuous approximation to the direct loss INLINEFORM1 (line 18 of Algorithm SECREF7 ), and all the computation is expressed via the soft beam search formulation which eliminates all the sources of discontinuities associated with the training objective in Equation 1. The computational complexity of our approach for training scales linearly with the beam size and hence is roughly INLINEFORM2 times slower than standard CE training for beam size INLINEFORM3 . Since we have established the pointwise convergence of peaked-softmax to argmax as INLINEFORM4 for all vectors that have a unique maximum value, we can establish pointwise convergence of objective in Equation 2 to objective in Equation 1 as INLINEFORM5 , as long as there are no ties among the top-k scores of the beam expansion candidates at any time step. We posit that absolute ties are unlikely due to random initialization of weights and the domain of the scores being INLINEFORM6 . Empirically, we did not observe any noticeable impact of potential ties on the training procedure and our approach performed well on the tasks as discussed in Section SECREF4 . DISPLAYFORM0 We experimented with different annealing schedules for INLINEFORM0 starting with non-peaked softmax moving toward peaked-softmax across epochs so that learning is stable with informative gradients. This is important because cost functions like Hamming distance with very high INLINEFORM1 tend to be non-smooth and are generally flat in regions far away from changepoints and have a very large gradient near the changepoints which makes optimization difficult. Decoding The motivation behind our approach is to make the optimization aware of beam search decoding while maintaining the continuity of the objective. However, since our approach doesn't introduce any new model parameters and optimization is agnostic to the architecture of the seq2seq model, we were able to experiment with various decoding schemes like locally normalized greedy decoding, and hard beam search, once the model has been trained. However, to reduce the gap between the training procedure and test procedure, we also experimented with soft beam search decoding. This decoding approach closely follows Algorithm SECREF7 , but along with soft back pointers, we also compute hard back pointers at each time step. After computing all the relevant quantities like model score, loss etc., we follow the hard backpointers to obtain the best sequence INLINEFORM0 . This is very different from hard beam decoding because at each time step, the selection decisions are made via our soft continuous relaxation which influences the scores, LSTM hidden states and input embeddings at subsequent time-steps. The hard backpointers are essentially the MAP estimate of the soft backpointers at each step. 
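As a rough illustration of the soft beam recurrence and the decoding variant described above, the sketch below shows how a soft backpointer can mix the previous hidden states and cumulative losses, and how an expected embedding and a soft per-step Hamming cost can be formed. The exact bookkeeping of Algorithm SECREF7 (including the LSTM update itself) is omitted, and both the naming and the particular soft Hamming form are our assumptions.

```python
import numpy as np

def soft_beam_update(soft_bp, cand_dist, prev_h, prev_loss, embed, gold_onehot):
    """One soft beam step for a single new beam element.

    soft_bp:      (k,)   soft backpointer: contribution of each previous beam element
    cand_dist:    (V,)   peaked distribution over vocabulary successors for this element
    prev_h:       (k, H) previous decoder hidden states, one per beam element
    prev_loss:    (k,)   cumulative (soft) Hamming loss per previous beam element
    embed:        (V, m) target-vocabulary embedding matrix
    gold_onehot:  (V,)   one-hot vector of the gold label at this time step
    """
    # Soft selection of the parent hidden state and of the running loss.
    h_mix = soft_bp @ prev_h                # (H,) convex combination of parents
    loss_mix = soft_bp @ prev_loss          # scalar

    # Expected input embedding for the next decoder step (soft word choice).
    e_soft = cand_dist @ embed              # (m,)

    # Soft Hamming cost at this step: probability mass placed off the gold label.
    step_loss = 1.0 - cand_dist @ gold_onehot

    # For the soft decoding scheme described above, a hard backpointer would be
    # the MAP estimate of soft_bp, e.g. int(np.argmax(soft_bp)).
    return h_mix, e_soft, loss_mix + step_loss
```

For the annealing schedule, something as simple as increasing alpha geometrically across epochs (a hypothetical alpha_0 * r ** epoch) matches the recipe of starting with a smooth, non-peaked softmax and sharpening it over training.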
With small, finite INLINEFORM1 , we observe differences between soft beam search and hard beam search decoding in our experiments.

Comparison with Max-Margin Objectives
Max-margin-based objectives are typically motivated as another kind of surrogate training objective which avoids the discontinuities associated with direct loss optimization. Hinge loss for structured prediction typically takes the form: INLINEFORM0 where INLINEFORM0 is the input sequence, INLINEFORM1 is the gold target sequence, INLINEFORM2 is the output search space and INLINEFORM3 is the discontinuous cost function which we assume is decomposable across the time-steps of a sequence. Finding the cost-augmented maximum score is generally difficult in large structured models and often involves searching over the output space and computing the approximate cost-augmented maximal output sequence and the score associated with it via beam search. This procedure introduces discontinuities in the training procedure of structured max-margin objectives and renders it not amenable to training via backpropagation. Related work BIBREF15 on incorporating beam search into the training of neural sequence models does involve cost-augmented max-margin loss but it relies on discontinuous beam search forward passes and an explicit mechanism to ensure that the gold sequence stays in the beam during training, and hence does not involve backpropagation through the beam search procedure itself. Our continuous approximation to beam search can very easily be modified to compute an approximation to the structured hinge loss so that it can be trained via backpropagation if the cost function is decomposable across time-steps. In Algorithm SECREF7 , we only need to modify line 5 as: INLINEFORM0 . Then, instead of computing INLINEFORM0 in Algorithm SECREF7 , we first compute the cost-augmented maximum score as: INLINEFORM1 , and also compute the target score INLINEFORM0 by simply running the forward pass of the LSTM decoder over the gold target sequence. The continuous approximation to the hinge loss to be optimized is then: INLINEFORM1 . We empirically compare this approach with the proposed approach to optimize direct loss in experiments.

Experimental Setup
Since our goal is to investigate the efficacy of our approach for training generic seq2seq models, we perform experiments on two NLP tagging tasks with very different characteristics and output search spaces: Named Entity Recognition (NER) and CCG supertagging. While seq2seq models are appropriate for the CCG supertagging task because of the long-range correlations between the sequential output elements and a large search space, they are not ideal for NER which has a considerably smaller search space and weaker correlations between predictions at subsequent time steps. In our experiments, we observe improvements from our approach on both of the tasks. We use a seq2seq model with a bi-directional LSTM encoder (1 layer with tanh activation function) for the input sequence INLINEFORM0 , and an LSTM decoder (1 layer with tanh activation function) with a fixed attention mechanism that deterministically attends to the INLINEFORM1 -th input token when decoding the INLINEFORM2 -th output, and hence does not involve learning of any attention parameters. Since the computational complexity of our approach for optimization scales linearly with beam size for each instance, it is impractical to use very large beam sizes for training.
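Stepping back to the max-margin discussion above, the continuous hinge loss can be assembled from quantities the relaxed beam search already produces; the sketch below assumes the standard structured-hinge form and uses our own names for the final beam scores and accumulated costs.

```python
import numpy as np

def soft_max_over_beam(scores, alpha):
    """Softly pick the largest element of a vector via a peaked weighted average."""
    w = np.exp(alpha * (scores - scores.max()))
    w /= w.sum()
    return w @ scores

def approx_hinge(beam_scores, beam_costs, gold_score, alpha):
    """Continuous approximation to the structured hinge loss.

    beam_scores: (k,) final soft-beam model scores
    beam_costs:  (k,) cumulative Hamming costs accumulated along the soft beam
    gold_score:  model score of the gold sequence from a plain decoder forward pass
    """
    cost_augmented = soft_max_over_beam(beam_scores + beam_costs, alpha)
    return max(0.0, cost_augmented - gold_score)
```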
Hence, beam size for all the beam-search-based experiments was set to 3, which resulted in improvements on both tasks, as discussed in the results. For both tasks, the direct loss function was the Hamming distance cost, which aims to maximize word-level accuracy.

Named Entity Recognition
For named entity recognition, we use the CONLL 2003 shared task data BIBREF28 for the German language and use the provided data splits. We perform no preprocessing on the data. The output vocabulary length (label space) is 10. A peculiar characteristic of this problem is that the training data is naturally skewed toward one default label (`O') because sentences typically do not contain many named entities and the evaluation focuses on the performance in recognizing entities. Therefore, we modify the Hamming cost such that incorrect prediction of `O' is doubly penalized compared to other incorrect predictions. We use hidden layers of size 64 and label embeddings of size 8. As mentioned earlier, seq2seq models are not an ideal choice for NER (tag-level correlations are short-ranged in NER – the unnecessary expressivity of full seq2seq models over simple encoder-classifier neural models makes training harder). However, we wanted to evaluate the effectiveness of our approach on different instantiations of seq2seq models.

CCG Supertagging
We used the standard splits of CCG bank BIBREF29 for training, development, and testing. The label space of supertags has 1,284 tags, which is much larger than that of NER. The distribution of supertags in the training data exhibits a long tail because these supertags encode specific syntactic information about the words' usage. The supertag labels are correlated with each other and many tags encode similar information about the syntax. Moreover, this task is sensitive to long-range sequential decisions and search effects because of how it holistically encodes the syntax of the entire sentence. We perform minor preprocessing on the data similar to the preprocessing in BIBREF30 . For this task, we used hidden layers of size 512 and the supertag label embeddings were also of size 512. The standard evaluation metric for this task is the word-level label accuracy, which directly corresponds to Hamming loss.

Hyperparameter tuning
For tuning all the hyperparameters related to optimization, we trained our models for 50 epochs and picked the models with the best performance on the development set. We also ran multiple random restarts for all the systems evaluated to account for performance variance across randomly started runs. We pretrained all our models with standard cross entropy training, which was important for stable optimization of the non-convex neural objective with a large parameter search space. This warm starting is a common practice in prior work on complex neural models BIBREF10 , BIBREF4 , BIBREF14 .

Comparison
We report performance on validation and test sets for both the tasks in Tables 1 and 2. The baseline model is a cross entropy trained seq2seq model (Baseline CE) which is also used to warm start the proposed optimization procedures in this paper. This baseline has been compared against the approximate direct loss training objective (Section SECREF9 ), referred to as INLINEFORM0 in the tables, and the approximate max-margin training objective (Section SECREF12 ), referred to as INLINEFORM1 in the tables.
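As an aside before the results, one reading of the modified Hamming cost for NER described earlier is sketched below; the exact weighting (a factor of two for wrongly predicting `O') is our interpretation of "doubly penalized".

```python
def weighted_hamming_step(pred_label, gold_label, default_label="O"):
    """Per-token Hamming cost with a heavier penalty for wrongly predicting the
    default 'O' label (i.e., missing an entity). The weights are an assumption."""
    if pred_label == gold_label:
        return 0.0
    return 2.0 if pred_label == default_label else 1.0
```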
Results are reported for models when trained with annealing INLINEFORM2 , and also with a constant setting of INLINEFORM3 which is a very smooth but inaccurate approximation of the original direct loss that we aim to optimize. Comparisons have been made on the basis of performance of the models under different decoding paradigms (represented as different column in the tables): locally normalized decoding (CE greedy), hard beam search decoding and soft beam search decoding described in Section SECREF11 . Results As shown in Tables 1 and 2, our approach INLINEFORM0 shows significant improvements over the locally normalized CE baseline with greedy decoding for both the tasks (+5.5 accuracy points gain for supertagging and +1.5 F1 points for NER). The improvement is more pronounced on the supertagging task, which is not surprising because: (i) the evaluation metric is tag-level accuracy which is congruent with the Hamming loss that INLINEFORM1 directly optimizes and (ii) the supertagging task itself is very sensitive to the search procedure because tags across time-steps tend to exhibit long range dependencies as they encode specialized syntactic information about word usage in the sentence. Another common trend to observe is that annealing INLINEFORM0 always results in better performance than training with a constant INLINEFORM1 for both INLINEFORM2 (Section SECREF9 ) and INLINEFORM3 (Section SECREF12 ). This shows that a stable training scheme that smoothly approaches minimizing the actual direct loss is important for our proposed approach. Additionally, we did not observe a large difference when our soft approximation is used for decoding (Section SECREF11 ) compared to hard beam search decoding, which suggests that our approximation to the hard beam search is as effective as its discrete counterpart. For supertagging, we observe that the baseline cross entropy trained model improves its predictions with beam search decoding compared to greedy decoding by 2 accuracy points, which suggests that beam search is already helpful for this task, even without search-aware training. Both the optimization schemes proposed in this paper improve upon the baseline with soft direct loss optimization ( INLINEFORM0 ), performing better than the approximate max-margin approach. For NER, we observe that optimizing INLINEFORM0 outperforms all the other approaches but we also observe interesting behaviour of beam search decoding and the approximate max-margin objective for this task. The pretrained CE baseline model yields worse performance when beam search is done instead of greedy locally normalized decoding. This is because the training data is heavily skewed toward the `O' label and hence the absolute score resolution between different tags at each time-step during decoding isn't enough to avoid leading beam search toward a wrong hypothesis path. We observed in our experiments that hard beam search resulted in predicting more `O's which also hurt the prediction of tags at future time steps and hurt precision as well as recall. Encouragingly, INLINEFORM1 optimization, even though warm started with a CE trained model that performs worse with beam search, led to the NER model becoming more search aware, which resulted in superior performance. However, we also observe that the approximate max-margin approach ( INLINEFORM2 ) performs poorly here. 
We attribute this to a deficiency in the max-margin objective when coupled with approximate search methods like beam search that do not provide guarantees on finding the supremum: one way to drive this objective down is to learn model scores such that the search for the best hypothesis is difficult, so that the value of the loss augmented decode is low, while the gold sequence maintains higher model score. Because we also warm started with a pre-trained model that results in a worse performance with beam search decode than with greedy decode, we observe the adverse effect of this deficiency. The result is a model that scores the gold hypothesis highly, but yields poor decoding outputs. This observation indicates that using max-margin based objectives with beam search during training actually may achieve the opposite of our original intent: the objective can be driven down by introducing search errors. The observation that our optimization method led to improvements on both the tasks–even on NER for which hard beam search during decoding on a CE trained model hurt the performance–by making the optimization more search aware, indicates the effectiveness of our approach for training seq2seq models. Conclusion While beam search is a method of choice for performing search in neural sequence models, as our experiments confirm, it is not necessarily guaranteed to improve accuracy when applied to cross-entropy-trained models. In this paper, we propose a novel method for optimizing model parameters that directly takes into account the process of beam search itself through a continuous, end-to-end sub-differentiable relaxation of beam search composed with the final evaluation loss. Experiments demonstrate that our method is able to improve overall test-time results for models using beam search as a test-time inference method, leading to substantial improvements in accuracy.
Yes
fcd0bd2db39898ee4f444ae970b80ea4d1d9b054
fcd0bd2db39898ee4f444ae970b80ea4d1d9b054_0
Q: Do they compare partially complete sequences (created during steps of beam search) to gold/target sequences? Text: Introduction [t] Standard Beam Search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 t = 0 to T i = 1 to k INLINEFORM4 INLINEFORM5 INLINEFORM6 is the local output scoring function INLINEFORM7 top-k-max INLINEFORM8 Top k values of the input matrix INLINEFORM9 top-k-argmax INLINEFORM10 Top INLINEFORM11 argmax index pairs of the input matrix i = 1 to k INLINEFORM12 embedding( INLINEFORM13 ) INLINEFORM14 INLINEFORM15 is a nonlinear recurrent function that returns state at next step INLINEFORM16 INLINEFORM17 follow-backpointer( INLINEFORM18 ) INLINEFORM19 Sequence-to-sequence (seq2seq) models have been successfully used for many sequential decision tasks such as machine translation BIBREF0 , BIBREF1 , parsing BIBREF2 , BIBREF3 , summarization BIBREF4 , dialog generation BIBREF5 , and image captioning BIBREF6 . Beam search is a desirable choice of test-time decoding algorithm for such models because it potentially avoids search errors made by simpler greedy methods. However, the typical approach to training neural sequence models is to use a locally normalized maximum likelihood objective (cross-entropy training) BIBREF0 . This objective does not directly reason about the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding BIBREF7 , BIBREF8 , BIBREF9 . These negative results are not unexpected. The training procedure was not search-aware: it was not able to consider the effect that changing the model's scores might have on the ease of search while using a beam decoding, greedy decoding, or otherwise. We hypothesize that the under-performance of beam search in certain scenarios can be resolved by using a better designed training objective. Because beam search potentially offers more accurate search when compared to greedy decoding, we hope that appropriately trained models should be able to leverage beam search to improve performance. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined and a valid training criterion, this “direct loss” objective is discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines. Several related methods, including reinforcement learning BIBREF10 , BIBREF11 , imitation learning BIBREF12 , BIBREF13 , BIBREF14 , and discrete search based methods BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , have also been proposed to make training search-aware. These methods include approaches that forgo direct optimization of a global training objective, instead incorporating credit assignment for search errors by using methods like early updates BIBREF19 that explicitly track the reachability of the gold target sequence during the search procedure. 
While addressing a related problem – credit assignment for search errors during training – in this paper, we propose an approach with a novel property: we directly optimize a continuous and global training objective using backpropagation. As a result, in our approach, credit assignment is handled directly via gradient optimization in an end-to-end computation graph. The most closely related work to our own approach was proposed by Goyal et al. BIBREF20 . They do not consider beam search, but develop a continuous approximation of greedy decoding for scheduled sampling objectives. Other related work involves training a generator with a Gumbel reparamterized sampling module to more reliably find the MAP sequences at decode-time BIBREF21 , and constructing surrogate loss functions BIBREF22 that are close to task losses. Model We denote the seq2seq model parameterized by INLINEFORM0 as INLINEFORM1 . We denote the input sequence as INLINEFORM2 , the gold output sequence as INLINEFORM3 and the result of beam search over INLINEFORM4 as INLINEFORM5 . Ideally, we would like to directly minimize a final evaluation loss, INLINEFORM6 , evaluated on the result of running beam search with input INLINEFORM7 and model INLINEFORM8 . Throughout this paper we assume that the evaluation loss decomposes over time steps INLINEFORM9 as: INLINEFORM10 . We refer to this idealized training objective that directly evaluates prediction loss as the “direct loss” objective and define it as: DISPLAYFORM0 Unfortunately, optimizing this objective using gradient methods is difficult because the objective is discontinuous. The two sources of discontinuity are: We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2 Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0 Specified in more detail in Section SECREF9 , our surrogate objective in Equation 2 will additionally take a hyperparameter INLINEFORM0 that trades approximation quality for smoothness of the objective. Under certain conditions, Equation 2 converges to the objective in Equation 1 as INLINEFORM1 is increased. We first describe the standard discontinuous beam search procedure and then our training approach (Equation 2) involving a continuous relaxation of beam search. Discontinuity in Beam Search [t] continuous-top-k-argmax [1] INLINEFORM0 INLINEFORM1 , s.t. INLINEFORM2 INLINEFORM3 INLINEFORM4 = 1 to k peaked-softmax will be dominated by scores closer to INLINEFORM5 INLINEFORM6 The square operation is element-wise Formally, beam search is a procedure with hyperparameter INLINEFORM7 that maintains a beam of INLINEFORM8 elements at each time step and expands each of the INLINEFORM9 elements to find the INLINEFORM10 -best candidates for the next time step. The procedure finds an approximate argmax of a scoring function defined on output sequences. We describe beam search in the context of seq2seq models in Algorithm SECREF1 – more specifically, for an encoder-decoder BIBREF0 model with a nonlinear auto-regressive decoder (e.g. an LSTM BIBREF23 ). 
We define the global model score of a sequence INLINEFORM0 with length INLINEFORM1 to be the sum of local output scores at each time step of the seq2seq model: INLINEFORM2 . In neural models, the function INLINEFORM3 is implemented as a differentiable mapping, INLINEFORM4 , which yields scores for vocabulary elements using the recurrent hidden states at corresponding time steps. In our notation, INLINEFORM5 is the hidden state of the decoder at time step INLINEFORM6 for beam element INLINEFORM7 , INLINEFORM8 is the embedding of the output symbol at time-step INLINEFORM9 for beam element INLINEFORM10 , and INLINEFORM11 is the cumulative model score at step INLINEFORM12 for beam element INLINEFORM13 . In Algorithm SECREF1 , we denote by INLINEFORM14 the cumulative candidate score matrix which represents the model score of each successor candidate in the vocabulary for each beam element. This score is obtained by adding the local output score (computed as INLINEFORM15 ) to the running total of the score for the candidate. The function INLINEFORM16 in Algorithms SECREF1 and SECREF7 yields successive hidden states in recurrent neural models like RNNs, LSTMs etc. The INLINEFORM17 operation maps a word in the vocabulary INLINEFORM18 , to a continuous embedding vector. Finally, backpointers at each time step to the beam elements at the previous time step are also stored for identifying the best sequence INLINEFORM19 , at the conclusion of the search procedure. A backpointer at time step INLINEFORM20 for a beam element INLINEFORM21 is denoted by INLINEFORM22 which points to one of the INLINEFORM23 elements at the previous beam. We denote a vector of backpointers for all the beam elements by INLINEFORM24 . The INLINEFORM25 operation takes as input backpointers ( INLINEFORM26 ) and candidates ( INLINEFORM27 ) for all the beam elements at each time step and traverses the sequence in reverse (from time-step INLINEFORM28 through 1) following backpointers at each time step and identifying candidate words associated with each backpointer that results in a sequence INLINEFORM29 , of length INLINEFORM30 . The procedure described in Algorithm SECREF1 is discontinuous because of the top-k-argmax procedure that returns a pair of vectors corresponding to the INLINEFORM0 highest-scoring indices for backpointers and vocabulary items from the score matrix INLINEFORM1 . This index selection results in hard backpointers at each time step which restrict the gradient flow during backpropagation. In the next section, we describe a continuous relaxation to the top-k-argmax procedure which forms the crux of our approach. Continuous Approximation to top-k-argmax [t] Continuous relaxation to beam search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 t = 0 to T INLINEFORM5 i=1 to k INLINEFORM6 INLINEFORM7 is a local output scoring function INLINEFORM8 INLINEFORM9 is used to compute INLINEFORM10 INLINEFORM11 Call Algorithm 2 i = 1 to k INLINEFORM12 Soft back pointer computation INLINEFORM13 Contribution from vocabulary items INLINEFORM14 Peaked distribution over the candidates to compute INLINEFORM15 INLINEFORM16 INLINEFORM17 INLINEFORM18 j = 1 to k Get contributions from soft backpointers for each beam element INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 is a nonlinear recurrent function that returns state at next step INLINEFORM23 Pick the loss for the sequence with highest model score on the beam in a soft manner. 
The key property that we use in our approximation is that for a real valued vector INLINEFORM0 , the argmax with respect to a vector of scores, INLINEFORM1 , can be approximated by a temperature controlled softmax operation. The argmax operation can be represented as: INLINEFORM2 which can be relaxed by replacing the indicator function with a peaked-softmax operation with hyperparameter INLINEFORM0 : INLINEFORM1 As INLINEFORM0 , INLINEFORM1 so long as there is only one maximum value in the vector INLINEFORM2 . This peaked-softmax operation has been shown to be effective in recent work BIBREF24 , BIBREF25 , BIBREF20 involving continuous relaxation to the argmax operation, although to our knowledge, this is the first work to apply it to approximate the beam search procedure. Using this peaked-softmax operation, we propose an iterative algorithm for computing a continuous relaxation to the top-k-argmax procedure in Algorithm SECREF6 which takes as input a score matrix of size INLINEFORM0 and returns INLINEFORM1 peaked matrices INLINEFORM2 of size INLINEFORM3 . Each matrix INLINEFORM4 represents the index of INLINEFORM5 -th max. For example, INLINEFORM6 will have most of its mass concentrated on the index in the matrix that corresponds to the argmax, while INLINEFORM7 will have most of its mass concentrated on the index of the 2nd-highest scoring element. Specifically, we obtain matrix INLINEFORM8 by computing the squared difference between the INLINEFORM9 -highest score and all the scores in the matrix and then using the peaked-softmax operation over the negative squared differences. This results in scores closer to the INLINEFORM10 -highest score to have a higher mass than scores far away from the INLINEFORM11 -highest score. Hence, the continuous relaxation to top-k-argmax operation can be simply implemented by iteratively using the max operation which is continuous and allows for gradient flow during backpropagation. As INLINEFORM0 , each INLINEFORM1 vector converges to hard index pairs representing hard backpointers and successor candidates described in Algorithm SECREF1 . For finite INLINEFORM2 , we introduce a notion of a soft backpointer, represented as a vector INLINEFORM3 in the INLINEFORM4 -probability simplex, which represents the contribution of each beam element from the previous time step to a beam element at current time step. This is obtained by a row-wise sum over INLINEFORM5 to get INLINEFORM6 values representing soft backpointers. Training with Continuous Relaxation of Beam Search We describe our approach in detail in Algorithm 3 and illustrate the soft beam recurrence step in Figure 1. For composing the loss function and the beam search function for our optimization as proposed in Equation 2, we make use of decomposability of the loss function across time-steps. Thus for a sequence y, the total loss is: INLINEFORM0 . In our experiments, INLINEFORM1 is the Hamming loss which can be easily computed at each time-step by simply comparing gold INLINEFORM2 with INLINEFORM3 . While exact computation of INLINEFORM4 will vary according to the loss, our proposed procedure will be applicable as long as the total loss is decomposable across time-steps. While decomposability of loss is a strong assumption, existing literature on structured prediction BIBREF26 , BIBREF27 has made due with this assumption, often using decomposable losses as surrogates for non-decomposable ones. 
We detail the continuous relaxation to beam search in Algorithm SECREF7 with INLINEFORM5 being the cumulative loss of beam element INLINEFORM6 at time step INLINEFORM7 and INLINEFORM8 being the embedding matrix of the target vocabulary which is of size INLINEFORM9 where INLINEFORM10 is the size of the embedding vector. In Algorithm SECREF7 , all the discrete selection functions have been replaced by their soft, continuous counterparts which can be backpropagated through. This results in all the operations being matrix and vector operations which is ideal for a GPU implementation. An important aspect of this algorithm is that we no longer rely on exactly identifying a discrete search prediction INLINEFORM0 since we are only interested in a continuous approximation to the direct loss INLINEFORM1 (line 18 of Algorithm SECREF7 ), and all the computation is expressed via the soft beam search formulation which eliminates all the sources of discontinuities associated with the training objective in Equation 1. The computational complexity of our approach for training scales linearly with the beam size and hence is roughly INLINEFORM2 times slower than standard CE training for beam size INLINEFORM3 . Since we have established the pointwise convergence of peaked-softmax to argmax as INLINEFORM4 for all vectors that have a unique maximum value, we can establish pointwise convergence of objective in Equation 2 to objective in Equation 1 as INLINEFORM5 , as long as there are no ties among the top-k scores of the beam expansion candidates at any time step. We posit that absolute ties are unlikely due to random initialization of weights and the domain of the scores being INLINEFORM6 . Empirically, we did not observe any noticeable impact of potential ties on the training procedure and our approach performed well on the tasks as discussed in Section SECREF4 . DISPLAYFORM0 We experimented with different annealing schedules for INLINEFORM0 starting with non-peaked softmax moving toward peaked-softmax across epochs so that learning is stable with informative gradients. This is important because cost functions like Hamming distance with very high INLINEFORM1 tend to be non-smooth and are generally flat in regions far away from changepoints and have a very large gradient near the changepoints which makes optimization difficult. Decoding The motivation behind our approach is to make the optimization aware of beam search decoding while maintaining the continuity of the objective. However, since our approach doesn't introduce any new model parameters and optimization is agnostic to the architecture of the seq2seq model, we were able to experiment with various decoding schemes like locally normalized greedy decoding, and hard beam search, once the model has been trained. However, to reduce the gap between the training procedure and test procedure, we also experimented with soft beam search decoding. This decoding approach closely follows Algorithm SECREF7 , but along with soft back pointers, we also compute hard back pointers at each time step. After computing all the relevant quantities like model score, loss etc., we follow the hard backpointers to obtain the best sequence INLINEFORM0 . This is very different from hard beam decoding because at each time step, the selection decisions are made via our soft continuous relaxation which influences the scores, LSTM hidden states and input embeddings at subsequent time-steps. The hard backpointers are essentially the MAP estimate of the soft backpointers at each step. 
With small, finite INLINEFORM1 , we observe differences between soft beam search and hard beam search decoding in our experiments. Comparison with Max-Margin Objectives Max-margin based objectives are typically motivated as another kind of surrogate training objective which avoid the discontinuities associated with direct loss optimization. Hinge loss for structured prediction typically takes the form: INLINEFORM0 where INLINEFORM0 is the input sequence, INLINEFORM1 is the gold target sequence, INLINEFORM2 is the output search space and INLINEFORM3 is the discontinuous cost function which we assume is decomposable across the time-steps of a sequence. Finding the cost augmented maximum score is generally difficult in large structured models and often involves searching over the output space and computing the approximate cost augmented maximal output sequence and the score associated with it via beam search. This procedure introduces discontinuities in the training procedure of structured max-margin objectives and renders it non amenable to training via backpropagation. Related work BIBREF15 on incorporating beam search into the training of neural sequence models does involve cost-augmented max-margin loss but it relies on discontinuous beam search forward passes and an explicit mechanism to ensure that the gold sequence stays in the beam during training, and hence does not involve back propagation through the beam search procedure itself. Our continuous approximation to beam search can very easily be modified to compute an approximation to the structured hinge loss so that it can be trained via backpropagation if the cost function is decomposable across time-steps. In Algorithm SECREF7 , we only need to modify line 5 as: INLINEFORM0 and instead of computing INLINEFORM0 in Algorithm SECREF7 , we first compute the cost augmented maximum score as: INLINEFORM1 and also compute the target score INLINEFORM0 by simply running the forward pass of the LSTM decoder over the gold target sequence. The continuous approximation to the hinge loss to be optimized is then: INLINEFORM1 . We empirically compare this approach with the proposed approach to optimize direct loss in experiments. Experimental Setup Since our goal is to investigate the efficacy of our approach for training generic seq2seq models, we perform experiments on two NLP tagging tasks with very different characteristics and output search spaces: Named Entity Recognition (NER) and CCG supertagging. While seq2seq models are appropriate for CCG supertagging task because of the long-range correlations between the sequential output elements and a large search space, they are not ideal for NER which has a considerably smaller search space and weaker correlations between predictions at subsequent time steps. In our experiments, we observe improvements from our approach on both of the tasks. We use a seq2seq model with a bi-directional LSTM encoder (1 layer with tanh activation function) for the input sequence INLINEFORM0 , and an LSTM decoder (1 layer with tanh activation function) with a fixed attention mechanism that deterministically attends to the INLINEFORM1 -th input token when decoding the INLINEFORM2 -th output, and hence does not involve learning of any attention parameters. Since, computational complexity of our approach for optimization scales linearly with beam size for each instance, it is impractical to use very large beam sizes for training. 
Hence, beam size for all the beam search based experiments was set to 3 which resulted in improvements on both the tasks as discussed in the results. For both tasks, the direct loss function was the Hamming distance cost which aims to maximize word level accuracy. Named Entity Recognition For named entity recognition, we use the CONLL 2003 shared task data BIBREF28 for German language and use the provided data splits. We perform no preprocessing on the data. The output vocabulary length (label space) is 10. A peculiar characteristic of this problem is that the training data is naturally skewed toward one default label (`O') because sentences typically do not contain many named entities and the evaluation focuses on the performance recognizing entities. Therefore, we modify the Hamming cost such that incorrect prediction of `O' is doubly penalized compared to other incorrect predictions. We use the hidden layers of size 64 and label embeddings of size 8. As mentioned earlier, seq2seq models are not an ideal choice for NER (tag-level correlations are short-ranged in NER – the unnecessary expressivity of full seq2seq models over simple encoder-classifier neural models makes training harder). However, we wanted to evaluate the effectiveness of our approach on different instantiations of seq2seq models. CCG Supertagging We used the standard splits of CCG bank BIBREF29 for training, development, and testing. The label space of supertags is 1,284 which is much larger than NER. The distribution of supertags in the training data exhibits a long tail because these supertags encode specific syntactic information about the words' usage. The supertag labels are correlated with each other and many tags encode similar information about the syntax. Moreover, this task is sensitive to the long range sequential decisions and search effects because of how it holistically encodes the syntax of the entire sentence. We perform minor preprocessing on the data similar to the preprocessing in BIBREF30 . For this task, we used hidden layers of size 512 and the supertag label embeddings were also of size 512. The standard evaluation metric for this task is the word level label accuracy which directly corresponds to Hamming loss. Hyperparameter tuning For tuning all the hyperparameters related to optimization we trained our models for 50 epochs and picked the models with the best performance on the development set. We also ran multiple random restarts for all the systems evaluated to account for performance variance across randomly started runs. We pretrained all our models with standard cross entropy training which was important for stable optimization of the non convex neural objective with a large parameter search space. This warm starting is a common practice in prior work on complex neural models BIBREF10 , BIBREF4 , BIBREF14 . Comparison We report performance on validation and test sets for both the tasks in Tables 1 and 2. The baseline model is a cross entropy trained seq2seq model (Baseline CE) which is also used to warm start the the proposed optimization procedures in this paper. This baseline has been compared against the approximate direct loss training objective (Section SECREF9 ), referred to as INLINEFORM0 in the tables, and the approximate max-margin training objective (Section SECREF12 ), referred to as INLINEFORM1 in the tables. 
Results are reported for models when trained with annealing INLINEFORM2 , and also with a constant setting of INLINEFORM3 which is a very smooth but inaccurate approximation of the original direct loss that we aim to optimize. Comparisons have been made on the basis of performance of the models under different decoding paradigms (represented as different column in the tables): locally normalized decoding (CE greedy), hard beam search decoding and soft beam search decoding described in Section SECREF11 . Results As shown in Tables 1 and 2, our approach INLINEFORM0 shows significant improvements over the locally normalized CE baseline with greedy decoding for both the tasks (+5.5 accuracy points gain for supertagging and +1.5 F1 points for NER). The improvement is more pronounced on the supertagging task, which is not surprising because: (i) the evaluation metric is tag-level accuracy which is congruent with the Hamming loss that INLINEFORM1 directly optimizes and (ii) the supertagging task itself is very sensitive to the search procedure because tags across time-steps tend to exhibit long range dependencies as they encode specialized syntactic information about word usage in the sentence. Another common trend to observe is that annealing INLINEFORM0 always results in better performance than training with a constant INLINEFORM1 for both INLINEFORM2 (Section SECREF9 ) and INLINEFORM3 (Section SECREF12 ). This shows that a stable training scheme that smoothly approaches minimizing the actual direct loss is important for our proposed approach. Additionally, we did not observe a large difference when our soft approximation is used for decoding (Section SECREF11 ) compared to hard beam search decoding, which suggests that our approximation to the hard beam search is as effective as its discrete counterpart. For supertagging, we observe that the baseline cross entropy trained model improves its predictions with beam search decoding compared to greedy decoding by 2 accuracy points, which suggests that beam search is already helpful for this task, even without search-aware training. Both the optimization schemes proposed in this paper improve upon the baseline with soft direct loss optimization ( INLINEFORM0 ), performing better than the approximate max-margin approach. For NER, we observe that optimizing INLINEFORM0 outperforms all the other approaches but we also observe interesting behaviour of beam search decoding and the approximate max-margin objective for this task. The pretrained CE baseline model yields worse performance when beam search is done instead of greedy locally normalized decoding. This is because the training data is heavily skewed toward the `O' label and hence the absolute score resolution between different tags at each time-step during decoding isn't enough to avoid leading beam search toward a wrong hypothesis path. We observed in our experiments that hard beam search resulted in predicting more `O's which also hurt the prediction of tags at future time steps and hurt precision as well as recall. Encouragingly, INLINEFORM1 optimization, even though warm started with a CE trained model that performs worse with beam search, led to the NER model becoming more search aware, which resulted in superior performance. However, we also observe that the approximate max-margin approach ( INLINEFORM2 ) performs poorly here. 
We attribute this to a deficiency in the max-margin objective when coupled with approximate search methods like beam search that do not provide guarantees on finding the supremum: one way to drive this objective down is to learn model scores such that the search for the best hypothesis is difficult, so that the value of the loss augmented decode is low, while the gold sequence maintains higher model score. Because we also warm started with a pre-trained model that results in a worse performance with beam search decode than with greedy decode, we observe the adverse effect of this deficiency. The result is a model that scores the gold hypothesis highly, but yields poor decoding outputs. This observation indicates that using max-margin based objectives with beam search during training actually may achieve the opposite of our original intent: the objective can be driven down by introducing search errors. The observation that our optimization method led to improvements on both the tasks–even on NER for which hard beam search during decoding on a CE trained model hurt the performance–by making the optimization more search aware, indicates the effectiveness of our approach for training seq2seq models. Conclusion While beam search is a method of choice for performing search in neural sequence models, as our experiments confirm, it is not necessarily guaranteed to improve accuracy when applied to cross-entropy-trained models. In this paper, we propose a novel method for optimizing model parameters that directly takes into account the process of beam search itself through a continuous, end-to-end sub-differentiable relaxation of beam search composed with the final evaluation loss. Experiments demonstrate that our method is able to improve overall test-time results for models using beam search as a test-time inference method, leading to substantial improvements in accuracy.
Yes
5cc937c2dcb8fd4683cb2298d047f27a05e16d43
5cc937c2dcb8fd4683cb2298d047f27a05e16d43_0
Q: Which loss metrics do they try in their new training procedure evaluated on the output of beam search? Text: Introduction [t] Standard Beam Search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 t = 0 to T i = 1 to k INLINEFORM4 INLINEFORM5 INLINEFORM6 is the local output scoring function INLINEFORM7 top-k-max INLINEFORM8 Top k values of the input matrix INLINEFORM9 top-k-argmax INLINEFORM10 Top INLINEFORM11 argmax index pairs of the input matrix i = 1 to k INLINEFORM12 embedding( INLINEFORM13 ) INLINEFORM14 INLINEFORM15 is a nonlinear recurrent function that returns state at next step INLINEFORM16 INLINEFORM17 follow-backpointer( INLINEFORM18 ) INLINEFORM19 Sequence-to-sequence (seq2seq) models have been successfully used for many sequential decision tasks such as machine translation BIBREF0 , BIBREF1 , parsing BIBREF2 , BIBREF3 , summarization BIBREF4 , dialog generation BIBREF5 , and image captioning BIBREF6 . Beam search is a desirable choice of test-time decoding algorithm for such models because it potentially avoids search errors made by simpler greedy methods. However, the typical approach to training neural sequence models is to use a locally normalized maximum likelihood objective (cross-entropy training) BIBREF0 . This objective does not directly reason about the behaviour of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding BIBREF7 , BIBREF8 , BIBREF9 . These negative results are not unexpected. The training procedure was not search-aware: it was not able to consider the effect that changing the model's scores might have on the ease of search while using a beam decoding, greedy decoding, or otherwise. We hypothesize that the under-performance of beam search in certain scenarios can be resolved by using a better designed training objective. Because beam search potentially offers more accurate search when compared to greedy decoding, we hope that appropriately trained models should be able to leverage beam search to improve performance. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g. Hamming loss) evaluated on the output of beam search. While well-defined and a valid training criterion, this “direct loss” objective is discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines. Several related methods, including reinforcement learning BIBREF10 , BIBREF11 , imitation learning BIBREF12 , BIBREF13 , BIBREF14 , and discrete search based methods BIBREF15 , BIBREF16 , BIBREF17 , BIBREF18 , have also been proposed to make training search-aware. These methods include approaches that forgo direct optimization of a global training objective, instead incorporating credit assignment for search errors by using methods like early updates BIBREF19 that explicitly track the reachability of the gold target sequence during the search procedure. 
While addressing a related problem – credit assignment for search errors during training – in this paper, we propose an approach with a novel property: we directly optimize a continuous and global training objective using backpropagation. As a result, in our approach, credit assignment is handled directly via gradient optimization in an end-to-end computation graph. The most closely related work to our own approach was proposed by Goyal et al. BIBREF20 . They do not consider beam search, but develop a continuous approximation of greedy decoding for scheduled sampling objectives. Other related work involves training a generator with a Gumbel reparamterized sampling module to more reliably find the MAP sequences at decode-time BIBREF21 , and constructing surrogate loss functions BIBREF22 that are close to task losses. Model We denote the seq2seq model parameterized by INLINEFORM0 as INLINEFORM1 . We denote the input sequence as INLINEFORM2 , the gold output sequence as INLINEFORM3 and the result of beam search over INLINEFORM4 as INLINEFORM5 . Ideally, we would like to directly minimize a final evaluation loss, INLINEFORM6 , evaluated on the result of running beam search with input INLINEFORM7 and model INLINEFORM8 . Throughout this paper we assume that the evaluation loss decomposes over time steps INLINEFORM9 as: INLINEFORM10 . We refer to this idealized training objective that directly evaluates prediction loss as the “direct loss” objective and define it as: DISPLAYFORM0 Unfortunately, optimizing this objective using gradient methods is difficult because the objective is discontinuous. The two sources of discontinuity are: We introduce a surrogate training objective that avoids these problems and as a result is fully continuous. In order to accomplish this, we propose a continuous relaxation to the composition of our final loss metric, INLINEFORM0 , and our decoder function, INLINEFORM1 : INLINEFORM2 Specifically, we form a continuous function softLB that seeks to approximate the result of running our decoder on input INLINEFORM0 and then evaluating the result against INLINEFORM1 using INLINEFORM2 . By introducing this new module, we are now able to construct our surrogate training objective: DISPLAYFORM0 Specified in more detail in Section SECREF9 , our surrogate objective in Equation 2 will additionally take a hyperparameter INLINEFORM0 that trades approximation quality for smoothness of the objective. Under certain conditions, Equation 2 converges to the objective in Equation 1 as INLINEFORM1 is increased. We first describe the standard discontinuous beam search procedure and then our training approach (Equation 2) involving a continuous relaxation of beam search. Discontinuity in Beam Search [t] continuous-top-k-argmax [1] INLINEFORM0 INLINEFORM1 , s.t. INLINEFORM2 INLINEFORM3 INLINEFORM4 = 1 to k peaked-softmax will be dominated by scores closer to INLINEFORM5 INLINEFORM6 The square operation is element-wise Formally, beam search is a procedure with hyperparameter INLINEFORM7 that maintains a beam of INLINEFORM8 elements at each time step and expands each of the INLINEFORM9 elements to find the INLINEFORM10 -best candidates for the next time step. The procedure finds an approximate argmax of a scoring function defined on output sequences. We describe beam search in the context of seq2seq models in Algorithm SECREF1 – more specifically, for an encoder-decoder BIBREF0 model with a nonlinear auto-regressive decoder (e.g. an LSTM BIBREF23 ). 
We define the global model score of a sequence INLINEFORM0 with length INLINEFORM1 to be the sum of local output scores at each time step of the seq2seq model: INLINEFORM2 . In neural models, the function INLINEFORM3 is implemented as a differentiable mapping, INLINEFORM4 , which yields scores for vocabulary elements using the recurrent hidden states at corresponding time steps. In our notation, INLINEFORM5 is the hidden state of the decoder at time step INLINEFORM6 for beam element INLINEFORM7 , INLINEFORM8 is the embedding of the output symbol at time-step INLINEFORM9 for beam element INLINEFORM10 , and INLINEFORM11 is the cumulative model score at step INLINEFORM12 for beam element INLINEFORM13 . In Algorithm SECREF1 , we denote by INLINEFORM14 the cumulative candidate score matrix which represents the model score of each successor candidate in the vocabulary for each beam element. This score is obtained by adding the local output score (computed as INLINEFORM15 ) to the running total of the score for the candidate. The function INLINEFORM16 in Algorithms SECREF1 and SECREF7 yields successive hidden states in recurrent neural models like RNNs, LSTMs etc. The INLINEFORM17 operation maps a word in the vocabulary INLINEFORM18 , to a continuous embedding vector. Finally, backpointers at each time step to the beam elements at the previous time step are also stored for identifying the best sequence INLINEFORM19 , at the conclusion of the search procedure. A backpointer at time step INLINEFORM20 for a beam element INLINEFORM21 is denoted by INLINEFORM22 which points to one of the INLINEFORM23 elements at the previous beam. We denote a vector of backpointers for all the beam elements by INLINEFORM24 . The INLINEFORM25 operation takes as input backpointers ( INLINEFORM26 ) and candidates ( INLINEFORM27 ) for all the beam elements at each time step and traverses the sequence in reverse (from time-step INLINEFORM28 through 1) following backpointers at each time step and identifying candidate words associated with each backpointer that results in a sequence INLINEFORM29 , of length INLINEFORM30 . The procedure described in Algorithm SECREF1 is discontinuous because of the top-k-argmax procedure that returns a pair of vectors corresponding to the INLINEFORM0 highest-scoring indices for backpointers and vocabulary items from the score matrix INLINEFORM1 . This index selection results in hard backpointers at each time step which restrict the gradient flow during backpropagation. In the next section, we describe a continuous relaxation to the top-k-argmax procedure which forms the crux of our approach. Continuous Approximation to top-k-argmax [t] Continuous relaxation to beam search [1] INLINEFORM0 , INLINEFORM1 , INLINEFORM2 , INLINEFORM3 , INLINEFORM4 t = 0 to T INLINEFORM5 i=1 to k INLINEFORM6 INLINEFORM7 is a local output scoring function INLINEFORM8 INLINEFORM9 is used to compute INLINEFORM10 INLINEFORM11 Call Algorithm 2 i = 1 to k INLINEFORM12 Soft back pointer computation INLINEFORM13 Contribution from vocabulary items INLINEFORM14 Peaked distribution over the candidates to compute INLINEFORM15 INLINEFORM16 INLINEFORM17 INLINEFORM18 j = 1 to k Get contributions from soft backpointers for each beam element INLINEFORM19 INLINEFORM20 INLINEFORM21 INLINEFORM22 is a nonlinear recurrent function that returns state at next step INLINEFORM23 Pick the loss for the sequence with highest model score on the beam in a soft manner. 
The key property that we use in our approximation is that for a real valued vector INLINEFORM0 , the argmax with respect to a vector of scores, INLINEFORM1 , can be approximated by a temperature controlled softmax operation. The argmax operation can be represented as: INLINEFORM2 which can be relaxed by replacing the indicator function with a peaked-softmax operation with hyperparameter INLINEFORM0 : INLINEFORM1 As INLINEFORM0 , INLINEFORM1 so long as there is only one maximum value in the vector INLINEFORM2 . This peaked-softmax operation has been shown to be effective in recent work BIBREF24 , BIBREF25 , BIBREF20 involving continuous relaxation to the argmax operation, although to our knowledge, this is the first work to apply it to approximate the beam search procedure. Using this peaked-softmax operation, we propose an iterative algorithm for computing a continuous relaxation to the top-k-argmax procedure in Algorithm SECREF6 which takes as input a score matrix of size INLINEFORM0 and returns INLINEFORM1 peaked matrices INLINEFORM2 of size INLINEFORM3 . Each matrix INLINEFORM4 represents the index of INLINEFORM5 -th max. For example, INLINEFORM6 will have most of its mass concentrated on the index in the matrix that corresponds to the argmax, while INLINEFORM7 will have most of its mass concentrated on the index of the 2nd-highest scoring element. Specifically, we obtain matrix INLINEFORM8 by computing the squared difference between the INLINEFORM9 -highest score and all the scores in the matrix and then using the peaked-softmax operation over the negative squared differences. This results in scores closer to the INLINEFORM10 -highest score to have a higher mass than scores far away from the INLINEFORM11 -highest score. Hence, the continuous relaxation to top-k-argmax operation can be simply implemented by iteratively using the max operation which is continuous and allows for gradient flow during backpropagation. As INLINEFORM0 , each INLINEFORM1 vector converges to hard index pairs representing hard backpointers and successor candidates described in Algorithm SECREF1 . For finite INLINEFORM2 , we introduce a notion of a soft backpointer, represented as a vector INLINEFORM3 in the INLINEFORM4 -probability simplex, which represents the contribution of each beam element from the previous time step to a beam element at current time step. This is obtained by a row-wise sum over INLINEFORM5 to get INLINEFORM6 values representing soft backpointers. Training with Continuous Relaxation of Beam Search We describe our approach in detail in Algorithm 3 and illustrate the soft beam recurrence step in Figure 1. For composing the loss function and the beam search function for our optimization as proposed in Equation 2, we make use of decomposability of the loss function across time-steps. Thus for a sequence y, the total loss is: INLINEFORM0 . In our experiments, INLINEFORM1 is the Hamming loss which can be easily computed at each time-step by simply comparing gold INLINEFORM2 with INLINEFORM3 . While exact computation of INLINEFORM4 will vary according to the loss, our proposed procedure will be applicable as long as the total loss is decomposable across time-steps. While decomposability of loss is a strong assumption, existing literature on structured prediction BIBREF26 , BIBREF27 has made due with this assumption, often using decomposable losses as surrogates for non-decomposable ones. 
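A minimal sketch of the peaked-softmax operation and the iterative soft top-k-argmax described above is given below, again in illustrative NumPy rather than the authors' code. The temperature hyperparameter is written alpha here, and the i-th highest score is taken from a sort for brevity, whereas the paper obtains it iteratively with the (continuous) max operation; the function names are ours.

```python
import numpy as np

def peaked_softmax(scores, alpha):
    """softmax(alpha * scores); approaches a one-hot argmax indicator as alpha grows."""
    z = alpha * (scores - scores.max())   # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def soft_top_k(score_matrix, k, alpha):
    """Continuous relaxation of top-k-argmax over a (k_prev, V) candidate score matrix.

    Returns k soft index matrices D[i] of the same shape as score_matrix; D[i]
    concentrates its mass around the position of the (i+1)-th highest score.
    """
    flat = score_matrix.ravel()
    kth_scores = np.sort(flat)[::-1][:k]   # the k highest candidate scores (sort used for brevity)
    D = []
    for s_i in kth_scores:
        # scores close to the i-th max receive most of the mass; distant scores receive little
        D.append(peaked_softmax(-(flat - s_i) ** 2, alpha).reshape(score_matrix.shape))
    return D

scores = np.array([[1.0, 3.0, 0.5],
                   [2.5, -1.0, 2.0]])
D = soft_top_k(scores, k=2, alpha=20.0)
soft_backptrs = [d.sum(axis=1) for d in D]   # row-wise sums give soft backpointers over beam elements
print(np.round(D[0], 3), np.round(soft_backptrs[0], 3))
```

With a large alpha, D[0] places nearly all of its mass on the position of the highest-scoring candidate, and the corresponding soft backpointer is close to a one-hot vector over the previous beam elements, mirroring the hard selection it relaxes.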
We detail the continuous relaxation to beam search in Algorithm SECREF7 with INLINEFORM5 being the cumulative loss of beam element INLINEFORM6 at time step INLINEFORM7 and INLINEFORM8 being the embedding matrix of the target vocabulary which is of size INLINEFORM9 where INLINEFORM10 is the size of the embedding vector. In Algorithm SECREF7 , all the discrete selection functions have been replaced by their soft, continuous counterparts which can be backpropagated through. This results in all the operations being matrix and vector operations which is ideal for a GPU implementation. An important aspect of this algorithm is that we no longer rely on exactly identifying a discrete search prediction INLINEFORM0 since we are only interested in a continuous approximation to the direct loss INLINEFORM1 (line 18 of Algorithm SECREF7 ), and all the computation is expressed via the soft beam search formulation which eliminates all the sources of discontinuities associated with the training objective in Equation 1. The computational complexity of our approach for training scales linearly with the beam size and hence is roughly INLINEFORM2 times slower than standard CE training for beam size INLINEFORM3 . Since we have established the pointwise convergence of peaked-softmax to argmax as INLINEFORM4 for all vectors that have a unique maximum value, we can establish pointwise convergence of objective in Equation 2 to objective in Equation 1 as INLINEFORM5 , as long as there are no ties among the top-k scores of the beam expansion candidates at any time step. We posit that absolute ties are unlikely due to random initialization of weights and the domain of the scores being INLINEFORM6 . Empirically, we did not observe any noticeable impact of potential ties on the training procedure and our approach performed well on the tasks as discussed in Section SECREF4 . DISPLAYFORM0 We experimented with different annealing schedules for INLINEFORM0 starting with non-peaked softmax moving toward peaked-softmax across epochs so that learning is stable with informative gradients. This is important because cost functions like Hamming distance with very high INLINEFORM1 tend to be non-smooth and are generally flat in regions far away from changepoints and have a very large gradient near the changepoints which makes optimization difficult. Decoding The motivation behind our approach is to make the optimization aware of beam search decoding while maintaining the continuity of the objective. However, since our approach doesn't introduce any new model parameters and optimization is agnostic to the architecture of the seq2seq model, we were able to experiment with various decoding schemes like locally normalized greedy decoding, and hard beam search, once the model has been trained. However, to reduce the gap between the training procedure and test procedure, we also experimented with soft beam search decoding. This decoding approach closely follows Algorithm SECREF7 , but along with soft back pointers, we also compute hard back pointers at each time step. After computing all the relevant quantities like model score, loss etc., we follow the hard backpointers to obtain the best sequence INLINEFORM0 . This is very different from hard beam decoding because at each time step, the selection decisions are made via our soft continuous relaxation which influences the scores, LSTM hidden states and input embeddings at subsequent time-steps. The hard backpointers are essentially the MAP estimate of the soft backpointers at each step. 
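To make the soft beam decoding variant concrete, the backtrace over the MAP (hard) backpointers extracted from the soft quantities might look like the sketch below. It assumes that soft_backptrs[t] and soft_word_dists[t] were stored during a soft forward pass such as the one sketched earlier; these names, and the code itself, are ours and only illustrate the idea rather than reproduce the authors' implementation.

```python
import numpy as np

def map_backtrace(soft_backptrs, soft_word_dists, final_scores):
    """Recover a discrete output sequence from a soft beam-search forward pass.

    soft_backptrs[t]:   (k, k)  row b = soft backpointer distribution of beam element b at step t
    soft_word_dists[t]: (k, V)  row b = peaked distribution over the vocabulary for beam element b
    final_scores:       (k,)    model scores of the beam elements at the last step
    """
    T = len(soft_backptrs)
    beam = int(np.argmax(final_scores))          # best beam element at the final step
    words = []
    for t in range(T - 1, -1, -1):
        words.append(int(np.argmax(soft_word_dists[t][beam])))   # MAP word at this step
        beam = int(np.argmax(soft_backptrs[t][beam]))            # MAP backpointer to the previous beam
    return words[::-1]

# toy usage with k = 2, V = 4, T = 3
rng = np.random.default_rng(1)
bp = [rng.dirichlet(np.ones(2), size=2) for _ in range(3)]
wd = [rng.dirichlet(np.ones(4), size=2) for _ in range(3)]
print(map_backtrace(bp, wd, rng.normal(size=2)))
```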
With small, finite INLINEFORM1 , we observe differences between soft beam search and hard beam search decoding in our experiments. Comparison with Max-Margin Objectives Max-margin based objectives are typically motivated as another kind of surrogate training objective that avoids the discontinuities associated with direct loss optimization. Hinge loss for structured prediction typically takes the form: INLINEFORM0 where INLINEFORM0 is the input sequence, INLINEFORM1 is the gold target sequence, INLINEFORM2 is the output search space and INLINEFORM3 is the discontinuous cost function which we assume is decomposable across the time-steps of a sequence. Finding the cost augmented maximum score is generally difficult in large structured models and often involves searching over the output space and computing the approximate cost augmented maximal output sequence and the score associated with it via beam search. This procedure introduces discontinuities in the training procedure of structured max-margin objectives and renders it not amenable to training via backpropagation. Related work BIBREF15 on incorporating beam search into the training of neural sequence models does involve a cost-augmented max-margin loss but it relies on discontinuous beam search forward passes and an explicit mechanism to ensure that the gold sequence stays in the beam during training, and hence does not involve backpropagation through the beam search procedure itself. Our continuous approximation to beam search can very easily be modified to compute an approximation to the structured hinge loss so that it can be trained via backpropagation if the cost function is decomposable across time-steps. In Algorithm SECREF7 , we only need to modify line 5 as: INLINEFORM0 and instead of computing INLINEFORM0 in Algorithm SECREF7 , we first compute the cost augmented maximum score as: INLINEFORM1 and also compute the target score INLINEFORM0 by simply running the forward pass of the LSTM decoder over the gold target sequence. The continuous approximation to the hinge loss to be optimized is then: INLINEFORM1 . We empirically compare this approach with the proposed approach to optimize direct loss in experiments. Experimental Setup Since our goal is to investigate the efficacy of our approach for training generic seq2seq models, we perform experiments on two NLP tagging tasks with very different characteristics and output search spaces: Named Entity Recognition (NER) and CCG supertagging. While seq2seq models are appropriate for the CCG supertagging task because of the long-range correlations between the sequential output elements and a large search space, they are not ideal for NER which has a considerably smaller search space and weaker correlations between predictions at subsequent time steps. In our experiments, we observe improvements from our approach on both of the tasks. We use a seq2seq model with a bi-directional LSTM encoder (1 layer with tanh activation function) for the input sequence INLINEFORM0 , and an LSTM decoder (1 layer with tanh activation function) with a fixed attention mechanism that deterministically attends to the INLINEFORM1 -th input token when decoding the INLINEFORM2 -th output, and hence does not involve learning of any attention parameters. Since the computational complexity of our approach for optimization scales linearly with beam size for each instance, it is impractical to use very large beam sizes for training.
Hence, beam size for all the beam search based experiments was set to 3 which resulted in improvements on both the tasks as discussed in the results. For both tasks, the direct loss function was the Hamming distance cost which aims to maximize word level accuracy. Named Entity Recognition For named entity recognition, we use the CONLL 2003 shared task data BIBREF28 for German language and use the provided data splits. We perform no preprocessing on the data. The output vocabulary length (label space) is 10. A peculiar characteristic of this problem is that the training data is naturally skewed toward one default label (`O') because sentences typically do not contain many named entities and the evaluation focuses on the performance recognizing entities. Therefore, we modify the Hamming cost such that incorrect prediction of `O' is doubly penalized compared to other incorrect predictions. We use the hidden layers of size 64 and label embeddings of size 8. As mentioned earlier, seq2seq models are not an ideal choice for NER (tag-level correlations are short-ranged in NER – the unnecessary expressivity of full seq2seq models over simple encoder-classifier neural models makes training harder). However, we wanted to evaluate the effectiveness of our approach on different instantiations of seq2seq models. CCG Supertagging We used the standard splits of CCG bank BIBREF29 for training, development, and testing. The label space of supertags is 1,284 which is much larger than NER. The distribution of supertags in the training data exhibits a long tail because these supertags encode specific syntactic information about the words' usage. The supertag labels are correlated with each other and many tags encode similar information about the syntax. Moreover, this task is sensitive to the long range sequential decisions and search effects because of how it holistically encodes the syntax of the entire sentence. We perform minor preprocessing on the data similar to the preprocessing in BIBREF30 . For this task, we used hidden layers of size 512 and the supertag label embeddings were also of size 512. The standard evaluation metric for this task is the word level label accuracy which directly corresponds to Hamming loss. Hyperparameter tuning For tuning all the hyperparameters related to optimization we trained our models for 50 epochs and picked the models with the best performance on the development set. We also ran multiple random restarts for all the systems evaluated to account for performance variance across randomly started runs. We pretrained all our models with standard cross entropy training which was important for stable optimization of the non convex neural objective with a large parameter search space. This warm starting is a common practice in prior work on complex neural models BIBREF10 , BIBREF4 , BIBREF14 . Comparison We report performance on validation and test sets for both the tasks in Tables 1 and 2. The baseline model is a cross entropy trained seq2seq model (Baseline CE) which is also used to warm start the the proposed optimization procedures in this paper. This baseline has been compared against the approximate direct loss training objective (Section SECREF9 ), referred to as INLINEFORM0 in the tables, and the approximate max-margin training objective (Section SECREF12 ), referred to as INLINEFORM1 in the tables. 
Results are reported for models when trained with annealing INLINEFORM2 , and also with a constant setting of INLINEFORM3 which is a very smooth but inaccurate approximation of the original direct loss that we aim to optimize. Comparisons have been made on the basis of performance of the models under different decoding paradigms (represented as different column in the tables): locally normalized decoding (CE greedy), hard beam search decoding and soft beam search decoding described in Section SECREF11 . Results As shown in Tables 1 and 2, our approach INLINEFORM0 shows significant improvements over the locally normalized CE baseline with greedy decoding for both the tasks (+5.5 accuracy points gain for supertagging and +1.5 F1 points for NER). The improvement is more pronounced on the supertagging task, which is not surprising because: (i) the evaluation metric is tag-level accuracy which is congruent with the Hamming loss that INLINEFORM1 directly optimizes and (ii) the supertagging task itself is very sensitive to the search procedure because tags across time-steps tend to exhibit long range dependencies as they encode specialized syntactic information about word usage in the sentence. Another common trend to observe is that annealing INLINEFORM0 always results in better performance than training with a constant INLINEFORM1 for both INLINEFORM2 (Section SECREF9 ) and INLINEFORM3 (Section SECREF12 ). This shows that a stable training scheme that smoothly approaches minimizing the actual direct loss is important for our proposed approach. Additionally, we did not observe a large difference when our soft approximation is used for decoding (Section SECREF11 ) compared to hard beam search decoding, which suggests that our approximation to the hard beam search is as effective as its discrete counterpart. For supertagging, we observe that the baseline cross entropy trained model improves its predictions with beam search decoding compared to greedy decoding by 2 accuracy points, which suggests that beam search is already helpful for this task, even without search-aware training. Both the optimization schemes proposed in this paper improve upon the baseline with soft direct loss optimization ( INLINEFORM0 ), performing better than the approximate max-margin approach. For NER, we observe that optimizing INLINEFORM0 outperforms all the other approaches but we also observe interesting behaviour of beam search decoding and the approximate max-margin objective for this task. The pretrained CE baseline model yields worse performance when beam search is done instead of greedy locally normalized decoding. This is because the training data is heavily skewed toward the `O' label and hence the absolute score resolution between different tags at each time-step during decoding isn't enough to avoid leading beam search toward a wrong hypothesis path. We observed in our experiments that hard beam search resulted in predicting more `O's which also hurt the prediction of tags at future time steps and hurt precision as well as recall. Encouragingly, INLINEFORM1 optimization, even though warm started with a CE trained model that performs worse with beam search, led to the NER model becoming more search aware, which resulted in superior performance. However, we also observe that the approximate max-margin approach ( INLINEFORM2 ) performs poorly here. 
We attribute this to a deficiency in the max-margin objective when coupled with approximate search methods like beam search that do not provide guarantees on finding the supremum: one way to drive this objective down is to learn model scores such that the search for the best hypothesis is difficult, so that the value of the loss augmented decode is low, while the gold sequence maintains higher model score. Because we also warm started with a pre-trained model that results in a worse performance with beam search decode than with greedy decode, we observe the adverse effect of this deficiency. The result is a model that scores the gold hypothesis highly, but yields poor decoding outputs. This observation indicates that using max-margin based objectives with beam search during training actually may achieve the opposite of our original intent: the objective can be driven down by introducing search errors. The observation that our optimization method led to improvements on both the tasks–even on NER for which hard beam search during decoding on a CE trained model hurt the performance–by making the optimization more search aware, indicates the effectiveness of our approach for training seq2seq models. Conclusion While beam search is a method of choice for performing search in neural sequence models, as our experiments confirm, it is not necessarily guaranteed to improve accuracy when applied to cross-entropy-trained models. In this paper, we propose a novel method for optimizing model parameters that directly takes into account the process of beam search itself through a continuous, end-to-end sub-differentiable relaxation of beam search composed with the final evaluation loss. Experiments demonstrate that our method is able to improve overall test-time results for models using beam search as a test-time inference method, leading to substantial improvements in accuracy.
continuous relaxation to top-k-argmax
37016cc987d33be5ab877013ef26ec7239b48bd9
37016cc987d33be5ab877013ef26ec7239b48bd9_0
Q: How are different domains weighted in WDIRL? Text: Introduction Sentiment analysis aims to predict the sentiment polarity of user-generated data with emotional orientation, such as movie reviews. The exponential increase of online reviews makes it an interesting topic in both research and industry. However, reviews can span many different domains, and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to a label-scarce target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using the rich labeled data of the source domain. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in situations where the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variables, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative about $\rm {Y}$. This will, in turn, harm the generalization of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the objective of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worth studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data cleaning method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of the source domain labeled data $\rm {P}_S(\mathbf {Y})$ in advance, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulting from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain.
Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. Preliminary and Related Work ::: Domain Adaptation For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Preliminary and Related Work ::: Domain Invariant Representation Learning Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. 
(DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions. The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We call methods of this direction as the metric-based DIRL methods. A representative work of this direction is the center-momentum-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let denote $\rm {X}_S$ and $\rm {X}_T$ an $M$ dimensional random vector on the compact interval $[a; b]^M$ over distribution $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and is the $k$-th momentum, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$. The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We call methods of this direction as the adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimize the Jensen-shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for a concise expression, we write $\rm {P}$ as the shorthand for $\rm {P}(G(\rm {X}))$. The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$. Problem of Domain-Invariant Representation Learning In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means the decrease of target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem. 
Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S(\mathcal {M}(\rm {X}))=\rm {P}_T(\mathcal {M}(\rm {X}))$, then $\rm {P}_S(\rm {Y}=i|\mathcal {M}(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A. Problem of Domain-Invariant Representation Learning ::: Remark. According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider the extreme case in which every instance $x$ is mapped to a single point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is definitely unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B. When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain as strong a conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the objective of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the remaining classes in $G(\rm {X})$, i.e.,: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force in region $x \in \mathcal {X}_i$. Taking the integral of $x$ over $\mathcal {X}_i$ on both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, it is impossible for $G(\rm {X})$ to be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this actually indicates a conflict between supervised learning and domain-invariant representation learning. Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(X)$ that is class-separable and, at the same time, domain-invariant using the DIRL framework, when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, this problem of DIRL is worth studying and addressing. Weighted Domain Invariant Representation Learning According to the above analysis, we propose a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes for DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then take into account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure.
Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively. Weighted Domain Invariant Representation Learning ::: Align @!START@$\rm {P}(\rm {X}|\rm {Y})$@!END@ with Class Weight The motivation behind this practice is to adjust data distribution of the source domain or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Consider that we only have labels of source domain data, we choose to adjust data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that: and we denote $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL is to align $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objects of the supervised learning $\mathcal {L}_{sup}$ and the domain-invariant learning $\mathcal {L}_{inv}$. And the conflict degree will decrease as $\rm {P}_S(\rm {Y})$ getting close to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since it will make $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to solve the conflict. We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instaintiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for those adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$. 
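To illustrate the construction on a small example, the sketch below computes the first-order (expectation) term of the class-weighted CMD by replacing the plain source statistic with the weighted sum over class-conditional statistics, and also shows the posterior adjustment used in the second step. This is our own NumPy rendering under the stated definitions, not the authors' code; in particular, the normalized product of $w_y$ and $\rm {P}_S(y|x)$ is our reading of the adjustment whose equation is elided above, and the interval normalization of the CMD term is omitted.

```python
import numpy as np

def weighted_first_moment_gap(xs_src, ys_src, xs_tgt, w):
    """|E_w[X_S] - E[X_T]| for the first-order CMD term, with a class-weighted source.

    xs_src: (n_s, d) source features   ys_src: (n_s,) source labels in {0..L-1}
    xs_tgt: (n_t, d) target features   w:      (L,)   class weights (w_i > 0)
    """
    L = w.shape[0]
    p_s = np.array([(ys_src == i).mean() for i in range(L)])           # P_S(Y = i)
    cond_means = np.stack([xs_src[ys_src == i].mean(axis=0) for i in range(L)])
    weighted_src_mean = (w * p_s) @ cond_means                         # sum_i w_i P_S(Y=i) E[X_S|Y=i]
    return np.linalg.norm(weighted_src_mean - xs_tgt.mean(axis=0))

def reweight_posterior(p_src, w):
    """Second step: adjust the source-trained posterior P_S(Y|x) with the class weight w."""
    adjusted = w * p_src
    return adjusted / adjusted.sum()

# toy usage with L = 2 classes
rng = np.random.default_rng(2)
xs, ys = rng.normal(size=(100, 5)), rng.integers(0, 2, size=100)
xt = rng.normal(size=(80, 5))
print(weighted_first_moment_gap(xs, ys, xt, w=np.array([1.5, 0.5])))
print(reweight_posterior(np.array([0.7, 0.3]), w=np.array([1.5, 0.5])))
```

Higher-order CMD terms would be weighted in the same way, by applying the class weights to the class-conditional central moments of the source domain before comparing them with the target statistics.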
In the following, we denote $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$. Weighted Domain Invariant Representation Learning ::: Align @!START@$\rm {P}(\rm {Y}|\rm {X})$@!END@ with Class Weight In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most of the real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice of this step is to estimate $\gamma (\rm {Y}=i)$ with the obtained $\mathbf {w}_i$ in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with: In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$. Experiment ::: Experiment Design Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 to our proposed solution, respectively. To performe the study, we carried out performance comparison between the following models: SO: the source-only model trained using source domain labeled data without any domain adaptation. CMD: the centre-momentum-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$. DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$. $\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method. $\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method. $\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method. $\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method. 
$\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training. $\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training. Intrinsically, SO can provide an empirical lower bound for those domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ can provide an empirical upper bound for $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the general effectiveness of our proposed solution. Experiment ::: Dataset and Task Design We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews of four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams. Experiment ::: Dataset and Task Design ::: Binary-Class. From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was rated up to 3 stars, and as class `2' if it was rated 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label-assignment mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the maximum value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more details about the task design for this study. Experiment ::: Dataset and Task Design ::: Multi-Class. We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3).
For each task, $\mathcal {D}_S$ contained 1000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1500 examples of class 2, and 1000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Experiment ::: Implementation Detail For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For those DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50 dimensional hidden layer with relu activation functions and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. Initial learning rate of $\mathbf {w}$ was set to 0.01, while that of other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximize the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation that we do not have labeled data of the target domain for training or developing. However, we argue that this practice would not make it unfair for model comparison since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training. To initialize $\mathbf {w}$, we used label prediction of the source-only model. Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indication function. To offer an intuitive understanding to this strategy, we report performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domain B and D are of a group, and domain E and K are of another group since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initialization of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy. Experiment ::: Main Result Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. 
First, CMD and DANN underperform the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. Similar conclusion can also be obtained by comparing performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperforms $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (You can refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally got worse as the increase of $\rm {P}(\rm {Y})$ shift. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly to the varying of $\rm {P}(\rm {Y})$ shift degree. Moreover, it can achieve the near upbound performance characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution. Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (You can refer to Appendix D for results on the other tasks). From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform or even slightly underperformed $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the estimated or learned value of $\mathbf {w}$ using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is verified by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperforms $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively. Conclusion In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation, when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution.
To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$
b3dc6d95d1570ad9a58274539ff1def12df8f474
b3dc6d95d1570ad9a58274539ff1def12df8f474_0
Q: How is DIRL evaluated? Text: Introduction Sentiment analysis aims to predict sentiment polarity of user-generated data with emotional orientation like movie reviews. The exponentially increase of online reviews makes it an interesting topic in research and industrial areas. However, reviews can span so many different domains and the collection and preprocessing of large amounts of data for new domains is often time-consuming and expensive. Therefore, cross-domain sentiment analysis is currently a hot topic, which aims to transfer knowledge from a label-rich source domain (S) to the label-few target domain (T). In recent years, one of the most popular frameworks for cross-domain sentiment analysis is the domain invariant representation learning (DIRL) framework BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4. Methods of this framework follow the idea of extracting a domain-invariant feature representation, in which the data distributions of the source and target domains are similar. Based on the resultant representations, they learn the supervised classifier using source rich labeled data. The main difference among these methods is the applied technique to force the feature representations to be domain-invariant. However, in this work, we discover that applying DIRL may harm domain adaptation in the situation that the label distribution $\rm {P}(\rm {Y})$ shifts across domains. Specifically, let $\rm {X}$ and $\rm {Y}$ denote the input and label random variable, respectively, and $G(\rm {X})$ denote the feature representation of $\rm {X}$. We found out that when $\rm {P}(\rm {Y})$ changes across domains while $\rm {P}(\rm {X}|\rm {Y})$ stays the same, forcing $G(\rm {X})$ to be domain-invariant will make $G(\rm {X})$ uninformative to $\rm {Y}$. This will, in turn, harm the generation of the supervised classifier to the target domain. In addition, for the more general condition that both $\rm {P}(\rm {Y})$ and $\rm {P}(\rm {X}|\rm {Y})$ shift across domains, we deduced a conflict between the object of making the classification error small and that of making $G(\rm {X})$ domain-invariant. We argue that the problem is worthy of studying since the shift of $\rm {P}(\rm {Y})$ exists in many real-world cross-domain sentiment analysis tasks BIBREF0. For example, the marginal distribution of the sentiment of a product can be affected by the overall social environment and change in different time periods; and for different products, their marginal distributions of the sentiment are naturally considered different. Moreover, there are many factors, such as the original data distribution, data collection time, and data clearing method, that can affect $\rm {P}(\rm {Y})$ of the collected target domain unlabeled dataset. Note that in the real-world cross-domain tasks, we do not know the labels of the collected target domain data. Thus, we cannot previously align its label distribution $\rm {P}_T(\mathbf {Y})$ with that of source domain labeled data $\rm {P}_S(\mathbf {Y})$, as done in many previous works BIBREF0, BIBREF2, BIBREF5, BIBREF4, BIBREF6, BIBREF7. To address the problem of DIRL resulted from the shift of $\rm {P}(\rm {Y})$, we propose a modification to DIRL, obtaining a weighted domain-invariant representation learning (WDIRL) framework. This framework additionally introduces a class weight $\mathbf {w}$ to weigh source domain examples by class, hoping to make $\rm {P}(\rm {Y})$ of the weighted source domain close to that of the target domain. 
Based on $\mathbf {w}$, it resolves domain shift in two steps. In the first step, it forces the marginal distribution $\rm {P}(\rm {X})$ to be domain-invariant between the target domain and the weighted source domain instead of the original source, obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight $\mathbf {w}$. In the second step, it resolves the shift of $\rm {P}(\rm {Y}|\rm {X})$ by adjusting $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ for label prediction in the target domain. We detail these two steps in §SECREF4. Moreover, we will illustrate how to transfer existing DIRL models to their WDIRL counterparts, taking the representative metric-based CMD model BIBREF3 and the adversarial-learning-based DANN model BIBREF2 as an example, respectively. In summary, the contributions of this paper include: ($\mathbf {i}$) We theoretically and empirically analyse the problem of DIRL for domain adaptation when the marginal distribution $\rm {P}(\rm {Y})$ shifts across domains. ($\mathbf {ii}$) We proposed a novel method to address the problem and show how to incorporate it with existent DIRL models. ($\mathbf {iii}$) Experimental studies on extensive cross-domain sentiment analysis tasks show that models of our WDIRL framework can greatly outperform their DIRL counterparts. Preliminary and Related Work ::: Domain Adaptation For expression consistency, in this work, we consider domain adaptation in the unsupervised setting (however, we argue that our analysis and solution also applies to the supervised and semi-supervised domain adaptation settings). In the unsupervised domain adaptation setting, there are two different distributions over $\rm {X} \times \rm {Y}$: the source domain $\rm {P}_S(\rm {X},\rm {Y})$ and the target domain $\rm {P}_T(\rm {X},\rm {Y})$. And there is a labeled data set $\mathcal {D}_S$ drawn $i.i.d$ from $\rm {P}_S(\rm {X},\rm {Y})$ and an unlabeled data set $\mathcal {D}_T$ drawn $i.i.d.$ from the marginal distribution $\rm {P}_T(\rm {X})$: The goal of domain adaptation is to build a classier $f:\rm {X} \rightarrow \rm {Y}$ that has good performance in the target domain using $\mathcal {D}_S$ and $\mathcal {D}_T$. For this purpose, many approaches have been proposed from different views, such as instance reweighting BIBREF8, pivot-based information passing BIBREF9, spectral feature alignment BIBREF10 subsampling BIBREF11, and of course the domain-invariant representation learning BIBREF12, BIBREF13, BIBREF14, BIBREF15, BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22. Preliminary and Related Work ::: Domain Invariant Representation Learning Domain invariant representation learning (DIRL) is a very popular framework for performing domain adaptation in the cross-domain sentiment analysis field BIBREF23, BIBREF4, BIBREF24, BIBREF7. It is heavily motivated by the following theorem BIBREF25. Theorem 1 For a hypothesis $h$, Here, $\mathcal {L}_S(h)$ denotes the expected loss with hypothesis $h$ in the source domain, $\mathcal {L}_T(h)$ denotes the counterpart in the target domain, $d_1$ is a measure of divergence between two distributions. Based on Theorem UNKREF3 and assuming that performing feature transform on $\rm {X}$ will not increase the values of the first and third terms of the right side of Ineq. 
(DISPLAY_FORM4), methods of the DIRL framework apply a feature map $G$ onto $\rm {X}$, hoping to obtain a feature representation $G(\rm {X})$ that has a lower value of ${d}_{1}(\rm {P}_S(G(\rm {X})), \rm {P}_T(G(\rm {X})))$. To this end, different methods have been proposed. These methods can be roughly divided into two directions.

The first direction is to design a differentiable metric to explicitly evaluate the discrepancy between two distributions. We refer to methods in this direction as metric-based DIRL methods. A representative work in this direction is the central-moment-based model proposed by BIBREF3. In that work, they proposed a central moment discrepancy metric (CMD) to evaluate the discrepancy between two distributions. Specifically, let $\rm {X}_S$ and $\rm {X}_T$ denote $M$-dimensional random vectors on the compact interval $[a; b]^M$, distributed according to $\rm {P}_S$ and $\rm {P}_T$, respectively. The CMD loss between $\rm {P}_S$ and $\rm {P}_T$ is defined by: Here, $\mathbb {E}(\rm {X})$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X})$, and is the $k$-th central moment, where $\rm {X}_i$ denotes the $i^{th}$ dimensional variable of $\rm {X}$.

The second direction is to perform adversarial training between the feature generator $G$ and a domain discriminator $D$. We refer to methods in this direction as adversarial-learning-based methods. As a representative, BIBREF2 trained $D$ to distinguish the domain of a given example $x$ based on its representation $G(x)$. At the same time, they encouraged $G$ to deceive $D$, i.e., to make $D$ unable to distinguish the domain of $x$. More specifically, $D$ was trained to minimize the loss: over its trainable parameters, while in contrast $G$ was trained to maximize $\mathcal {L}_d$. According to the work of BIBREF26, this is equivalent to minimizing the Jensen-Shannon divergence BIBREF27, BIBREF28 $\text{JSD}(\rm {P}_S, \rm {P}_T)$ between $\rm {P}_S(G(\rm {X}))$ and $\rm {P}_T(G(\rm {X}))$ over $G$. Here, for conciseness, we write $\rm {P}$ as shorthand for $\rm {P}(G(\rm {X}))$.

The task loss is the combination of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, which are defined on $\mathcal {D}_S$ only and on the combination of $\mathcal {D}_S$ and $\mathcal {D}_T$, respectively: Here, $\alpha $ is a hyper-parameter for loss balance, and the aforementioned domain adversarial loss $\text{JSD}(\rm {P}_S, \rm {P}_T)$ and $\text{CMD}_K$ are two concrete forms of $\mathcal {L}_{inv}$.
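Since the display equation for $\text{CMD}_K$ is not reproduced in this extract, the following sketch may help make the metric concrete. It is a minimal mini-batch estimate of a central-moment-discrepancy-style loss, written in PyTorch as an assumption about the implementation; the $(b-a)^k$ scaling follows the interval normalization over $[a; b]^M$ described above, and the exact constants used by BIBREF3 may differ.

```python
import torch

def cmd_loss(xs, xt, k_max=5, a=0.0, b=1.0):
    """Central-moment-discrepancy-style loss between two mini-batches.

    xs, xt: (n, d) tensors of source / target representations, assumed to lie
            in [a, b] per dimension. Sketch only: the (b - a)**k scaling is an
            assumption, since the exact normalization is not shown above.
    """
    span = b - a
    mean_s, mean_t = xs.mean(dim=0), xt.mean(dim=0)
    loss = torch.norm(mean_s - mean_t, p=2) / span
    cs, ct = xs - mean_s, xt - mean_t              # centred samples
    for k in range(2, k_max + 1):
        mom_s = (cs ** k).mean(dim=0)              # k-th central moment, per dimension
        mom_t = (ct ** k).mean(dim=0)
        loss = loss + torch.norm(mom_s - mom_t, p=2) / (span ** k)
    return loss
```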
Problem of Domain-Invariant Representation Learning
In this work, we found out that applying DIRL may harm domain adaptation in the situation that $\rm {P}(\rm {Y})$ shifts across domains. Specifically, when $\rm {P}_S(\rm {Y})$ differs from $\rm {P}_T(\rm {Y})$, forcing the feature representations $G(\rm {X})$ to be domain-invariant may increase the value of $\mathcal {L}_S(h)$ in Ineq. (DISPLAY_FORM4) and consequently increase the value of $\mathcal {L}_T(h)$, which means a decrease in target domain performance. In the following, we start our analysis under the condition that $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$. Then, we consider the more general condition that $\rm {P}_S(\rm {X}|\rm {Y})$ also differs from $\rm {P}_T(\rm {X}|\rm {Y})$. When $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, we have the following theorem.

Theorem 2 Given $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$, if $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and a feature map $G$ makes $\rm {P}_S(G(\rm {X}))=\rm {P}_T(G(\rm {X}))$, then $\rm {P}_S(\rm {Y}=i|G(\rm {X}))=\rm {P}_S(\rm {Y}=i)$. Proofs appear in Appendix A.

Problem of Domain-Invariant Representation Learning ::: Remark.
According to Theorem UNKREF8, we know that when $\rm {P}_S(\rm {X}|\rm {Y})=\rm {P}_T(\rm {X}|\rm {Y})$ and $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$, forcing $G(\rm {X})$ to be domain-invariant tends to make data of class $i$ mix with data of other classes in the space of $G(\rm {X})$. This will make it difficult for the supervised classifier to distinguish inputs of class $i$ from inputs of the other classes. Consider the extreme case in which every instance $x$ is mapped to a common point $g_0$ in $G(\rm {X})$. In this case, $\rm {P}_S(G(\rm {X})=g_0)= \rm {P}_T(G(\rm {X})=g_0) = 1$. Therefore, $G(\rm {X})$ is domain-invariant. As a result, the supervised classifier will assign the label $y^* = \operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ to all input examples. This is clearly unacceptable. To give a more intuitive illustration of the above analysis, we offer several empirical studies on Theorem UNKREF8 in Appendix B.

When $\rm {P}_S(\rm {Y})\ne \rm {P}_T(\rm {Y})$ and $\rm {P}_S(\rm {X}|\rm {Y}) \ne \rm {P}_T(\rm {X}|\rm {Y})$, we did not obtain as strong a conclusion as Theorem UNKREF8. Instead, we deduced a conflict between the objective of achieving superior classification performance and that of making features domain-invariant. Suppose that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$ and instances of class $i$ are completely distinguishable from instances of the remaining classes in $G(\rm {X})$, i.e.,: In DIRL, we hope that: Consider the region $x \in \mathcal {X}_i$, where $\rm {P}(G(\rm {X}=x)|\rm {Y}=i)>0$. According to the above assumption, we know that $\rm {P}(G(\rm {X}=x \in \mathcal {X}_i)|\rm {Y} \ne i) = 0$. Therefore, applying DIRL will force the corresponding equality to hold in the region $x \in \mathcal {X}_i$. Taking the integral of $x$ over $\mathcal {X}_i$ for both sides of the equation, we have $\rm {P}_S(\rm {Y}=i) = \rm {P}_T(\rm {Y}=i)$. This deduction contradicts the setting that $\rm {P}_S(\rm {Y}=i) \ne \rm {P}_T(\rm {Y}=i)$. Therefore, $G(\rm {X})$ cannot be fully class-separable when it is domain-invariant. Note that the objective of supervised learning is exactly to make $G(\rm {X})$ class-separable. Thus, this actually indicates a conflict between the supervised learning and the domain-invariant representation learning.

Based on the above analysis, we can conclude that it is impossible to obtain a feature representation $G(\rm {X})$ that is class-separable and, at the same time, domain-invariant using the DIRL framework when $\rm {P}(\rm {Y})$ shifts across domains. However, the shift of $\rm {P}(\rm {Y})$ can exist in many cross-domain sentiment analysis tasks. Therefore, it is worth studying how to deal with this problem of DIRL.
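The following toy simulation (an illustrative construction, not the empirical study from Appendix B) makes the analysis above concrete: the class-conditionals are identical across domains, only $\rm {P}(\rm {Y})$ shifts, and the degenerate constant map is used as the domain-invariant representation, so the source classifier can only predict $\operatornamewithlimits{arg\,max}_y \rm {P}_S(\rm {Y}=y)$ for every input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Identical class-conditionals in both domains: X|Y=0 ~ N(-1,1), X|Y=1 ~ N(+1,1).
# Only the label prior differs between source and target.
p_s = np.array([0.3, 0.7])   # P_S(Y)
p_t = np.array([0.7, 0.3])   # P_T(Y)

def sample(prior, n=20000):
    y = rng.choice([0, 1], size=n, p=prior)
    x = rng.normal(loc=np.where(y == 0, -1.0, 1.0), scale=1.0)
    return x, y

x_t, y_t = sample(p_t)

# (a) Source-optimal classifier on the raw feature X: for equal-variance
#     Gaussians the Bayes boundary is 0.5 * ln(P_S(Y=0) / P_S(Y=1)).
boundary = 0.5 * np.log(p_s[0] / p_s[1])
acc_raw = np.mean((x_t > boundary).astype(int) == y_t)

# (b) Degenerate domain-invariant map G(x) = g0: the marginals of G(X) match
#     trivially, but G(X) is uninformative, so the source classifier can only
#     predict argmax_y P_S(Y=y) for every input.
acc_const = np.mean(y_t == np.argmax(p_s))

print(f"raw feature X:   {acc_raw:.3f}")    # roughly 0.78 in this toy setup
print(f"constant G(X):   {acc_const:.3f}")  # roughly P_T(Y=1) = 0.30
```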
Weighted Domain Invariant Representation Learning
According to the above analysis, we propose a weighted version of DIRL to address the problem that the shift of $\rm {P}(\rm {Y})$ causes for DIRL. The key idea of this framework is to first align $\rm {P}(\rm {Y})$ across domains before performing domain-invariant learning, and then to take into account the shift of $\rm {P}(\rm {Y})$ in the label prediction procedure. Specifically, it introduces a class weight $\mathbf {w}$ to weigh source domain examples by class. Based on the weighted source domain, the domain shift problem is resolved in two steps. In the first step, it applies DIRL on the target domain and the weighted source domain, aiming to alleviate the influence of the shift of $\rm {P}(\rm {Y})$ during the alignment of $\rm {P}(\rm {X}|\rm {Y})$. In the second step, it uses $\mathbf {w}$ to reweigh the supervised classifier $\rm {P}_S(\rm {Y}|\rm {X})$ obtained in the first step for target domain label prediction. We detail these two steps in §SECREF10 and §SECREF14, respectively.

Weighted Domain Invariant Representation Learning ::: Align $\rm {P}(\rm {X}|\rm {Y})$ with Class Weight
The motivation behind this practice is to adjust the data distribution of the source domain or the target domain to alleviate the shift of $\rm {P}(\rm {Y})$ across domains before applying DIRL. Since we only have labels for source domain data, we choose to adjust the data distribution of the source domain. To achieve this purpose, we introduce a trainable class weight $\mathbf {w}$ to reweigh source domain examples by class when performing DIRL, with $\mathbf {w}_i > 0$. Specifically, we hope that: and we denote by $\mathbf {w}^*$ the value of $\mathbf {w}$ that makes this equation hold. We shall see that when $\mathbf {w}=\mathbf {w}^*$, DIRL amounts to aligning $\rm {P}_S(G(\rm {X})|\rm {Y})$ with $\rm {P}_T(G(\rm {X})|\rm {Y})$ without the shift of $\rm {P}(\rm {Y})$. According to our analysis, we know that due to the shift of $\rm {P}(\rm {Y})$, there is a conflict between the training objectives of the supervised learning loss $\mathcal {L}_{sup}$ and the domain-invariant learning loss $\mathcal {L}_{inv}$, and the degree of conflict will decrease as $\rm {P}_S(\rm {Y})$ gets closer to $\rm {P}_T(\rm {Y})$. Therefore, during model training, $\mathbf {w}$ is expected to be optimized toward $\mathbf {w}^*$ since this makes $\rm {P}(\rm {Y})$ of the weighted source domain close to $\rm {P}_T(\rm {Y})$, so as to resolve the conflict.

We now show how to transfer existing DIRL models to their WDIRL counterparts with the above idea. Let $\mathbb {S}:\rm {P} \rightarrow \mathbb {R}$ denote a statistic function defined over a distribution $\rm {P}$. For example, the expectation function $\mathbb {E}(\rm {X})$ in $\mathbb {E}(\rm {X}_S) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}))$ is a concrete instantiation of $\mathbb {S}$. In general, to transfer models from DIRL to WDIRL, we should replace $\mathbb {S}(\rm {P}_S(\rm {X}))$ defined in $\mathcal {L}_{inv}$ with its class-weighted counterpart. Take the CMD metric as an example. In WDIRL, the revised form of ${\text{CMD}}_K$ is defined by: Here, $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i) \equiv \mathbb {E}(\rm {X})(\rm {P}_S(\rm {X}|\rm {Y}=i))$ denotes the expectation of $\rm {X}$ over distribution $\rm {P}_S(\rm {X}|\rm {Y}=i)$. Note that both $\rm {P}_S(\rm {Y}=i)$ and $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ can be estimated using source labeled data, and $\mathbb {E}(\rm {X}_T)$ can be estimated using target unlabeled data. As for the adversarial-learning-based DIRL methods, e.g., DANN BIBREF2, the revised domain-invariant loss can be precisely defined by: During model training, $D$ is optimized in the direction to minimize $\hat{\mathcal {L}}_d$, while $G$ and $\mathbf {w}$ are optimized to maximize $\hat{\mathcal {L}}_d$.
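A minimal sketch of how the class weight $\mathbf {w}$ enters the source-side statistics of $\widehat{\text{CMD}}_K$, assuming a PyTorch implementation: $\mathbb {E}(\rm {X}_S)$ is replaced by the per-class expectations $\mathbb {E}(\rm {X}_S|\rm {Y}_S=i)$ weighted by $\mathbf {w}_i \rm {P}_S(\rm {Y}=i)$. Only the first-moment term is shown; the higher-order terms follow the same substitution, and the softplus positivity constraint on $\mathbf {w}$ is an implementation choice, not something specified in the text.

```python
import torch
import torch.nn.functional as F

class ClassWeight(torch.nn.Module):
    """Trainable per-class weight w with w_i > 0 (positivity via softplus,
    an implementation assumption)."""
    def __init__(self, num_classes):
        super().__init__()
        self.raw = torch.nn.Parameter(torch.zeros(num_classes))

    def forward(self):
        return F.softplus(self.raw) + 1e-6

def weighted_first_moment_gap(gs, ys, gt, w, p_s_y):
    """First-moment term of the revised CMD (sketch).

    gs: (n_s, d) source representations G(x); ys: (n_s,) source labels.
    gt: (n_t, d) target representations.
    w:  (L,) class weights; p_s_y: (L,) empirical source label distribution.
    Replaces E[G(X_S)] by sum_i w_i * P_S(Y=i) * E[G(X_S) | Y=i]; assumes every
    class appears in the mini-batch.
    """
    terms = []
    for i in range(len(p_s_y)):
        mask = ys == i
        terms.append(w[i] * p_s_y[i] * gs[mask].mean(dim=0))
    weighted_mean_s = torch.stack(terms).sum(dim=0)
    return torch.norm(weighted_mean_s - gt.mean(dim=0), p=2)
```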
In the following, we denote by $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$ the equivalent loss defined over $G$ for the revised version of domain adversarial learning. The general task loss in WDIRL is defined by: where $\hat{\mathcal {L}}_{inv}$ is a unified representation of the domain-invariant loss in WDIRL, such as $\widehat{\text{CMD}}_K$ and $\widehat{\text{JSD}}(\rm {P}_S, \rm {P}_T)$.

Weighted Domain Invariant Representation Learning ::: Align $\rm {P}(\rm {Y}|\rm {X})$ with Class Weight
In the above step, we align $\rm {P}(\rm {X}|\rm {Y})$ across domains by performing domain-invariant learning on the class-weighted source domain and the original target domain. In this step, we deal with the shift of $\rm {P}(\rm {Y})$. Suppose that we have successfully resolved the shift of $\rm {P}(\rm {X}|\rm {Y})$ with $G$, i.e., $\rm {P}_S(G(\rm {X})|\rm {Y})=\rm {P}_T(G(\rm {X})|\rm {Y})$. Then, according to the work of BIBREF29, we have: where $\gamma (\rm {Y}=i)={\rm {P}_T(\rm {Y}=i)}/{\rm {P}_S(\rm {Y}=i)}$. Of course, in most real-world tasks, we do not know the value of $\gamma (\rm {Y}=i)$. However, note that $\gamma (\rm {Y}=i)$ is exactly the expected class weight $\mathbf {w}^*_i$. Therefore, a natural practice in this step is to estimate $\gamma (\rm {Y}=i)$ with the $\mathbf {w}_i$ obtained in the first step and estimate $\rm {P}_T(\rm {Y}|G(\rm {X}))$ with:

In summary, to transfer methods of the DIRL paradigm to WDIRL, we should: first, revise the definition of $\mathcal {L}_{inv}$, obtaining its corresponding WDIRL form $\hat{\mathcal {L}}_{inv}$; then, perform supervised learning and domain-invariant representation learning on $\mathcal {D}_S$ and $\mathcal {D}_T$ according to Eq. (DISPLAY_FORM13), obtaining a supervised classifier $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ and a class weight vector $\mathbf {w}$; and finally, adjust $\rm {P}_S(\rm {Y}|\rm {X}; \mathbf {\Phi })$ using $\mathbf {w}$ according to Eq. (DISPLAY_FORM16) and obtain the target domain classifier $\rm {P}_T(\rm {Y}|\rm {X}; \mathbf {\Phi })$.
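A sketch of the second step, under the assumption that Eq. (DISPLAY_FORM16) takes the usual prior-correction form $\rm {P}_T(\rm {Y}=i|x) \propto \mathbf {w}_i \, \rm {P}_S(\rm {Y}=i|x)$, which is consistent with $\gamma (\rm {Y}=i)$ being the expected class weight $\mathbf {w}^*_i$:

```python
import torch

def adjust_posterior(p_s_y_given_x, w):
    """Adjust the source classifier for target-domain label prediction.

    p_s_y_given_x: (n, L) class probabilities from P_S(Y|X; Phi).
    w:             (L,)   learned class weights, estimating P_T(Y)/P_S(Y).
    Uses P_T(Y=i|x) proportional to w_i * P_S(Y=i|x) and renormalizes;
    the exact form of Eq. (DISPLAY_FORM16) is an assumption here.
    """
    unnorm = p_s_y_given_x * w.unsqueeze(0)
    return unnorm / unnorm.sum(dim=1, keepdim=True)

# The overall WDIRL training objective (sketch) is then
#   loss = sup_loss + alpha * inv_loss
# where inv_loss is a revised domain-invariant loss such as the weighted CMD above.
```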
Experiment ::: Experiment Design
Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from. In addition, we studied the impact of each step described in §SECREF10 and §SECREF14 on our proposed solution, respectively. To perform the study, we compared the performance of the following models:
SO: the source-only model trained using source domain labeled data without any domain adaptation.
CMD: the central-moment-based domain adaptation model BIBREF3 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{CMD}_K$.
DANN: the adversarial-learning-based domain adaptation model BIBREF2 of the original DIRL framework that implements $\mathcal {L}_{inv}$ with $\text{JSD}(\rm {P}_S, \rm {P}_T)$.
$\text{CMD}^\dagger $: the weighted version of the CMD model that only applies the first step (described in §SECREF10) of our proposed method.
$\text{DANN}^\dagger $: the weighted version of the DANN model that only applies the first step of our proposed method.
$\text{CMD}^{\dagger \dagger }$: the weighted version of the CMD model that applies both the first and second (described in §SECREF14) steps of our proposed method.
$\text{DANN}^{\dagger \dagger }$: the weighted version of the DANN model that applies both the first and second steps of our proposed method.
$\text{CMD}^{*}$: a variant of $\text{CMD}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ (estimated from target labeled data) to $\mathbf {w}$ and fixes this value during model training.
$\text{DANN}^{*}$: a variant of $\text{DANN}^{\dagger \dagger }$ that assigns $\mathbf {w}^*$ to $\mathbf {w}$ and fixes this value during model training.

Intrinsically, SO provides an empirical lower bound for the domain adaptation methods. $\text{CMD}^{*}$ and $\text{DANN}^{*}$ provide the empirical upper bounds of $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$, respectively. In addition, by comparing the performance of $\text{CMD}^{*}$ and $\text{DANN}^{*}$ with that of $\text{SO}$, we can assess the effectiveness of the DIRL framework when $\rm {P}(\rm {Y})$ does not shift across domains. By comparing $\text{CMD}^\dagger $ with $\text{CMD}$, or comparing $\text{DANN}^\dagger $ with $\text{DANN}$, we can assess the effectiveness of the first step of our proposed method. By comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}^{\dagger }$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}^{\dagger }$, we can assess the impact of the second step of our proposed method. And finally, by comparing $\text{CMD}^{\dagger \dagger }$ with $\text{CMD}$, or comparing $\text{DANN}^{\dagger \dagger }$ with $\text{DANN}$, we can assess the general effectiveness of our proposed solution.

Experiment ::: Dataset and Task Design
We conducted experiments on the Amazon reviews dataset BIBREF9, which is a benchmark dataset in the cross-domain sentiment analysis field. This dataset contains Amazon product reviews from four different product domains: Books (B), DVD (D), Electronics (E), and Kitchen (K) appliances. Each review is originally associated with a rating of 1-5 stars and is encoded in a 5,000-dimensional feature vector of bag-of-words unigrams and bigrams.

Experiment ::: Dataset and Task Design ::: Binary-Class.
From this dataset, we constructed 12 binary-class cross-domain sentiment analysis tasks: B$\rightarrow $D, B$\rightarrow $E, B$\rightarrow $K, D$\rightarrow $B, D$\rightarrow $E, D$\rightarrow $K, E$\rightarrow $B, E$\rightarrow $D, E$\rightarrow $K, K$\rightarrow $B, K$\rightarrow $D, K$\rightarrow $E. Following the setting of previous works, we treated a review as class `1' if it was ranked up to 3 stars, and as class `2' if it was ranked 4 or 5 stars. For each task, $\mathcal {D}_S$ consisted of 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 1,500 examples of class `1' and 500 examples of class `2'. In addition, since it is reasonable to assume that $\mathcal {D}_T$ can reveal the distribution of target domain data, we controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$. Using the same label assigning mechanism, we also studied model performance over different degrees of $\rm {P}(\rm {Y})$ shift, which was evaluated by the maximum value of $\rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i), \forall i=1, \cdots , L$. Please refer to Appendix C for more detail about the task design for this study.
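As a small worked example of the shift-degree measure just described, the helper below computes $\max _i \rm {P}_S(\rm {Y}=i)/\rm {P}_T(\rm {Y}=i)$ for the binary-class split above (balanced $\mathcal {D}_S$, 1,500/500 $\mathcal {D}_T$). The resulting value of 2.0 follows directly from that split and is not a number reported by the paper.

```python
import numpy as np

def label_shift_degree(p_s, p_t):
    """Degree of P(Y) shift as described above: max_i P_S(Y=i) / P_T(Y=i)."""
    return float(np.max(np.asarray(p_s) / np.asarray(p_t)))

# Binary-class task construction: D_S is balanced (1,000 / 1,000),
# D_T is skewed (1,500 of class `1', 500 of class `2').
p_s = [0.5, 0.5]
p_t = [1500 / 2000, 500 / 2000]
print(label_shift_degree(p_s, p_t))   # max(0.5/0.75, 0.5/0.25) = 2.0
```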
Experiment ::: Dataset and Task Design ::: Multi-Class.
We additionally constructed 12 multi-class cross-domain sentiment classification tasks. Tasks were designed to distinguish reviews of 1 or 2 stars (class 1) from those of 4 stars (class 2) and those of 5 stars (class 3). For each task, $\mathcal {D}_S$ contained 1,000 examples of each class, and $\mathcal {D}_T$ consisted of 500 examples of class 1, 1,500 examples of class 2, and 1,000 examples of class 3. Similarly, we also controlled the target domain testing dataset to have the same class ratio as $\mathcal {D}_T$.

Experiment ::: Implementation Detail
For all studied models, we implemented $G$ and $f$ using the same architectures as those in BIBREF3. For the DANN-based methods (i.e., DANN, $\text{DANN}^{\dagger }$, $\text{DANN}^{\dagger \dagger }$, and $\text{DANN}^{*}$), we implemented the discriminator $D$ using a 50-dimensional hidden layer with ReLU activation and a linear classification layer. Hyper-parameter $K$ of $\text{CMD}_K$ and $\widehat{\text{CMD}}_K$ was set to 5 as suggested by BIBREF3. Model optimization was performed using RmsProp BIBREF30. The initial learning rate of $\mathbf {w}$ was set to 0.01, while that of the other parameters was set to 0.005 for all tasks. Hyper-parameter $\alpha $ was set to 1 for all of the tested models. We searched for this value in the range $\alpha =[1, \cdots , 10]$ on task B $\rightarrow $ K. Within the search, the label distribution was set to be uniform, i.e., $\rm {P}(\rm {Y}=i)=1/L$, for both domain B and K. We chose the value that maximized the performance of CMD on testing data of domain K. You may notice that this practice conflicts with the setting of unsupervised domain adaptation, in which we do not have labeled data of the target domain for training or development. However, we argue that this practice would not make the model comparison unfair since all of the tested models shared the same value of $\alpha $ and $\alpha $ was not directly fine-tuned on any tested task. With the same consideration, for every tested model, we reported its best performance achieved on testing data of the target domain during its training.

To initialize $\mathbf {w}$, we used the label predictions of the source-only model. Specifically, let $\rm {P}_{SO}(\rm {Y}|\rm {X}; \mathbf {\theta }_{SO})$ denote the trained source-only model. We initialized $\mathbf {w}_i$ by: Here, $\mathbb {I}$ denotes the indicator function. To offer an intuitive understanding of this strategy, we report the performance of WCMD$^{\dagger \dagger }$ over different initializations of $\mathbf {w}$ on 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) binary-class domain adaptation tasks in Figure FIGREF33. Here, we say that domains B and D form one group, and domains E and K form another, since B and D are similar, as are E and K, but the two groups are different from one another BIBREF9. Note that $\rm {P}_{S}(\rm {Y}=1)=0.5$ is a constant, which is estimated using source labeled data. From the figure, we can obtain three main observations. First, WCMD$^{\dagger \dagger }$ generally outperformed its CMD counterparts with different initializations of $\mathbf {w}$. Second, it was better to initialize $\mathbf {w}$ with a relatively balanced value, i.e., $\mathbf {w}_i \rm {P}_S(\rm {Y}=i) \rightarrow \frac{1}{L}$ (in this experiment, $L=2$). Finally, $\mathbf {w}^0$ was often a good initialization of $\mathbf {w}$, indicating the effectiveness of the above strategy.
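The initialization formula itself is elided in the extract above (only the indicator function is mentioned), so the snippet below is a plausible reconstruction rather than the paper's exact equation: it estimates $\rm {P}_T(\rm {Y}=i)$ by the fraction of target examples that the source-only model assigns to class $i$ and divides by the empirical $\rm {P}_S(\rm {Y}=i)$, matching the intent that $\mathbf {w}_i$ approximates $\rm {P}_T(\rm {Y}=i)/\rm {P}_S(\rm {Y}=i)$.

```python
import numpy as np

def init_class_weight(p_so_target, p_s_y):
    """Plausible reconstruction of the w initialization (assumption: the exact
    formula is not reproduced in the text above).

    p_so_target: (n_t, L) probabilities P_SO(Y|x) for target unlabeled examples.
    p_s_y:       (L,)     empirical source label distribution P_S(Y).
    Estimates P_T(Y=i) by the fraction of target examples whose predicted class
    (argmax, i.e. the indicator over predictions) is i, then sets
    w_i^0 = estimated P_T(Y=i) / P_S(Y=i).
    """
    preds = p_so_target.argmax(axis=1)
    num_classes = len(p_s_y)
    p_t_hat = np.array([(preds == i).mean() for i in range(num_classes)])
    return p_t_hat / np.asarray(p_s_y)
```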
Experiment ::: Main Result
Table TABREF27 shows model performance on the 12 binary-class cross-domain tasks. From this table, we can obtain the following observations. First, CMD and DANN underperformed the source-only model (SO) on all of the 12 tested tasks, indicating that DIRL in the studied situation will degrade the domain adaptation performance rather than improve it. This observation confirms our analysis. Second, $\text{CMD}^{\dagger \dagger }$ consistently outperformed CMD and SO. This observation shows the effectiveness of our proposed method for addressing the problem of the DIRL framework in the studied situation. A similar conclusion can also be obtained by comparing the performance of $\text{DANN}^{\dagger \dagger }$ with that of DANN and SO. Third, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ consistently outperformed $\text{CMD}$ and DANN, respectively, which shows the effectiveness of the first step of our proposed method. Finally, on most of the tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ outperformed $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively.

Figure FIGREF35 depicts the relative improvement, e.g., $(\text{Acc}(\text{CMD})-\text{Acc}(\text{SO}))/\text{Acc}(\text{SO})$, of the domain adaptation methods over the SO baseline under different degrees of $\rm {P}(\rm {Y})$ shift, on two binary-class domain adaptation tasks (refer to Appendix C for results of the other models on other tasks). From the figure, we can see that the performance of CMD generally degraded as the degree of $\rm {P}(\rm {Y})$ shift increased. In contrast, our proposed model $\text{CMD}^{\dagger \dagger }$ performed robustly across varying degrees of $\rm {P}(\rm {Y})$ shift. Moreover, it achieved performance close to the upper bound characterized by $\text{CMD}^{*}$. This again verified the effectiveness of our solution.

Table TABREF34 reports model performance on the 2 within-group (B$\rightarrow $D, E$\rightarrow $K) and the 2 cross-group (B$\rightarrow $K, D$\rightarrow $E) multi-class domain adaptation tasks (refer to Appendix D for results on the other tasks). From this table, we observe that on some tested tasks, $\text{CMD}^{\dagger \dagger }$ and $\text{DANN}^{\dagger \dagger }$ did not greatly outperform, or even slightly underperformed, $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$, respectively. A possible explanation of this phenomenon is that the distribution of $\mathcal {D}_T$ also differs from that of the target domain testing dataset. Therefore, the value of $\mathbf {w}$ estimated or learned using $\mathcal {D}_T$ is not fully suitable for application to the testing dataset. This explanation is supported by the observation that $\text{CMD}^{\dagger }$ and $\text{DANN}^{\dagger }$ also slightly outperformed $\text{CMD}^{*}$ and $\text{DANN}^{*}$ on these tasks, respectively.

Conclusion
In this paper, we studied the problem of the popular domain-invariant representation learning (DIRL) framework for domain adaptation when $\rm {P}(\rm {Y})$ changes across domains. To address the problem, we proposed a weighted version of DIRL (WDIRL). We showed that existing methods of the DIRL framework can be easily transferred to our WDIRL framework. Extensive experimental studies on benchmark cross-domain sentiment analysis datasets verified our analysis and showed the effectiveness of our proposed solution.
Through the experiments, we empirically studied our analysis on DIRL and the effectiveness of our proposed solution in dealing with the problem it suffered from.
cc5d3903913fa2e841f900372ec74b0efd5e0c71
cc5d3903913fa2e841f900372ec74b0efd5e0c71_0
Q: Which sentiment analysis tasks are addressed?
12 binary-class classification and multi-class classification of reviews based on rating
c95fd189985d996322193be71cf5be8858ac72b5
c95fd189985d996322193be71cf5be8858ac72b5_0
Q: Which NLP area have the highest average citation for woman author? Text: Introduction The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts. Size Q. How big is the ACL Anthology (AA)? How is it changing with time? A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). 
We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letters to the editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (in 2018 alone LREC had over 700 main conference papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years), has about 45% as many main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years): $\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. 
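(Aside: the "active in the last N years" counts above amount to taking the union of per-year author sets over a sliding window. Below is a minimal sketch of that computation; the records and names are hypothetical, and a real run over AA metadata would also need author-name normalization and disambiguation.)

```python
from collections import defaultdict

# Hypothetical stand-ins for AA paper metadata: (paper_id, year, author list).
# The real analysis uses the 44,896 AA articles; names here are made up.
papers = [
    ("P1", 2017, ["A. Rao", "B. Chen"]),
    ("P2", 2018, ["B. Chen", "C. Diaz"]),
    ("P3", 2015, ["D. Epp"]),
]

papers_per_year = defaultdict(int)
authors_by_year = defaultdict(set)
for pid, year, authors in papers:
    papers_per_year[year] += 1
    authors_by_year[year].update(authors)

def active_authors(last_n_years, end_year=2018):
    """Authors with at least one AA paper in the window ending at end_year."""
    window = range(end_year - last_n_years + 1, end_year + 1)
    return set().union(*(authors_by_year[y] for y in window))

# Analogous to the ~12k (2017-2018) and ~17.5k (2015-2018) figures above.
print(len(active_authors(2)), len(active_authors(4)))
```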
SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). 
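(Aside: the name-based estimate described above boils down to thresholding the aggregated SSA name counts at 99% and matching first-author first names against the resulting lists. A minimal sketch follows, using toy counts and made-up author records; only the 99% threshold and the unmatched-name caveat come from the text above.)

```python
from collections import Counter, defaultdict

# Toy stand-in for aggregated SSA counts: lowercased name -> {"F": count, "M": count}.
ssa_counts = {
    "maria": {"F": 99900, "M": 40},
    "john":  {"F": 30,    "M": 99950},
    "jean":  {"F": 52000, "M": 48000},   # ambiguous; excluded by the 99% threshold
}

def gender_lexicons(counts, threshold=0.99):
    female, male = set(), set()
    for name, c in counts.items():
        total = c["F"] + c["M"]
        if total and c["F"] / total >= threshold:
            female.add(name)
        elif total and c["M"] / total >= threshold:
            male.add(name)
    return female, male

female_names, male_names = gender_lexicons(ssa_counts)

# Toy stand-in for AA records: (year, first name of the paper's first author).
first_authors = [(1999, "Maria"), (1999, "John"), (2018, "Jean"), (2018, "Maria")]

tally = defaultdict(Counter)   # year -> counts of F / M / unmatched first authors
for year, name in first_authors:
    key = name.lower()
    label = "F" if key in female_names else "M" if key in male_names else "unmatched"
    tally[year][label] += 1    # "unmatched" covers, e.g., most romanized Chinese names

for year in sorted(tally):
    matched = tally[year]["F"] + tally[year]["M"]
    if matched:
        print(year, f"FFA% among matched papers: {100 * tally[year]['F'] / matched:.1f}")
```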
In fact there seems to even be a slight downward trend in recent years. The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
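(Aside: the academic-age definition above reduces to "publication year minus the author's first AA publication year, plus one". A minimal sketch with hypothetical author-year records:)

```python
import statistics
from collections import defaultdict

# Hypothetical (author, publication year) pairs extracted from AA metadata.
author_years = [
    ("K. Li", 2001), ("K. Li", 2018),
    ("R. Novak", 2018),
    ("S. Mehta", 2014), ("S. Mehta", 2016), ("S. Mehta", 2018),
]

first_year = {}
for author, year in author_years:
    first_year[author] = min(year, first_year.get(author, year))

def academic_age(author, year):
    """NLP academic age: 1 in the year of the author's first AA paper."""
    return year - first_year[author] + 1

ages_2018 = [academic_age(a, 2018) for a, y in author_years if y == 2018]
print(statistics.mean(ages_2018), statistics.median(ages_2018))
print(sum(a == 1 for a in ages_2018) / len(ages_2018))  # share of first-time authors
```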
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper. 
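(Aside: the language-coverage estimate above is a whole-word scan of paper titles for language names. A minimal sketch, with a truncated stand-in for the 122-language list and made-up titles:)

```python
import re
from collections import Counter

# Truncated stand-in for the 122 Wiktionary-indexed language names used above.
languages = ["English", "Chinese", "Arabic", "Hindi", "French", "Swahili", "Thai"]
patterns = {lang: re.compile(r"\b" + re.escape(lang) + r"\b", re.IGNORECASE)
            for lang in languages}

def languages_in_title(title):
    return [lang for lang, pat in patterns.items() if pat.search(title)]

# Hypothetical paper titles.
titles = [
    "Neural Machine Translation for Hindi-English",
    "A Dependency Treebank for Thai",
    "Sentiment Analysis of Product Reviews",   # language unnamed -- often English
]

counts = Counter(lang for t in titles for lang in languages_in_title(t))
print(counts.most_common())
```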
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also much smaller.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A. 
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published from January 2016 to June 2019 (the time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say, parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a period of steep decline in the early 1990s, and has seen a gradual decline ever since. One can enter multiple terms in the search box or shift/command-click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for three bigrams (statistical machine, neural machine, and machine translation): Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years, including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work. 
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at the most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers, and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in the AA'. 
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the URL to be taken directly to the paper’s landing page on the ACL Anthology website. That page includes links to meta information, the PDF, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) the third highest average number of citations. The drop-off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What is the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A. 
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seems to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive 1 or no citations. This is in contrast to system demo papers that have average and median citations that are higher or comparable to workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What are the average number of citations received by the long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then ACL is by far the venue with the most number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (Figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010. 
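(Aside: the recurring gap between average and median citations noted throughout this section is a consequence of citation skew: a few very heavily cited papers pull the mean up while the median stays low. A toy illustration with made-up citation counts:)

```python
import statistics

# Made-up citation counts for ten papers of one paper type.
citations = [0, 1, 2, 3, 5, 8, 12, 20, 60, 900]

print(statistics.mean(citations))    # 101.1 -- pulled up by the single 900-citation paper
print(statistics.median(citations))  # 6.5   -- closer to what a typical paper receives
```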
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average (surpassing those of EACL and COLING); however their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) are likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, tend to often obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600). 
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
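(Aside, before moving on: the per-bigram aggregation used in the areas-of-research analysis above, i.e., total, average, and median citations over papers whose titles contain a bigram, keeping only bigrams with at least 30 papers, could be sketched roughly as below; the titles and citation counts are made up, and only the threshold and orderings come from the text.)

```python
import statistics
from collections import defaultdict

def title_bigrams(title):
    # The real analysis also ignores function words; omitted here for brevity.
    tokens = [t.lower().strip(",.:;!?") for t in title.split()]
    return {" ".join(pair) for pair in zip(tokens, tokens[1:])}  # set: one count per paper

# Hypothetical (title, citation count) pairs; the real analysis uses AA' papers up to 2016.
papers = [
    ("Statistical Machine Translation with Phrase Tables", 120),
    ("Neural Machine Translation by Jointly Learning to Align", 900),
    ("Sentiment Analysis of Tweets", 150),
]

cites_per_bigram = defaultdict(list)
for title, cites in papers:
    for bigram in title_bigrams(title):
        cites_per_bigram[bigram].append(cites)

MIN_PAPERS = 1   # 30 in the analysis above; lowered only so the toy data produces output
rows = [(b, sum(c), statistics.mean(c), statistics.median(c))
        for b, c in cites_per_bigram.items() if len(c) >= MIN_PAPERS]
rows.sort(key=lambda r: r[1], reverse=True)   # order by total citations
for bigram, total, avg, med in rows[:5]:
    print(bigram, total, round(avg, 1), med)
```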
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph, we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For this figure, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise not to draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus, we use the same setup discussed earlier in the section on demographics: the United States Social Security Administration database of names and genders of newborns is used to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods. 
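(Aside: the binning and gender-split procedure used for these figures, i.e., assign each paper to the academic age of its first author at publication time, group the higher ages, and compute per-bin and per-gender statistics, could look roughly like this; the bin boundaries are those stated above, and the paper records are hypothetical.)

```python
import statistics
from collections import defaultdict

def age_bin(age):
    """Bins used above: 1..9 individually, then 10-14, 15-19, 20-34, 35-50."""
    if age <= 9:
        return str(age)
    for lo, hi in [(10, 14), (15, 19), (20, 34), (35, 50)]:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "50+"

# Hypothetical papers: (first author's academic age at publication, inferred gender, citations).
papers = [(1, "F", 4), (1, "M", 9), (2, "F", 31), (3, "F", 40), (7, "M", 260), (16, "M", 12)]

bins = defaultdict(lambda: defaultdict(list))   # bin -> group -> citation counts
for age, gender, cites in papers:
    bins[age_bin(age)]["all"].append(cites)
    bins[age_bin(age)][gender].append(cites)

for b in sorted(bins, key=lambda k: int(k.split("-")[0].rstrip("+"))):
    cites = bins[b]["all"]
    by_gender = {g: round(statistics.mean(c), 1) for g, c in bins[b].items() if g != "all"}
    print(b, "#papers:", len(cites), "avg:", round(statistics.mean(cites), 1),
          "median:", statistics.median(cites), "avg by gender:", by_gender)
```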
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s where male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representations of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted with a small citation gap in the 15th to 34th years period. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community. 
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.
sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation
4a61260d6edfb0f93100d92e01cf655812243724
4a61260d6edfb0f93100d92e01cf655812243724_0
Q: Which 3 NLP areas are cited the most? Text: Introduction The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts. Size Q. How big is the ACL Anthology (AA)? How is it changing with time? A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. 
(Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (In 2018 alone LREC had over 700 main conferences papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years) has about 45% of the number of main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years):$\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. 
It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact there seems to even be a slight downward trend in recent years. 
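As a rough illustration (not the authors' actual code) of the name-based method described above, the sketch below builds the high-confidence female and male name sets from the public US SSA baby-names files and computes the female first author percentage per year. The papers table and its columns (year, first_author_first_name) are hypothetical stand-ins for the AA metadata; the assumed SSA file format is the published one, with comma-separated name, gender, and count fields. Restricting to names whose gender probability is at least 0.99 mirrors the AA* subset used above.

```python
# Sketch only: estimate female first author (FFA) percentage per year.
# Assumes SSA files named yob*.txt with "name,gender,count" rows, and a papers
# DataFrame with hypothetical 'year' and 'first_author_first_name' columns.
import glob
from collections import defaultdict

import pandas as pd


def gendered_name_sets(ssa_dir: str, threshold: float = 0.99):
    """Return (female_names, male_names): names whose SSA gender probability is >= threshold."""
    counts = defaultdict(lambda: {"F": 0, "M": 0})
    for path in glob.glob(f"{ssa_dir}/yob*.txt"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                name, gender, n = line.strip().split(",")
                counts[name.lower()][gender] += int(n)
    female, male = set(), set()
    for name, c in counts.items():
        total = c["F"] + c["M"]
        if total and c["F"] / total >= threshold:
            female.add(name)
        elif total and c["M"] / total >= threshold:
            male.add(name)
    return female, male


def ffa_percent_by_year(papers: pd.DataFrame, female: set, male: set) -> pd.Series:
    """FFA% per year over the subset of papers whose first-author name is confidently gendered."""
    df = papers.assign(name=papers["first_author_first_name"].str.lower())
    df = df[df["name"].isin(female | male)]
    return df.groupby("year")["name"].apply(lambda s: s.isin(female).mean() * 100)
```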
The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
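A minimal sketch of the academic-age computation defined above (a first AA paper in 2001 and a latest one in 2018 gives an age of 18 in 2018). The author_years table, with one row per author-paper pair and hypothetical 'author' and 'year' columns, stands in for the AA author index.

```python
# Sketch: NLP academic age = (year of paper) - (author's first AA year) + 1.
import pandas as pd


def with_academic_age(author_years: pd.DataFrame) -> pd.DataFrame:
    df = author_years.copy()
    df["first_year"] = df.groupby("author")["year"].transform("min")
    df["academic_age"] = df["year"] - df["first_year"] + 1
    return df


def average_age_of_publishing_authors(author_years: pd.DataFrame, year: int) -> float:
    """Average academic age of the unique authors who published in a given year."""
    df = with_academic_age(author_years)
    active = df[df["year"] == year].drop_duplicates("author")
    return active["academic_age"].mean()
```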
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper. 
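The language-representation counts behind the treemap can be approximated with a whole-word, case-insensitive search over paper titles, along the lines of this sketch; the short LANGUAGES list is only a placeholder for the 122 Wiktionary-indexed language names mentioned above.

```python
# Sketch: count AA paper titles that mention each language name (whole-word, case-insensitive).
import re
from collections import Counter

LANGUAGES = ["English", "Chinese", "Arabic", "French", "German", "Hindi", "Swahili"]  # placeholder subset


def language_mention_counts(titles, languages=LANGUAGES) -> Counter:
    patterns = {lang: re.compile(rf"\b{re.escape(lang)}\b", re.IGNORECASE) for lang in languages}
    counts = Counter()
    for title in titles:
        for lang, pat in patterns.items():
            if pat.search(title):
                counts[lang] += 1  # count each title at most once per language
    return counts
```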
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also relatively small.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A. 
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published from January 2016 to June 2019 (the time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a period of steep decline in the early 1990s, and has been in gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for three bigrams: statistical machine, neural machine, and machine translation. Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work. 
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in AA'. 
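A minimal sketch of the aggregation behind these two views (total citations per publication year, and the most cited papers), assuming a hypothetical AA' table with 'title', 'year', 'venue', and 'citations' columns, the citation counts coming from a Google Scholar extraction such as the one described above:

```python
# Sketch: total citations by publication year, and the overall most cited papers in AA'.
import pandas as pd


def citations_by_year(aa: pd.DataFrame) -> pd.Series:
    return aa.groupby("year")["citations"].sum()


def most_cited(aa: pd.DataFrame, n: int = 20) -> pd.DataFrame:
    cols = ["title", "year", "venue", "citations"]
    return aa.sort_values("citations", ascending=False).head(n)[cols]
```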
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the url to be taken directly to the paper’s landing page in the ACL Anthology website. That page includes links to meta information, the pdf, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) the third highest average number of citations. The drop-off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What is the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A. 
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seem to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive one or no citations. This is in contrast to system demo papers, which have average and median citations that are higher than or comparable to those of workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain a much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What is the average number of citations received by the long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then, ACL has by far been the venue with the largest number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-a-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (The figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010. 
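A sketch of the aggregate statistics used in this and the previous subsection (average and median citations by 5-year publication span, and by paper type), again over a hypothetical AA' table with 'year', 'paper_type', and 'citations' columns:

```python
# Sketch: aggregate citation statistics by 5-year publication span and by paper type.
import pandas as pd


def citation_stats_by_span(aa: pd.DataFrame, span: int = 5) -> pd.DataFrame:
    df = aa[aa["year"] <= 2016].copy()               # allow roughly 2.5 years to accrue citations
    df["span_start"] = (df["year"] // span) * span   # e.g. 1990-1994 maps to 1990
    return df.groupby("span_start")["citations"].agg(["count", "sum", "mean", "median"])


def citation_stats_by_paper_type(aa: pd.DataFrame, start_year: int = 1965) -> pd.DataFrame:
    df = aa[(aa["year"] >= start_year) & (aa["year"] <= 2016)]
    return df.groupby("paper_type")["citations"].agg(["count", "mean", "median"])
```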
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average citations (surpassing those of EACL and COLING); however, their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i10-index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) is likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, often obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have a notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i10-index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600). 
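The citation-bin percentages and the i10-style count can be computed along these lines; this is a sketch, with the bin edges and column names assumed rather than taken from the authors' code:

```python
# Sketch: percentage of AA' papers per citation bin (0, 1-9, 10-99, 100-999, 1000+).
import pandas as pd

BIN_EDGES = [-1, 0, 9, 99, 999, float("inf")]
BIN_LABELS = ["0", "1-9", "10-99", "100-999", "1000+"]


def citation_bin_percentages(aa: pd.DataFrame) -> pd.Series:
    binned = pd.cut(aa["citations"], bins=BIN_EDGES, labels=BIN_LABELS)
    return (binned.value_counts(normalize=True).reindex(BIN_LABELS) * 100).round(1)


def i10_count(aa: pd.DataFrame) -> int:
    """Number of papers with ten or more citations (an i10-style count for a set of papers)."""
    return int((aa["citations"] >= 10).sum())
```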
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
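Before moving on, here is a sketch of the per-bigram citation analysis described above: bigrams are formed over the content words of each title, each paper is counted at most once per bigram, and only bigrams appearing in at least 30 papers are kept. The stopword list is a placeholder for whatever function-word list was actually used, and the column names are assumptions.

```python
# Sketch: citation statistics per title bigram (only bigrams occurring in >= 30 papers).
import re
from collections import defaultdict

import pandas as pd

STOPWORDS = {"a", "an", "and", "for", "from", "in", "of", "on", "the", "to", "via", "with"}  # placeholder


def title_bigrams(title: str):
    tokens = [t for t in re.findall(r"[a-z]+", title.lower()) if t not in STOPWORDS]
    return {" ".join(pair) for pair in zip(tokens, tokens[1:])}  # set: count each paper once per bigram


def citation_stats_by_bigram(aa: pd.DataFrame, min_papers: int = 30) -> pd.DataFrame:
    per_bigram = defaultdict(list)
    for title, cites in zip(aa["title"], aa["citations"]):
        for bigram in title_bigrams(title):
            per_bigram[bigram].append(cites)
    rows = [
        {"bigram": b, "papers": len(c), "total": sum(c),
         "average": sum(c) / len(c), "median": pd.Series(c).median()}
        for b, c in per_bigram.items() if len(c) >= min_papers
    ]
    return pd.DataFrame(rows).sort_values("total", ascending=False)
```

Sorting the resulting table by the average column instead of total would reproduce the second list discussed above.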
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph, we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise not to draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics to determine gender, using the United States Social Security Administration database of names and genders of newborns to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods. 
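A sketch combining the two splits discussed in this section: citation statistics by the first author's academic age at publication (with the same higher-age groupings used in the figure) and by inferred first-author gender. The column names ('first_author_academic_age', 'first_author_gender', 'citations') are hypothetical.

```python
# Sketch: citation statistics by first-author academic age bin and by inferred gender.
import pandas as pd


def age_bin(age: int) -> str:
    if age <= 9:
        return str(age)
    for lo, hi in [(10, 14), (15, 19), (20, 34), (35, 50)]:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "50+"


def citations_by_academic_age(aa: pd.DataFrame) -> pd.DataFrame:
    df = aa.copy()
    df["age_bin"] = df["first_author_academic_age"].apply(age_bin)
    return df.groupby("age_bin")["citations"].agg(["count", "mean", "median"])


def citations_by_gender(aa: pd.DataFrame) -> pd.DataFrame:
    # 'first_author_gender' holds 'female', 'male', or 'unknown' from the name-based inference above
    return aa.groupby("first_author_gender")["citations"].agg(["count", "mean", "median"])
```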
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s where male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representations of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted with a small citation gap in the 15th to 34th years period. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community. 
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.
machine translation, statistical machine, sentiment analysis
5c95808cd3ee9585f05ef573b0d4a52e86d04c60
5c95808cd3ee9585f05ef573b0d4a52e86d04c60_0
Q: Which journal and conference are cited the most in recent years? Text: Introduction The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts. Size Q. How big is the ACL Anthology (AA)? How is it changing with time? A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). 
We discard them for the analyses here. (Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (In 2018 alone LREC had over 700 main conferences papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years) has about 45% of the number of main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years):$\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. 
SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is little to no representation from certain nationalities, races, genders, languages, income levels, ages, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As an aside, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determined the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000, when the FFA percentages were highest (32.9% and 32.8%, respectively).
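As an aside on reproducibility, here is a minimal sketch of the name-based FFA% estimation described above. It is not the authors' code; the file names, column names, and the application of the 99% threshold to aggregated SSA counts are illustrative assumptions.

```python
# Sketch only: estimating the female-first-author percentage (FFA%) from first names.
# ssa_names.csv (name, n_female, n_male) and aa_papers.csv (year, first_author_first_name)
# are hypothetical stand-ins; the >=99% threshold mirrors the one described in the text.
import pandas as pd

names = pd.read_csv("ssa_names.csv")
names["p_female"] = names["n_female"] / (names["n_female"] + names["n_male"])
female_names = set(names.loc[names["p_female"] >= 0.99, "name"].str.lower())
male_names = set(names.loc[names["p_female"] <= 0.01, "name"].str.lower())

papers = pd.read_csv("aa_papers.csv")
first = papers["first_author_first_name"].str.lower()
papers["ffa_label"] = "unknown"
papers.loc[first.isin(female_names), "ffa_label"] = "female"
papers.loc[first.isin(male_names), "ffa_label"] = "male"

aa_star = papers[papers["ffa_label"] != "unknown"]        # the AA* subset with gender-associated names
ffa_by_year = aa_star.groupby("year")["ffa_label"].apply(lambda s: 100.0 * (s == "female").mean())
print(ffa_by_year.tail())
```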
In fact there seems to even be a slight downward trend in recent years. The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
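As an aside, the academic-age bookkeeping defined above is easy to reproduce. The sketch below is an illustration, not the authors' code; the file and column names are hypothetical.

```python
# Sketch only: NLP academic age = (publication year - year of one's first AA paper) + 1.
# aa_authorships.csv (author, year) is a hypothetical stand-in for the AA authorship records.
import pandas as pd

authorships = pd.read_csv("aa_authorships.csv").drop_duplicates(["author", "year"])
authorships["first_year"] = authorships.groupby("author")["year"].transform("min")
authorships["academic_age"] = authorships["year"] - authorships["first_year"] + 1

# Per publishing year: average and median academic age, and share of first-time AA authors
stats = authorships.groupby("year")["academic_age"].agg(["mean", "median"])
first_time_pct = authorships.groupby("year")["academic_age"].apply(lambda s: 100.0 * (s == 1).mean())
print(stats.tail())
print(first_time_pct.tail())
```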
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper. 
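A rough sketch of the title-matching step described above follows. The language list file and metadata table are hypothetical stand-ins, and whole-word, case-insensitive matching is one reasonable choice rather than necessarily the exact procedure used by the authors.

```python
# Sketch only: counting how often each language name appears in AA paper titles.
import re
import pandas as pd

papers = pd.read_csv("aa_papers.csv")
titles = papers["title"].fillna("")
with open("languages_122.txt") as f:                  # e.g., the Wiktionary-indexed language list
    languages = [line.strip() for line in f if line.strip()]

counts = {}
for lang in languages:
    pattern = re.compile(r"\b" + re.escape(lang) + r"\b", flags=re.IGNORECASE)
    counts[lang] = int(titles.str.contains(pattern).sum())

lang_counts = pd.Series(counts).sort_values(ascending=False)
print(lang_counts.head(10))      # the shading of a treemap like FIGREF27 is driven by counts like these
```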
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also much smaller.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A.
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published 2016 Jan to 2019 June (time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a period of steep decline in the early 1990s, and has seen a period of gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for the three bigrams statistical machine, neural machine, and machine translation: Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, we find it attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work.
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in AA'.
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the url to be taken directly to the paper’s landing page in the ACL Anthology website. That page includes links to meta information, the pdf, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) the third highest average number of citations. The drop-off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What are the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A.
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seems to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive 1 or no citations. This is in contrast to system demo papers that have average and median citations that are higher or comparable to workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What are the average number of citations received by the long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then ACL is by far the venue with the most number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (Figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010. 
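A minimal sketch of the kind of aggregate citation statistics discussed in this section is shown below. It is not the authors' code; aa_prime_papers.csv and its year, paper_type, and citations columns are hypothetical stand-ins for the AA' subset with Google Scholar citation counts.

```python
# Sketch only: average and median citations by paper type, restricted to papers old enough
# to have accumulated citations.
import pandas as pd

papers = pd.read_csv("aa_prime_papers.csv")
old_enough = papers[papers["year"] <= 2016]             # allow ~2.5 years to collect citations

by_type = (old_enough.groupby("paper_type")["citations"]
                     .agg(["count", "mean", "median"])
                     .sort_values("mean", ascending=False))
print(by_type)

recent = old_enough[old_enough["year"] >= 2010]          # the 2010-2016 variant of the same figures
print(recent.groupby("paper_type")["citations"].agg(["mean", "median"]))
```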
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average citations (surpassing those of EACL and COLING); however, their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i10-index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) is likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to the editor, which are more common in CL journal, often obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have a notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i10-index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600).
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that year 2 to year 14 is the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise to not draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics to determine gender, using the United States Social Security Administration database of names and genders of newborns to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and smaller difference in medians suggest that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.
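As an aside, the grouped citation statistics discussed here can be sketched as follows. This is not the authors' code; aa_prime_papers.csv is a hypothetical table assumed to already carry the name-based gender label (ffa_label), plus year and citations columns, and the period boundaries mirror the four time spans named in the text.

```python
# Sketch only: citation statistics grouped by inferred first-author gender and time period.
import pandas as pd

papers = pd.read_csv("aa_prime_papers.csv")
papers = papers[papers["year"] <= 2016].copy()

bins = [1964, 1989, 1999, 2009, 2016]
labels = ["1965-1989", "1990-1999", "2000-2009", "2010-2016"]
papers["period"] = pd.cut(papers["year"], bins=bins, labels=labels)

gap_by_period = (papers.groupby(["period", "ffa_label"])["citations"]
                       .agg(["count", "mean", "median"]))
print(gap_by_period)
```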
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s, when male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representation of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted, with a small citation gap, in the 15th to 34th year period. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, and language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community.
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.
CL Journal and EMNLP conference
b6f5860fc4a9a763ddc5edaf6d8df0eb52125c9e
b6f5860fc4a9a763ddc5edaf6d8df0eb52125c9e_0
Q: Which 5 languages appear most frequently in AA paper titles? Text: Introduction The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts. Size Q. How big is the ACL Anthology (AA)? How is it changing with time? A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. 
(Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (In 2018 alone LREC had over 700 main conferences papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years) has about 45% of the number of main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years):$\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. 
It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact there seems to even be a slight downward trend in recent years. 
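To make the name-based method above concrete, here is a minimal sketch (in Python with pandas) of how one could label first authors with a high-confidence name-to-gender lookup and compute the female first author percentage per year. The small `papers` table and the `name_gender` dictionary are illustrative placeholders, not the data or code used for this study; a real run would use the full AA metadata and the SSA names list with the 99% probability threshold.

```python
import pandas as pd

# Hypothetical AA paper records: one row per paper (illustrative values only).
papers = pd.DataFrame({
    "first_author": ["Maria Rossi", "Wei Zhang", "John Smith", "Anna Lee"],
    "year": [2016, 2017, 2017, 2018],
})

# Hypothetical lookup built from the US SSA baby-names data:
# first name -> "female" or "male", kept only when P(gender|name) >= 0.99.
name_gender = {"maria": "female", "john": "male", "anna": "female"}

def first_name_gender(full_name: str) -> str:
    """Return 'female', 'male', or 'unknown' for the author's first name."""
    first = full_name.split()[0].lower()
    return name_gender.get(first, "unknown")

papers["ffa_gender"] = papers["first_author"].map(first_name_gender)

# Keep only papers whose first-author name is confidently gendered (the AA* subset),
# then compute the female-first-author percentage per year.
aa_star = papers[papers["ffa_gender"] != "unknown"]
ffa_by_year = (
    aa_star.groupby("year")["ffa_gender"]
    .apply(lambda g: 100.0 * (g == "female").mean())
)
print(ffa_by_year)
```

Any such labeling inherits the caveats listed above: limited coverage of non-US names, the unreliability of romanized Chinese names, and the binary nature of the names data.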
The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
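As a rough illustration (not the study's actual pipeline), NLP academic age as defined above can be computed from author-year publication records along these lines; the `author_years` table and its column names are assumptions made for the sketch.

```python
import pandas as pd

# Hypothetical (author, publication year) records extracted from AA metadata.
author_years = pd.DataFrame({
    "author": ["A. Author", "A. Author", "B. Writer", "C. Newcomer"],
    "year":   [2001,        2018,        2015,        2018],
})

# Academic age in a given year = years since (and including) the author's first AA paper,
# so a first AA paper in 2001 and a paper in 2018 give an academic age of 18.
author_years["first_year"] = author_years.groupby("author")["year"].transform("min")
author_years["academic_age"] = author_years["year"] - author_years["first_year"] + 1

# Aggregate statistics for the authors who published in a given year (here, 2018).
pubs_2018 = author_years[author_years["year"] == 2018].drop_duplicates("author")
print("average academic age:", pubs_2018["academic_age"].mean())
print("median academic age:", pubs_2018["academic_age"].median())
print("first-time authors (%):", 100.0 * (pubs_2018["academic_age"] == 1).mean())
```

With the full AA author list, aggregations of this kind would produce numbers like the averages, medians, and first-time-author percentages discussed here.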
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper. 
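A simple way to approximate the title-scanning step described above is whole-word matching of language names against titles. The sketch below uses a tiny illustrative language list and a few made-up titles rather than the 122 Wiktionary languages and the actual AA titles.

```python
import re
from collections import Counter

# Illustrative stand-ins for the Wiktionary language names and the AA paper titles.
languages = ["English", "Chinese", "French", "Japanese", "Arabic", "Hindi"]
titles = [
    "A Treebank for French Parsing",
    "Neural Machine Translation from English to Hindi",
    "Sentiment Analysis of Arabic Tweets",
]

# Count, for each language, how many titles mention it (case-insensitive, whole-word match).
counts = Counter()
for title in titles:
    for lang in languages:
        if re.search(rf"\b{re.escape(lang)}\b", title, flags=re.IGNORECASE):
            counts[lang] += 1

for lang, n in counts.most_common():
    print(lang, n)
```

Real matching would need extra care with multi-word language names and with language names that are also ordinary English words.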
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single word) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980's than in recent years. (As would be expected, since the number of papers then was also much smaller.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A.
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published from January 2016 to June 2019 (the time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say parsing) in the search box at the bottom. Apart from filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a steep decline in the early 1990s, and has declined gradually ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for three bigrams: statistical machine, neural machine, and machine translation. Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have been comparatively much higher than other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work.
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in the AA'.
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the url to be taken directly to the paper’s landing page on the ACL Anthology website. That page includes links to meta information, the pdf, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) the third highest average number of citations. The drop-off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What are the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A.
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seems to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive 1 or no citations. This is in contrast to system demo papers that have average and median citations that are higher or comparable to workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What are the average number of citations received by the long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then ACL is by far the venue with the most number of short papers (compared to other venues). So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (Figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010. 
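For concreteness, here is a small sketch of how aggregate citation statistics of the kind discussed above (averages, medians, and citation bins by paper type) could be computed; the `papers` table, its column names, and the numbers in it are made up for illustration and are not the AA’ data.

```python
import pandas as pd

# Hypothetical AA'-style records with Google Scholar citation counts.
papers = pd.DataFrame({
    "paper_type": ["journal", "top-tier", "workshop", "demo", "shared task", "workshop"],
    "year":       [2012,      2014,       2013,       2015,   2016,          2010],
    "citations":  [240,       95,         12,         30,     18,            3],
})

# Restrict to papers published in 2016 or earlier so they have had time to gather citations.
eligible = papers[papers["year"] <= 2016]

# Average and median citations per paper type; medians guard against a few
# very highly cited papers pulling the average up.
stats = eligible.groupby("paper_type")["citations"].agg(["count", "mean", "median"])
print(stats.sort_values("mean", ascending=False))

# Citation bins (0, 1-9, 10-99, 100-999, 1000+), similar in spirit to an i-10-style view.
bins = pd.cut(eligible["citations"], [-1, 0, 9, 99, 999, float("inf")],
              labels=["0", "1-9", "10-99", "100-999", "1000+"])
print(bins.value_counts(normalize=True).sort_index() * 100)
```

Reporting the median alongside the average, as is done throughout this section, keeps a handful of very heavily cited papers from dominating the picture.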
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average citations (surpassing those of EACL and COLING); however, their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) is likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, tend to obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have a notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600).
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. Year 15 onwards, there is a steady decline in the citations. It is probably wise to not draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics, to determine gender using the United States Social Security Administration database of names and genders of newborns to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s where male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representation of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted, with a small citation gap, in the 15- to 34-year period. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community.
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.
English, Chinese, French, Japanese and Arabic
7955dbd79ded8ef4ae9fc28b2edf516320c1cb55
7955dbd79ded8ef4ae9fc28b2edf516320c1cb55_0
Q: What aspect of NLP research is examined? Text: Introduction The ACL Anthology (AA) is a digital repository of tens of thousands of articles on Natural Language Processing (NLP) / Computational Linguistics (CL). It includes papers published in the family of ACL conferences as well as in other NLP conferences such as LREC and RANLP. AA is the largest single source of scientific literature on NLP. This project, which we call NLP Scholar, examines the literature as a whole to identify broad trends in productivity, focus, and impact. We will present the analyses in a sequence of questions and answers. The questions range from fairly mundane to oh-that-will-be-good-to-know. Our broader goal here is simply to record the state of the AA literature: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? The answers are usually in the form of numbers, graphs, and inter-connected visualizations. We focus on the following aspects of NLP research: size, demographics, areas of research, impact, and correlation of citations with demographic attributes (age and gender). Target Audience: The analyses presented here are likely to be of interest to any NLP researcher. This might be particularly the case for those that are new to the field and wish to get a broad overview of the NLP publishing landscape. On the other hand, even seasoned NLP'ers have likely wondered about the questions raised here and might be interested in the empirical evidence. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). Thus, all subsequent papers and citations are not included in the analysis. A fresh data collection is planned for January 2020. Interactive Visualizations: The visualizations we are developing for this work (using Tableau) are interactive—so one can hover, click to select and filter, move sliders, etc. Since this work is high in the number of visualizations, the main visualizations are presented as figures in the paper and some sets of visualizations are pointed to online. The interactive visualizations and data will be made available through the first author's website after peer review. Related Work: This work builds on past research, including that on Google Scholar BIBREF0, BIBREF1, BIBREF2, BIBREF3, on the analysis of NLP papers BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, on citation intent BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15, and on measuring scholarly impact BIBREF16, BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21. Caveats and Ethical Considerations: We list several caveats and limitations throughout the paper. A compilation of these is also available online in the About NLP Scholar page. The analyses presented here are also available as a series of blog posts. Size Q. How big is the ACL Anthology (AA)? How is it changing with time? A. As of June 2019, AA had $\sim $50K entries, however, this includes some number of entries that are not truly research publications (for example, forewords, prefaces, table of contents, programs, schedules, indexes, calls for papers/participation, lists of reviewers, lists of tutorial abstracts, invited talks, appendices, session information, obituaries, book reviews, newsletters, lists of proceedings, lifetime achievement awards, erratum, and notes). We discard them for the analyses here. 
(Note: CL journal includes position papers like squibs, letters to the editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (in 2018 alone, LREC had over 700 main conference papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years), has about 45% as many main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years): $\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop.
It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact there seems to even be a slight downward trend in recent years. 
The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
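As a rough sketch of the academic-age calculation defined above, assuming a hypothetical list of (author, year) authorship records rather than the actual AA metadata:

```python
from statistics import mean, median

def academic_age_stats(records, target_year):
    """records: iterable of (author_id, year) pairs, one per authorship.
    Returns (mean age, median age, % first-time authors) over authors who published in target_year,
    where NLP academic age = target_year - year of first AA paper + 1."""
    first_year = {}
    active = set()
    for author, year in records:
        first_year[author] = min(year, first_year.get(author, year))
        if year == target_year:
            active.add(author)
    ages = [target_year - first_year[a] + 1 for a in active]
    first_timers = sum(1 for a in active if first_year[a] == target_year)
    return mean(ages), median(ages), 100.0 * first_timers / len(active)

# toy records: author a1 first published in 2001, a2 in 2018, a3 in 2017
records = [('a1', 2001), ('a1', 2018), ('a2', 2018), ('a3', 2017), ('a3', 2018)]
avg_age, med_age, pct_new = academic_age_stats(records, 2018)
print(f'average age {avg_age:.2f}, median age {med_age}, first-time authors {pct_new:.1f}%')
```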
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it sort of stayed steady at around 48% until 2004 with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps, this oscillation in first-time authors percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in 1990s, rose to the 70 to 72% range in early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but are crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language, limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into that here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper. 
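A minimal sketch of the title-based language counting described above; the language list here is a small illustrative subset (the actual analysis uses the 122 language names indexed by Wiktionary), and whole-word, case-insensitive matching is one reasonable reading of "presence in the title":

```python
import re
from collections import Counter

# illustrative subset; the actual analysis uses 122 languages indexed by Wiktionary
LANGUAGES = ['English', 'Chinese', 'Arabic', 'Hindi', 'French', 'German', 'Swahili']

def language_mentions(titles, languages=LANGUAGES):
    """Count how many titles mention each language name (whole-word, case-insensitive match)."""
    patterns = {lang: re.compile(r'\b' + re.escape(lang) + r'\b', re.IGNORECASE)
                for lang in languages}
    counts = Counter()
    for title in titles:
        for lang, pattern in patterns.items():
            if pattern.search(title):
                counts[lang] += 1
    return counts

titles = [
    'A Treebank for Hindi Dependency Parsing',
    'Neural Machine Translation from English to French',
    'Sentiment Analysis of Arabic Tweets',
]
for lang, n in language_mentions(titles).most_common():
    print(lang, n)
```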
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
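The analyses below count unigrams and bigrams in paper titles while ignoring function words. As a rough sketch of that kind of counting (the stopword list and tokenization are simplified stand-ins for whatever preprocessing was actually used for the figures):

```python
import re
from collections import Counter

STOPWORDS = {'a', 'an', 'the', 'of', 'for', 'and', 'in', 'on', 'to', 'with', 'using', 'via'}  # illustrative

def count_title_terms(titles):
    """Count unigrams and bigrams in titles, keeping only content words."""
    unigrams, bigrams = Counter(), Counter()
    for title in titles:
        tokens = re.findall(r'[a-z0-9-]+', title.lower())
        unigrams.update(t for t in tokens if t not in STOPWORDS)
        bigrams.update((a, b) for a, b in zip(tokens, tokens[1:])
                       if a not in STOPWORDS and b not in STOPWORDS)
    return unigrams, bigrams

titles = [
    'Neural Machine Translation with Attention',
    'Statistical Machine Translation of Low-Resource Languages',
    'A Neural Model for Word Sense Disambiguation',
]
unigrams, bigrams = count_title_terms(titles)
print(unigrams.most_common(5))
print(bigrams.most_common(5))
```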
Consider the term neural: early papers used it in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single words) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019, ignoring function words. The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams, like shared task and large scale, are not areas of research, but rather mechanisms or trends that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (This is to be expected, since the number of papers then was also relatively small.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A.
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published from January 2016 to June 2019 (the time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter a query (say parsing) in the search box at the bottom. In addition to filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, entered a period of steep decline in the early 1990s, and has been in gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for three bigrams: statistical machine, neural machine, and machine translation. Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have remained much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s, followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is an explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years, including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work.
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at the most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? how many citations does a short paper get? how many citations does a long paper get? how many citations does a workshop paper get? and so on. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from the Google Scholar profiles of authors who had a Google Scholar profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers, and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in AA'.
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the URL to be taken directly to the paper’s landing page on the ACL Anthology website. That page includes links to meta-information, the PDF, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation tools, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP, with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; and (2) the third-highest average number of citations. The drop-off in average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What is the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A.
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third-highest average citations; however, shared task papers have the third-highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seem to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive one or no citations. This is in contrast to system demo papers, whose average and median citations are higher than or comparable to those of workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain a much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What is the average number of citations received by long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then, ACL has been by far the venue with the largest number of short papers. So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow the papers time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-a-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (A figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL papers; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has narrowed for papers published since 2010.
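A minimal sketch of these aggregate statistics, assuming a hypothetical CSV of AA’ papers with year, venue, paper type, and citation counts (the file name and column names are illustrative):

```python
import csv
from collections import defaultdict
from statistics import mean, median

CUTOFF_YEAR = 2016  # allow at least ~2.5 years for papers to accumulate citations

def citation_stats(path, group_by='paper_type'):
    """Group papers published up to CUTOFF_YEAR by `group_by` (e.g., 'paper_type' or 'venue')
    and report the number of papers, average citations, and median citations per group."""
    groups = defaultdict(list)
    with open(path, newline='', encoding='utf8') as f:
        for row in csv.DictReader(f):  # assumed columns: year, venue, paper_type, citations
            if int(row['year']) <= CUTOFF_YEAR:
                groups[row[group_by]].append(int(row['citations']))
    return {g: (len(cites), mean(cites), median(cites)) for g, cites in groups.items()}

if __name__ == '__main__':
    for group, (n, avg, med) in sorted(citation_stats('aa_prime_papers.csv').items()):  # hypothetical file
        print(f'{group}: {n} papers, average {avg:.1f}, median {med}')
```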
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average (surpassing those of EACL and COLING); however their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) are likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, tend to often obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentage, if one considers raw numbers of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600). 
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
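Concretely, the comparisons in this section come down to grouping papers by an attribute of the first author and computing aggregate citation statistics per group. A minimal sketch of that kind of grouping, assuming a hypothetical table with citation counts, a first-author gender label, and the first author's academic age at publication time (all field names are illustrative):

```python
import csv
from collections import defaultdict
from statistics import mean, median

AGE_BINS = [(1, 1), (2, 4), (5, 9), (10, 14), (15, 19), (20, 34), (35, 50)]  # illustrative grouping

def age_bin(age):
    for lo, hi in AGE_BINS:
        if lo <= age <= hi:
            return f'{lo}-{hi}'
    return '50+'

def citations_by_gender_and_age(path):
    """Average and median citations grouped by first-author gender and by academic-age bin."""
    by_gender, by_age = defaultdict(list), defaultdict(list)
    with open(path, newline='', encoding='utf8') as f:
        for row in csv.DictReader(f):  # assumed columns: citations, fa_gender, fa_academic_age
            cites = int(row['citations'])
            by_gender[row['fa_gender']].append(cites)                  # 'female', 'male', or 'unknown'
            by_age[age_bin(int(row['fa_academic_age']))].append(cites)
    summarize = lambda values: (len(values), round(mean(values), 1), median(values))
    return ({g: summarize(v) for g, v in by_gender.items()},
            {b: summarize(v) for b, v in by_age.items()})

if __name__ == '__main__':
    gender_stats, age_stats = citations_by_gender_and_age('aa_prime_papers.csv')  # hypothetical file
    print(gender_stats)
    print(age_stats)
```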
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph, we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at its highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise not to draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in the average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics: we determine gender using the United States Social Security Administration database of names and genders of newborns, identifying 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and the smaller difference in medians suggest that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by time period; and (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s, when male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with male first authors in this most recent time period. Further, the proportion of gender-unknown authors has increased over the years—arguably, an indication of better representation of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown-gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted, with a small citation gap, for academic ages 15 to 34. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by the gender of the first authors of the corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus, for some of the bigrams, certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue systems, and language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citation areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community.
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.
(Note: CL journal includes position papers like squibs, letter to editor, opinion, etc. We do not discard them.) We are then left with 44,896 articles. Figure FIGREF6 shows a graph of the number of papers published in each of the years from 1965 to 2018. Discussion: Observe that there was a spurt in the 1990s, but things really took off since the year 2000, and the growth continues. Also, note that the number of publications is considerably higher in alternate years. This is due to biennial conferences. Since 1998 the largest of such conferences has been LREC (In 2018 alone LREC had over 700 main conferences papers and additional papers from its 29 workshops). COLING, another biennial conference (also occurring in the even years) has about 45% of the number of main conference papers as LREC. Q. How many people publish in the ACL Anthology (NLP conferences)? A. Figure FIGREF7 shows a graph of the number of authors (of AA papers) over the years: Discussion: It is a good sign for the field to have a growing number of people join its ranks as researchers. A further interesting question would be: Q. How many people are actively publishing in NLP? A. It is hard to know the exact number, but we can determine the number of people who have published in AA in the last N years. #people who published at least one paper in 2017 and 2018 (2 years): $\sim $12k (11,957 to be precise) #people who published at least one paper 2015 through 2018 (4 years):$\sim $17.5k (17,457 to be precise) Of course, some number of researchers published NLP papers in non-AA venues, and some number are active NLP researchers who may not have published papers in the last few years. Q. How many journal papers exist in the AA? How many main conference papers? How many workshop papers? A. See Figure FIGREF8. Discussion: The number of journal papers is dwarfed by the number of conference and workshop papers. (This is common in computer science. Even though NLP is a broad interdisciplinary field, the influence of computer science practices on NLP is particularly strong.) Shared task and system demo papers are relatively new (introduced in the 2000s), but their numbers are already significant and growing. Creating a separate class for “Top-tier Conference” is somewhat arbitrary, but it helps make certain comparisons more meaningful (for example, when comparing the average number of citations, etc.). For this work, we consider ACL, EMNLP, NAACL, COLING, and EACL as top-tier conferences, but certainly other groupings are also reasonable. Q. How many papers have been published at ACL (main conference papers)? What are the other NLP venues and what is the distribution of the number of papers across various CL/NLP venues? A. # ACL (main conference papers) as of June 2018: 4,839 The same workshop can co-occur with different conferences in different years, so we grouped all workshop papers in their own class. We did the same for tutorials, system demonstration papers (demos), and student research papers. Figure FIGREF9 shows the number of main conference papers for various venues and paper types (workshop papers, demos, etc.). Discussion: Even though LREC is a relatively new conference that occurs only once in two years, it tends to have a high acceptance rate ($\sim $60%), and enjoys substantial participation. Thus, LREC is already the largest single source of NLP conference papers. SemEval, which started as SenseEval in 1998 and occurred once in two or three years, has now morphed into an annual two-day workshop—SemEval. 
It is the largest single source of NLP shared task papers. Demographics (focus of analysis: gender, age, and geographic diversity) NLP, like most other areas of research, suffers from poor demographic diversity. There is very little to low representation from certain nationalities, race, gender, language, income, age, physical abilities, etc. This impacts the breadth of technologies we create, how useful they are, and whether they reach those that need it most. In this section, we analyze three specific attributes among many that deserve attention: gender (specifically, the number of women researchers in NLP), age (more precisely, the number of years of NLP paper publishing experience), and the amount of research in various languages (which loosely correlates with geographic diversity). Demographics (focus of analysis: gender, age, and geographic diversity) ::: Gender The ACL Anthology does not record demographic information about the paper authors. (Until recently, ACL and other NLP conferences did not record demographic information of the authors.) However, many first names have strong associations with a male or female gender. We will use these names to estimate the percentage of female first authors in NLP. The US Social Security Administration publishes a database of names and genders of newborns. We use the dataset to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). (As a side, it is interesting to note that there is markedly greater diversity in female names than in male names.) We identified 26,637 of the 44,896 AA papers ($\sim $60%) where the first authors have one of these names and determine the percentage of female first author papers across the years. We will refer to this subset of AA papers as AA*. Note the following caveats associated with this analysis: The names dataset used has a lower representation of names from nationalities other than the US. However, there is a large expatriate population living in the US. Chinese names (especially in the romanized form) are not good indicators of gender. Thus the method presented here disregards most Chinese names, and the results of the analysis apply to the group of researchers excluding those with Chinese names. The dataset only records names associated with two genders. The approach presented here is meant to be an approximation in the absence of true gender information. Q. What percent of the AA* papers have female first authors (FFA)? How has this percentage changed with time? A. Overall FFA%: 30.3%. Figure FIGREF16 shows how FFA% has changed with time. Common paper title words and FFA% of papers that have those words are shown in the bottom half of the image. Note that the slider at the bottom has been set to 400, i.e., only those title words that occur in 400 or more papers are shown. The legend on the bottom right shows that low FFA scores are shown in shades of blue, whereas relatively higher FFA scores are shown in shades of green. Discussion: Observe that as a community, we are far from obtaining male-female parity in terms of first authors. A further striking (and concerning) observation is that the female first author percentage has not improved since the years 1999 and 2000 when the FFA percentages were highest (32.9% and 32.8%, respectively). In fact there seems to even be a slight downward trend in recent years. 
The calculations shown above are for the percentage of papers that have female first authors. The percentage of female first authors is about the same ($\sim $31%). On average male authors had a slightly higher average number of publications than female authors. To put these numbers in context, the percentage of female scientists world wide (considering all areas of research) has been estimated to be around 30%. The reported percentages for many computer science sub-fields are much lower. (See Women in Science (2015).) The percentages are much higher for certain other fields such as psychology and linguistics. (See this study for psychology and this study for linguistics.) If we can identify ways to move the needle on the FFA percentage and get it closer to 50% (or more), NLP can be a beacon to many other fields, especially in the sciences. FFA percentages are particularly low for papers that have parsing, neural, and unsupervised in the title. There are some areas within NLP that enjoy a healthier female-male parity in terms of first authors of papers. Figure FIGREF20 shows FFA percentages for papers that have the word discourse in the title. There is burgeoning research on neural NLP in the last few years. Figure FIGREF21 shows FFA percentages for papers that have the word neural in the title. Figure FIGREF22 shows lists of terms with the highest and lowest FFA percentages, respectively, when considering terms that occur in at least 50 paper titles (instead of 400 in the analysis above). Observe that FFA percentages are relatively higher in non-English European language research such as papers on Russian, Portuguese, French, and Italian. FFA percentages are also relatively higher for certain areas of NLP such as work on prosody, readability, discourse, dialogue, paraphrasing, and individual parts of speech such as adjectives and verbs. FFA percentages are particularly low for papers on theoretical aspects of statistical modelling, and areas such as machine translation, parsing, and logic. The full lists of terms and FFA percentages will be made available with the rest of the data. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Academic Age While the actual age of NLP researchers might be an interesting aspect to explore, we do not have that information. Thus, instead, we can explore a slightly different (and perhaps more useful) attribute: NLP academic age. We can define NLP academic age as the number of years one has been publishing in AA. So if this is the first year one has published in AA, then their NLP academic age is 1. If one published their first AA paper in 2001 and their latest AA paper in 2018, then their academic age is 18. Q. How old are we? That is, what is the average NLP academic age of those who published papers in 2018? How has the average changed over the years? That is, have we been getting older or younger? What percentage of authors that published in 2018 were publishing their first AA paper? A. Average NLP Academic Age of people that published in 2018: 5.41 years Median NLP Academic Age of people that published in 2018: 2 years Percentage of 2018 authors that published their first AA paper in 2018: 44.9% Figure FIGREF24 shows how these numbers have changed over the years. Discussion: Observe that the Average academic age has been steadily increasing over the years until 2016 and 2017, when the trend has shifted and the average academic age has started to decrease. 
The median age was 1 year for most of the 1965 to 1990 period, 2 years for most of the 1991 to 2006 period, 3 years for most of the 2007 to 2015 period, and back to 2 years since then. The first-time AA author percentage decreased until about 1988, after which it stayed roughly steady at around 48% until 2004, with occasional bursts to $\sim $56%. Since 2005, the first-time author percentage has gone up and down every other year. It seems that the even years (which are also LREC years) have a higher first-time author percentage. Perhaps this oscillation in the first-time author percentage is related to LREC’s high acceptance rate. Q. What is the distribution of authors in various academic age bins? For example, what percentage of authors that published in 2018 had an academic age of 2, 3, or 4? What percentage had an age between 5 and 9? And so on? A. See Figure FIGREF25. Discussion: Observe that about 65% of the authors that published in 2018 had an academic age of less than 5. This number has steadily reduced since 1965, was in the 60 to 70% range in the 1990s, rose to the 70 to 72% range in the early 2000s, then declined again until it reached the lowest value ($\sim $60%) in 2010, and has again steadily risen until 2018 (65%). Thus, even though it may sometimes seem at recent conferences that there is a large influx of new people into NLP (and that is true), proportionally speaking, the average NLP academic age is higher (more experienced) than what it has been in much of its history. Demographics (focus of analysis: gender, age, and geographic diversity) ::: Location (Languages) Automatic systems with natural language abilities are growing to be increasingly pervasive in our lives. Not only are they sources of mere convenience, but they are also crucial in making sure large sections of society and the world are not left behind by the information divide. Thus, the limits of what automatic systems can do in a language limit the world for the speakers of that language. We know that much of the research in NLP is on English or uses English datasets. Many reasons have been proffered, and we will not go into them here. Instead, we will focus on estimating how much research pertains to non-English languages. We will make use of the idea that often when work is done focusing on a non-English language, then the language is mentioned in the title. We collected a list of 122 languages indexed by Wiktionary and looked for the presence of these words in the titles of AA papers. (Of course there are hundreds of other lesser known languages as well, but here we wanted to see the representation of these more prominent languages in NLP literature.) Figure FIGREF27 is a treemap of the 122 languages arranged alphabetically and shaded such that languages that appear more often in AA paper titles have a darker shade of green. Discussion: Even though the amount of work done on English is much larger than that on any other language, often the word English does not appear in the title, and this explains why English is not the first (but the second-most) common language name to appear in the titles. This is likely due to the fact that many papers fail to mention the language of study or the language of the datasets used if it is English. There is growing realization in the community that this is not quite right. However, the language of study can be named in other less prominent places than the title, for example the abstract, introduction, or when the datasets are introduced, depending on how central it is to the paper.
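A minimal sketch of the title-matching step described above; the language list file and the papers table are hypothetical placeholders, and the real analysis used the 122 languages indexed by Wiktionary:

import re
import pandas as pd

papers = pd.read_csv("aa_papers.csv")  # assumed columns: paper_id, year, title
# Hypothetical plain-text file with one language name per line (the real list had 122 entries).
languages = [line.strip() for line in open("languages_122.txt", encoding="utf-8") if line.strip()]

# Count papers whose title mentions each language name (whole-word, case-insensitive match).
titles = papers["title"].astype(str)
counts = {}
for lang in languages:
    pattern = re.compile(r"\b" + re.escape(lang) + r"\b", re.IGNORECASE)
    counts[lang] = int(titles.str.contains(pattern).sum())

lang_counts = pd.Series(counts).sort_values(ascending=False)
print(lang_counts.head(20))  # language names that appear most often in AA titles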
We can see from the treemap that the most widely spoken Asian and Western European languages enjoy good representation in AA. These include: Chinese, Arabic, Korean, Japanese, and Hindi (Asian) as well as French, German, Swedish, Spanish, Portuguese, and Italian (European). This is followed by the relatively less widely spoken European languages (such as Russian, Polish, Norwegian, Romanian, Dutch, and Czech) and Asian languages (such as Turkish, Thai, and Urdu). Most of the well-represented languages are from the Indo-European language family. Yet, even in the limited landscape of the most common 122 languages, vast swathes are barren with inattention. Notable among these is the extremely low representation of languages from Africa, languages from non-Indo-European language families, and Indigenous languages from around the world. Areas of Research Natural Language Processing addresses a wide range of research questions and tasks pertaining to language and computing. It encompasses many areas of research that have seen an ebb and flow of interest over the years. In this section, we examine the terms that have been used in the titles of ACL Anthology (AA) papers. The terms in a title are particularly informative because they are used to clearly and precisely convey what the paper is about. Some journals ask authors to separately include keywords in the paper or in the meta-information, but AA papers are largely devoid of this information. Thus titles are an especially useful source of keywords for papers—keywords that are often indicative of the area of research. Keywords could also be extracted from abstracts and papers; we leave that for future work. Further work is also planned on inferring areas of research using word embeddings, techniques from topic modelling, and clustering. There are clear benefits to performing analyses using that information. However, those approaches can be sensitive to the parameters used. Here, we keep things simple and explore counts of terms in paper titles. Thus the results are easily reproducible and verifiable. Caveat: Even though there is an association between title terms and areas of research, the association can be less strong for some terms and areas. We use the association as one (imperfect) source of information about areas of research. This information may be combined with other sources of information to draw more robust conclusions. Title Terms: The title has a privileged position in a paper. It serves many functions, and here are three key ones (from an article by Sneha Kulkarni): "A good research paper title: 1. Condenses the paper's content in a few words 2. Captures the readers' attention 3. Differentiates the paper from other papers of the same subject area". If we examine the titles of papers in the ACL Anthology, we would expect that because of Function 1 many of the most common terms will be associated with the dominant areas of research. Function 2 (or attempting to have a catchy title) on the other hand, arguably leads to more unique and less frequent title terms. Function 3 seems crucial to the effectiveness of a title; and while at first glance it may seem like this will lead to unique title terms, often one needs to establish a connection with something familiar in order to convey how the work being presented is new or different. It is also worth noting that a catchy term today, will likely not be catchy tomorrow. Similarly, a distinctive term today, may not be distinctive tomorrow. 
For example, early papers used neural in the title to distinguish themselves from non-neural approaches, but these days neural is not particularly discriminative as far as NLP papers go. Thus, competing and complex interactions are involved in the making of titles. Nonetheless, an arguable hypothesis is that broad trends in interest towards an area of research will be reflected, to some degree, in the frequencies of title terms associated with that area over time. However, even if one does not believe in that hypothesis, it is worth examining the terms in the titles of tens of thousands of papers in the ACL Anthology—spread across many decades. Q. What terms are used most commonly in the titles of the AA papers? How has that changed with time? A. Figure FIGREF28 shows the most common unigrams (single words) and bigrams (two-word sequences) in the titles of papers published from 1980 to 2019. (Ignoring function words.) The timeline graph at the bottom shows the percentage of occurrences of the unigrams over the years (the colors of the unigrams in the Timeline match those in the Title Unigram list). Note: For a given year, the timeline graph includes a point for a unigram if the sum of the frequency of the unigram in that year and the two years before it is at least ten. The period before 1980 is not included because of the small number of papers. Discussion: Appropriately enough, the most common term in the titles of NLP papers is language. The presence of high-ranking terms pertaining to machine translation suggests that it is the area of research that has received considerable attention. Other areas associated with the high-frequency title terms include lexical semantics, named entity recognition, question answering, word sense disambiguation, and sentiment analysis. In fact, the common bigrams in the titles often correspond to names of NLP research areas. Some of the bigrams like shared task and large scale are not areas of research, but rather mechanisms or trends of research that apply broadly to many areas of research. The unigrams also provide additional insights, such as the interest of the community in the Chinese language, and in areas such as speech and parsing. The Timeline graph is crowded in this view, but clicking on a term from the unigram list will filter out all other lines from the timeline. This is especially useful for determining whether the popularity of a term is growing or declining. (One can already see from above that neural has broken away from the pack in recent years.) Since there are many lines in the Timeline graph, Tableau labels only some (you can see neural and machine). However, hovering over a line, in the eventual interactive visualization, will display the corresponding term—as shown in the figure. Despite being busy, the graph sheds light on the relative dominance of the most frequent terms and how that has changed with time. The vocabulary of title words is smaller when considering papers from the 1980s than in recent years. (As would be expected, since the number of papers then was also relatively small.) Further, dominant terms such as language and translation accounted for a higher percentage than in recent years, where there is a much larger diversity of topics and the dominant research areas are not as dominant as they once were. Q. What are the most frequent unigrams and bigrams in the titles of recent papers? A.
Figure FIGREF29 shows the most frequent unigrams and bigrams in the titles of papers published from January 2016 to June 2019 (the time of data collection). Discussion: Some of the terms that have made notable gains in the top 20 unigrams and bigrams lists in recent years include: neural machine (presumably largely due to the phrase neural machine translation), neural network(s), word embeddings, recurrent neural, deep learning, and the corresponding unigrams (neural, networks, etc.). We also see gains for terms related to shared tasks such as SemEval and task. The sets of most frequent unigrams and bigrams in the titles of AA papers from various time spans are available online. Apart from clicking on terms, one can also enter the query (say parsing) in the search box at the bottom. In addition to filtering the timeline graph (bottom), this action also filters the unigram list (top left) to provide information only about the search term. This is useful because the query term may not be one of the visible top unigrams. Figure FIGREF31 shows the timeline graph for parsing. Discussion: Parsing seems to have enjoyed considerable attention in the 1980s, began a period of steep decline in the early 1990s, and has seen a period of gradual decline ever since. One can enter multiple terms in the search box or shift/command click multiple terms to show graphs for more than one term. Figure FIGREF32 shows the timelines for three bigrams: statistical machine, neural machine, and machine translation. Discussion: The graph indicates that there was a spike in machine translation papers in 1996, but the number of papers dropped substantially after that. Yet, its numbers have remained comparatively much higher than those of other terms. One can also see the rise of statistical machine translation in the early 2000s followed by its decline with the rise of neural machine translation. Impact Research articles can have impact in a number of ways—pushing the state of the art, answering crucial questions, finding practical solutions that directly help people, making a new generation of potential scientists excited about a field of study, and more. As scientists, it seems attractive to quantitatively measure scientific impact, and this is particularly appealing to governments and funding agencies; however, it should be noted that individual measures of research impact are limited in scope—they measure only some kinds of contributions. Citations The most commonly used metrics of research impact are derived from citations. A citation of a scholarly article is the explicit reference to that article. Citations serve many functions. However, a simplifying assumption is that regardless of the reason for citation, every citation counts as credit to the influence or impact of the cited work. Thus several citation-based metrics have emerged over the years, including: number of citations, average citations, h-index, relative citation ratio, and impact factor. It is not always clear why some papers get lots of citations and others do not. One can argue that highly cited papers have captured the imagination of the field: perhaps because they were particularly creative, opened up a new area of research, pushed the state of the art by a substantial degree, tested compelling hypotheses, or produced useful datasets, among other things. Note, however, that the number of citations is not always a reflection of the quality or importance of a piece of work.
Note also that there are systematic biases that prevent certain kinds of papers from accruing citations, especially when the contributions of a piece of work are atypical, not easily quantified, or in an area where the number of scientific publications is low. Further, the citation process can be abused, for example, by egregious self-citations. Nonetheless, given the immense volume of scientific literature, the relative ease with which one can track citations using services such as Google Scholar and Semantic Scholar, and given the lack of other easily applicable and effective metrics, citation analysis is an imperfect but useful window into research impact. In this section, we examine citations of AA papers. We focus on two aspects: Most cited papers: We begin by looking at the most cited papers overall and in various time spans. We will then look at the most cited papers by paper type (long, short, demo, etc.) and venue (ACL, LREC, etc.). Perhaps these make interesting reading lists. Perhaps they also lead to a qualitative understanding of the kinds of AA papers that have received lots of citations. Aggregate citation metrics by time span, paper type, and venue: Access to citation information allows us to calculate aggregate citation metrics such as average and median citations of papers published in different time periods, published in different venues, etc. These can help answer questions such as: on average, how well cited are papers published in the 1990s? on average, how many citations does a short paper get? how many citations does a long paper get? how many citations for a workshop paper? etc. Data: The analyses presented below are based on information about the papers taken directly from AA (as of June 2019) and citation information extracted from Google Scholar (as of June 2019). We extracted citation information from Google Scholar profiles of authors who had a Google Scholar Profile page and had published at least three papers in the ACL Anthology. This yielded citation information for about 75% of the papers (33,051 out of the 44,896 papers). We will refer to this subset of the ACL Anthology papers as AA’. All citation analysis below is on AA’. Impact ::: #Citations and Most Cited Papers Q. How many citations have the AA’ papers received? How is that distributed among the papers published in various decades? A. $\sim $1.2 million citations (as of June 2019). Figure FIGREF36 shows a timeline graph where each year has a bar with height corresponding to the number of citations received by papers published in that year. Further, the bar has colored fragments corresponding to each of the papers and the height of a fragment (paper) is proportional to the number of citations it has received. Thus it is easy to spot the papers that received a large number of citations, and the years when the published papers received a large number of citations. Hovering over individual papers reveals an information box showing the paper title, authors, year of publication, publication venue, and #citations. Discussion: With time, not only has the number of papers grown, but so has the number of high-citation papers. We see a marked jump in the 1990s over the previous decades, but the 2000s are the most notable in terms of the high number of citations. The 2010s papers will likely surpass the 2000s papers in the years to come. Q. What are the most cited papers in AA'? A. Figure FIGREF37 shows the most cited papers in AA'.
Discussion: We see that the top-tier conference papers (green) are some of the most cited papers in AA’. There are a notable number of journal papers (dark green) in the most cited list as well, but very few demo (purple) and workshop (orange) papers. In the interactive visualizations (to be released later), one can click on the URL to be taken directly to the paper’s landing page in the ACL Anthology website. That page includes links to meta-information, the PDF, and associated files such as videos and appendices. There will also be functionality to download the lists. Alas, copying the lists from the screenshots shown here is not easy. Q. What are the most cited AA' journal papers? What are the most cited AA' workshop papers? What are the most cited AA' shared task papers? What are the most cited AA' demo papers? What are the most cited tutorials? A. The most cited AA’ journal papers, conference papers, workshop papers, system demo papers, shared task papers, and tutorials can be viewed online. The most cited papers from individual venues (ACL, CL journal, TACL, EMNLP, LREC, etc.) can also be viewed there. Discussion: Machine translation papers are well-represented in many of these lists, but especially in the system demo papers list. Toolkits such as MT evaluation ones, NLTK, Stanford CoreNLP, WordNet Similarity, and OpenNMT have highly cited demo or workshop papers. The shared task papers list is dominated by task description papers (papers by task organizers describing the data and task), especially for sentiment analysis tasks. However, the list also includes papers by top-performing systems in these shared tasks, such as the NRC-Canada, HeidelTime, and UKP papers. Q. What are the most cited AA' papers in the last decade? A. Figure FIGREF39 shows the most cited AA' papers in the 2010s. The most cited AA' papers from the earlier periods are available online. Discussion: The early period (1965–1989) list includes papers focused on grammar and linguistic structure. The 1990s list has papers addressing many different NLP problems with statistical approaches. Papers on MT and sentiment analysis are frequent in the 2000s list. The 2010s are dominated by papers on word embeddings and neural representations. Impact ::: Average Citations by Time Span Q. How many citations did the papers published between 1990 and 1994 receive? What is the average number of citations that a paper published between 1990 and 1994 has received? What are the numbers for other time spans? A. Total citations for papers published between 1990 and 1994: $\sim $92k Average citations for papers published between 1990 and 1994: 94.3 Figure FIGREF41 shows the numbers for various time spans. Discussion: The early 1990s were an interesting period for NLP with the use of data from the World Wide Web and technologies from speech processing. This was the period with the highest average citations per paper, closely followed by the 1965–1969 and 1995–1999 periods. The 2000–2004 period is notable for: (1) a markedly larger number of citations than the previous decades; (2) the third highest average number of citations. The drop-off in the average citations for recent 5-year spans is largely because they have not had as much time to collect citations. Impact ::: Aggregate Citation Statistics, by Paper Type and Venue Q. What is the average number of citations received by different types of papers: main conference papers, workshop papers, student research papers, shared task papers, and system demonstration papers? A.
In this analysis, we include only those AA’ papers that were published in 2016 or earlier (to allow for at least 2.5 years to collect citations). There are 26,949 such papers. Figures FIGREF42 and FIGREF43 show the average citations by paper type when considering papers published 1965–2016 and 2010–2016, respectively. Figures FIGREF45 and FIGREF46 show the medians. Discussion: Journal papers have much higher average and median citations than other papers, but the gap between them and top-tier conferences is markedly reduced when considering papers published since 2010. System demo papers have the third highest average citations; however, shared task papers have the third highest median citations. The popularity of shared tasks and the general importance given to beating the state of the art (SOTA) seems to have grown in recent years—something that has come under criticism. It is interesting to note that in terms of citations, workshop papers are doing somewhat better than the conferences that are not top tier. Finally, the citation numbers for tutorials show that even though a small number of tutorials are well cited, a majority receive one or no citations. This is in contrast to system demo papers, whose average and median citations are higher than or comparable to those of workshop papers. Throughout the analyses in this article, we see that median citation numbers are markedly lower than average citation numbers. This is particularly telling. It shows that while there are some very highly cited papers, a majority of the papers obtain a much lower number of citations—and when considering papers other than journals and top-tier conferences, the number of citations is frequently lower than ten. Q. What is the average number of citations received by long and short ACL main conference papers, respectively? A. Short papers were introduced at ACL in 2003. Since then, ACL has been by far the venue with the largest number of short papers. So we compare long and short papers published at ACL since 2003 to determine their average citations. Once again, we limit the papers to those published until 2016 to allow for the papers to have time to collect citations. Figure FIGREF47 shows the average and median citations for long and short papers. Discussion: On average, long papers get almost three times as many citations as short papers. However, the median for long papers is two-and-a-half times that of short papers. This difference might be because some very heavily cited long papers push the average up for long papers. Q. Which venue has publications with the highest average number of citations? What is the average number of citations for ACL and EMNLP papers? What is this average for other venues? What are the average citations for workshop papers, system demonstration papers, and shared task papers? A. CL journal has the highest average citations per paper. Figure FIGREF49 shows the average citations for AA’ papers published 1965–2016 and 2010–2016, respectively, grouped by venue and paper type. (A figure with median citations is available online.) Discussion: In terms of citations, TACL papers have not been as successful as EMNLP and ACL; however, CL journal (the more traditional journal paper venue) has the highest average and median paper citations (by a large margin). This gap has reduced in papers published since 2010.
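A minimal sketch of how such aggregate citation statistics can be computed; the aa_prime.csv file and its columns (venue, paper_type, year, citations) are hypothetical placeholders, not the actual data behind AA’:

import pandas as pd

# Hypothetical AA' table: one row per paper with venue, paper type, year, and citation count.
aa_prime = pd.read_csv("aa_prime.csv")  # assumed columns: paper_id, venue, paper_type, year, citations

# Papers published 2010-2016, so that each paper has had at least ~2.5 years to collect citations.
recent = aa_prime[(aa_prime["year"] >= 2010) & (aa_prime["year"] <= 2016)]

stats = (recent.groupby(["venue", "paper_type"])["citations"]
               .agg(papers="count", average="mean", median="median")
               .sort_values("average", ascending=False))
print(stats.round(1))

Reporting the median alongside the average, as done throughout this section, guards against a handful of very heavily cited papers dominating the picture.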
When considering papers published between 2010 and 2016, the system demonstration papers, the SemEval shared task papers, and non-SemEval shared task papers have notably high average citations (surpassing those of EACL and COLING); however, their median citations are lower. This is likely because some heavily cited papers have pushed the average up. Nonetheless, it is interesting to note how, in terms of citations, demo and shared task papers have surpassed many conferences and even become competitive with some top-tier conferences such as EACL and COLING. Q. What percent of the AA’ papers that were published in 2016 or earlier are cited more than 1000 times? How many are cited more than 10 times? How many papers are cited 0 times? A. Google Scholar invented the i-10 index as another measure of author research impact. It stands for the number of papers by an author that received ten or more citations. (Ten here is somewhat arbitrary, but reasonable.) Similar to that, one can look at the impact of AA’ as a whole and the impact of various subsets of AA’ through the number of papers in various citation bins. Figure FIGREF50 shows the percentage of AA’ papers in various citation bins. (The percentages of papers when considering papers from specific time spans are available online.) Discussion: About 56% of the papers are cited ten or more times. 6.4% of the papers are never cited. Note also that some portion of the 1–9 bin likely includes papers that only received self-citations. It is interesting that the percentage of papers with 0 citations is rather steady (between 7.4% and 8.7%) for the 1965–1989, 1990–1999, and 2010–2016 periods. The majority of the papers lie in the 10 to 99 citations bin, for all except the recent periods (2010–2016 and 2016Jan–2016Dec). With time, the recent period should also have the majority of the papers in the 10 to 99 citations bin. The numbers for the 2016Jan–2016Dec papers show that after 2.5 years, about 89% of the papers have at least one citation and about 33% of the papers have ten or more citations. Q. What are the citation bin percentages for individual venues and paper types? A. See Figure FIGREF51. Discussion: Observe that 70 to 80% of the papers in journals and top-tier conferences have ten or more citations. The percentages are markedly lower (between 30 and 70%) for the other conferences shown above, and even lower for some other conferences (not shown above). CL Journal is particularly notable for the largest percentage of papers with 100 or more citations. The somewhat high percentage of papers that are never cited (4.3%) is likely because some of the book reviews from earlier years are not explicitly marked in CL journal, and thus they were not removed from analysis. Also, letters to editors, which are more common in CL journal, often obtain 0 citations. CL, EMNLP, and ACL have the best track record for accepting papers that have gone on to receive 1000 or more citations. *Sem, the semantics conference, seems to have a notably lower percentage of high-citation papers, even though it has fairly competitive acceptance rates. Instead of percentages, if one considers the raw number of papers that have at least ten citations (i-10 index), then LREC is particularly notable in terms of the large number of papers it accepts that have gone on to obtain ten or more citations ($\sim $1600). (A small sketch of this citation binning is given below.)
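A minimal sketch of the citation binning used above, again with a hypothetical aa_prime table; the bin edges follow the text (0, 1–9, 10–99, 100–999, 1000 or more):

import pandas as pd

aa_prime = pd.read_csv("aa_prime.csv")  # assumed columns: paper_id, venue, year, citations
papers = aa_prime[aa_prime["year"] <= 2016].copy()

# Bin edges follow the text: 0, 1-9, 10-99, 100-999, and 1000 or more citations.
bins = [-1, 0, 9, 99, 999, float("inf")]
labels = ["0", "1-9", "10-99", "100-999", "1000+"]
papers["citation_bin"] = pd.cut(papers["citations"], bins=bins, labels=labels)

# Percentage of papers in each bin, overall and per venue.
overall = papers["citation_bin"].value_counts(normalize=True).sort_index() * 100
by_venue = (papers.groupby("venue")["citation_bin"]
                  .value_counts(normalize=True)
                  .unstack(fill_value=0) * 100)
print(overall.round(1))
print(by_venue.round(1))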
Thus, by producing a large number of moderate-to-high citation papers, and introducing many first-time authors, LREC is one of the notable (yet perhaps undervalued) engines of impact on NLP. About 50% of the SemEval shared task papers received 10 or more citations, and about 46% of the non-SemEval Shared Task Papers received 10 or more citations. About 47% of the workshop papers received ten or more citations. About 43% of the demo papers received 10 or more citations. Impact ::: Citations to Papers by Areas of Research Q. What is the average number of citations of AA' papers that have machine translation in the title? What about papers that have the term sentiment analysis or word representations? A. Different areas of research within NLP enjoy varying amounts of attention. In Part II, we looked at the relative popularity of various areas over time—estimated through the number of paper titles that had corresponding terms. (You may also want to see the discussion on the use of paper title terms to sample papers from various, possibly overlapping, areas.) Figure FIGREF53 shows the top 50 title bigrams ordered by decreasing number of total citations. Only those bigrams that occur in at least 30 AA' papers (published between 1965 and 2016) are considered. (The papers from 2017 and later are not included, to allow for at least 2.5 years for the papers to accumulate citations.) Discussion: The graph shows that the bigram machine translation occurred in 1,659 papers that together accrued more than 93k citations. These papers have on average 68.8 citations and the median citations is 14. Not all machine translation (MT) papers have machine translation in the title. However, arguably, this set of 1,659 papers is a representative enough sample of machine translation papers; and thus, the average and median are estimates of MT in general. Second in the list are papers with statistical machine in the title—most commonly from the phrase statistical machine translation. One expects considerable overlap in the papers across the sets of papers with machine translation and statistical machine, but machine translation likely covers a broader range of research including work before statistical MT was introduced, neural MT, and MT evaluation. There are fewer papers with sentiment analysis in the title (356), but these have acquired citations at a higher average (104) than both machine translation and statistical machine. The bigram automatic evaluation jumps out because of its high average citations (337). Some of the neural-related bigrams have high median citations, for example, neural machine (49) and convolutional neural (40.5). Figure FIGREF54 shows the lists of top 25 bigrams ordered by average citations. Discussion: Observe the wide variety of topics covered by this list. In some ways that is reassuring for the health of the field as a whole; however, this list does not show which areas are not receiving sufficient attention. It is less clear to me how to highlight those, as simply showing the bottom 50 bigrams by average citations is not meaningful. Also note that this is not in any way an endorsement to write papers with these high-citation bigrams in the title. Doing so is of course no guarantee of receiving a large number of citations. Correlation of Age and Gender with Citations In this section, we examine citations across two demographic dimensions: Academic age (number of years one has been publishing) and Gender. 
There are good reasons to study citations across each of these dimensions including, but not limited to, the following: Areas of research: To better understand research contributions in the context of the area where the contribution is made. Academic age: To better understand how the challenges faced by researchers at various stages of their career may impact the citations of their papers. For example, how well-cited are first-time NLP authors? On average, at what academic age do citations peak? etc. Gender: To better understand the extent to which systematic biases (explicit and implicit) pervasive in society and scientific publishing impact author citations. Some of these aspects of study may seem controversial. So it is worth addressing that first. The goal here is not to perpetuate stereotypes about age, gender, or even areas of research. The history of scientific discovery is awash with plenty of examples of bad science that has tried to erroneously show that one group of people is “better” than another, with devastating consequences. People are far more alike than different. However, different demographic groups have faced (and continue to face) various socio-cultural inequities and biases. Gender and race studies look at how demographic differences shape our experiences. They examine the roles of social institutions in maintaining the inequities and biases. This work is in support of those studies. Unless we measure differences in outcomes such as scientific productivity and impact across demographic groups, we will not fully know the extent to which these inequities and biases impact our scientific community; and we cannot track the effectiveness of measures to make our universities, research labs, and conferences more inclusive, equitable, and fair. Correlation of Age and Gender with Citations ::: Correlation of Academic Age with Citations We introduced NLP academic age earlier in the paper, where we defined NLP academic age as the number of years one has been publishing in AA. Here we examine whether NLP academic age impacts citations. The analyses are done in terms of the academic age of the first author; however, similar analyses can be done for the last author and all authors. (There are limitations to each of these analyses though as discussed further below.) First author is a privileged position in the author list as it is usually reserved for the researcher that has done the most work and writing. The first author is also usually the main driver of the project; although, their mentor or advisor may also be a significant driver of the project. Sometimes multiple authors may be marked as first authors in the paper, but the current analysis simply takes the first author from the author list. In many academic communities, the last author position is reserved for the most senior or mentoring researcher. However, in non-university research labs and in large collaboration projects, the meaning of the last author position is less clear. (Personally, I prefer author names ordered by the amount of work done.) Examining all authors is slightly more tricky as one has to decide how to credit the citations to the possibly multiple authors. It might also not be a clear indicator of differences across gender as a large number of the papers in AA have both male and female authors. Q. How does the NLP academic age of the first author correlate with the amount of citations? Are first-year authors less cited than those with more experience? A. 
Figure FIGREF59 shows various aggregate citation statistics corresponding to academic age. To produce the graph, we put each paper in a bin corresponding to the academic age of the first author when the paper was published. For example, if the first author of a paper had an academic age of 3 when that paper was published, then the paper goes in bin 3. We then calculate #papers, #citations, median citations, and average citations for each bin. For the figure below, we further group the bins into 10 to 14, 15 to 19, 20 to 34, and 35 to 50. These groupings are done to avoid clutter, and also because many of the higher age bins have a low number of papers. Discussion: Observe that the number of papers where the first author has academic age 1 is much larger than the number of papers in any other bin. This is largely because a large number of authors in AA have written exactly one paper as first author. Also, about 60% of the authors in AA (17,874 out of the 29,941 authors) have written exactly one paper (regardless of author position). The curves for the average and median citations have a slight upside-down U shape. The relatively lower average and median citations in year 1 (37.26 and 10, respectively) indicate that being new to the field has some negative impact on citations. The average increases steadily from year 1 to year 4, but the median is already at the highest point by year 2. One might say that years 2 to 14 are the period of steady and high citations. From year 15 onwards, there is a steady decline in citations. It is probably wise not to draw too many conclusions from the averages of the 35 to 50 bin, because of the small number of papers. There seems to be a peak in average citations at age 7. However, there is not a corresponding peak in the median. Thus the peak in average might be due to an increase in the number of very highly cited papers. Citations to Papers by First Author Gender As noted in Part I, neither ACL nor the ACL Anthology has recorded demographic information for the vast majority of the authors. Thus we use the same setup discussed earlier in the section on demographics to determine gender, using the United States Social Security Administration database of names and genders of newborns to identify 55,133 first names that are strongly associated with females (probability $\ge $99%) and 29,873 first names that are strongly associated with males (probability $\ge $99%). Q. On average, are women cited less than men? A. Yes, on average, female first author papers have received markedly fewer citations than male first author papers (36.4 compared to 52.4). The difference in median is smaller (11 compared to 13). See Figure FIGREF60. Discussion: The large difference in averages and smaller difference in medians suggests that there are markedly more very heavily cited male first-author papers than female first-author papers. The gender-unknown category, which here largely consists of authors with Chinese-origin names and names that are less strongly associated with one gender, has a slightly higher average, but the same median citations, as authors with female-associated first names. The differences in citations, or citation gap, across genders may: (1) vary by period of time; (2) vary due to confounding factors such as academic age and areas of research. We explore these next. Q. How has the citation gap across genders changed over the years? A. Figure FIGREF61 (left side) shows the citation statistics across four time periods.
Discussion: Observe that female first authors have always been a minority in the history of ACL; however, on average, their papers from the early years (1965 to 1989) received a markedly higher number of citations than those of male first authors from the same period. We can see from the graph that this changed in the 1990s where male first-author papers obtained markedly more citations on average. The citation gap reduced considerably in the 2000s, and the 2010–2016 period saw a further slight reduction in the citation gap. It is also interesting to note that the gender-unknown category has almost bridged the gap with the males in this most recent time period. Further, the proportion of the gender-unknown authors has increased over the years—arguably, an indication of better representations of authors from around the world in recent years. (Nonetheless, as indicated in Part I, there is still plenty to be done to promote greater inclusion of authors from Africa and South America.) Q. How have citations varied by gender and academic age? Are women less cited because of a greater proportion of new-to-NLP female first authors than new-to-NLP male first authors? A. Figure FIGREF61 (right side) shows citation statistics broken down by gender and academic age. (This figure is similar to the academic age graph seen earlier, except that it shows separate average and median lines for female, male, and unknown gender first authors.) Discussion: The graphs show that female first authors consistently receive fewer citations than male authors for the first fifteen years. The trend is inverted with a small citation gap in the 15th to 34th years period. Q. Is the citation gap common across the vast majority of areas of research within NLP? Is the gap simply because more women work in areas that receive low numbers of citations (regardless of gender)? A. Figure FIGREF64 shows the most cited areas of research along with citation statistics split by gender of the first authors of corresponding papers. (This figure is similar to the areas of research graph seen earlier, except that it shows separate citation statistics for the genders.) Note that the figure includes rows for only those bigram and gender pairs with at least 30 AA’ papers (published between 1965 and 2016). Thus for some of the bigrams certain gender entries are not shown. Discussion: Numbers for an additional 32 areas are available online. Observe that in only about 12% (7 of the top 59) of the most cited areas of research, women received higher average citations than men. These include: sentiment analysis, information extraction, document summarization, spoken dialogue, cross lingual (research), dialogue, systems, language generation. (Of course, note that some of the 59 areas, as estimated using title term bigrams, are overlapping. Also, we did not include large scale in the list above because the difference in averages is very small and it is not really an area of research.) Thus, the citation gap is common across a majority of the high-citations areas within NLP. Conclusions This work examined the ACL Anthology to identify broad trends in productivity, focus, and impact. We examined several questions such as: who and how many of us are publishing? what are we publishing on? where and in what form are we publishing? and what is the impact of our publications? Particular attention was paid to the demographics and inclusiveness of the NLP community. 
Notably, we showed that only about 30% of first authors are female, and that this percentage has not improved since the year 2000. We also showed that, on average, female first authors are cited less than male first authors, even when controlling for academic age. We hope that recording citation and participation gaps across demographic groups will encourage our university, industry, and government research labs to be more inclusive and fair. Several additional aspects of the AA will be explored in future work (see the bottom of the blog posts). Acknowledgments This work was possible due to the helpful discussion and encouragement from a number of awesome people, including: Dan Jurafsky, Tara Small, Michael Strube, Cyril Goutte, Eric Joanis, Matt Post, Patrick Littell, Torsten Zesch, Ellen Riloff, Norm Vinson, Iryna Gurevych, Rebecca Knowles, Isar Nejadgholi, and Peter Turney. Also, a big thanks to the ACL Anthology team for creating and maintaining a wonderful resource.