id (string, 40 chars) | pid (string, 42 chars) | input (string, 8.37k–169k chars) | output (string, 1–1.63k chars)
---|---|---|---
eced6a6dffe43c28e6d06ab87eed98c135f285a3 | eced6a6dffe43c28e6d06ab87eed98c135f285a3_0 | Q: Do the authors report only on English language data?
Text: Introduction
Analysing sentiment from text is a well-known NLP problem, and several state-of-the-art tools can do this with reasonable accuracy. However, most existing tools perform well only on well-formatted text. Tweets, in contrast, are short, noisy, user-generated content that in many cases ( INLINEFORM0 ) does not follow proper grammatical structure. Tweets also contain numerous internet slang terms, abbreviations, URLs, emoticons, and unconventional capitalization. As a result, the accuracy of state-of-the-art NLP tools decreases sharply on them. In this project, we develop new features that capture the styles salient in short, informal user-generated content such as tweets. We achieve an F1-accuracy of INLINEFORM1 for predicting the sentiment of tweets in our data-set. We also propose a method to discover new sentiment terms from the tweets.
In section SECREF2 we present an analysis of the data-set. We describe the data pre-processing in section SECREF3. In section SECREF4 we describe how the feature set is extracted, the classification framework, and the tuning of parameters. In section SECREF5 we report the performance of our system and how the different features affect its accuracy. In section SECREF6 we describe how we harvest new sentiment terms using our framework, and also how we predict the strength of sentiment in a tweet. We conclude with some possible future directions of work in section SECREF7.
Data-analysis
Tweets are short messages, restricted to 140 characters in length. Due to the nature of this microblogging service (quick and short messages), people use acronyms, make spelling mistakes, and use emoticons and other characters that carry special meanings. The following is a brief terminology associated with tweets:
Our dataset contains tweets about `ObamaCare' in the USA collected during March 2010. It is divided into three subsets (train, dev, and test). Some tweets are manually annotated with one of the following classes:
positive, negative, neutral, unsure, and irrelevant
We ignore the tweets annotated as unsure or irrelevant. We present some preliminary statistics about the training and test data in Table TABREF5. We observe that the dataset is imbalanced: in the training set the ratio of positive to negative tweets is almost 1:2, and the test set is even more heavily skewed, with a ratio of less than 1:3. We handle this imbalance using the class prior parameters of the learning algorithm, discussed in detail in section SECREF38.
Data pre-processing
Since tweets are informal in nature, some pre-processing is required. Consider the following tweet.
“#Healthcare #Ins. Cigna denies #MD prescribed #tx 2 customers 20% of the time. - http://bit.ly/5PoQfo #HCR #Passit #ILDems #p2 PLS RT"
It is difficult to understand the content of the tweet unless it is normalized. We process all tweets through the following stages.
Normalization
Normalization is done as follows (a minimal sketch of the full pipeline appears after this list):
Removing patterns like 'RT', '@user_name', and URLs.
Tokenizing the tweet text using the NLTK BIBREF0 word tokenizer.
Removing stopwords using the NLTK stopwords list.
Rectifying informal or misspelled words using a normalization dictionary BIBREF1, e.g. “foundation” for “foudation” and “forgot” for “forgt”.
Expanding abbreviations using a slang dictionary; for example, “btw” is expanded to “by the way”.
Removing emoticons, while keeping the counts of positive and negative emoticons in each tweet as features. We make use of the emoticon dictionary (Table TABREF14) presented in BIBREF2.
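A minimal sketch of this pipeline is shown below. It assumes NLTK (with the `punkt` and `stopwords` data) is available and uses tiny placeholder dictionaries in place of the normalization, slang, and emoticon dictionaries cited above.

```python
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# Placeholder dictionaries standing in for the cited normalization (BIBREF1),
# slang, and emoticon (BIBREF2) dictionaries.
NORM_DICT = {"foudation": "foundation", "forgt": "forgot"}
SLANG_DICT = {"btw": "by the way"}
EMOTICONS = {":)": "positive", ":-)": "positive", ":(": "negative", ":-(": "negative"}
STOPWORDS = set(stopwords.words("english"))   # requires the NLTK 'stopwords' data

def normalize(tweet):
    # Remove retweet markers, user mentions, and URLs.
    text = re.sub(r"\bRT\b|@\w+|https?://\S+", " ", tweet)
    # Count emoticons (kept as features), then strip them from the text.
    pos = sum(text.count(e) for e, sense in EMOTICONS.items() if sense == "positive")
    neg = sum(text.count(e) for e, sense in EMOTICONS.items() if sense == "negative")
    for e in EMOTICONS:
        text = text.replace(e, " ")
    tokens = []
    for tok in word_tokenize(text.lower()):   # requires the NLTK 'punkt' data
        tok = NORM_DICT.get(tok, tok)         # rectify misspellings
        tok = SLANG_DICT.get(tok, tok)        # expand slang abbreviations
        if tok not in STOPWORDS:
            tokens.append(tok)
    return tokens, pos, neg
```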
Hashtag Segmentation
We segment a hashtag into a meaningful English phrase, and the `#' character is removed from the tweet text. For example, #killthebill is transformed into kill the bill.
To achieve this, we use a dictionary of English words: we recursively break the hashtagged phrase into segments and match the segments against the dictionary until we obtain a complete set of meaningful words. This is important since many users post tweets whose actual message is expressed in the form of terse hashtagged phrases.
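A sketch of this recursive segmentation is given below, using a tiny illustrative word set in place of a full English dictionary.

```python
# Dictionary-based hashtag segmentation: recursively split the hashtag body
# into prefixes found in an English word list. WORDS is a placeholder.
WORDS = {"kill", "the", "bill", "pass", "it"}

def segment(text):
    """Return a list of dictionary words covering `text`, or None if impossible."""
    if not text:
        return []
    for i in range(len(text), 0, -1):          # prefer the longest matching prefix
        prefix = text[:i]
        if prefix in WORDS:
            rest = segment(text[i:])
            if rest is not None:
                return [prefix] + rest
    return None

def segment_hashtag(hashtag):
    words = segment(hashtag.lstrip("#").lower())
    return " ".join(words) if words else hashtag.lstrip("#")

# segment_hashtag("#killthebill") -> "kill the bill"
```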
Processing URLs
The URLs embedded in a tweet are a good source of additional context for the short tweet content. Sometimes a tweet is too terse to comprehend from its text alone; a URL embedded in it can help us understand its context, and perhaps the sentiment expressed as well.
To leverage this additional source of information, we identify all URLs present in the tweets and crawl the linked web-pages using AlchemyAPI, which retrieves only the textual body of the article on a web-page. We analyze these article texts later on to obtain more context for the tweet.
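A rough sketch of this step is shown below. Since the AlchemyAPI interface is not reproduced here, a plain HTTP fetch with a naive tag-strip stands in for the article-body extraction, purely for illustration.

```python
import re
import requests

URL_RE = re.compile(r"https?://\S+")

def fetch_article_text(tweet, timeout=10):
    """Fetch the pages linked from a tweet and return a crude plain-text body."""
    texts = []
    for url in URL_RE.findall(tweet):
        try:
            html = requests.get(url, timeout=timeout).text
        except requests.RequestException:
            continue
        # Naive extraction: drop script/style blocks, then remaining tags.
        html = re.sub(r"(?s)<(script|style).*?</\1>", " ", html)
        texts.append(re.sub(r"<[^>]+>", " ", html))
    return " ".join(texts)
```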
Algorithmic Framework
We employ a supervised learning model using the manually labeled data as the training set and a collection of handcrafted features. In this section we describe the features and the classification model used for this task.
Feature Extraction
Table TABREF19 presents the set of features we use in our experiments. We use some basic features (commonly used for text classification tasks) as well as some advanced ones suited to this particular domain.
We use two basic features:
Parts of Speech (POS) tags: We use the NLTK POS tagger to tag the tweet texts BIBREF0. We use the counts of noun, adjective, adverb, and verb words in a tweet as POS features.
Prior polarity of the words: We use a polarity dictionary BIBREF3 to get the prior polarity of words. The dictionary contains positive, negative, and neutral words along with their polarity strength (weak or strong). The polarity of a word depends on its POS tag; for example, the word `excuse' is negative when used as a noun or adjective, but it carries a positive sense when used as a verb. We therefore use the tags produced by the NLTK POS tagger when selecting the prior polarity of a word from the dictionary. We also employ stemming (the Porter Stemmer implementation from NLTK) during the dictionary lookup to increase the number of matches. We use the counts of weak positive, weak negative, strong positive, and strong negative words in a tweet as features. A sketch of this lookup appears after this list.
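The sketch below illustrates how the prior-polarity counts might be computed; the entries in POLARITY are illustrative stand-ins for the subjectivity lexicon, and the coarse POS mapping is an assumption.

```python
from collections import Counter

import nltk                      # requires the 'averaged_perceptron_tagger' data
from nltk.stem import PorterStemmer

# Illustrative entries; the real lookup uses the polarity dictionary (BIBREF3),
# keyed here by (stem, coarse POS) -> (strength, orientation).
POLARITY = {
    ("excus", "n"): ("weak", "negative"),
    ("excus", "v"): ("weak", "positive"),
    ("good", "a"): ("weak", "positive"),
}
stemmer = PorterStemmer()

def coarse_pos(tag):
    # Map Penn Treebank tags to coarse classes (noun, verb, adjective, adverb).
    return {"N": "n", "V": "v", "J": "a", "R": "r"}.get(tag[:1], "other")

def polarity_counts(tokens):
    feats = Counter()
    for word, tag in nltk.pos_tag(tokens):
        entry = POLARITY.get((stemmer.stem(word.lower()), coarse_pos(tag)))
        if entry:
            strength, orientation = entry
            feats[strength + "_" + orientation] += 1   # e.g. "weak_positive"
    return feats
```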
We have also explored some advanced features that help improve sentiment detection on tweets.
Emoticons: We use the emoticon dictionary from BIBREF2 and count the positive and negative emoticons in each tweet.
URL sentiment: Since almost all the linked articles are written in well-formatted English, we analyze the sentiment of the first paragraph of each article using the Stanford Sentiment Analysis tool BIBREF4, which predicts a sentiment for each sentence. We calculate the fractions of sentences that are negative, positive, and neutral, and use these three values as features.
Hashtag: We count the number of hashtags in each tweet.
Capitalization: We assume that capitalization in a tweet is related to the degree of sentiment, so we count the number of capitalized words in each tweet.
Retweet: This is a boolean feature indicating whether the tweet is a retweet or not.
User Mention: A boolean feature indicating whether the tweet contains a user mention.
Negation: Words like `no', `not', and `won't' are called negation words since they negate the meaning of the word that follows them, as in `good' becoming `not good'. We detect all negation words in the tweets; if a negation word is followed by a polarity word, we negate the polarity of that word. For example, if `good' is preceded by `not', we change its polarity from `weak positive' to `weak negative'. A sketch of this procedure appears after this feature list.
Text feature: We use tf-idf based text features. We perform tf-idf scoring of the words and hashtags in each tweet, train a classifier on the tf-idf vectors, and use its predicted sentiment as a stacked prediction feature in the final classifier.
Target: We use the target of the tweet as a categorical feature for our classifier.
User: On a particular topic, a particular user will generally hold a single viewpoint (positive, negative, or neutral), and multiple posts from the same user within a short period will likely carry the same sentiment. We therefore use the user id as a categorical feature. On average there are INLINEFORM0 tweets per user in the dataset, and over INLINEFORM1 users in the train set express a single viewpoint (either positive or negative) across all their tweets. Hence we believe this feature captures a user's viewpoint on the topic.
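The sketch below illustrates the negation handling referenced in the feature list above; it assumes each token has an associated prior polarity (or None).

```python
# Negation words flip the polarity of the immediately following polarity word.
NEGATIONS = {"no", "not", "never", "won't", "don't", "can't"}
FLIP = {"positive": "negative", "negative": "positive", "neutral": "neutral"}

def apply_negation(tokens, polarities):
    """polarities[i] is e.g. ("weak", "positive"), or None for non-polar tokens."""
    adjusted = list(polarities)
    for i, tok in enumerate(tokens):
        if tok.lower() in NEGATIONS and i + 1 < len(tokens) and adjusted[i + 1]:
            strength, orientation = adjusted[i + 1]
            adjusted[i + 1] = (strength, FLIP[orientation])
    return adjusted

# apply_negation(["not", "good"], [None, ("weak", "positive")])
#   -> [None, ("weak", "negative")]
```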
Classifier
We experiment with the following machine learning classifiers. We train the models on the manually labeled data, using the features described above to predict the sentiment. We consider only the positive, negative, and neutral classes.
Multinomial Naive Bayes: Naive Bayes has been one of the most commonly used classifiers for text classification problems over the years. It assumes that the value of a particular feature is independent of the value of any other feature given the class variable; this independence assumption makes the classifier both simple and scalable. The Bayes classifier assigns a class label INLINEFORM0 for some k according to the following equation: DISPLAYFORM0
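For reference, the standard Naive Bayes decision rule (which the placeholder above presumably stands for) chooses

$$\hat{y} = \underset{k \in \{1,\dots,K\}}{\arg\max}\; p(C_k)\,\prod_{i=1}^{n} p(x_i \mid C_k),$$

where $p(C_k)$ is the prior probability of class $C_k$ and $p(x_i \mid C_k)$ is the likelihood of feature $x_i$ given that class.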
The assumptions on the feature distributions define the event model of the Naive Bayes classifier. We use the multinomial Naive Bayes classifier, which is suitable for discrete features (such as counts and frequencies).
Linear SVM: Support Vector Machines are non-probabilistic linear learning algorithms that, given training examples and their features, build a model to classify new data points into one of the possible classes. We use a support vector machine with stochastic gradient descent learning, where the gradient of the loss is estimated and the model is updated for each sample with a decreasing learning rate.
For this task we found that Multinomial Naive Bayes performs slightly better than Linear SVM; hence, in the evaluation we report accuracy with this classifier.
Parameter Tuning
Parameter tuning, or hyperparameter optimization, is an important step in model selection since it prevents the model from overfitting and optimizes its performance on an independent dataset. We perform hyperparameter optimization using grid search, i.e. an exhaustive search through a manually specified subset of the hyperparameter space of the learning algorithm. We select the `best parameters' by cross-validation on the training set and verify the improvement in accuracy on the validation set. Finally, we use the model with the best hyperparameters to make predictions on the test set.
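A sketch of this tuning step using scikit-learn is shown below; the feature matrix is synthetic and the parameter grid is illustrative rather than the exact grid used.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

# Synthetic count features standing in for the real feature matrix.
rng = np.random.RandomState(0)
X_train = rng.randint(0, 5, size=(200, 50))
y_train = rng.choice(["positive", "negative", "neutral"], size=200, p=[0.25, 0.5, 0.25])

param_grid = {
    "alpha": [0.1, 0.5, 1.0, 2.0],   # smoothing strength (illustrative values)
    "fit_prior": [True, False],      # whether to learn class priors from the data
}
search = GridSearchCV(MultinomialNB(), param_grid, scoring="f1_macro", cv=5)
search.fit(X_train, y_train)         # cross-validation on the training set

print("best parameters:", search.best_params_)
best_model = search.best_estimator_  # used for predictions on dev/test sets
```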
Evaluation and Analysis
Table TABREF39 shows the test results when features are added incrementally. We start with our basic model (with only POS tag features and word polarity features) and subsequently add various sets of features. First we add the emoticon features, which have little effect; this is reasonable since only 8 positive and 3 negative emoticons are detected (Table TABREF5) out of 40049 tokens, so the contribution of emoticons is negligible in this dataset. Then we add the hashtag and capitalization features and obtain an overall gain of 2% over the basic model. Adding the sentiment features from the URL articles gives an overall 6% improvement over the baseline. The Twitter-specific features and the user features further improve F1 by 12%. Finally, we add the TF-IDF feature, which improves the result considerably, and our sentiment classifier reaches its best classification results with an F1-accuracy of INLINEFORM0 , as shown in the table.
Analyzing the results for the different classes, we observe that the classifier works best for negative tweets. This can be explained by the number of training tweets per class, since the proportion of negative tweets is considerably higher in both the train and test sets, as mentioned in Section SECREF2.
Comparison with Stanford Sentiment Analysis Tool
In this section we compare the performance of our framework with an openly available state-of-the-art sentiment analysis tool. We choose the Stanford CoreNLP package as the baseline. It uses recursive deep models for sentiment analysis and achieves good accuracy ( INLINEFORM0 ) on formal corpora BIBREF4. However, on noisy and informal texts like tweets, its performance decreases sharply. We present the performance of the Stanford CoreNLP tool on the test dataset.
Comparing Table TABREF41 with Table TABREF39, we observe that our framework outperforms Stanford CoreNLP by a significant margin ( INLINEFORM0 ). This is because Stanford CoreNLP is not able to handle text with a lot of noise, informality, and slang/abbreviations, and it demonstrates the effectiveness of our framework.
Enhancements
Apart from sentiment prediction, we also present some extensions to our system.
Harvest New Sentiment Terms
We have used a static dictionary to obtain the prior polarity of a word, which helps detect the overall sentiment of a sentence. However, the usage of words varies depending on the conversation medium (e.g. informal social media, blogs, news media), context, and topic. For instance, the word `simple' is generally used in a positive sense, but consider its use in describing the storyline of a movie: a `simple storyline' probably hints at a negative sentiment. For a dynamic medium like Twitter, where the topic mix and word mix change often, a static dictionary of words with fixed polarity does not suffice. To obtain temporal and topic-specific sentiment terms, we make use of the tweets classified by our classifier.
We consider the words that appear in the positive, neutral, and negative tweets. A word that occurs very frequently in tweets with positive (negative) sentiment and hardly occurs in tweets with negative (positive) sentiment will probably have a positive (negative) orientation for that particular topic. To implement this hypothesis, we first count word frequencies in each tweet collection. Then, for each collection, we select the top INLINEFORM0 most frequent words and subtract the top INLINEFORM1 words of the other two collections. For example, in Algorithm SECREF42, to get new negative words we take the top INLINEFORM2 words from the negative collection, compare them with the top INLINEFORM3 words of the other two collections, and remove the words that co-appear. Some of the new negative terms we find are shown in Table TABREF43. We use the same procedure to find new positive and neutral words.
Algorithm SECREF42 (Harvest New Negative Words): given the negative, positive, and neutral tweet collections (negativeCol, positiveCol, neutralCol), it returns the new negative words harvested from the data collection.
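A sketch of this procedure is shown below; the cut-off `k` stands in for the frequency thresholds denoted by the inline placeholders above.

```python
from collections import Counter

def top_k_words(tweets, k):
    """Most frequent tokens across a collection of tokenized tweets."""
    counts = Counter(tok for tweet in tweets for tok in tweet)
    return {word for word, _ in counts.most_common(k)}

def harvest_negative_words(negative_col, positive_col, neutral_col, k=200):
    # Keep words frequent in the negative collection that are not also frequent
    # in the positive or neutral collections (i.e. drop words that co-appear).
    negative_top = top_k_words(negative_col, k)
    other_top = top_k_words(positive_col, k) | top_k_words(neutral_col, k)
    return negative_top - other_top
```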
Predicting Strength of Sentiment
Apart from predicting the sentiment class of a tweet, we are also interested in predicting the strength, or intensity, of the associated sentiment. Consider the following tweets.
t1: `GO TO YOUR US REPS OFFICE ON SATURDAY AND SAY VOTE NO! ON #HCR #Obama #cnn #killthebill #p2 #msnbc #foxnews #congress #tcot'
t2: `Thankfully the Democrat Party isn't too big to fail. #tcot #hcr'
Although both tweets carry negative sentiment towards `ObamaCare', their intensities are not the same: the first tweet (t1) is quite aggressive whereas the second (t2) is much less so. Here we propose a technique to predict the strength of sentiment.
We consider a few features of the tweet to do this. If our classifier predicts the sentiment to be neutral, we say that the strength of sentiment is 0. Otherwise, i.e. if it is either positive or negative, we increase the strength of sentiment for each of the following features of the tweet:
Number of capitalized words.
Number of strong positive words.
Number of strong negative words.
Number of weak positive words.
Number of weak negative words.
Each of these features contributes to the strength score of a tweet. Once calculated, we normalize the score into the range [0-5]. Finally, we assign a sign depending on the overall sentiment of the tweet: for example, if a tweet has a score of 3 and the overall predicted sentiment is negative, we give it a score of `-3', denoting that the tweet is moderately negative. That said, strength of sentiment is highly subjective: a tweet that appears very aggressive to one person may not appear that aggressive to another.
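A sketch of this scoring is given below; the uniform per-feature weights and the cap used for normalization are assumptions, since the exact weighting scheme is not specified.

```python
def sentiment_strength(sentiment, n_caps, n_strong_pos, n_strong_neg,
                       n_weak_pos, n_weak_neg, cap=5):
    """Signed strength score in [-cap, cap]; 0 for neutral tweets."""
    if sentiment == "neutral":
        return 0
    # Each intensity cue adds one unit to the raw score (uniform weights assumed).
    raw = n_caps + n_strong_pos + n_strong_neg + n_weak_pos + n_weak_neg
    score = min(raw, cap)                       # normalize into [0, cap]
    return score if sentiment == "positive" else -score
```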
Conclusion
In this report we have presented a sentiment analysis tool for Twitter posts. We have discussed the characteristics of Twitter that make existing sentiment analyzers perform poorly. The model proposed in this report addresses these challenges by using normalization methods and features specific to this medium. We show that using external knowledge beyond the tweet text (from the landing pages of URLs) and user features can significantly improve performance. We have presented experimental results and a comparison with state-of-the-art tools.
We have also presented two enhanced functionalities: discovering new sentiment terms and predicting the strength of sentiment. Due to the absence of labelled data, we could not evaluate the accuracy of these two enhancements. In the future, we plan to use them as a feedback mechanism to classify new tweets. | Yes
7fdeef2b1c8f6bd5d7c3a44e533d8aae2bbc155f | 7fdeef2b1c8f6bd5d7c3a44e533d8aae2bbc155f_0 | Q: What dataset of tweets is used? | tweets about `ObamaCare' in USA collected during march 2010
be074c880263f56e0d4a8f42d9a95d2d77ac2280 | be074c880263f56e0d4a8f42d9a95d2d77ac2280_0 | Q: What external sources of information are used? | landing pages of URLs
2a57fdc7e985311989b6829c1ceb201096e5c809 | 2a57fdc7e985311989b6829c1ceb201096e5c809_0 | Q: What linguistic features are used?
Text: Introduction
Analysing sentiment from text is a well-known NLP problem. Several state-of-the-art tools exist that can achieve this with reasonable accuracy. However most of the existing tools perform well on well-formatted text. In case of tweets, the user generated content is short, noisy, and in many cases ( INLINEFORM0 ) doesn't follow proper grammatical structure. Additionally, numerous internet slangs, abbreviations, urls, emoticons, and unconventional style of capitalization are found in the tweets. As a result, the accuracy of the state-of-the art NLP tools decreases sharply. In this project, we develop new features to incorporate the styles salient in short, informal user generated contents like tweets. We achieve an F1-accuracy of INLINEFORM1 for predicting the sentiment of tweets in our data-set. We also propose a method to discover new sentiment terms from the tweets.
In section SECREF2 we present analysis of the data-set. We describe the data-preprocessing that we have done in section SECREF3 . In section SECREF4 we describe how the feature-set was extracted, the classification framework, and also the tuning of the parameters for reasonable accuracy. In section SECREF5 we report the performance of our system. We also report how the different features affect the accuracy of the system. We describe how we harvest new sentiment terms using our framework in section SECREF6 . In this section we also present how we predict strength of sentiment from the tweets. We finally conclude with some possible future directions of work in section SECREF7 .
Data-analysis
Tweets are short messages, restricted to 140 characters in length. Due to the nature of this microblogging service (quick and short messages), people use acronyms, make spelling mistakes, use emoticons and other characters that express special meanings. Following is a brief terminology associated with tweets:
Our dataset contains tweets about `ObamaCare' in USA collected during march 2010. It is divided into three subsets (train, dev, and test). Some tweets are manually annotated with one of the following classes.
positive, negative, neutral, unsure, and irrelevant
We ignore the tweets which are annotated unsure, or irrelevant. We present some preliminary statistics about the training data and test data in Table TABREF5 . We observe that there is an imbalance in the dataset. In training dataset, the ratio of positive tweets to negative ones is almost 1:2. In test set, it is heavily skewed with the ratio being less than 1:3. We handle this data imbalance problem using class prior parameters of the learning algorithm. We discuss this is detail in section SECREF38 .
Data pre-processing
Since tweets are informal in nature, some pre-processing is required. Consider the following tweet.
“#Healthcare #Ins. Cigna denies #MD prescribed #tx 2 customers 20% of the time. - http://bit.ly/5PoQfo #HCR #Passit #ILDems #p2 PLS RT"
It is difficult to understand what is the content of the tweet unless it is normalized. We process all the tweets through the following stages.
Normalization
Normalization is done as follows:
Removing patterns like 'RT', '@user_name', url.
Tokenizing tweet text using NLTK BIBREF0 word tokenizer.
Making use of the stopwords list by NLTK to remove them from the tweet text.
Rectifying informal/misspelled words using a normalization dictionary BIBREF1 . For example, “foundation" for “foudation", “forgot" for “forgt".
Expanding abbreviations using a slang dictionary. For example, “btw" is expanded to “by the way".
Removing emoticons. However, we keep the number of positive and negative emoticons in each tweet as a feature. We make use of the emoticon dictionary (Table TABREF14 ) presented in BIBREF2 . A short sketch of this preprocessing pipeline follows this list.
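The following is a minimal Python sketch of the normalization steps above, assuming NLTK (with its tokenizer and stopword data) is installed; the normalization, slang, and emoticon dictionaries shown are tiny placeholders standing in for the actual resources (BIBREF1, BIBREF2) used in the paper.

```python
import re
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words("english"))
# Placeholder resources standing in for the normalization dictionary (BIBREF1),
# the slang dictionary, and the emoticon dictionary (BIBREF2).
NORMALIZATION_DICT = {"foudation": "foundation", "forgt": "forgot"}
SLANG_DICT = {"btw": "by the way"}
POSITIVE_EMOTICONS = {":)", ":-)", ":D"}
NEGATIVE_EMOTICONS = {":(", ":-(", ":'("}

def normalize_tweet(text):
    """Return cleaned tokens plus positive/negative emoticon counts for one tweet."""
    pos_emo = sum(text.count(e) for e in POSITIVE_EMOTICONS)
    neg_emo = sum(text.count(e) for e in NEGATIVE_EMOTICONS)
    text = re.sub(r"http\S+", " ", text)   # drop urls
    text = re.sub(r"@\w+", " ", text)      # drop @user_name mentions
    text = re.sub(r"\bRT\b", " ", text)    # drop the retweet marker
    for emo in POSITIVE_EMOTICONS | NEGATIVE_EMOTICONS:
        text = text.replace(emo, " ")      # remove emoticons from the text itself
    tokens = []
    for tok in word_tokenize(text.lower()):
        tok = NORMALIZATION_DICT.get(tok, tok)         # fix misspellings
        for word in SLANG_DICT.get(tok, tok).split():  # expand slang ("btw" -> "by the way")
            if word.isalpha() and word not in STOPWORDS:
                tokens.append(word)
    return tokens, pos_emo, neg_emo
```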
Hashtag Segmentation
We segment a hashtag into meaningful English phrases. The `#' character is removed from the tweet text. For example, #killthebill is transformed into kill the bill.
In order to achieve this, we use a dictionary of English words. We recursively break the hashtagged phrase into segments and match the segments against the dictionary until we get a complete set of meaningful words. This is important since many users tend to post tweets where the actual message is expressed in the form of terse hashtagged phrases.
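A minimal sketch of this recursive segmentation is shown below, assuming an English word list is available (the small set here is only illustrative; the paper does not specify which dictionary it uses).

```python
WORDS = {"kill", "the", "bill", "pass", "it"}  # stand-in for a full English word list

def segment_hashtag(tag, words=WORDS):
    """Recursively split a hashtag body into dictionary words; return None if impossible."""
    tag = tag.lstrip("#").lower()
    if not tag:
        return []
    # Try longer prefixes first so that, e.g., "theater" is not split as "the" + "ater".
    for i in range(len(tag), 0, -1):
        prefix = tag[:i]
        if prefix in words:
            rest = segment_hashtag(tag[i:], words)
            if rest is not None:
                return [prefix] + rest
    return None

print(segment_hashtag("#killthebill"))  # ['kill', 'the', 'bill']
```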
Processing URLs
The urls embedded in a tweet are a good source of additional context beyond the short tweet content itself. Sometimes tweets are too terse to comprehend from their text content alone. However, if there is a url embedded in the tweet, it can help us understand the context – perhaps the sentiment expressed as well.
In order to leverage this additional source of information, we identify all the urls present in the tweets and crawl the web-pages using AlchemyAPI. The API retrieves only the textual body of the article in a web-page. We analyze the article texts later on to get more context for the tweet.
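The paper retrieves article text with AlchemyAPI; since the exact API calls are not given, the sketch below shows only the url-extraction step plus a generic fetch using requests and BeautifulSoup as illustrative stand-ins, not the authors' actual tooling.

```python
import re
import requests
from bs4 import BeautifulSoup

URL_RE = re.compile(r"https?://\S+")

def extract_urls(tweet_text):
    """Find embedded urls in the raw tweet text (before normalization strips them)."""
    return URL_RE.findall(tweet_text)

def fetch_article_text(url):
    """Rough stand-in for the AlchemyAPI call that returns the textual body of a page."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)
```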
Algorithmic Framework
We employ a supervised learning model using the manually labeled data as training set and a collection of handcrafted features. In this section we describe the features and the classification model used in this task.
Feature Extraction
Table TABREF19 presents the set of features we use in our experiment. We have used some basic features (that are commonly used for text classification task) as well as some advanced ones suitable for this particular domain.
We use two basic features:
Parts of Speech (POS) tags: We use the POS tagger of NLTK to tag the tweet texts BIBREF0 . We use counts of noun, adjective, adverb, verb words in a tweet as POS features.
Prior polarity of the words: We use a polarity dictionary BIBREF3 to get the prior polarity of words. The dictionary contains positive, negative and neutral words along with their polarity strength (weak or strong). The polarity of a word depends on its POS tag. For example, the word `excuse' is negative when used as a `noun' or `adjective', but it carries a positive sense when used as a `verb'. We use the tags produced by the NLTK POS tagger while selecting the prior polarity of a word from the dictionary. We also employ stemming (the Porter Stemmer implementation from NLTK) while performing the dictionary lookup to increase the number of matches. We use the counts of weak positive words, weak negative words, strong positive words and strong negative words in a tweet as features. A short sketch of these two basic features follows.
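The following minimal sketch illustrates the two basic features, assuming the prior-polarity dictionary has been loaded as a Python dict keyed by (stem, coarse POS); that representation, and the entries shown, are assumptions for illustration rather than the actual format of BIBREF3.

```python
import nltk
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
# Assumed representation of the prior-polarity dictionary (BIBREF3):
# (stem, coarse_pos) -> one of {"strongpos", "weakpos", "strongneg", "weakneg"}
POLARITY = {("excus", "noun"): "weakneg", ("excus", "verb"): "weakpos"}
COARSE = {"NN": "noun", "JJ": "adj", "RB": "adverb", "VB": "verb"}

def basic_features(tokens):
    """Counts of POS categories and of weak/strong polarity words in one tweet."""
    feats = {"noun": 0, "adj": 0, "adverb": 0, "verb": 0,
             "strongpos": 0, "weakpos": 0, "strongneg": 0, "weakneg": 0}
    for word, tag in nltk.pos_tag(tokens):
        coarse = COARSE.get(tag[:2])
        if coarse is None:
            continue
        feats[coarse] += 1
        label = POLARITY.get((stemmer.stem(word.lower()), coarse))
        if label:
            feats[label] += 1
    return feats
```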
We have also explored some advanced features that help improve the detection of sentiment in tweets.
Emoticons: We use the emoticon dictionary from BIBREF2 , and count the positive and negative emoticons for each tweet.
The sentiment of url: Since almost all the articles are written in well-formatted English, we analyze the sentiment of the first paragraph of the article using the Stanford Sentiment Analysis tool BIBREF4 . It predicts sentiment for each sentence within the article. We calculate the fraction of sentences that are negative, positive, and neutral and use these three values as features.
Hashtag: We count the number of hashtags in each tweet.
Capitalization: We assume that capitalization in the tweets has some relationship with the degree of sentiment. We count the number of words with capitalization in the tweets.
Retweet: This is a boolean feature indicating whether the tweet is a retweet or not.
User Mention: A boolean feature indicating whether the tweet contains a user mention.
Negation: Words like `no', `not', `won't' are called negation words since they negate the meaning of the word that follows them. For example, `good' becomes `not good'. We detect all the negation words in the tweets. If a negation word is followed by a polarity word, then we negate the polarity of that word. For example, if `good' is preceded by a `not', we change the polarity from `weak positive' to `weak negative'.
Text Feature: We use tf-idf based text features to predict the sentiment of a tweet. We perform tf-idf based scoring of the words in a tweet and the hashtags present in it. We use the tf-idf vectors to train a classifier and predict the sentiment. This is then used as a stacked prediction feature in the final classifier. A short sketch of this stacked feature, together with the negation rule above, follows this feature list.
Target: We use the target of the tweet as a categorical feature for our classifier.
User: On a particular topic, a given user will generally hold a single viewpoint (positive, negative, or neutral). If there are multiple posts within a short period of time from a user, the posts will likely carry the same sentiment. We use the user id as a categorical feature. On average there are INLINEFORM0 tweets per user in the dataset, and over INLINEFORM1 users in the train set have expressed a single viewpoint for all their tweets (either positive or negative). Hence we believe this feature should be able to capture a user's viewpoint on the topic.
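The sketch below illustrates the negation rule and the stacked tf-idf feature from the list above; the negation word list is partial, and the choice of MultinomialNB as the inner tf-idf classifier is an assumption for illustration, since the paper does not name it.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

NEGATION_WORDS = {"no", "not", "never", "won't", "don't", "cannot"}
FLIP = {"weakpos": "weakneg", "weakneg": "weakpos",
        "strongpos": "strongneg", "strongneg": "strongpos"}

def apply_negation(tokens, polarity_of):
    """polarity_of maps a token to its prior polarity; flip the word after a negation."""
    labels = []
    for i, tok in enumerate(tokens):
        label = polarity_of.get(tok)
        if label and i > 0 and tokens[i - 1] in NEGATION_WORDS:
            label = FLIP[label]
        labels.append(label)
    return labels

def stacked_tfidf_feature(train_texts, train_labels, all_texts):
    """Train a tf-idf classifier and return its predictions as a stacked feature."""
    vec = TfidfVectorizer()
    clf = MultinomialNB().fit(vec.fit_transform(train_texts), train_labels)
    return clf.predict(vec.transform(all_texts))
```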
Classifier
We experiment with the following set of machine learning classifiers. We train the model with manually labeled data and use the above-described features to predict the sentiment. We consider only positive, negative and neutral classes.
Multinomial Naive Bayes : Naive Bayes has been one of the most commonly used classifiers for text classification problems over the years. The Naive Bayes classifier makes the assumption that the value of a particular feature is independent of the value of any other feature, given the class variable. This independence assumption makes the classifier both simple and scalable. A Bayes classifier assigns a class label INLINEFORM0 for some k according to the following equation: DISPLAYFORM0
The assumptions on the distributions of features define the event model of the Naive Bayes classifier. We use the multinomial Naive Bayes classifier, which is suitable for discrete features (like counts and frequencies).
Linear SVM : Support Vector Machines are linear non-probabilistic learning algorithms that, given training examples and their features, build a model to classify new data points into one of the probable classes. We have used a support vector machine with stochastic gradient descent learning, where the gradient of the loss is estimated and the model is updated at each sample with decreasing strength.
For this task we found that Multinomial Naive Bayes performs slightly better than the linear SVM, hence in the evaluation we report accuracy with this classifier.
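A minimal sketch of the two classifiers on toy count features is shown below; the class-prior values are placeholders for handling the class imbalance discussed earlier (scikit-learn orders classes alphabetically), not the values used in the paper.

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import SGDClassifier

# Toy count features (rows = tweets, columns = feature counts) and labels.
X = np.array([[2, 0, 1], [0, 3, 1], [1, 1, 0], [0, 2, 2]])
y = np.array(["positive", "negative", "neutral", "negative"])

# Multinomial Naive Bayes with explicit class priors (negative, neutral, positive).
nb = MultinomialNB(class_prior=[0.5, 0.25, 0.25]).fit(X, y)

# Linear SVM trained with stochastic gradient descent (hinge loss).
svm = SGDClassifier(loss="hinge").fit(X, y)

print(nb.predict(X), svm.predict(X))
```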
Parameter Tuning
Parameter tuning or hyperparameter optimization is an important step in model selection since it prevents the model from overfitting and optimizes its performance on an independent dataset. We perform hyperparameter optimization using grid search, i.e., an exhaustive search through a manually specified subset of the hyperparameter space for a learning algorithm. We do grid search and set the `best parameters' by cross-validation on the training set, and verify the improvement in accuracy on the validation set. Finally we use the model with the best hyperparameters to make predictions on the test set.
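A minimal grid-search sketch on synthetic count features is shown below; the parameter grid and data are illustrative only, not the grid or features actually searched in the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.integers(0, 5, size=(60, 8))                      # toy count features
y = rng.choice(["positive", "negative", "neutral"], 60)   # toy labels

param_grid = {"alpha": [0.1, 0.5, 1.0],   # smoothing strength
              "fit_prior": [True, False]}
search = GridSearchCV(MultinomialNB(), param_grid, cv=5, scoring="f1_macro")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```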
Evaluation and Analysis
Table TABREF39 shows the test results when features are added incrementally. We start with our basic model (with only POS tag features and word polarity features) and subsequently add various sets of features. First we add emoticon features, which have little effect. This is reasonable since only 8 positive emoticons and 3 negative emoticons are detected (Table TABREF5 ) out of 40049 tokens, so the significance of emoticons can be neglected in this dataset. Then we add hashtag and capitalization features, and obtain an overall gain of 2% over the basic model. By adding the sentiment features from URL articles, we get an overall 6% improvement over the baseline. Further Twitter-specific features and user features improve the F1 by 12%. Last, we add the TF-IDF feature, which improves the result substantially; our sentiment classifier reaches its best classification results with an F1-accuracy of INLINEFORM0 , as shown in the table.
Analyzing the results for different classes, we observe that the classifier works best for negative tweets. This can be explained by the number of training tweets for each class, since the proportion of negative tweets was considerably higher in both train and test sets, as mentioned in Section SECREF2 .
Comparison with Stanford Sentiment Analysis Tool
In this section we compare the performance of our framework with an openly available state-of-the-art sentiment analysis tool. We choose the Stanford CoreNLP package as the baseline. It uses recursive deep models to do sentiment analysis and achieves good accuracy ( INLINEFORM0 ) for formal corpora BIBREF4 . However, for noisy and informal texts like tweets, its performance decreases sharply. We present the performance of the Stanford CoreNLP tool over the test dataset.
Comparing table TABREF41 with table TABREF39 , we observe that our framework outperforms Stanford CoreNLP by a significant margin ( INLINEFORM0 ). This owes to the fact that Stanford CoreNLP is not able to handle text with a lot of noise, lack of formality, and slang/abbreviations. This demonstrates the effectiveness of our framework.
Enhancements
Apart from sentiment prediction, we also present some extensions to our system.
Harvest New Sentiment Terms
We have used a static dictionary to get the prior polarity of a word, which helps detect the overall sentiment of a sentence. However, the usage of words varies depending on the conversation medium (e.g., informal social media, blogs, news media), context and topic. For instance, the word `simple' is generally used in a positive sense, but consider its use while describing the storyline of a movie. In this context, a `simple storyline' will probably hint at a negative sentiment. For a dynamic medium like Twitter, where the topic mix and word mix change often, having a static dictionary of words with fixed polarity will not suffice. To get temporal and topic-specific sentiment terms, we make use of the tweets classified by our classifier.
We consider the words that appear in the positive, neutral and negative tweets. A word that very frequently occurs in tweets with positive (negative) sentiment and hardly occurs in tweets with negative (positive) sentiment will probably have a positive (negative) orientation for that particular topic. To implement this hypothesis, we first count the word frequencies in each tweet collection. Then, for each collection, we select the top INLINEFORM0 most frequent words and remove those that also appear among the top INLINEFORM1 words of the other two collections. For example, in Algorithm SECREF42 , if we want to get new negative words, we find the words in the top INLINEFORM2 of the negative collection, compare them with the words that appear in the top INLINEFORM3 of the other two collections, and remove the words that co-appear. Some of the new negative terms we find are shown in Table TABREF43 . We use the same procedure to find new positive and neutral words.
Algorithm (Harvest New Negative Words). Input: negativeCol, positiveCol, neutralCol. Output: new negative words from the data collection.
For each word among the top INLINEFORM0 most frequent words in negativeCol, drop the word if it also appears among the top INLINEFORM0 words of positiveCol or neutralCol; the remaining words are returned as new negative terms.
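A minimal sketch of this harvesting procedure is given below, following the description above; the cutoff k and the example variable names are placeholders.

```python
from collections import Counter

def top_k_words(tweets, k):
    """tweets: list of token lists; return the set of the k most frequent words."""
    counts = Counter(tok for tweet in tweets for tok in tweet)
    return {word for word, _ in counts.most_common(k)}

def harvest_new_terms(target_col, other_col_a, other_col_b, k=200):
    """Words frequent in the target collection but absent from the other two top lists."""
    target = top_k_words(target_col, k)
    others = top_k_words(other_col_a, k) | top_k_words(other_col_b, k)
    return target - others

# e.g. new_negative = harvest_new_terms(negative_tweets, positive_tweets, neutral_tweets)
```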
Predicting Strength of Sentiment
Apart from predicting the sentiment class of tweets we are also interested in predicting the strength or intensity of the sentiment associated. Consider the following tweets.
t1: `GO TO YOUR US REPS OFFICE ON SATURDAY AND SAY VOTE NO! ON #HCR #Obama #cnn #killthebill #p2 #msnbc #foxnews #congress #tcot'
t2: `Thankfully the Democrat Party isn't too big to fail. #tcot #hcr'
Although both tweets have negative sentiment towards `ObamaCare', their intensity is not the same. The first tweet (t1) is quite aggressive, whereas the other one (t2) is much less so. Here we propose a technique to predict the strength of sentiment.
We consider a few features of the tweet in order to do this. If our classifier predicts the sentiment to be neutral, we say that the strength of sentiment is 0. However, if it is not, i.e., if it is either positive or negative, we increase the strength score for each of the following features of the tweet.
Number of capitalized words.
Number of strong positive words.
Number of strong negative words.
Number of weak positive words.
Number of weak negative words.
Each of these features contributes to the strength score of a tweet. Once calculated, we normalize the score to the range [0-5]. Finally we assign sentiment polarity depending on the overall sentiment of the tweet. For example, if a tweet has a score of 3 and the overall predicted sentiment is negative, then we give it a score of `-3', denoting that the tweet is moderately negative. Having said that, strength of sentiment is highly subjective. A tweet can appear very aggressive to one person while appearing much less aggressive to another.
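A minimal sketch of this strength heuristic is shown below; capping the raw count at 5 is one plausible reading of the normalization to [0-5], since the exact scaling is not specified in the text.

```python
def sentiment_strength(feats, overall_sentiment):
    """feats: counts of capitalized words and strong/weak polarity words in the tweet."""
    if overall_sentiment == "neutral":
        return 0
    raw = (feats["capitalized"] + feats["strongpos"] + feats["strongneg"]
           + feats["weakpos"] + feats["weakneg"])
    score = min(raw, 5)   # assumed normalization into [0, 5]: a simple cap
    return score if overall_sentiment == "positive" else -score

print(sentiment_strength({"capitalized": 4, "strongpos": 0, "strongneg": 2,
                          "weakpos": 0, "weakneg": 1}, "negative"))   # -5
```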
Conclusion
In this report we have presented a sentiment analysis tool for Twitter posts. We have discussed the characteristics of Twitter that make existing sentiment analyzers perform poorly. The model proposed in this report addresses these challenges by using normalization methods and features specific to this medium. We show that using external knowledge beyond the tweet text (from the landing pages of URLs) and user features can significantly improve performance. We have presented experimental results and a comparison with state-of-the-art tools.
We have presented two enhanced functionalities, i.e. discovering new sentiment terms and predicting strength of the sentiment. Due to the absence of labelled data we couldn't discuss the accuracies of these two enhancements. In the future, we plan to use these as feedback mechanism to classify new tweets. | Parts of Speech (POS) tags, Prior polarity of the words, Capitalization, Negation, Text Feature |
53807f435d33fe5ce65f5e7bda7f77712194f6ab | 53807f435d33fe5ce65f5e7bda7f77712194f6ab_0 | Q: What are the key issues around whether the gold standard data produced in such an annotation is reliable?
Text: Introduction
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
Introduction ::: Study overview
All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting on the study and not the actual study itself, and many papers either do not discuss such details at all or do so without sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and, by extension, the validity of the classifier.
Literature review and motivation ::: A different kind of “black-boxing” in machine learning
In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15.
Literature review and motivation ::: Content analysis
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways.
Literature review and motivation ::: Meta-research and methods papers in linguistics and crowdsourcing
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.
Another related area are meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively ”fraudsters” or ”cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined. BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
Literature review and motivation ::: The data documentation movements
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47.
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, ”model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
Data and methods ::: Data: machine learning papers performing classification tasks on Twitter data
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets; or 2) that the results were so narrow they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Sampling to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repositories, originally for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which mostly selected from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. However, the fact that such papers are routinely discussed in both the academic literature and the popular press means that issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), an analysis of the publishers and publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo.
Data and methods ::: Labeling team, training, and workflow
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of an university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a purely majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The papers labeled for the first two weeks were part of a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined.
Data and methods ::: Second round verification and reconciliation
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and changes made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final. For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57.
Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would be concerning had we had a “yes” for such a variable, but found no such cases. We recoded questions about pre-screening for crowdwork platforms (implied by using crowdworkers in original human annotation source) and the number of human annotators.
We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorf's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were considerably higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52.
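As a concrete illustration of this metric, the sketch below computes percent total agreement over toy labels (the values shown are not the study's data).

```python
def percent_total_agreement(labels_per_item):
    """labels_per_item: one list of labels per item, one label per annotator."""
    agree = sum(1 for labels in labels_per_item if len(set(labels)) == 1)
    return 100.0 * agree / len(labels_per_item)

# Toy example: three papers, five labelers each; two items are fully agreed on.
print(percent_total_agreement([["yes"] * 5,
                               ["yes", "yes", "no", "yes", "yes"],
                               ["no"] * 5]))   # ~66.7
```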
Data and methods ::: Raw and normalized information scores
We quantified the information about training data in papers, developing a raw and a normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), annotator training, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided.
For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, they received one point per question if they included information for each of the two questions about crowdworkers if the project used crowdworkers, and one point if they reported inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation. Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47.
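A minimal sketch of this scoring scheme is given below; the field names are placeholders for the schema's actual labels.

```python
ALWAYS_APPLICABLE = ["annotation_source", "annotator_training", "formal_instructions",
                     "number_of_annotators", "multiple_overlap", "link_to_dataset"]

def information_scores(paper):
    """paper: dict mapping schema fields to values; 'no information' means unreported."""
    reported = lambda field: paper.get(field, "no information") != "no information"
    raw = sum(reported(f) for f in ALWAYS_APPLICABLE)
    max_raw = len(ALWAYS_APPLICABLE)
    if paper.get("used_crowdworkers"):
        raw += reported("crowdworker_prescreening") + reported("crowdworker_compensation")
        max_raw += 2
    if paper.get("multiple_annotators_per_item"):
        raw += reported("interannotator_agreement")
        max_raw += 1
    return raw, raw / max_raw   # raw score and normalized score
```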
Findings ::: Original classification task
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if it is used to make a prediction in a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions).
Findings ::: Labels from human annotation
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case from our examples we decided was human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would be not applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
Findings ::: Used original human annotation and external human annotation
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of papers in our corpus did so. We also found 10 papers in which external and original human annotation was combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset.
Findings ::: Original human annotation source
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. Experts / professionals was far higher than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
Findings ::: Number of human annotators
Our instructions for the question about the number of human annotators were not precise and had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to put such a number, leaving the field blank for no information. Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses to indicate whether any information about the number of human annotators was present. These are both important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
Findings ::: Formal definitions and instructions
Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators share the same misunderstanding. We defined two levels: giving no instructions beyond the text of a question, then giving definitions for each label and/or concrete examples. The paper must describe or refer to instructions given (or include them in supplemental materials); otherwise, we categorized it as "No Information". Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
Findings ::: Training for human annotators
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and with our complicated schema.
Findings ::: Pre-screening for crowdwork platforms
Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
Findings ::: Multiple annotator overlap and reporting inter-annotator agreement
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics, but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (about 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates.
Findings ::: Reported crowdworker compensation
Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found zero mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
Findings ::: Link to dataset available
Our final question was about whether the paper contained a link to the dataset containing the original human annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with 8 papers (10.81%) using original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
Paper information scores
The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
Paper information scores ::: Overall distributions of information scores
Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
Paper information scores ::: Information scores by corpus and publication type
Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The righthand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings indicating a wide range of factors potentially at play.
Paper information scores ::: Information scores by publisher
Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. In papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association of Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that it represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per-publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect it indicates differences between all academic authors and those who post ArXiv postprints.
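A per-publisher comparison of this kind can be sketched as below, assuming a per-paper table that already contains the normalized score along with hypothetical publisher and corpus (ArXiv vs. Scopus) columns; this mirrors the structure of the figure rather than reproducing it.

```python
# Sketch: boxplots of normalized information scores by publisher, ordered by median.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

papers = pd.read_csv("papers_with_scores.csv")  # hypothetical; has normalized_score, publisher, corpus

order = (papers.groupby("publisher")["normalized_score"]
               .median().sort_values(ascending=False).index)

sns.boxplot(data=papers, x="publisher", y="normalized_score", hue="corpus", order=order)
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.show()
```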
Concluding discussion ::: Findings
In the sample of ML application publications using Twitter data we examined, we found a wide range in levels of documentation about methodological practices in human annotation. While we hesitate to overly generalize our findings to ML at large, these findings do indicate concern, given how crucial the quality of training data is and the difficulty of standardizing human judgment. Yet they also give us hope, as we found a number of papers we considered to be excellent cases of reporting the processes behind their datasets. About half of the papers using original human annotation engaged in some form of multiple overlap, and about 70% of the papers that did multiple overlap reported metrics of inter-annotator agreement. The distribution of annotation information scores was roughly bimodal, suggesting two distinct populations of those who provide substantially more and less information about training data in their papers. We do see preliminary evidence that papers in our sample published by certain publishers/venues tended to have papers with far more information than others (e.g. ACM and ACL at the top end, followed closely by journal publishers Springer and Elsevier, with IEEE and AAAI proceedings at the lower end). Preprints exclusively published on ArXiv also had the widest range of scores.
Concluding discussion ::: Implications
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard BIBREF56.
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward or complex than others. We saw a wide range of papers that were engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one `right' answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that study, but would not make sense for a majority of papers we examined. If a checklist was created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although these can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed.
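As a small illustration of such scaffolding (an agreement metric plus automatic flagging of items for reconciliation), the sketch below computes Cohen's kappa for two annotators using scikit-learn; the library choice, file, and column names are assumptions made for illustration, not a description of any particular platform or of our own pipeline.

```python
# Sketch: agreement metric plus flagging of disagreements for reconciliation.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

labels = pd.read_csv("annotations.csv")  # one row per item, one column per annotator (hypothetical)

kappa = cohen_kappa_score(labels["annotator_1"], labels["annotator_2"])
print(f"Cohen's kappa: {kappa:.3f}")

# Items where the two annotators disagree are routed to a reconciliation discussion.
to_reconcile = labels[labels["annotator_1"] != labels["annotator_2"]]
print(f"{len(to_reconcile)} items flagged for reconciliation")
```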
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one side, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others.
Concluding discussion ::: Limitations and future work
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not submitted to review, preprints in peer review, and postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a set of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record information about what exactly the rates of inter-annotator agreement were. In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or a screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered, but did not implement, having our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes).
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
Appendix
The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
Appendix ::: Dataset/corpus details ::: Keyword labels
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. The list of keywords were then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
Appendix ::: Dataset/corpus details ::: Distribution of paper types in the corpus
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiV is a version of a paper published in a more traditional venue, and if so, whether the ArXiV version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiV also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
Appendix ::: Dataset/corpus details ::: Distribution of publishers in corpus
For each paper in the Scopus samples and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpus. The distribution of papers by years is shown in table TABREF49.
Appendix ::: Methods and analysis details ::: Inter-annotator agreement
In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows for each question, what percent of items were given the same label by all annotators (with number of annotators being recoded for the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in such a calculation, which would have increased such rates even more, but this would be somewhat disingenuous.
We report percent complete agreement among all raters for each question; for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project. This is due to the fact that our data does not necessarily meet the particular assumptions of two other widely used statistical estimators for 3+ raters. Fleiss's kappa and Krippendorf's alpha are widely used because they take into account the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions.
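A minimal sketch of this metric, assuming a long-format table with one row per (paper, question) pair and one column per rater (names are illustrative):

```python
# Sketch: percent complete agreement, i.e. the share of items on which every
# rater gave the identical label, computed per question.
import pandas as pd

ratings = pd.read_csv("round_one_labels.csv")  # hypothetical export
annotator_cols = ["rater_1", "rater_2", "rater_3", "rater_4", "rater_5"]

def percent_complete_agreement(df, cols):
    answered = df[cols].dropna(how="all")           # skip items no rater answered
    all_agree = answered.nunique(axis=1, dropna=True) == 1
    return all_agree.mean()

for question, group in ratings.groupby("question"):
    print(question, round(percent_complete_agreement(group, annotator_cols), 3))
```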
The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity.
We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper.
Appendix ::: Methods and analysis details ::: Changes to the coding schema
Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various cases of examples that illustrated difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55).
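This kind of backward-compatible consolidation amounts to mapping granular labels onto broader ones, as in the sketch below; the granular label strings are illustrative stand-ins rather than our actual earlier options.

```python
# Sketch: consolidating granular training labels into broader categories.
import pandas as pd

labels = pd.read_csv("annotations.csv")  # hypothetical export with an annotator_training column

consolidate_training = {
    "in-person training session": "some training details",
    "written feedback on practice items": "some training details",
    "pilot round with debriefing": "some training details",
    "no information": "no information",
    "unsure": "unsure",
}
labels["training_consolidated"] = labels["annotator_training"].map(consolidate_training)

# Anything that fails to map is surfaced for manual review rather than silently recoded.
unmapped = labels.loc[labels["training_consolidated"].isna(), "annotator_training"].unique()
if len(unmapped):
    print("Needs manual review:", unmapped)
```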
In addition, three questions were added halfway through the first round of the annotation process. First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round.
Appendix ::: Software used
All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63.
Appendix ::: Coding schema, examples, and instructions
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Yes
No
Unsure
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf)
4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data?
Yes
No
Unsure
If they are using external human annotated data, skip the remaining questions:
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper's authors
Academic experts / professionals in the area
No information in the paper
Other
Unsure
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say
Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated, if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Unsure
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Unsure
Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Location qualification
Project-specific prescreening: researchers had known ground truth and only invited those who performed well on it
No information
Unsure
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
No
Unsure
No information
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorf's alpha, Cohen's kappa, F1 score, or other things.
Yes
No
Unsure
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
Yes
No
Unsure
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Yes
No
Unsure | only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics, low-effort responses from crowdworkers |
2ec9c1590c96f17a66c7d4eb95dc5d3a447cb973 | 2ec9c1590c96f17a66c7d4eb95dc5d3a447cb973_0 | Q: How were the machine learning papers from ArXiv sampled?
Text: Introduction
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
Introduction ::: Study overview
All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or discuss them without sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier.
Literature review and motivation ::: A different kind of “black-boxing” in machine learning
In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15.
Literature review and motivation ::: Content analysis
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
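As a toy sketch of the reconciliation step, the snippet below takes a majority vote across coders and flags ties for discussion-based resolution; the file and column names are hypothetical, and many projects (including ours) resolve disagreements by discussion rather than by mechanical vote.

```python
# Sketch: majority-vote reconciliation with ties flagged for discussion.
import pandas as pd

coded = pd.read_csv("coded_items.csv")  # one row per item, one column per coder (hypothetical)
coder_cols = ["coder_1", "coder_2", "coder_3"]

def majority_or_flag(row):
    counts = row[coder_cols].value_counts()
    top = counts.iloc[0]
    # A unique strict majority resolves the item; otherwise it goes to discussion.
    if (counts == top).sum() == 1 and top > len(coder_cols) / 2:
        return counts.index[0]
    return "NEEDS_DISCUSSION"

coded["reconciled"] = coded.apply(majority_or_flag, axis=1)
```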
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways.
Literature review and motivation ::: Meta-research and methods papers in linguistics and crowdsourcing
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
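A minimal sketch of the self-agreement strategy, assuming a response log in which some items are shown to the same worker twice; the field names and the 0.8 threshold are illustrative assumptions.

```python
# Sketch: per-worker self-agreement rates from repeated items.
import pandas as pd

responses = pd.read_csv("crowd_responses.csv")  # worker_id, item_id, label, pass_number (hypothetical)

firsts = responses[responses["pass_number"] == 1]
repeats = responses[responses["pass_number"] == 2]
merged = firsts.merge(repeats, on=["worker_id", "item_id"], suffixes=("_first", "_repeat"))

self_agreement = (
    (merged["label_first"] == merged["label_repeat"])
    .groupby(merged["worker_id"]).mean()
)
flagged = self_agreement[self_agreement < 0.8]  # threshold is a judgment call
print(flagged)
```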
Literature review and motivation ::: The data documentation movements
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47.
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, ”model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
Data and methods ::: Data: machine learning papers performing classification tasks on Twitter data
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets, or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Restricting the sample to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repository, originally created for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (cs.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least one of “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which were mostly drawn from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
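The keyword filtering can be illustrated with the sketch below, applied to a hypothetical metadata dump with title and abstract columns; the actual queries were run through the ArXiV and Scopus search interfaces, so this only mirrors the filtering logic.

```python
# Sketch: case-insensitive keyword filtering on titles and abstracts.
import pandas as pd

meta = pd.read_csv("arxiv_metadata.csv")  # hypothetical metadata dump
text = meta["title"].fillna("") + " " + meta["abstract"].fillna("")

ml_terms = text.str.contains(r"machine learning|classif|supervi", case=False, regex=True)
twitter_terms = text.str.contains(r"twitter|tweet", case=False, regex=True)

corpus = meta[ml_terms & twitter_terms]
print(len(corpus), "candidate papers")
```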
ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. However, the fact that such papers are routinely discussed in both the academic literature and the popular press means that issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF47), an analysis of the publishers and publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo.
Data and methods ::: Labeling team, training, and workflow
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the one who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience with machine learning in both classroom and applied settings. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiv papers in the corpus. For two weeks, the 30 papers sampled from Scopus were selected instead. The five students independently reviewed and labeled the same papers each week, each recording labels in a separate web-based spreadsheet. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a pure majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The first two weeks constituted a training period, in which the team worked on a separate set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, both of which were further refined.
Data and methods ::: Second round verification and reconciliation
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes had been made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels from the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined them in the second round, the paper's labels were considered final. For papers where there was substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57.
Finally, we cleaned up issues with implicit or blank values using rule-based scripts. We learned that our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper did not use crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would have been concerning had we had a “yes” for such a variable, but we found no such cases. We also recoded the questions about pre-screening for crowdwork platforms (implied by using crowdworkers as the original human annotation source) and the number of human annotators.
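The following is a minimal sketch of this kind of rule-based cleanup, assuming the reconciled labels sit in a pandas DataFrame; the column names and label values are hypothetical stand-ins, and the actual scripts differ in detail.

```python
# Minimal sketch of rule-based label cleanup; column names and values are hypothetical.
import pandas as pd
import numpy as np

labels = pd.read_csv("reconciled_labels.csv")  # hypothetical export of the final labels

# If crowdworkers were not the annotation source, crowdworker-specific questions stay blank.
crowd_sources = ["Amazon Mechanical Turk", "Other crowdworking platform"]
not_crowdwork = ~labels["annotation_source"].isin(crowd_sources)
labels.loc[not_crowdwork, ["crowdworker_compensation", "crowdwork_prescreening"]] = np.nan

# Recode the free-text number of annotators into presence/absence of that information.
labels["reported_number_of_annotators"] = np.where(
    labels["number_of_annotators"].notna(), "yes", "no information"
)
```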
We measured interrater reliability using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorff's alpha, and our data does not fit the assumptions of those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were considerably higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident in our labeling process, especially because these individual ratings were followed by an expert-adjudicated, discussion-based reconciliation process, rather than a simple count of majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52.
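As a minimal illustration of the metric, the sketch below computes percent total agreement from a table with one row per paper and one column per annotator. This data layout and the function name are our own illustration, not the project's actual code.

```python
# Minimal sketch of mean percent total agreement; the data layout is a hypothetical
# example with one row per paper and one column per annotator for a single question.
import pandas as pd

def percent_total_agreement(ratings: pd.DataFrame) -> float:
    """Proportion of items for which every annotator gave the same label."""
    all_same = ratings.nunique(axis=1, dropna=False) == 1
    return all_same.mean()

# Three annotators labeling four papers on one question:
q = pd.DataFrame({"a1": ["yes", "no", "yes", "unsure"],
                  "a2": ["yes", "no", "no", "unsure"],
                  "a3": ["yes", "no", "yes", "unsure"]})
print(percent_total_agreement(q))  # 0.75
```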
Data and methods ::: Raw and normalized information scores
We quantified the information about training data in papers by developing a raw and a normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether pre-screening was used for crowdwork platforms and whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), whether annotators received training, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided.
For raw scores, papers involving original human annotation received one point for each of the six items mentioned above that they reported. In addition, they received one point for each of the two questions about crowdworkers if the project used crowdworkers, and one point for reporting inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score for that paper. We only calculated scores for papers involving original human annotation. Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiv papers, we determined whether the PDF was a preprint not (yet) published in another venue, a postprint identical in content to a published version, or a preprint version of a paper published elsewhere with different content. For all Scopus papers and ArXiv postprints, we also determined the publisher. We detail these in appendix SECREF47.
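The following is a minimal sketch of this scoring logic, assuming one row per paper with indicator columns for each reporting item; the column names are hypothetical and the actual analysis scripts differ.

```python
# Minimal sketch of the raw and normalized information scores; the DataFrame layout
# and column names are hypothetical illustrations of the scoring rules described above.
import pandas as pd

BASE_ITEMS = ["reported_annotation_source", "reported_training", "reported_instructions",
              "reported_number_of_annotators", "reported_multiple_overlap",
              "reported_dataset_link"]
CROWD_ITEMS = ["reported_prescreening", "reported_compensation"]  # only scored if crowdworkers used
OVERLAP_ITEM = "reported_interannotator_agreement"                # only scored if multiple overlap used

def information_scores(row: pd.Series) -> pd.Series:
    raw = sum(int(row[item]) for item in BASE_ITEMS)
    max_raw = len(BASE_ITEMS)
    if row["used_crowdworkers"]:
        raw += sum(int(row[item]) for item in CROWD_ITEMS)
        max_raw += len(CROWD_ITEMS)
    if row["used_multiple_overlap"]:
        raw += int(row[OVERLAP_ITEM])
        max_raw += 1
    return pd.Series({"raw_score": raw, "normalized_score": raw / max_raw})

# scores = papers.apply(information_scores, axis=1) would add both scores for each paper.
```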
Findings ::: Original classification task
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, they did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, and in which the performance of a task increases with additional data. This includes simple linear regressions, for example, although there is much debate about whether and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction within a defined set of classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category, meaning they did not give enough detail for us to make this determination, or they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions for it, which is why the counts in Table 2 add up to 143 (and explains some other apparent disparities in later questions).
Findings ::: Labels from human annotation
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case we decided was human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, such cases of self-annotation would all be considered external rather than original human annotation, and so the subsequent questions about the annotation process would be not applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
Findings ::: Used original human annotation and external human annotation
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions about original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human-annotated dataset.
Findings ::: Original human annotation source
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next largest category was “no information,” which was found in almost a quarter of the papers using original human annotation. The experts / professionals category was far larger than we expected, although we took any claim of expertise at face value. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
Findings ::: Number of human annotators
Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to record that number, leaving the field blank when no information was given. Most of the disagreement came from differences in how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence or absence of information about the number of human annotators. Both aspects are important to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. In our experience, papers discussing the number of annotators typically fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that was either the papers' authors or recruited directly by the authors, and that tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
Findings ::: Formal definitions and instructions
Our next question was about whether instructions or guidelines with formal definitions or examples were reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators make the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, and giving definitions for each label and/or concrete examples. The paper had to describe or refer to the instructions given (or include them in supplemental materials); otherwise, we categorized it as “no information”. Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition but only implied that it informed the labeling, which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
Findings ::: Training for human annotators
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in an area and with our complicated schema.
Findings ::: Pre-screening for crowdwork platforms
Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if crowdworkers pass, they are invited to annotate more items. Five of the 11 papers using crowdworkers reported using this approach. Platforms also often allow location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
Findings ::: Multiple annotator overlap and reporting inter-annotator agreement
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required that papers state whether all or some of the items were labeled by multiple labelers; otherwise, “no information” was recorded. Then, for papers that did have multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we kept the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record the exact statistic, although we did notice wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (about 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates.
Findings ::: Reported crowdworker compensation
Crowdworking is often used because of its low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found that none mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
Findings ::: Link to dataset available
Our final question was about whether the paper contained a link to the dataset containing the original human-annotated training data. Note that this question was only answered for papers involving some kind of original or novel human annotation; papers that exclusively re-used an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that the data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with only 8 papers (10.81%) that used original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding involved in creating original human-annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
Paper information scores
The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even when bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which gives evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
Paper information scores ::: Overall distributions of information scores
Figure FIGREF34 shows histograms for raw and normalized information scores, both of which suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, one centered around raw scores of 1-2 and normalized scores of 0.25, and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
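For readers who wish to reproduce this kind of summary, the following is a minimal sketch of how the descriptive statistics and histograms could be computed with the libraries listed in the appendix; the file name and column names are hypothetical placeholders, not the project's actual analysis code.

```python
# Minimal sketch of the summary statistics and histograms; file and column names
# are hypothetical placeholders for the per-paper information scores.
import pandas as pd
import matplotlib.pyplot as plt

scores = pd.read_csv("information_scores.csv")
print(scores[["raw_score", "normalized_score"]].describe())  # mean, 50% (median), std, min/max

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].hist(scores["raw_score"], bins=range(0, 10))
axes[0].set_xlabel("raw information score")
axes[1].hist(scores["normalized_score"], bins=10)
axes[1].set_xlabel("normalized information score")
plt.tight_layout()
plt.show()
```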
Paper information scores ::: Information scores by corpus and publication type
Figure FIGREF37 shows two boxplots of normalized information scores based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints never (or not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The right-hand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, while journal articles have a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play.
Paper information scores ::: Information scores by publisher
Figure FIGREF39 shows boxplots of normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median score of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that they represent a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between academic authors in general and those who post ArXiv postprints.
Concluding discussion ::: Findings
In the sample of ML application publications using Twitter data we examined, we found a wide range in levels of documentation about methodological practices in human annotation. While we hesitate to overly generalize our findings to ML at large, these findings do give cause for concern, given how crucial the quality of training data is and how difficult it is to standardize human judgment. Yet they also give us hope, as we found a number of papers we considered to be excellent cases of reporting the processes behind their datasets. About half of the papers using original human annotation engaged in some form of multiple overlap, and about 70% of the papers that did multiple overlap reported metrics of inter-annotator agreement. The distribution of annotation information scores was roughly bimodal, suggesting two distinct populations of those who provide substantially more and less information about training data in their papers. We do see preliminary evidence that papers in our sample published by certain publishers/venues tended to have far more information than others (e.g. ACM and ACL at the top end, followed closely by journal publishers Springer and Elsevier, with IEEE and AAAI proceedings at the lower end). Preprints exclusively published on ArXiv also had the widest range of scores.
Concluding discussion ::: Implications
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take them out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than by relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement of some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard. BIBREF56
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward than others. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between cases where there is expected to be only one `right' answer and cases where there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that particular study, but which would not make sense to require of the majority of papers we examined. If a checklist were created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation of disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although these efforts can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed.
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On the one hand, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. On the other hand, even though struggling over the labels and the labeling process is an opportunity, our time- and labor-intensive process did involve a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating those of others.
Concluding discussion ::: Limitations and future work
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not submitted to review, to preprints in peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a subset of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record the actual rates of inter-annotator agreement reported. In particular, we did not record information about the reconciliation or adjudication process for projects involving multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or a screenshot of the labeling interface were included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave these for future work, but also found that each additional question made the task more difficult for labelers. We also considered having our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer review processes), but did not.
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
Appendix
The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
Appendix ::: Dataset/corpus details ::: Keyword labels
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities of the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support of or opposition to a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead, who has extensive knowledge and experience of the various disciplines in the social computing space, then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding of all papers, with the goal of creating a typology of keywords. The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then the two NLP methodologies of sentiment analysis and topic identification. The keyword “social networks” was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature or tried to predict it. The figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
Appendix ::: Dataset/corpus details ::: Distribution of paper types in the corpus
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer review (with different content than the published version) or a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publisher. ArXiv also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version on ArXiv to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
Appendix ::: Dataset/corpus details ::: Distribution of publishers in corpus
For each paper in the Scopus sample and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Networks Analysis and Mining) with 4 papers. Six venues had 3 publications each, all of which were conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, broken out by papers in the ArXiv and Scopus corpora. The distribution of papers by year is shown in table TABREF49.
Appendix ::: Methods and analysis details ::: Inter-annotator agreement
In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows, for each question, what percent of items were given the same label by all annotators (with the number of annotators recoded to the presence or absence of any information). Cases where no annotator answered a question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; counting them as agreements would have increased these rates even further, but doing so would be somewhat disingenuous.
We report percent complete agreement among all raters for each question: for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project, because our data does not necessarily meet the assumptions of the two statistical estimators widely used for 3+ raters. Fleiss's kappa and Krippendorff's alpha are widely used because they take into account the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions.
The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity.
We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper gave ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely that it is not present in the paper.
Appendix ::: Methods and analysis details ::: Changes to the coding schema
Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various cases of examples that illustrated difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating them into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively few papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55).
In addition, three questions were added halfway through the first round of the annotation process. First, a question was added about whether the paper used an external human-annotated dataset, which was added to clarify the question about whether original human annotation was used. This was added after we discussed a paper in which an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few occurrences across our dataset. All papers had all questions answered in the second round.
Appendix ::: Software used
All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63.
Appendix ::: Coding schema, examples, and instructions
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts; focus on that.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Yes
No
Unsure
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to tell, then leave as unsure (example: 1801.06294.pdf)
4.5. Used external human annotation data: Did the project use an already existing dataset of human-labeled data?
Yes
No
Unsure
If they exclusively used external human annotated data (with no original annotation), skip the remaining questions:
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper's authors
Academic experts / professionals in the area
No information in the paper
Other
Unsure
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say
Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated, if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Unsure
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Unsure
Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Location qualification
Project-specific prescreening: researchers had known ground truth items and only invited crowdworkers who labeled them correctly
No information
Unsure
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
No
Unsure
No information
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorff's alpha, Cohen's kappa, F1 score, or other things.
Yes
No
Unsure
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
Yes
No
Unsure
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Yes
No
Unsure | sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph), filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive), filtered to papers in which the title or abstract included at least “twitter” or “tweet” (case insensitive) |
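For readers who want to work with this schema programmatically, the sketch below encodes a few of the questions above as a simple data structure with a validation helper. This is a minimal, hypothetical illustration: the key names and the helper function are not part of the spreadsheet-based workflow we actually used.

```python
# Minimal, illustrative encoding of part of the coding schema as data.
# The structure and helper below are a hypothetical sketch, not the tooling
# actually used in this project.
SCHEMA = {
    "original_classification_task": {"options": ["yes", "no", "unsure"]},
    "labels_from_human_annotation": {"options": ["yes", "no", "unsure"]},
    "used_original_human_annotation": {"options": ["yes", "no", "unsure"]},
    "original_human_annotation_source": {
        "options": [
            "amazon mechanical turk",
            "other crowdworking platform",
            "paper's authors",
            "academic experts / professionals",
            "no information",
            "other",
            "unsure",
        ]
    },
    "multiple_annotator_overlap": {
        "options": ["yes, for all items", "yes, for some items", "no",
                    "unsure", "no information"]
    },
}

def validate_labels(labels: dict) -> list:
    """Return a list of (question, value) pairs that are not valid options."""
    problems = []
    for question, value in labels.items():
        allowed = SCHEMA.get(question, {}).get("options")
        if allowed is not None and value not in allowed:
            problems.append((question, value))
    return problems
```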
208e667982160cfbce49ef49ad96f6ab094292ac | 208e667982160cfbce49ef49ad96f6ab094292ac_0 | Q: What are the core best practices of structured content analysis?
Text: Introduction
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
Introduction ::: Study overview
All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting on the study and not the actual study itself, and many papers either do not discuss such details at all or do so without sufficient detail to make such determinations. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and by extension, the validity of the classifier.
Literature review and motivation ::: A different kind of “black-boxing” in machine learning
In the introduction, we noted that training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15.
Literature review and motivation ::: Content analysis
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
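As a small illustration of the reconciliation step described above, the following sketch implements simple majority-vote reconciliation and returns ties for discussion-based resolution. The function and the example labels are hypothetical rather than taken from any particular study.

```python
from collections import Counter

# Majority-vote reconciliation (one common approach); ties are returned as
# None so they can be resolved through discussion instead.
def majority_vote(labels):
    counts = Counter(labels).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return None  # tie: send to discussion-based reconciliation
    return counts[0][0]

print(majority_vote(["relevant", "relevant", "non-relevant"]))  # relevant
print(majority_vote(["positive", "negative"]))                  # None (tie)
```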
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways.
Literature review and motivation ::: Meta-research and methods papers in linguistics and crowdsourcing
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.
Another related area is meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively “fraudsters” or “cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorff's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
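To make the self-agreement idea concrete, the sketch below computes a raw self-agreement rate from an annotator's first-pass and repeated labels. Note that the studies cited above report chance-corrected statistics such as Krippendorff's alpha rather than this raw proportion, and the label vectors here are hypothetical.

```python
# Raw self-agreement rate: the share of items for which the same annotator
# gives the same label when shown the item again later. The paired label
# lists below are hypothetical.
def self_agreement_rate(first_pass, second_pass):
    assert len(first_pass) == len(second_pass)
    matches = sum(a == b for a, b in zip(first_pass, second_pass))
    return matches / len(first_pass)

first = ["pos", "neg", "neg", "pos", "neutral"]
repeat = ["pos", "neg", "pos", "pos", "neutral"]
print(self_agreement_rate(first, repeat))  # 0.8
```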
Literature review and motivation ::: The data documentation movements
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47.
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, “model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
Data and methods ::: Data: machine learning papers performing classification tasks on Twitter data
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most papers were non-applied / theoretical papers or papers re-using public pre-labeled datasets; or 2) that the results were so narrow they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Sampling to papers using Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repository, originally created for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (cs.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included at least one of “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which were mostly drawn from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
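As a rough illustration of these keyword filters, the sketch below applies the same case-insensitive patterns to locally stored title and abstract records. The `papers` list and its field names are hypothetical stand-ins; the actual filtering was performed through the search interfaces described above.

```python
import re

# Case-insensitive patterns mirroring the corpus filters described above:
# at least one ML-related keyword AND at least one Twitter-related keyword
# in the title or abstract. The `papers` list and its field names are
# hypothetical stand-ins for records returned by the search interfaces.
ML_PATTERN = re.compile(r"machine learning|classif|supervi", re.IGNORECASE)
TWITTER_PATTERN = re.compile(r"twitter|tweet", re.IGNORECASE)

def in_corpus(paper: dict) -> bool:
    text = f"{paper.get('title', '')} {paper.get('abstract', '')}"
    return bool(ML_PATTERN.search(text)) and bool(TWITTER_PATTERN.search(text))

papers = [
    {"title": "Supervised classification of tweets", "abstract": "..."},
    {"title": "A survey of graph embeddings", "abstract": "..."},
]
corpus = [p for p in papers if in_corpus(p)]
print(len(corpus))  # 1
```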
ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that papers posted are not likely representative samples of ML research is worth considering when reflecting on the generalizability of our findings. However, the fact that such papers are routinely discussed in both the academic literature and the popular press means that issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF45) and an analysis of the publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo.
Data and methods ::: Labeling team, training, and workflow
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of a university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected. The five students independently reviewed and labeled the same papers each week, each using a separate web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a pure majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The first two weeks were a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined.
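A minimal sketch of the disagreement check at the heart of this workflow is below. The data layout (five independent labels per question per paper) mirrors our process, but the function, field names, and example record are hypothetical; the actual synthesis was done in spreadsheets by the team leader.

```python
from collections import defaultdict

# Hypothetical layout: labels[paper_id][question] is a list of the five
# independent labels recorded for that question. This only illustrates
# flagging disagreements for the weekly reconciliation meeting.
def find_disagreements(labels: dict) -> dict:
    flagged = defaultdict(list)
    for paper_id, questions in labels.items():
        for question, answers in questions.items():
            if len(set(answers)) > 1:  # labelers did not all agree
                flagged[paper_id].append(question)
    return dict(flagged)

example = {
    "arxiv:1234.5678": {
        "original_classification_task": ["yes", "yes", "yes", "yes", "yes"],
        "labels_from_human_annotation": ["yes", "yes", "no", "yes", "unsure"],
    }
}
print(find_disagreements(example))
# {'arxiv:1234.5678': ['labels_from_human_annotation']}
```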
Data and methods ::: Second round verification and reconciliation
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and changes made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels in the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final. For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57.
Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned that our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found we had cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would have been concerning had we had a “yes” for such a variable, but we found no such cases. We recoded the questions about pre-screening for crowdwork platforms (which only applies when the original human annotation source involved crowdworkers) and about the number of human annotators.
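The kind of rule-based cleanup described here can be illustrated with a short pandas sketch. The column names and values below are hypothetical simplifications of our spreadsheet fields, not the actual scripts we ran.

```python
import pandas as pd

# Hypothetical column names approximating the spreadsheet fields.
df = pd.DataFrame({
    "annotation_source": ["paper's authors", "amazon mechanical turk"],
    "crowdworker_compensation": ["no", "no"],
    "prescreening": ["", "project-specific prescreening"],
})

# If a paper did not use crowdworkers, the crowdworker-specific questions
# should be blank (not applicable) rather than "no".
crowd_sources = {"amazon mechanical turk", "other crowdworking platform"}
not_crowd = ~df["annotation_source"].isin(crowd_sources)
df.loc[not_crowd, ["crowdworker_compensation", "prescreening"]] = pd.NA
print(df)
```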
We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorff's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were substantially higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52.
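For clarity, the sketch below shows how mean percent total agreement is computed under this definition: an item counts as agreement only if every labeler gave the same label. The data layout and example values are hypothetical.

```python
# Mean percent total agreement: an item counts as agreement only if every
# labeler gave the same label; the metric is the share of such items,
# expressed as a percentage. Example values are hypothetical.
def percent_total_agreement(items):
    """items: a list of lists, one inner list of labels per item."""
    if not items:
        return 0.0
    fully_agreed = sum(1 for labels in items if len(set(labels)) == 1)
    return 100.0 * fully_agreed / len(items)

round_one = [["yes"] * 5, ["yes", "no", "yes", "yes", "yes"], ["no"] * 5]
print(percent_total_agreement(round_one))  # ~66.7 (2 of 3 items fully agreed)
```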
Data and methods ::: Raw and normalized information scores
We quantified the information about training data in papers, developing a raw and normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported are only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), annotator training, whether formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, and whether a link to a publicly-available dataset was provided.
For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, they received one point per question if they included information for each of the two questions about crowdworkers if the project used crowdworkers, and one point if they reported inter-annotator metrics if the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score. We only calculated scores for papers involving original human annotation. Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47.
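The scoring rules can be summarized in a short sketch. The boolean field names below are hypothetical simplifications of our coding schema (True means the paper reported that piece of information), and the function illustrates the scoring logic rather than the exact script we used.

```python
# A sketch of the raw and normalized information scores. The boolean fields
# are hypothetical simplifications: True means the paper reported that item.
BASE_ITEMS = ["annotation_source", "annotator_training", "formal_instructions",
              "number_of_annotators", "multiple_overlap", "dataset_link"]

def information_scores(paper: dict) -> tuple:
    raw = sum(1 for item in BASE_ITEMS if paper.get(item))
    max_raw = len(BASE_ITEMS)
    if paper.get("used_crowdworkers"):
        raw += int(bool(paper.get("reported_prescreening")))
        raw += int(bool(paper.get("reported_compensation")))
        max_raw += 2
    if paper.get("multiple_annotators_per_item"):
        raw += int(bool(paper.get("reported_interannotator_agreement")))
        max_raw += 1
    return raw, raw / max_raw  # (raw score, normalized score)

example = {"annotation_source": True, "formal_instructions": True,
           "multiple_overlap": True, "multiple_annotators_per_item": True,
           "reported_interannotator_agreement": True}
print(information_scores(example))  # (4, 0.5714...)
```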
Findings ::: Original classification task
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if the regression was used to make a prediction in a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (and which also explains some other seeming disparities in later questions).
Findings ::: Labels from human annotation
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case from our examples we decided was human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would be not applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
Findings ::: Used original human annotation and external human annotation
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation was combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset.
Findings ::: Original human annotation source
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. Experts / professionals was far higher than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
Findings ::: Number of human annotators
Our instructions for the question about the number of human annotators were not precise, and this question had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to put such a number, leaving the field blank for no information. Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence or absence of information about the number of human annotators. These are both important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
Findings ::: Formal definitions and instructions
Our next question was about whether instructions or guidelines with formal definitions or examples are reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators make the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, then giving definitions for each label and/or concrete examples. The paper must describe or refer to instructions given (or include them in supplemental materials), otherwise, we categorized it "No Information". Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
Findings ::: Training for human annotators
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in the area and familiarity with our complicated schema.
Findings ::: Pre-screening for crowdwork platforms
Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
Findings ::: Multiple annotator overlap and reporting inter-annotator agreement
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
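For reference, the agreement statistics these papers report are straightforward to compute with standard libraries. The sketch below computes Cohen's kappa for two annotators using scikit-learn; the label vectors are hypothetical.

```python
# Cohen's kappa for two annotators, one commonly reported inter-annotator
# agreement metric, computed with scikit-learn. Label vectors are hypothetical.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["pos", "neg", "neg", "pos", "neutral", "pos"]
annotator_2 = ["pos", "neg", "pos", "pos", "neutral", "neg"]

print(round(cohen_kappa_score(annotator_1, annotator_2), 2))  # 0.45
```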
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics, but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (about 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates.
Findings ::: Reported crowdworker compensation
Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found zero mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
Findings ::: Link to dataset available
Our final question was about whether the paper contained a link to the dataset containing the original human annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with 8 papers (10.81%) using original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
Paper information scores
The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
Paper information scores ::: Overall distributions of information scores
Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both the extremes and the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
Paper information scores ::: Information scores by corpus and publication type
Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The righthand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings as indicating a wide range of factors potentially at play.
Paper information scores ::: Information scores by publisher
Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. Among papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that this category represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect this indicates differences between all academic authors and those who post ArXiv postprints.
Concluding discussion ::: Findings
In the sample of ML application publications using Twitter data we examined, we found a wide range in levels of documentation about methodological practices in human annotation. While we hesitate to overly generalize our findings to ML at large, these findings do indicate concern, given how crucial the quality of training data is and the difficulty of standardizing human judgment. Yet they also give us hope, as we found a number of papers we considered to be excellent cases of reporting the processes behind their datasets. About half of the papers using original human annotation engaged in some form of multiple overlap, and about 70% of the papers that did multiple overlap reported metrics of inter-annotator agreement. The distribution of annotation information scores was roughly bimodal, suggesting two distinct populations of those who provide substantially more and less information about training data in their papers. We do see preliminary evidence that papers in our sample published by certain publishers/venues tended to have papers with far more information than others (e.g. ACM and ACL at the top end, followed closely by journal publishers Springer and Elsevier, with IEEE and AAAI proceedings at the lower end). Preprints exclusively published on ArXiv also had the widest range of scores.
Concluding discussion ::: Implications
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take it out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard. BIBREF56
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some of it far more straightforward and some far more complex. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one “right” answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that study, but would not make sense for a majority of papers we examined. If a checklist was created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although these can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed.
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one side, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others.
Concluding discussion ::: Limitations and future work
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers posted on ArXiV (in addition to 30 papers sampled from Scopus), and ArXiV is likely not a representative sample of academic publications. ArXiV papers are self-submitted and represent a range of publication stages, from drafts not submitted to review, to preprints in peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a subset of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We did not record the actual rates of inter-annotator agreement reported in papers. In particular, we did not record information about the reconciliation or adjudication process for projects involving multiple annotator overlap (e.g. majority rule, discussing to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation for annotators other than crowdworkers, whether instructions or a screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave these for future work, but also found that each additional question made the task more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes).
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
Appendix
This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
Appendix ::: Dataset/corpus details ::: Keyword labels
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. The list of keywords was then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
Appendix ::: Dataset/corpus details ::: Distribution of paper types in the corpus
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiV is a version of a paper published in a more traditional venue, and if so, whether the ArXiV version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiV also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
Appendix ::: Dataset/corpus details ::: Distribution of publishers in corpus
For each paper in the Scopus samples and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpora. The distribution of papers by year is shown in table TABREF49.
Appendix ::: Methods and analysis details ::: Inter-annotator agreement
In the first round, five annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows, for each question, what percent of items were given the same label by all annotators (with the number of annotators question recoded to the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in this calculation; including them would have increased these rates even further, but doing so would be somewhat disingenuous.
We report percent complete agreement among all raters for each question: that is, for each question, the percent of items that were given the same rating by all raters. We believe this is a more appropriate and straightforward metric for our project, because our data does not necessarily meet the assumptions of the two other widely used statistical estimators for three or more raters. Fleiss's kappa and Krippendorf's alpha are widely used because they take into account the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior probability of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions.
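As a concrete illustration, the following is a minimal sketch of how percent complete agreement for a single question could be computed; the column names and toy labels are illustrative stand-ins rather than our actual schema, and items with no applicable answer are excluded, mirroring the description above.

```python
# Minimal sketch: percent complete agreement for one question across raters.
# Column names and labels are illustrative, not the project's actual schema.
import pandas as pd

def percent_total_agreement(labels: pd.DataFrame) -> float:
    """labels: rows are items (papers), columns are raters.
    Returns the proportion of items where every rater who answered gave
    the same label; items with no answers at all are excluded."""
    answered = labels.dropna(how="all")
    unanimous = answered.apply(lambda row: row.dropna().nunique() == 1, axis=1)
    return unanimous.mean()

# Toy example with three raters and five items:
toy = pd.DataFrame({
    "rater_1": ["yes", "no", "yes", "unsure", "no"],
    "rater_2": ["yes", "no", "no", "unsure", "no"],
    "rater_3": ["yes", "no", "yes", "unsure", "no"],
})
print(percent_total_agreement(toy))  # 0.8 -- four of five items are unanimous
```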
The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity.
We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper.
Appendix ::: Methods and analysis details ::: Changes to the coding schema
Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately halfway through this round of labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and examples that illustrated difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus “no information”, “other”, and “unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively small number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and “unsure” (see Table TABREF55).
In addition, three questions were added halfway through the first round of the annotation process. First, a question about whether the paper used an external human-annotated dataset was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few occurrences across our dataset. All papers had all questions answered in the second round.
Appendix ::: Software used
All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63.
Appendix ::: Coding schema, examples, and instructions
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a continuous variable is not. Predicting income or age brackets is classification, predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-use of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Yes
No
Unsure
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf)
4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data?
Yes
No
Unsure
If they are using external human annotated data, skip the remaining questions:
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper's authors
Academic experts / professionals in the area
No information in the paper
Other
Unsure
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say
Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated, if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Unsure
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Unsure
Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Location qualification
Project-specific prescreening: researchers had known ground truth and only invited workers who performed well on those items
No information
Unsure
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
No
Unsure
No information
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorf's alpha, Cohen's kappa, F1 score, or other things.
Yes
No
Unsure
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
Yes
No
Unsure
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Yes
No
Unsure | “coding scheme” is defined, coders are trained with the coding scheme, Training sometimes results in changes to the coding scheme, calculation of “inter-annotator agreement” or “inter-rater reliability.”, there is a process of “reconciliation” for disagreements |
35eb8464e934a2769debe14148667c61bf1da243 | 35eb8464e934a2769debe14148667c61bf1da243_0 | Q: In what sense is data annotation similar to structured content analysis?
Text: Introduction
Machine learning (ML) has become widely used in many academic fields, as well as across the private and public sector. Supervised machine learning is particularly prevalent, in which training data is collected for a set of entities with known properties (a “ground truth” or “gold standard”), which is used to create a classifier that will make predictions about new entities of the same type. Supervised ML requires high-quality training data to produce high-quality classifiers. “Garbage In, Garbage Out” is a longstanding aphorism in computing about how flawed input data or instructions will produce flawed outputs. BIBREF0, BIBREF1 However, contemporary ML research and education tends to focus less on obtaining and validating such a training dataset, with such considerations often passed over in major textbooks BIBREF2, BIBREF3, BIBREF4. The predominant focus is typically on what is done with the training data to produce a classifier, with heavy emphasis on mathematical foundations and routine use of clean and tidy “toy” datasets. The process of creating a “gold standard” or “ground truth” dataset is routinely black-boxed. Many papers in ML venues are expected to use a standard, public training dataset, with authors comparing various performance metrics on the same dataset. While such a focus on what is done to a training dataset may be appropriate for theoretically-oriented basic research in ML, this is not the case for supervised ML applications.
Introduction ::: Study overview
All approaches of producing a training dataset involve some form of human judgment, albeit at varying levels of granularity. In this paper, we investigate and discuss a wide range of issues and concerns around the curation of human-labeled or human-annotated data, in which one or more individuals make discrete assessments of items. We report from a study in which a team of six labelers systematically examined a corpus of supervised machine learning application papers in social computing, specifically those that classified tweets from Twitter for various purposes. For each paper, we recorded what the paper does or does not state about the training data used to produce the classifier presented in the paper. The bulk of the papers we examined were a sample of preprints or postprints published on ArXiV.org, plus a smaller set of published papers sampled from Scopus. We determined whether such papers involved an original classification task using supervised ML, whether the training data labels were produced from human annotation, and if so, the source of the human-labeled dataset (e.g. the paper's authors, Mechanical Turk, recruited experts, no information given, etc.). For all papers in which an original human-labeled dataset was produced, we then made a series of further determinations, including if definitions and/or examples were given to labelers, if labelers independently labeled the same items, if inter-rater reliability metrics were presented, if compensation details for crowdworkers were reported, if a public link to the dataset was available, and more.
As our research project was a human-labeling project studying other human-labeling projects, we took care in our own practices. We only have access to the paper reporting about the study and not the actual study itself, and many papers either do not discuss such details at all or do not provide sufficient detail to make a determination. For example, many papers did note that the study involved the creation of an original human-labeled dataset, but did not specify who labeled it. For some of our items, one of the most common labels we gave was “no information” — which is a concerning issue, given how crucial such information is in understanding the validity of the training dataset and, by extension, the validity of the classifier.
Literature review and motivation ::: A different kind of “black-boxing” in machine learning
In the introduction, we noted training data is frequently black-boxed in machine learning research and applications. We use the term “black-boxed” in a different way than it is typically invoked in and beyond the FAT* community, where it often refers to interpretability. In that sense, “black-boxing” means that even for experts who have access to the training data and code which created the classifier, it is difficult to understand why the classifier made each decision. In social science and humanities work on “black-boxing” of ML (and other “algorithmic” systems), there is often much elision between issues of interpretability and intentional concealment, as Burrell BIBREF5 notes. A major focus is on public accountability BIBREF6, where many problematic issues can occur behind closed doors. This is even the case with relatively simple forms of analytics and automation — such as if-then statements, linear regressions, or rule-based expert systems BIBREF7, BIBREF8.
In contrast, we are concerned with what is and is not taken for granted when developing a classifier. This use is closer to how Latour & Woolgar used it in an ethnographic study of scientific laboratories BIBREF9. They discuss how equipment like a mass spectrometer would typically be implicitly trusted to turn samples into signals. However, when the results were drastically unexpected, it could be a problem with the machine or a fundamental breakthrough. Scientists and technicians would have to “open up the black box,” changing their relationship to the equipment to determine if the problem was with the equipment or the prevailing theory. In this view, black-boxing is a relational concept, not an objective property. It is about the orientation people have to the same social-technical systems they routinely work with and rely upon. “Opening up the black box” is not about digging into technical or internal details per se, but a gestalt shift in whether the output of a system is implicitly taken for granted or open for further investigation.
In this view, black-boxing is not inherently problematic. The question is more about who gets to be skeptical about data and who is obligated to suspend disbelief, which are also raised in discussions of open science & reproducibility BIBREF10. Operationalization, measurement, and construct validity have long been crucial and contested topics in the social sciences. Within quantitative sub-fields, it is common to have extensive debates about the best way to define and measure a complex concept (e.g. “intelligence”). From a qualitative and Science & Technology Studies perspective, there is extensive work on the practices and implications of various regimes of measurement BIBREF11, BIBREF12, BIBREF13, BIBREF14. In ML, major operationalization decisions can implicitly occur in data labeling. Yet as Jacobs & Wallach note, “[i]n computer science, it is particularly rare to articulate the distinctions between constructs and their operationalizations” BIBREF15. This is concerning, because “many well-studied harms [in ML] are direct results of a mismatch between the constructs purported to be measured and their operationalizations” BIBREF15.
Literature review and motivation ::: Content analysis
Creating human-labeled training datasets for machine learning often looks like content analysis, a well-established methodology in the humanities and the social sciences (particularly literature, communication studies, and linguistics), which also has versions used in the life, ecological, and medical sciences. Content analysis has taken many forms over the past century, from more positivist methods that formally establish structural ways of evaluating content to more interpretivist methods that embrace ambiguity and multiple interpretations, such as grounded theory BIBREF16. The intersection of ML and interpretivist approaches is outside of the scope of this article, but it is an emerging area of interest BIBREF17.
Today, structured content analysis (also called “closed coding”) is used to turn qualitative or unstructured data of all kinds into structured and/or quantitative data, including media texts, free-form survey responses, interview transcripts, and video recordings. Projects usually involve teams of “coders” (also called “annotators”, “labelers”, or “reviewers”), with human labor required to “code”, “annotate”, or “label” a corpus of items. (Note that we use such terms interchangeably in this paper.) In one textbook, content analysis is described as a “systematic and replicable” BIBREF18 method with several best practices: A “coding scheme” is defined, which is a set of labels, annotations, or codes that items in the corpus may have. Schemes include formal definitions or procedures, and often include examples, particularly for borderline cases. Next, coders are trained with the coding scheme, which typically involves interactive feedback. Training sometimes results in changes to the coding scheme, in which the first round becomes a pilot test. Then, annotators independently review at least a portion of the same items throughout the entire process, with a calculation of “inter-annotator agreement” or “inter-rater reliability.” Finally, there is a process of “reconciliation” for disagreements, which is sometimes by majority vote without discussion and other times discussion-based.
Structured content analysis is a difficult, complicated, and labor-intensive process, requiring many different forms of expertise on the part of both the coders and those who manage them. Historically, teams of students have often performed such work. With the rise of crowdwork platforms like Amazon Mechanical Turk, crowdworkers are often used for content analysis tasks, which are often similar to other kinds of common crowdworking tasks. Google's reCAPTCHA BIBREF19 is a Turing test in which users perform annotation tasks to prove their humanness — which initially involved transcribing scanned phrases from books, but now involves image labeling for autonomous vehicles. There are major qualitative data analysis software tools that scaffold the content analysis process to varying degrees, such as MAXQDA or NVivo, which have support for inter-annotator agreement metrics. There have also been many new software platforms developed to support more micro-level annotation or labeling at scale, including in citizen science, linguistics, content moderation, and more general-purpose use cases BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25. For example, the Zooniverse BIBREF26 provides a common platform for citizen science projects across different domain application areas, which let volunteers make judgements about items, which are aggregated and reconciled in various ways.
Literature review and motivation ::: Meta-research and methods papers in linguistics and crowdsourcing
Our paper is also in conversation with various meta-research and standardization efforts in linguistics, crowdsourcing, and other related disciplines. Linguistics and Natural Language Processing have long struggled with issues around standardization and reliability of linguistic tagging. Linguistics researchers have long developed best practices for corpus annotation BIBREF27, including recent work about using crowdworkers BIBREF28. Annotated corpus projects often release guidelines and reflections about their process. For example, the Linguistic Data Consortium's guidelines for annotation of English-language entities (version 6.6) is 72 single-spaced pages BIBREF29. A universal problem of standardization is that there are often too many standards and not enough enforcement. As BIBREF30 notes, 33-81% of linguistics/NLP papers in various venues do not even mention the name of the language being studied (usually English). A meta-research study found only 1 in 9 qualitative papers in Human-Computer Interaction reported inter-rater reliability metrics BIBREF31.
Another related area are meta-research and methods papers focused on identifying or preventing low-effort responses from crowdworkers — sometimes called “spam” or “random” responses, or alternatively ”fraudsters” or ”cheaters.” Rates of “self-agreement” are often used, determining if the same person labels the same item differently at a later stage. One paper BIBREF32 examined 17 crowdsourced datasets for sentiment analysis and found none had self-agreement rates (Krippendorf's alpha) above 0.8, with some lower than 0.5. Another paper recommends the self-agreement strategy in conjunction with asking crowdworkers to give a short explanation of their response, even if the response is never actually examined. BIBREF33. One highly-cited paper BIBREF34 proposes a strategy in which crowdworkers are given some items with known labels (a gold/ground truth), and those who answer incorrectly are successively given more items with known labels, with a Bayesian approach to identifying those who are answering randomly.
Literature review and motivation ::: The data documentation movements
Our paper is also in conversation with two related movements in computationally-supported knowledge production that have surfaced issues around documentation. First, we see connections with the broader open science and reproducibility movements. Open science is focused on a range of strategies, including open access research publications, educational materials, software tools, datasets, and analysis code BIBREF35. The reproducibility movement is deeply linked to the open science movement, focusing on getting researchers to release everything that is necessary for others to perform the same tasks needed to get the exact same results BIBREF36, BIBREF10. This increasingly includes pushing for high standards for releasing protocols, datasets, and analysis code. As more funders and journals are requiring releasing data, the issue of good documentation for data and protocols is rising BIBREF37, BIBREF38. There are also intersecting literatures on systems for capturing information in ML data flows and supply chains BIBREF39, BIBREF40, BIBREF41, as well as supporting data cleaning BIBREF42, BIBREF43. These issues have long been discussed in the fields of library and information science, particularly in Research Data Management BIBREF44, BIBREF45, BIBREF46, BIBREF47.
A major related movement is in and around the FATML field, with many recent papers proposing training data documentation in the context of ML. Various approaches, analogies, and metaphors have been taken in this area, including “datasheets for datasets” BIBREF48, ”model cards” BIBREF49, “data statements” BIBREF30, “nutrition labels” BIBREF50, a “bill of materials” BIBREF51, “data labels” BIBREF52, and “supplier declarations of conformity” BIBREF53. Many go far beyond the concerns we have raised around human-labeled training data, as some are also (or primarily) concerned with documenting other forms of training data, model performance and accuracy, bias, considerations of ethics and potential impacts, and more. We discuss how our findings relate to this broader emerging area more in the concluding discussion.
Data and methods ::: Data: machine learning papers performing classification tasks on Twitter data
Our goal was to find a corpus of papers that were using original human annotation or labeling to produce a new training dataset for supervised ML. We restricted our corpus to papers whose classifiers were trained on data from Twitter, for various reasons: First, we did attempt to produce a broader corpus of supervised ML application papers, but found our search queries in academic search engines would either 1) be so broad that most results were non-applied / theoretical papers or papers re-using public pre-labeled datasets, or 2) be so narrow that they excluded many canonical papers in this area, which made us suspect that they were non-representative samples. Sampling papers that use Twitter data has strategic benefits for this kind of initial study. Data from Twitter is of interest to scholars from a variety of disciplines and topical interest areas, in addition to those who have an inherent interest in Twitter as a social media site. As we detail in appendix section SECREF45, the papers represented political science, public health, NLP, sentiment analysis, cybersecurity, content moderation, hate speech, information quality, demographic profiling, and more.
We drew the main corpus of ML application papers from ArXiV, the oldest and most established “preprint” repository, originally created for researchers to share papers prior to peer review. Today, ArXiV is widely used to share both drafts of papers that have not (yet) passed peer review (“preprints”) and final versions of papers that have passed peer review (often called “postprints”). Users submit to any number of disciplinary categories and subcategories. Subcategory moderators perform a cursory review to catch spam, blatant hoaxes, and miscategorized papers, but do not review papers for soundness or validity. We sampled all papers published in the Computer Science subcategories of Artificial Intelligence (cs.AI), Machine Learning (cs.LG), Social and Information Networks (cs.SI), Computational Linguistics (cs.CL), Computers and Society (cs.CY), Information Retrieval (cs.IR), and Computer Vision (CS.CV), the Statistics subcategory of Machine Learning (stat.ML), and Social Physics (physics.soc-ph). We filtered for papers in which the title or abstract included at least one of the words “machine learning”, “classif*”, or “supervi*” (case insensitive). We then filtered to papers in which the title or abstract included either “twitter” or “tweet” (case insensitive), which resulted in 494 papers. We used the same query on Elsevier's Scopus database of peer-reviewed articles, selecting 30 randomly sampled articles, which were mostly from conference proceedings. One paper from the Scopus sample was corrupted, so only 29 papers were examined.
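For illustration, a rough sketch of this two-stage keyword filter is below. It assumes the paper metadata has already been exported to a table with title and abstract columns (hypothetical column names); the actual queries were run inside the ArXiV and Scopus search interfaces, and the substring matches only approximate the wildcard terms.

```python
# Rough sketch of the corpus keyword filtering described above; assumes a
# DataFrame of paper metadata with 'title' and 'abstract' columns (hypothetical
# names), and approximates the wildcard terms with substring matches.
import pandas as pd

ML_TERMS = r"machine learning|classif|supervi"   # approximates "classif*" and "supervi*"
TWITTER_TERMS = r"twitter|tweet"

def filter_corpus(papers: pd.DataFrame) -> pd.DataFrame:
    text = papers["title"].fillna("") + " " + papers["abstract"].fillna("")
    has_ml = text.str.contains(ML_TERMS, case=False, regex=True)
    has_twitter = text.str.contains(TWITTER_TERMS, case=False, regex=True)
    return papers[has_ml & has_twitter]
```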
ArXiV is likely not a representative sample of all ML publications. However, we chose it because ArXiV papers are widely accessible to the public, indexed in Google Scholar and other scholarly databases, and are generally considered citeable publications. The fact that many ArXiV papers are not peer-reviewed and that the papers posted there are not likely a representative sample of ML research is worth considering when reflecting on the generalizability of our findings. However, the fact that such papers are routinely discussed in both the academic literature and the popular press means that issues with their reporting of training data are just as crucial. Sampling from ArXiv also lets us examine papers at various stages in the peer-review cycle, breaking out preprints not (yet) published, preprints of later published papers, and postprints of published works. The appendix details both corpora, including an analysis of the topics and fields of papers (in SECREF45), and an analysis of the publication types (e.g. an early preprint of a journal article, a final postprint of a conference proceeding, a preprint never published) and publishers (in SECREF50 and SECREF47). The final dataset can be found on GitHub and Zenodo.
Data and methods ::: Labeling team, training, and workflow
Our labeling team included one research scientist who led the project (RSG) and undergraduate research assistants, who worked for course credit as part of an university-sponsored research experience program (KY, YY, MD, JQ, RT, and JH). The project began with five students for one semester, four of whom continued on the project for the second semester. A sixth student replaced the student who did not continue. All students had some coursework in computer science and/or data science, with a range of prior experience in machine learning in both a classroom and applied setting. Students' majors and minors included Electrical Engineering & Computer Science, Data Science, Statistics, and Linguistics.
The labeling workflow was that each week, a set of papers was randomly sampled from the unlabeled set of 494 ArXiV papers in the corpus. For two weeks, the 30 sampled papers from Scopus were selected instead. The five students independently reviewed and labeled the same papers each week, each using a different web-based spreadsheet to record labels. The team leader synthesized labels and identified disagreement. The team met in person each week to discuss cases of disagreement, working to build a consensus about the proper label (as opposed to a pure majority vote). The team leader facilitated these discussions and had the final say when a consensus could not be reached. The first two weeks were a training period, in which the team worked on a different set of papers not included in the dataset. In these initial weeks, the team learned the coding schema and the reconciliation process, which were further refined.
Data and methods ::: Second round verification and reconciliation
After 164 papers were labeled by five annotators, we conducted a second round of verification. This was necessary both because there were some disagreements in labeling and because changes were made to the coding schema (discussed in appendix SECREF54). All labels for all 164 papers were independently re-examined by at least two of the six team members. Annotators were given a summary of the original labels from the first round and were instructed to review all papers, being mindful of how the schema and instructions had changed. We then aggregated, reconciled, and verified labels in the same way as in the first round. For papers where there was no substantive disagreement on any question between those who re-examined it in the second round, the paper's labels were considered to be final. For papers where there was any substantive disagreement on any question, the paper was either discussed to consensus in the same manner as in the first round or decided by the team leader. The final schema and instructions are in the appendix, section SECREF57.
Finally, we cleaned up issues with labels around implicit or blank values using rule-based scripts. We learned our process involved some ambiguities around whether a subsequent value needed to be filled in. For example, if a paper was not using crowdworkers, then the instructions for our schema were that the question about crowdworker compensation was to remain blank. However, we found cases where “reported crowdworker compensation” was “no” for papers that did not use crowdworkers. This would have been concerning had we had a “yes” for such a variable, but we found no such cases. We recoded the questions about pre-screening for crowdwork platforms (implied by using crowdworkers in the original human annotation source) and the number of human annotators.
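The rule-based cleanup was of the following general form; this is only a sketch under assumed column names and label values, not our exact script.

```python
# Illustrative sketch of the rule-based label cleanup described above.
# Column names and label values are hypothetical stand-ins for our schema.
import numpy as np
import pandas as pd

CROWD_SOURCES = {"amazon mechanical turk", "other crowdworking platform"}

def clean_labels(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    is_crowd = df["annotation_source"].str.lower().isin(CROWD_SOURCES)
    # Crowdworker-only questions should be blank for non-crowdworker projects.
    df.loc[~is_crowd, ["reported_compensation", "prescreening"]] = np.nan
    return df
```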
We measured interrater reliability metrics using mean percent total agreement, or the proportion of cases where all labelers initially gave the same label. This is a more stringent metric than Fleiss's kappa and Krippendorf's alpha, and our data does not fit the assumptions for those widely-used metrics. IRR rates for round one were relatively low: across all questions, the mean percent total agreement was 66.67%, with the lowest question having a rate of 38.2%. IRR rates for round two were considerably higher: the mean percent total agreement across all questions was 84.80% and the lowest agreement score was 63.4% (for “used external human annotation”, which we discuss later). We are confident about our labeling process, especially because these individual ratings were followed by an expert-adjudicated, discussion-based reconciliation process, rather than simply counting majority votes. We detail more information and reflection about interrater reliability in appendix section SECREF52.
Data and methods ::: Raw and normalized information scores
We quantified the information about training data in papers, developing a raw and normalized information score, as different studies demanded different levels of information. For example, our question about whether inter-annotator agreement metrics were reported is only applicable for papers involving multiple annotators. Our questions about whether prescreening was used for crowdwork platforms or whether crowdworker compensation was reported is only relevant for projects using crowdworkers. However, some kinds of information are relevant to all papers that involve original human annotation: who the annotators are (annotation source), annotator training, formal instructions or definitions were given, the number of annotators involved, whether multiple annotators examined the same items, or a link to a publicly-available dataset.
For raw scores, papers involving original human annotation received one point each for reporting the six items mentioned above. In addition, papers received one point for each of the two crowdworker questions they reported on if the project used crowdworkers, and one point if they reported inter-annotator metrics and the project used multiple annotators per item. For the normalized score, the raw score was divided by the highest possible raw score for that paper. We only calculated scores for papers involving original human annotation. Finally, we conducted an analysis of information scores by various bibliometric factors, which required determining such factors for all papers. For all ArXiV papers, we determined whether the PDF was a pre-print not (yet) published in another venue, a post-print identical in content to a published version, or a pre-print version of a paper published elsewhere with different content. For all Scopus papers and ArXiV post-prints, we also determined the publisher. We detail these in appendix SECREF47.
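A minimal sketch of this scoring logic is below; the keys and label values are illustrative stand-ins for our schema rather than the exact field names in our dataset.

```python
# Sketch of the raw and normalized information scores for a paper that used
# original human annotation; keys and values are hypothetical stand-ins.
def information_scores(paper: dict) -> tuple:
    core_items = ["annotation_source", "annotator_training", "instructions",
                  "num_annotators", "multiple_overlap", "dataset_link"]
    raw = sum(1 for key in core_items
              if paper.get(key) not in (None, "no information"))
    max_score = len(core_items)
    if paper.get("used_crowdworkers"):
        raw += int(paper.get("reported_compensation") == "yes")
        raw += int(paper.get("prescreening") not in (None, "no information"))
        max_score += 2
    if str(paper.get("multiple_overlap", "")).startswith("yes"):
        raw += int(paper.get("reported_irr") == "yes")
        max_score += 1
    return raw, raw / max_score

# Example: authors annotated with overlap and reported agreement, no dataset link.
example = {"annotation_source": "paper authors", "annotator_training": "no information",
           "instructions": "formal definitions", "num_annotators": 2,
           "multiple_overlap": "yes, for all items", "reported_irr": "yes",
           "dataset_link": "no information"}
print(information_scores(example))  # (5, 0.714...)
```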
Findings ::: Original classification task
The first question was whether the paper was conducting an original classification task using supervised machine learning. Our keyword-based process of generating the corpus included many papers not in this scope. However, defining the boundaries of supervised ML and classification tasks is difficult, particularly for papers that are long, complex, and ambiguously worded. We found that some papers claimed to be using ML, but when we examined the details, these did not fall under our definition. We defined machine learning broadly, using a common working definition in which machine learning includes any automated process that does not exclusively rely on explicit rules, in which the performance of a task increases with additional data. This includes simple linear regressions, for example, and there is much debate about if and when simple linear regressions are a form of ML. However, as we were also looking for classification tasks, linear regressions were only included if they were used to make a prediction within a set of defined classes. We defined an “original” classifier to mean a classifier the authors made based on new or old data, which excludes the exclusive use of pre-trained classifiers or models.
As table TABREF13 shows, the overwhelming majority of papers in our dataset were involved in an original classification task. We placed 5 papers in the “unsure” category — meaning they did not give enough detail for us to make this determination, or that they were complex boundary cases. One of the “unsure” cases clearly used labels from human annotation, and so we answered the subsequent questions, which is why the counts in Table 2 add up to 143 (as well as some other seeming disparities in later questions).
Findings ::: Labels from human annotation
One of the major issues we had to come to a consensus around was whether a paper used labels from human annotation. We observed a wide range of cases in which human judgment was brought to bear on the curation of training data. Our final definition required that “the classifier [was] at least in part trained on labeled data that humans made for the purpose of the classification problem.” We decided on a working definition that excluded many “clever uses of metadata” from this category, but did allow some cases of “self-annotation” from social media, which were typically the most borderline cases on the other side. For example, one case that we ultimately classified as human annotation used specific politically-inflected hashtags to automatically label tweets as for or against a position, for use in stance detection (e.g. #ProChoice versus #ProLife). However, these cases of self-annotation would all be considered external human annotation rather than original human annotation, and so the subsequent questions about the annotation process would not be applicable. Another set of borderline cases involved papers where no human annotation was involved in the curation of the training dataset that was used to build the classifier, but human annotation was used for validation purposes. We did not consider these to involve human annotation as we originally defined it in our schema, even though the same issues arise with equal significance for the validity of such research.
Findings ::: Used original human annotation and external human annotation
Our next two questions were about whether papers that used human annotation used original human annotation, which we defined as a process in which the paper's authors obtained new labels from humans for items. It is common in ML research to re-use public datasets, and many of the papers in our corpus did so. We also found 10 papers in which external and original human annotation were combined to create a new training dataset. For these reasons, we modified our schema to ask separate questions for original and external human annotation data, to capture all three cases (using only original, only external, or both). Tables TABREF17 and TABREF17 show the breakdown for both questions. We only answered the subsequent questions about the human annotation process for the papers producing an original human annotated dataset.
Findings ::: Original human annotation source
Our next question asked who the annotators were, for the 74 papers that used original human annotation. The possible options were: the paper's authors, Amazon Mechanical Turk, other crowdworking platforms, experts/professionals, other, and no information. We took phrases like “we labeled” (with no other details) to be an implicit declaration that the paper's authors did the labeling. If the paper discussed labelers' qualifications for the task beyond an average person, we labeled it as “experts / professionals.” For example, some of our boundary cases involved recruiting students to label sentiment. One study involved labeling tweets with both English and Hindi text and noted that the students were fluent in both languages – which we considered to be in the “experts / professionals” category. Another paper we included in this category recruited students to label tweets with emojis, noting that the recruited students “are knowledgeable with the context of use of emojis.”
As table TABREF19 shows, we found a diversity of approaches to the recruitment of human annotators. The plurality of papers involved the paper's authors doing the annotation work themselves. The next highest category was “no information,” which was found in almost a quarter of the papers using original human annotation. Experts / professionals was far higher than we expected, although we took any claim of expertise for granted. Crowdworkers constituted a far smaller proportion than we expected, with Amazon Mechanical Turk and other platforms collectively comprising about 15% of papers. Almost all of the other crowdworking platforms specified were CrowdFlower/FigureEight, with one paper using oDesk.
Findings ::: Number of human annotators
Our instructions for the question about the number of human annotators were not precise and had one of the lower levels of inter-rater reliability. If the paper included information about the number of human annotators, the instructions were to put such a number, leaving the field blank for no information. Most of the disagreement was from differences around how papers report the number of annotators used. For example, some papers specified the total number of humans who worked on the project annotating items, while others only specified how many annotators were used per item (particularly for those using crowdworkers), and a few reported both. Some involved a closed set of annotators who all examined the same set of items, similar to how our team operated. Other papers involved an open set of annotators, particularly drawn from crowdworking platforms, but had a consistent number of annotators who reviewed each item. Due to these inconsistencies, we computationally re-coded responses into the presence of information about the number of human annotators. These are both important aspects to discuss, although it is arguably more important to discuss the number of annotators who reviewed each item. In general, having more annotators review each item provides a more robust way of determining the validity of the entire process, although this also requires calculating inter-annotator agreement metrics.
As table TABREF21 shows, a slim majority of papers using original human annotation specified the number of annotators involved in some way. Based on our experiences, we typically noticed that papers discussing the number of annotators often fell into two categories: 1) a small closed team (more often 2-3, sometimes 4-6) that were either the papers' authors or recruited directly by the authors, who tended to perform the same amount of work for the duration of the project; or 2) a medium to large (25-500) open set of annotators, typically but not necessarily recruited through a crowdworking platform, who each performed highly variable amounts of work.
Findings ::: Formal definitions and instructions
Our next question was about whether instructions or guidelines with formal definitions or examples are reportedly given to annotators. Formal definitions and concrete examples are both important, as they help annotators understand how the researchers have operationalized the concept in question and determine edge cases. With no or ambiguous definitions/examples, there could be fundamental misunderstandings that are not captured by inter-annotator agreement metrics, if all annotators make the same misunderstandings. We defined two levels: giving no instructions beyond the text of a question, and giving definitions for each label and/or concrete examples. The paper must describe or refer to instructions given (or include them in supplemental materials); otherwise, we categorized it as "No Information". Some borderline cases involved authors labeling the dataset themselves, where the paper presented a formal definition, but only implied that it informed the labeling – which we took to be a formal definition. As table TABREF23 shows, the plurality of papers did not provide enough information to make a determination (it is rare for authors to say they did not do something), but 43.2% provided definitions or examples.
Findings ::: Training for human annotators
We defined training for human annotators to involve some kind of interactive process in which the annotators have the opportunity to receive some kind of feedback and/or dialogue about the annotation process. We identified this as a distinct category from both the qualifications of the annotators and the instructions given to annotators, which are examined in other questions. Training typically involved some kind of live session or ongoing meeting in which annotators' progress was evaluated and/or discussed, where annotators had the chance to ask questions or receive feedback on why certain determinations did or did not match definitions or a schema. We used our own team's process as an example of this, and found several papers that used a similar roundtable process, which went into detail about interactions between team members. Cases in which the paper only specified that annotators were given a video or a detailed schema to review were not considered training details, as this was a one-way process and counted as definitions/instructions.
The overwhelming majority of papers did not discuss such issues, as table TABREF25 shows, with 15% of papers involving a training session. Because we had a quite strict definition for what constitutes training (versus what many may think of around “trained annotators”), this is expected. We also are not all that concerned with this low number, as there are many tasks that likely do not require specialized training — unlike our project, which required both specific expertise in the area and familiarity with our complicated schema.
Findings ::: Pre-screening for crowdwork platforms
Crowdwork platforms let employers pre-screen or test for traits, skills, or performance metrics, which significantly narrows the pool of crowdworkers. For example, “project-specific pre-screening” involves offering a sample task with known outcomes: if the crowdworker passed, they would be invited to annotate more items. 5 of the 11 papers using crowdworkers reported using this approach. Platforms also often have location-based screening (e.g. US-only), which 2 papers reported using. Some crowdwork platforms have a qualification for workers who have a positive track record based on total employer ratings (e.g. AMT Master). Platforms also offer generic skills-based tests for certain kinds of work (e.g. CrowdFlower's Skill Tests). These last two qualifications were in our coding schema, but no papers reported using them.
Findings ::: Multiple annotator overlap and reporting inter-annotator agreement
Our next two questions were about using multiple annotators to review the same items (multiple annotator overlap) and whether inter-annotator agreement metrics were reported. Having multiple independent annotators is typically a foundational best practice in structured content analysis, so that the integrity of the annotations and the schema can be evaluated (although see BIBREF31). For multiple annotator overlap, our definitions required papers state whether all or some of the items were labeled by multiple labelers, otherwise “no information” was recorded. Then, for papers that did multiple annotator overlap, we examined whether any inter-annotator agreement metric was reported. We did find one paper that did not explicitly state that multiple labelers overlapped, but did report inter-annotator agreement metrics. This implicitly means that at least some of the items were labeled by multiple labelers, but for consistency, we keep the “no information” label for this case. We did not record what kind of inter-annotator metric was used, such as Cohen's kappa or Krippendorff's alpha, but many different metrics were used. We also did not record what the exact statistic was, although we did notice a wide variation in what was considered an acceptable or unacceptable score for inter-annotator agreement.
For multiple annotator overlap, table TABREF29 shows that just under half of all papers that involved an original human annotation task did not provide explicit information one way or the other about whether multiple annotators reviewed each item. This includes the one paper that reported inter-annotator agreement metrics, but did not specify whether overlap was for all items or some items. Only three papers explicitly stated that there was no overlap among annotators, and so it is quite likely that the papers that did not specify such information did not engage in such a practice. For the 37 papers that did involve some kind of multiple annotator overlap, the overwhelming majority of this subsample (84%) involved multiple annotation of all items, rather than only some items. We also found that for papers that did involve some kind of multiple overlap, the large majority of them (approximately 70%) did report some metric of inter-annotator agreement, as table TABREF29 indicates.
Findings ::: Reported crowdworker compensation
Crowdworking is often used because of the low cost, which can be far below minimum wage in certain countries. Researchers and crowdworkers have been organizing around issues related to the exploitation of crowdworkers in research, advocating ethical practices including fair pay BIBREF54. We examined all papers involving crowdworkers for any indication of compensation, and found zero mentioned compensation. We did find that some papers using other sources of human annotation (e.g. students) discussed compensation for annotators, but this was not in our original schema.
Findings ::: Link to dataset available
Our final question was about whether the paper contained a link to the dataset containing the original human annotated training dataset. Note that this question was only answered for papers involving some kind of original or novel human annotation, and papers that were exclusively re-using an existing open or public dataset were left blank to avoid double-counting. We did not follow such links or verify that such data was actually available. As table TABREF32 shows, the overwhelming majority of papers did not include such a link, with 8 papers (10.81%) using original human-annotated training datasets linking to such data. Given the time, labor, expertise, and funding in creating original human annotated datasets, authors may be hesitant to release such data until they feel they have published as many papers as they can.
Paper information scores
The raw and normalized information scores (see section SECREF10 for methodology) were calculated for all papers that involved original human annotation. As previously discussed, our corpora represent a likely non-representative sample of ML research, even if bounded to social computing. Our relatively small sample sizes combined with the number of multiple comparisons would mean that thresholds for statistical significance would need to be quite high. Instead, we present these results to help provide an initial framework and limited results on this issue, intended to help inform a broader and more systematic evaluation of the ML literature. We do observe quite varying ranges and distributions of information scores, which does give evidence to the claim that there is substantial and wide variation in the practices around human annotation, training data curation, and research documentation.
Paper information scores ::: Overall distributions of information scores
Figure FIGREF34 shows histograms for raw and normalized information scores, which both suggest a bimodal distribution, with fewer papers at both extremes and around the median. This suggests that there are roughly two populations of researchers, with one centered around raw scores of 1-2 and normalized scores of 0.25 and one centered around raw scores of 5 and normalized scores of 0.7. The normalized information score ranged from 0 to 1, with 6 papers having a normalized score of 0 and only 1 paper with a score of 1. The raw information score ranged from 0 to 7, with no paper receiving a full score of 8 or 9, which would have required a study involving crowdworkers, multiple overlap, and open datasets. Overall, the mean normalized information score was 0.441, with a median of 0.429 and a standard deviation of 0.261. The mean raw score was 3.15, with a median of 3.0 and a standard deviation of 2.05.
Paper information scores ::: Information scores by corpus and publication type
Figure FIGREF37 shows two boxplots of normalized information scores that are based on different intersecting categories of publication type and status. The left figure compares scores in four categories: all papers in the Scopus sample (non-ArXived), ArXiv preprints that were never (or are not yet) published, ArXiv postprints of traditional publications, and ArXiv preprints of traditional publications. The category with the lowest median score is papers from the Scopus sample, which is followed closely by ArXiv preprints never published, although preprints never published had a much larger IQR and standard deviation. Postprints of publications had a similar IQR and standard deviation as preprints never published, but a much higher median score. Preprints of publications had a similar median score as postprints, but with a much smaller IQR and standard deviation. The righthand figure plots publication types for the combined corpora. Conference proceedings and ArXiv preprints never published have somewhat similar medians and IQRs, with journal articles having a higher median of 0.5 and a much narrower IQR. While we hesitate to draw generalizable conclusions, we see these findings indicating a wide range of factors potentially at play.
Paper information scores ::: Information scores by publisher
Figure FIGREF39 shows boxplots for normalized information scores by publisher, split between papers sampled from ArXiv and Scopus. The boxplots are ordered by the median score per publisher. In papers in the ArXiv corpus, those that were pre- or post-prints of papers published by the professional societies Association for Computing Machinery (ACM) or Association for Computational Linguistics (ACL) tied for the highest median scores of 0.667, with similar IQRs. These were followed by Springer and Elsevier, with respective medians of 0.625 and 0.603 and narrower IQRs. ArXiv preprints not published elsewhere had a median score of 0.381 and the highest IQR and standard deviation (0.289), suggesting that it represents a wide range of papers. The publishers at the lower end of the scale included AAAI, with a median of 0.444 and a narrower IQR, and IEEE, with a median of 0.226 and the second-highest IQR and standard deviation (0.327). Curiously, papers from the Scopus corpus show different results per-publisher, with the median scores of all publishers lower in the Scopus corpus than in the ArXiv corpus. Given the small number of papers in the Scopus sample, we hesitate to draw general conclusions, but suspect it indicates differences between all academic authors and those who post ArXiv postprints.
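The per-paper scores can be grouped and plotted this way with the libraries listed in the appendix; the sketch below orders publisher boxplots by median normalized score. The data frame, its column names, and the placeholder values are illustrative assumptions rather than the actual coded data.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Illustrative placeholder rows; the real values come from the coded corpus.
scores = pd.DataFrame({
    "publisher": ["ACM", "ACL", "Springer", "Elsevier", "ArXiv only", "AAAI", "IEEE", "ACM"],
    "corpus":    ["ArXiv", "ArXiv", "ArXiv", "ArXiv", "ArXiv", "ArXiv", "ArXiv", "Scopus"],
    "normalized_score": [0.67, 0.67, 0.63, 0.60, 0.38, 0.44, 0.23, 0.50],
})

# Order publishers by their median normalized information score.
order = (scores.groupby("publisher")["normalized_score"]
               .median().sort_values(ascending=False).index)

sns.boxplot(data=scores, x="publisher", y="normalized_score", hue="corpus", order=order)
plt.xticks(rotation=45, ha="right")
plt.ylabel("Normalized information score")
plt.tight_layout()
plt.show()
```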
Concluding discussion ::: Findings
In the sample of ML application publications using Twitter data we examined, we found a wide range in levels of documentation about methodological practices in human annotation. While we hesitate to overly generalize our findings to ML at large, these findings do indicate concern, given how crucial the quality of training data is and the difficulty of standardizing human judgment. Yet they also give us hope, as we found a number of papers we considered to be excellent cases of reporting the processes behind their datasets. About half of the papers using original human annotation engaged in some form of multiple overlap, and about 70% of the papers that did multiple overlap reported metrics of inter-annotator agreement. The distribution of annotation information scores was roughly bimodal, suggesting two distinct populations of those who provide substantially more and less information about training data in their papers. We do see preliminary evidence that papers in our sample published by certain publishers/venues tended to have papers with far more information than others (e.g. ACM and ACL at the top end, followed closely by journal publishers Springer and Elsevier, with IEEE and AAAI proceedings at the lower end). Preprints exclusively published on ArXiv also had the widest range of scores.
Concluding discussion ::: Implications
Based on our findings and experiences in this project, we believe human annotation should be considered a core aspect of the research process, with as much attention, care, and concern placed on the annotation process as is currently placed on performance-based metrics like F1 scores. Our findings — while preliminary, descriptive, and limited in scope — tell us that there is much room for improvement. This paper also makes steps towards more large-scale and systematic analyses of the research landscape, as well as towards standards and best practices for researchers and reviewers.
Institutions like journals, funders, and disciplinary societies have a major role to play in solutions to these issues. Most publications have strict length maximums, and many papers we scored highly spent a page or more describing their process. Reviewer expectations are crucial in any discussion of the reporting of methodological details in research publications. It could be that some authors did include such details, but were asked to take it out and add other material instead. Authors have incentives to be less open about the messiness inherent in research, as this may open them up to additional criticism. We see many parallels here to issues around reproducibility and open science, which are increasingly being tackled by universal requirements from journals and funders, rather than relying on individuals to change norms. Such research guidelines are common, including the COREQ standard for qualitative data analysis reporting BIBREF55, a requirement by some journals. A number of proposed standards have been created around datasets for ML BIBREF48, BIBREF49, BIBREF30, BIBREF50, BIBREF51, BIBREF52, BIBREF53, which are often framed as potential ways to mitigate bias and improve transparency and accountability. Several of these are broader proposals around reporting information about ML classifiers and models, which include various aspects beyond our study. In fact, given the recent explosion of proposals for structured disclosure or transparency documents around ML, the Partnership on AI has recently created the “ABOUT ML” working group to arrive at a common format or standard. BIBREF56
From our perspective, it is important to frame this issue as one of research validity and integrity: what kind of information about training data is needed for researchers, reviewers, and readers to have confidence in the model or classifier? As we observed in our discussions, we became skeptical about papers that did not adequately describe their human annotation processes. However, human annotation is a broad and diverse category of analytical activity, encompassing a wide range of structured human judgment brought to bear on items, some far more straightforward and others far more complex. We saw a wide range of papers engaged in various forms of annotation or labeling, even though we bounded our study to papers using data from Twitter. One important distinguishing factor is the difficulty of the task and the level of specific knowledge needed to complete it, which can vary significantly. Another key distinction may be between when there is expected to be only one `right' answer and when there might be many valid answers.
Most importantly, we would not want a straightforward checklist to overdetermine issues of model integrity. A number of papers we read were missing details we thought were crucial for understanding that study, but would not make sense for a majority of papers we examined. If a checklist was created, it should not be seen as an end in itself. The classic principle of scientific replicability could be a useful heuristic: does the paper provide enough information about the labeling process such that any reader could (with sufficient resources and access to the same kind of human annotators) conduct a substantively identical human annotation process on their own? We also see a role for technical solutions to help scaffold adherence to these best practices. For example, major qualitative data analysis platforms like MAXQDA or NVivo have built-in support for inter-annotator agreement metrics. Several crowdsourcing and citizen science platforms for data labeling are built to support reconciliation for disagreements. Automated workflow, pipeline, and provenance tracking is an increasing topic in ML, although these can focus more on model building and tuning, taking data as given. We recommend such projects include human annotation as a first-class element, with customization as needed.
Finally, our own experience in this human annotation project studying human annotation projects has shown us the costs and benefits of taking an intensive, detailed, collaborative, and multi-stage approach to human annotation. On one side, we believe that after going through such a long process, we have not only better data, but also a much better contextual understanding of our object of study. Yet on the other hand, even though struggling over the labels and labeling process is an opportunity, our time- and labor-intensive process did have a direct tradeoff with the number of items we were able to annotate. These issues and tradeoffs are important for ML researchers to discuss when designing their own projects and evaluating others.
Concluding discussion ::: Limitations and future work
Our study has limitations, as we only examined a sample of publications in the ML application space. First, we only examined papers performing a classification task on tweets, which is likely not a representative sample of ML application publications. We would expect to find different results in different domain application areas. Papers in medicine and health may have substantially different practices around reporting training data, due to strict reporting standards in clinical trials and related areas. We also generally examined papers that are posted on ArXiv (in addition to 30 papers sampled from Scopus), and ArXiv is likely not a representative sample of academic publications. ArXiv papers are self-submitted and represent a range of publication stages, from drafts not submitted for review, to preprints in peer review, to postprints that have passed peer review. Future work should examine different kinds of stratified random samples to examine differences between various publishers, publication types, disciplines, topics, and other factors.
Our study only examined a set of the kinds of issues that scholars and practitioners in ML are examining when they call for greater transparency and accountability through documentation of datasets and models. We have not recorded information about what exactly the rates of inter-annotator agreement are. In particular, we did not record information about the reconciliation or adjudication process for projects which involve multiple overlap (e.g. majority rule, talking to consensus), which we have personally found to be a crucial and difficult process. Other questions we considered but did not include were: the demographics of the labelers, the number of labelers (total and per item), compensation beyond crowdworkers, whether instructions or screenshot of the labeling interface was included, and whether labelers had the option to choose “unsure” (vs. being forced to choose a label). We leave this for future work, but also found that each additional question made it more difficult for labelers. We also considered but did not have our team give a holistic score indicating their confidence in the paper (e.g. a 1-5 score, like those used in some peer reviewing processes).
Our study also has limitations that any human annotation project has, and we gained much empathy around the difficulties of human annotation. Our process is not perfect, and as we have analyzed our data, we have identified cases that make us want to change our schema even further or reclassify boundary cases. In future work, we would also recommend using a more structured and constrained system for annotation to capture the text that annotators use to justify their answers to various questions. ML papers are very long and complex, such that our reconciliation and adjudication process was very time-consuming. Finally, we only have access to what the publications say about the work they did, and not the work itself. Future work could improve on this through other methods, such as ethnographic studies of ML practitioners.
Appendix
The appendix appears following the references section. This work was funded in part by the Gordon & Betty Moore Foundation (Grant GBMF3834) and Alfred P. Sloan Foundation (Grant 2013-10-27), as part of the Moore-Sloan Data Science Environments grant to UC-Berkeley. This work was also supported by UC-Berkeley's Undergraduate Research Apprenticeship Program (URAP). We thank many members of UC-Berkeley's Algorithmic Fairness & Opacity Group (AFOG) for providing invaluable feedback on this project.
Appendix ::: Dataset/corpus details ::: Keyword labels
To capture the topical and disciplinary diversity of papers in our corpus, we assigned one or more keyword labels to each paper, intended to capture topical, domain, disciplinary, and methodological qualities about the study. A paper seeking to classify tweets for spam and phishing in Turkish might include the labels: spam detection; phishing detection; cybersecurity; non-English. A study seeking to classify whether users are tweeting in support or opposition of a protest might have the keywords: user profiling; political science; protests; stance detection; public opinion. As part of the annotation and labeling process, all five annotators gave each paper a short description of what was being classified or predicted. The project lead aggregated these independent descriptions and additionally examined the paper title, abstract, and text. The project lead — who has extensive knowledge and experience of the various disciplines in the social computing space — then conducted a two-stage thematic coding process. A first pass involved open (or free-form) coding for all papers, with the goal of creating a typology of keywords. The list of keywords were then refined and consolidated, and a second pass was conducted on all of the items to re-label them as appropriate. Papers could have multiple keywords.
The distribution is plotted in Figure FIGREF46, which is broken out by papers that were using original human annotation (e.g. a new labeled training dataset) versus either theoretical papers or papers exclusively re-using a public or external dataset (see section SECREF16). This shows that the most common keywords were user profiling (a broader keyword that includes demographic prediction and classification of users into various categories), public opinion (a broader keyword that includes using Twitter to obtain beliefs or opinions, typically about political or cultural topics), and then two NLP methodologies of sentiment analysis and topic identification. The keyword "social networks" was used for any paper that either made substantive use of the network structure (e.g. follower graphs) as a feature, or tried to predict it. This figure also shows that our corpus includes papers from a wide range of fields and sub-fields across disciplines, including a number of papers on cybersecurity (including bot/human detection, phishing detection, and spam detection), public health and epidemiology, hate speech and content moderation, human geography, computer vision, political science, and crisis informatics. Papers using non-English languages were also represented in our corpus.
Appendix ::: Dataset/corpus details ::: Distribution of paper types in the corpus
For each of our 164 papers, we needed to determine various bibliometric factors. For papers in the ArXiv sample, the most important of these is whether the file uploaded to ArXiv is a version of a paper published in a more traditional venue, and if so, whether the ArXiv version is a pre-print submitted prior to peer-review (and has different content than the published version) or if it is a post-print that is identical in content to the published version. Many authors upload a paper to ArXiv when they submit it to a journal, others upload the accepted manuscript that has passed peer-review but has not been formatted and typeset by the publisher, and others upload the exact “camera-ready” version published by the publishers. ArXiv also lets authors upload new versions; some will update each of these versions as they progress through the publishing process, others will only upload a final version, and some only upload the pre-review version and do not update the version in ArXiv to the published version.
To do this, the project lead first manually searched for the exact text of the title in Google Scholar, which consolidates multiple versions of papers with the same title. Papers that only had versions in ArXiv, ArXiv mirrors (such as adsabs), other e-print repositories like ResearchGate, personal websites, or institutional repositories were labeled as “Preprint never published.” For papers that also appeared in any kind of publication venue or publishing library (such as the ACM, IEEE, AAAI, or ACL digital libraries), the project lead recorded the publication venue and publisher, then downloaded the published version. In some workshops and smaller conferences, the “publisher” was a single website just for the event, which lacked ISSNs or DOIs. These were considered to be published as conference or workshop proceedings, if there was a public list of all the papers presented at the event with links to all of the papers. There was only one case in which there were two or more publications with the exact same title by the same authors, which involved a 2-page archived extended abstract for a poster in an earlier conference proceeding and a full paper in a later conference proceeding. For this case, we chose the full paper in the later venue.
The project lead then compared the version uploaded to ArXiv with the published version. As this was done after the labeling process, for papers where the author uploaded multiple versions to ArXiv, we took care to examine the version our labelers examined. If there were any differences in substantive content, the paper was labeled as “Preprint of” and then an appropriate description of the venue, such as “refereed conference proceeding” or “refereed journal article.” If there were no differences in the substantive content of the paper, the paper was labeled as “Postprint of” and then the venue description. Changes in reference style or ordering, page layout, typesetting, the size or color of figures, or moving the same text between footnotes and inline parentheticals were not considered to be substantive content changes. However, even a single character typo fix to the main body text, a single added or removed reference, or a change to a figure's caption constituted a substantive content change. Table TABREF48 shows the distribution of paper types. Because there was only one dissertation in the sample, which also was not using original human annotation, we excluded this category from the aggregate analyses by paper type shown in the results section.
Appendix ::: Dataset/corpus details ::: Distribution of publishers in corpus
For each paper in the Scopus samples and each paper in the ArXiv corpus that was a pre-print or post-print of a published paper, we also collected information about the journal and publisher. There were 80 different journals, conference proceedings, or workshops represented, with the top venues being the proceedings of SocInfo with 6 papers and the proceedings of ASONAM (Advances in Social Network Analysis and Mining) with 4 papers. Six venues had 3 publications each, which were all conference proceedings: AAAI ICWSM, ELRA LREC, ACM CIKM, ACM WWW, and IEEE Big Data. The distribution of publishers is presented in table TABREF49, which is broken out by papers in the ArXiv and Scopus corpus. The distribution of papers by years is shown in table TABREF49.
Appendix ::: Methods and analysis details ::: Inter-annotator agreement
In the first round, 5 annotators examined each paper independently, then met to discuss papers with disagreement. Table TABREF53 shows for each question, what percent of items were given the same label by all annotators (with number of annotators being recoded for the presence or absence of any information). Cases where no annotator answered the question because it was not relevant (e.g. crowdworker compensation for non-crowdworker projects) were not included in such a calculation, which would have increased such rates even more, but this would be somewhat disingenuous.
We report percent complete agreement among all raters for each question: for each item, what percent were given the same rating by all raters? We believe this is a more appropriate and straightforward metric for our project. This is because our data does not necessarily meet the particular assumptions of the two other widely used statistical estimators for 3+ raters. Fleiss's kappa and Krippendorff's alpha are widely used because they take into account the possibility that raters made decisions based on random chance. However, this requires assuming a uniform prior possibility of such a random distribution, which generally only applies if each possible response by raters is equally likely BIBREF64, BIBREF61. This is the case in balanced datasets, but we observed widely skewed distributions.
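As a sketch of this metric, the snippet below computes percent complete agreement from a toy table of labels; the table and its column names are hypothetical rather than our actual annotations.

```python
import pandas as pd

# Hypothetical labels: rows are items (papers), columns are annotators,
# and values are the label each annotator gave for one schema question.
labels = pd.DataFrame({
    "annotator_1": ["yes", "no", "yes", "unsure"],
    "annotator_2": ["yes", "no", "no",  "unsure"],
    "annotator_3": ["yes", "no", "yes", "unsure"],
})

def percent_complete_agreement(df: pd.DataFrame) -> float:
    """Share of items for which every rater gave exactly the same label."""
    all_agree = df.nunique(axis=1) == 1
    return float(all_agree.mean())

print(percent_complete_agreement(labels))  # 0.75 for this toy example
```

Because the label distributions for our questions are highly skewed, this simple proportion avoids the uniform-chance assumption behind the chance-corrected estimators discussed above.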
The rates of proportional agreement were not high enough in the first round for us to be confident, which is likely due to a variety of factors. First, in contrast to most of the papers we examined, our project involved annotators answering 13 different questions for each item, which adds significant complexity to the process. Second, machine learning publications are also some of the more difficult pieces of content to make determinations around, as the definitions and boundaries of various concepts are often relatively undefined and contested across the many academic disciplines. In particular, our lowest rate for the second round was in the external human annotation question, which was added between the first and second round, and appears to still have some ambiguity.
We observed substantial increases in agreement between round one and two, although this also is likely confounded by the fact that all five annotators reviewed every item in round one, but only two or three reviewed every item in round two. We should note that as our approach was a human annotation research project studying human annotation research projects, this has given us much empathy for how difficult such a task is. We also acknowledge that our project involves the same kind of “black boxing” we discussed in the literature review, in which a messy process of multiple rounds of human annotations is reduced to a gold standard. However, we do believe in being open about our process, and our data for both rounds of annotation and the final dataset will be available upon publication.
The overall question for any study involving structured human annotation is whether the entire annotation, integration, review, and reconciliation process ultimately results in high confidence for the final dataset. The standard approach of human annotation checked by inter-rater reliability treats individual humans as instruments that turn phenomena in the world into structured data. If there is a high degree of inter-rater reliability, then each individual human can generally be trusted to make the same determination. If this is the case, then either reconciliation can easily take place through a majority vote process involving no discussion, or if rates are quite high, then only a subset of items need to be reviewed multiple times. In contrast, what our first round of inter-rater reliability metrics told us was that we were not the same kinds of standardized instruments that turn the same inputs into the same outputs. This does not bode well if we were conducting a single-stage mechanical majority-rule reconciliation process, and certainly would be unwise if we only had a single individual annotate each paper. For such a reason, we did not rely on such easier processes of reconciliation and demanded all papers be annotated by multiple individuals and discussed in a group setting moderated by the lead research scientist.
Furthermore, because our approach was largely focused on identifying the presence of various kinds of information within long-form publications, this is a different kind of human judgment than is involved in common tasks using human annotators in social computing, such as social media content moderation, sentiment analysis, or image labeling. Typically, annotated items are much smaller and tend to be evaluated holistically, with disagreements arising from annotators who looked at the same information and made different determinations.
In contrast, we reflected that in our reconciliation process, most of the time when annotators disagreed, it was because some annotators had caught a piece of information in the paper that others had not seen. There was a common occurrence wherein one of the annotators would point out a particular paragraph, the other annotators who had initially disagreed would read it, and then remark that they had missed that part and would like to change their answer. That said, there were cases wherein annotators were reading the same sections of the paper and still arriving at different answers, which was often either 1) because the paper was giving ambiguous, incomplete, or implicit information, or 2) because there was a fundamental difference in interpretation of the coding schema, which required updating the schema or the examples in it. For such reasons, we are relatively confident that if, after our two rounds of annotation and the reconciliation process, no individual member of our team has identified the presence of such information, then it is quite likely it is not present in the paper.
Appendix ::: Methods and analysis details ::: Changes to the coding schema
Unlike in some approaches to structured content analysis, the coding schema was open to revision if needed during this first round. Some difficult edge cases led to the refinement of the schema approximately half-way through this round of the labeling. The schema was developed on a web-based word processing platform, which also included examples of difficult edge cases, which were added as they were identified in team meetings. The document detailed each question, a formal definition or explanation of the question, the list of possible permitted labels, and various cases of examples that illustrated difficult or edge cases.
The coding schema was modified only in cases where backward compatibility could be maintained with prior labeling work. This typically involved taking a question which had many granular possible labels and consolidating the possible labels into a smaller number of broader labels. For example, the question about whether instructions were given to human annotators originally involved specifying whether the instructions included a formal definition, examples, or both. This was revised to only specify “instructions with formal definition or examples.” Similarly, training for human annotators originally included a more granular list of possible training circumstances, plus ”no information”, ”other”, and ”unsure”. Because of the difficulty of gaining consensus on these different forms of training and the relatively few number of papers that gave any details whatsoever about annotator training (as well as no papers that explicitly stated no training had occurred), these were reduced to “some training details”, “no information”, and ”unsure” (see Table TABREF55).
In addition, three questions were added halfway through the first round of the annotation process. First, a question was added about whether the paper used an external human-annotated dataset or not, which was added to clarify the question about whether original human annotation was used. This was added after a paper was discussed where an external human-annotated dataset was combined with an original human-annotated dataset. Two other questions were added about whether the paper contains a link to the training dataset and whether details about crowdworker compensation were included for projects using crowdworkers. These were both relatively straightforward questions, with relatively few incidences across our dataset. All papers had all questions answered in the second round.
Appendix ::: Software used
All computational analysis and scripting was conducted in Python 3.7 BIBREF66, using the following libraries: Pandas dataframes BIBREF60 for data parsing and transformation; SciPy BIBREF58 and NumPy BIBREF65 for quantitative computations; and Matplotlib BIBREF57 and Seaborn BIBREF67 for visualization. Analysis was conducted in Jupyter Notebooks BIBREF59 using the IPython BIBREF62 kernels. Datasets and Jupyter Notebooks for data collection and analysis will be made available upon publication, which are made to run on Binder BIBREF63.
Appendix ::: Coding schema, examples, and instructions
A final version of our coding schema and instructions is below:
1. Original classification task: Is the paper presenting its own original classifier that is trying to predict something? “Original” means a new classifier they made based on new or old data, not anything about the novelty or innovation in the problem area.
Machine learning involves any process that does not have explicit or formal rules, where performance increases with more data. Classification involves predicting cases on a defined set of categories. Prediction is required, but not enough. Linear regressions might be included if the regression is used to make a classification, but making predictions for a linear variable is not. Predicting income or age brackets is classification, predicting raw income or age is not.
Example: analyzing statistics about the kinds of words people use on social media is not a classification task at all.
Example: predicting location is a classification task if it is from work, school, home, or other, but not if it is an infinite/undefined number of locations.
Example: This paper (https://ieeexplore.ieee.org/document/7937783) was framed as not an original classification task (more algorithm performance), but they did create an original classifier. This can also be an “unsure” – which is 100% OK to answer.
Example: Literature review papers that include classification papers aren't in this, if they didn't actually build a classifier.
Example: if there is a supervised classification task that is part of a broader process, this counts, focus on that.
If no, skip the following questions.
2. Classification outcome: What is the general type of problem or outcome that the classifier is trying to predict? Keep it short if possible. For example: sentiment, gender, human/bot, hate speech, political affiliation.
3. Labels from human annotation: Is the classifier at least in part trained on labeled data that humans made for the purpose of the classification problem? This includes re-using existing data from human judgments, if it was for the same purpose as the classifier. This does not include clever re-using of metadata.
Do a quick CTRL-F for “manual” and “annot” if you don't see anything, just to be sure.
If not, skip the following questions about human annotation.
Example: ISideWith paper on political stances was labels from human annotation, just not original. They took the labels from elsewhere and filled in the gaps (more on that in next Q).
Example: Buying followers and seeing who follows (1411.4299.pdf) is not human annotation.
Example: Generating (smart) simulated datasets from metadata is not human annotation.
Example: 1612.08207.pdf is not annotation when looking up political affiliation of politicians from an external database, even though it is manual work. No judgment is involved.
Example: 1709.01895.pdf is labels from human annotation, even though it is semi-automated. They identified hashtags that they believe universally correspond to certain political stances. There is a form of human judgment here, although in that paper, they don't define or explain it.
Example: Evaluation using human annotation is not annotation for ML, if the annotation wasn't used to make the classifier. (1710.07394.pdf)
Example: If they are using human annotation just to have confidence that a machine-annotated dataset is as good as a human annotated one, but the human annotated dataset isn't actually used to train the classifier, it is *not* using human annotation for ML. (1605.05195.pdf)
4. Used original human annotation: Did the project involve creating new human-labeled data, or was it exclusively re-using an existing dataset?
Yes
No
Unsure
Papers may have a mix of new and old human labeled data, or new human labeled data and non-human labeled data. If there is any new human annotation, say yes.
New human annotation must be systematic, not filling in the gaps of another dataset. Example: ISideWith paper on political stances is *not* original human annotation, even though they did some manual original research to fill the gap.
If the methods section is too vague to not tell, then leave as unsure (example: 1801.06294.pdf)
4.5. Used external human annotation data: Did the project use an already existing dataset from human labeled data?
Yes
No
Unsure
If they are using external human annotated data, skip the remaining questions:
5. Original human annotation source: Who were the human annotators? Drop-down options are:
Amazon Mechanical Turk (AMT, Turkers)
Any other crowdworking platform (Crowdflower / Figure8)
The paper's authors
Academic experts / professionals in the area
No information in the paper
Other
Unsure
For academic experts or professionals in the area, this is independent from the kinds of specific training they received for the task at hand. Think of “the area” broadly, so if it is something about healthcare and nurses were recruited, that would be professionals in the area, even if they don't say anything about the nurses having specific training in the annotation task at hand. If it doesn't easily fit into these or uses multiple sources, add them in the next column.
Example: “We develop a mechanism to help three volunteers analyze each collected user manually” -- put other, if that is all they say
Example: If it just says “we annotated...” then assume it is only the paper's authors unless otherwise stated.
6. Number of human annotators:
Put the number if stated, if not, leave blank.
7. Training for human annotators: Did the annotators receive interactive training for this specific annotation task / research project? Training involves some kind of interactive feedback. Simply being given formal instructions or guidelines is not training. Prior professional expertise is not training. Options include:
Some kind of training is mentioned
No information in the paper
Unsure
Example: It is not considered training if there was prescreening, unless they were told what they got right and wrong or other debriefing. Not training if they just gave people with high accuracy more work.
Example: This paper had a minimum acceptable statement for some training information, with only these lines: “The labeling was done by four volunteers, who were carefully instructed on the definitions in Section 3. The volunteers agree on more than 90% of the labels, and any labeling differences in the remaining accounts are resolved by consensus.”
8. Formal instructions/guidelines: What documents were the annotators given to help them? This document you are in right now is an example of formal instructions with definitions and examples.
No instructions beyond question text
Instructions include formal definition or examples
No information in paper (or not enough to decide)
Unsure
Example of a paper showing examples: “we asked crowdsourcing workers to assign the `relevant' label if the tweet conveys/reports information useful for crisis response such as a report of injured or dead people, some kind of infrastructure damage, urgent needs of affected people, donations requests or offers, otherwise assign the `non-relevant' label”
9. Prescreening for crowdwork platforms
Leave blank if this is not applicable.
No prescreening (must state this)
Previous platform performance qualification (e.g. AMT Master)
Generic skills-based qualification (e.g. AMT Premium)
Location qualification
Project-specific prescreening: researchers had known ground truth and only invited workers who passed
No information
Unsure
10. Multiple annotator overlap: Did the annotators label at least some of the same items?
Yes, for all items
Yes, for some items
No
Unsure
No information
If it says there was overlap but not info to say all or some, put unsure.
11. Reported inter-annotator agreement: Leave blank if there was no overlap. Is a metric of inter-annotator agreement or intercoder reliability reported? It may be called Krippendorff's alpha, Cohen's kappa, F1 score, or other things.
Yes
No
Unsure
12. Reported crowdworker compensation: If using crowdworkers to annotate, did they say how much the annotators were paid for their work? Leave blank if crowdworkers were not used.
Yes
No
Unsure
13. Link to dataset available: Is there a link in the paper to the dataset they used?
Yes
No
Unsure
Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as soon as they occur and are reported via some Twitter user. The event detection process requires to know the keywords associated with each event and to assess the minimal count of each word to decide confidently that an event has occurred. In this research, we propose a novel method of spike matching to identify keywords, and use probabilistic classification to assess the probability of having an event given the volume of each word.
Event detection and prediction from social networks have been studied frequently in recent years. Most of the predictive frameworks use textual content such as likes, shares, and retweets, as features. The text is used as features either by tracking the temporal patterns of keywords, clustering words into topics, or by evaluating sentiment scores and polarity. The main challenge in keyword-based models is to determine which words to use in the first place, especially as people use words in a non-standard way, particularly on Twitter.
In this research, we aim for detecting large events as soon as they happen with near-live sensitivity. For example, When spontaneous protests occur just after recent news such as increasing taxes or decreasing budget, we need to have indicators to raise the flag of a happening protest. Identifying these indicators requires to select a set of words that are mostly associated with the events of interest such as protests. We then track the volume of these words and evaluate the probability of an event occurring given the current volume of each of the tracked features. The main challenge is to find this set of features that allow such probabilistic classification.
Using text as features in Twitter is challenging because of the informal nature of the tweets, the limited length of the tweet, platform-specific language, and multilingual nature of Twitter BIBREF0 , BIBREF1 , BIBREF2 . The main challenges for text analysis in Twitter are listed below:
We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize the context of the word ('Messi','strike' ) is different than ('labour','strike').
According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it values the spikes matching events and penalizes spikes with no event and penalizes events without spikes. Separate words can be noisy due to the misuse of the term by people, especially in big data environments. So, we rather used the word-pairs as textual features in order to capture the context of the word. For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”
In this paper, we propose a method to find the word-pairs that best represent the events of interest. These word-pairs can be used for time series analysis to predict future events, as indicated in Figure FIGREF1. They can also be used as seeds for topic modelling, or to find related posts and word-pairs using dynamic query expansion. The proposed framework uses a temporal filter to identify the spikes within each word-pair signal and binarize the word-pair time series vector BIBREF3. The binary vector of the word-pair is compared to the vector of protest days using the Jaccard similarity index BIBREF4, BIBREF5, where the word-pairs with the highest similarity scores are those most strongly associated with protest days. This feature selection method is built upon the assumption that people discuss an event on the day of that event more than on any day before or after it, which implies that word-pairs related to the event will form a spike on that specific day. Some of the spiking word-pairs are related to the nature of the event itself, such as “taxi protest” or “fair education”; these word-pairs will appear once or twice over the time frame. Meanwhile, more generic word-pairs such as “human rights” or “labour strike” will spike more frequently on the days of events regardless of the nature of the protest.
To test our method, we developed two experiments using all the tweets in Melbourne and Sydney over a period of 640 days. The total number of tweets exceeded 4 million per day, with a total of 12 million different word-pairs per day, forming 6 billion word-pairs over the entire time frame. The word-pairs selected for each city are used as features to classify whether or not an event occurs on a specific day in that city. We classified events from the extracted word-pairs using 9 classifiers, including Naive Bayes, decision trees, KNN, SVM, and logistic regression.
In Section 2, we describe event detection methods. Section 3 reviews the statistical methods used for data association and feature selection. Section 4 describes the proposed feature selection method. Section 5 describes model training and prediction. Section 6 describes the experiment design, the data, and the results. Section 7 summarizes the paper, discusses the conclusions, and outlines future work.
Event Detection Methods
Analyzing social networks for event detection is approached from multiple perspectives depending on the research objective, which can be predicting election results, a contest winner, or people's reactions to a government decision through protest. The main perspectives for analyzing social networks are (1) content analysis, where the textual content of each post is analyzed using natural language processing to identify the topic or the sentiment of the authors; (2) network structure analysis, where the relations between users are described in a tree structure for follower-followee patterns, or in a graph structure for friendship and interaction patterns, and these patterns can be used to infer people's political preferences prior to elections; and (3) behavioural analysis of each user, including sentiment, responses, likes, retweets, and location, to identify responses to specific events, which might be useful to identify users with terrorist intentions. In this section, we focus on textual content-based models, where text analysis and understanding can be achieved using keywords, topic modelling, or sentiment analysis.
Keyword-based approaches
Keyword-based approaches focus on sequence analysis of the time series of each keyword. They also consider different forms of each keyword, including n-grams, skip-grams, and word-pairs BIBREF6. Keyword-based approaches use the concept of distributional semantics to group semantically related words as synonyms to be used as a single feature BIBREF7. In this approach, keywords are usually associated with events by correlation, entropy, or distance metrics. Hossny et al. proposed using SVD with K-Means to strengthen keyword signals by grouping words with similar temporal patterns and mapping them onto one central word that has minimum distance to the other members of the cluster BIBREF8.
Sayyadi et al. used co-occurring keywords in documents such as news articles to build a network of keywords. This network is used as a graph to feed a community detection algorithm in order to identify and classify events BIBREF9. Takeshi et al. created a probabilistic spatio-temporal model to identify natural disaster events such as earthquakes and typhoons using multiple tweet-based features such as word counts per tweet, event-related keywords, and tweet context. They considered each Twitter user a social sensor and applied both the Kalman filter and the particle filter for location estimation. This model could detect 96% of Japanese earthquakes BIBREF10. Zhou et al. developed a named entity recognition model to find location names within tweets and use them as keyword features for event detection, then estimated the impact of the detected events qualitatively BIBREF11.
Weng et al. introduced “Event Detection by Clustering of Wavelet-based Signals” (EDCoW). This model used wavelets to analyze the frequency of word signals, then calculated the autocorrelation of each word signal in order to filter out outlier words. The remaining words were clustered using a modularity-based graph partitioning technique to form events BIBREF12. Ning et al. proposed a model to identify evidence-based precursors and forecasts of future events. They used a set of news articles to develop a nested multiple-instance learning model to predict events across multiple countries. This model can identify the news articles that can be used as precursors for a protest BIBREF13.
Topic modelling approaches
Topic modelling approaches focus on clustering related words according to their meaning and indexing them using a similarity metric such as cosine similarity or Euclidean distance. The most recognized techniques are (1) Latent Semantic Indexing (LSI), where the observation matrix is decomposed using singular value decomposition and the data are clustered using K-Means BIBREF7; (2) Latent Dirichlet Allocation (LDA), where the words are clustered using Gaussian mixture models (GMM) according to the likelihood of term co-occurrence within the same context BIBREF14; and (3) Word2Vec, which uses a very large corpus to compute continuous vector representations, to which we can apply standard vector operations to map one vector to another BIBREF15.
Cheng et al. suggested using space-time scan statistics to detect events by looking for clusters within data across both time and space, regardless of the content of each individual tweet BIBREF16. The clusters emerging during spatio-temporally relevant events are used as an indicator of a currently occurring event, as people tweet more often about event topics and news. Ritter et al. proposed a framework that uses the calendar date, cause, and event type to describe any event in a way similar to how Twitter users mention important events. This framework used temporal resolution, POS tagging, an event tagger, and named entity recognition. Once features are extracted, the association between the combination of features and the events is measured in order to determine which features are most important and how significant the event will be BIBREF17.
Zhou et al. introduced a graphical model to capture the information in social data, including time, content, and location, calling it the location-time constrained topic (LTT) model. They measure the similarity between tweets using KL divergence to assess media content uncertainty. Then, they measure the similarity between users using a “longest common subsequence” (LCS) metric. They aggregate the two measurements with weights into a single measure of message similarity, and use the similarity between streaming posts in a social network to detect social events BIBREF18.
Ifrim et al. presented another approach to topic detection that combines aggressive pre-processing of the data with hierarchical clustering of tweets. The framework analyzes different factors affecting the quality of topic modelling results BIBREF19, and processes real-time streams of live tweets to produce topic streams at a near real-time rate.
Xing et al. presented the mutually generative Latent Dirichlet Allocation model (MGE-LDA), which uses hashtags and topics, as each is generated mutually by the other in tweets. This process models the relationship between topics and hashtags in tweets and uses them both as features for event discovery BIBREF20. Azzam et al. used deep learning and cosine similarity to understand short text posts in question-answering communities BIBREF21, BIBREF22. Hossny et al. used inductive logic programming to understand short sentences from news for translation purposes BIBREF23.
Sentiment analysis approaches
The third approach is to identify sentiment through the context of the post, which is another application of distributional semantics that requires a huge amount of training data to build the required understanding of the context. Sentiment analysis approaches focus on recognizing the feelings of the crowd and use the score of each feeling as a feature to calculate the probability of social events occurring. The sentiment can represent the emotion, attitude, or opinion of the user towards the subject of the post. One approach to identifying sentiment is to find smiley faces such as emoticons and emojis within a tweet or a post. Another approach is to use a sentiment-labelled dictionary such as SentiWordNet to assess the sentiment associated with each word.
Generally, sentiment analysis has not been used on its own to predict civil unrest, especially as it still faces the challenges of sarcasm and of understanding negation in ill-formed sentences. Instead, it is used as an extra feature in combination with features from other approaches such as keywords and topic modelling. Paul et al. proposed a framework to predict the results of the presidential election in the United States in 2017. The proposed framework applied topic modelling to identify related topics in news, then used the topics as seeds for Word2Vec and LSTM to generate a set of enriched keywords. The generated keywords were used to classify politics-related tweets, which were in turn used to evaluate the sentiment towards each candidate. The sentiment score trend is used to predict the winning candidate BIBREF24.
Feature Selection Methods
Keywords can be selected as features as single terms, word-pairs, or skip-grams, and can be scored for classification using multiple methods such as mutual information, TF-IDF, INLINEFORM0, or traditional statistical methods such as ANOVA or correlation. Our problem faces two challenges. The first is the huge number of word-pairs extracted from all tweets across the whole time frame, which makes some techniques, such as TF-IDF and INLINEFORM1, computationally infeasible unless they can be distributed across parallel processors on a cluster. The second is the temporal nature of the data, which requires techniques that can capture the distributional semantics of terms along with the ground truth vector. In this section, we briefly describe a set of data association methods used to find the word-pairs that best identify event days.
Pearson correlation measures the linear dependency of the response variable on the independent variable, with a maximum dependency of 1 and no dependency at zero. This technique requires multiple assumptions to hold in order to assess the dependency properly: the signals of the variables must be normally distributed, homoskedastic, stationary, and free of outliers BIBREF25, BIBREF26. In social networks and human-authored tweets, we cannot guarantee that the word-pair signals throughout the time frame will satisfy these assumptions. Another drawback of Pearson correlation is that a zero score does not necessarily imply no association, although no association implies a zero score.
Spearman correlation is a rank-based metric that evaluates the linear association between the rank variables of the independent and the response variables; that is, it evaluates the linear correlation between the ranked versions of the original variables. Spearman correlation assumes monotonicity of the variables but relaxes the Pearson requirements that the signal be normal, homoskedastic, and stationary. Although the text signals in social network posts do not satisfy the monotonicity assumption, Spearman correlation can still select some word-pairs to be used as predictive features for classification. Spearman correlation shares the drawback of Pearson correlation that a zero score does not necessarily imply no association, while no association implies a zero score.
Distance correlation was introduced by Szekely et al. (2007) to measure the nonlinear association between two variables BIBREF27. It measures the statistical distance between probability distributions by dividing the Brownian covariance (distance covariance) between X and Y by the product of the distance standard deviations BIBREF28, BIBREF29.
TF-IDF is short for term frequency-inverse document frequency, a technique used for word selection in classification problems. The idea is to give a high feature weight to words that occur frequently within a specific class and to penalize words that occur frequently across multiple classes. For example, the term “Shakespeare” is a useful feature for classifying English literature documents, as it occurs frequently in English literature and rarely in any other kind of document. Meanwhile, the term “act” occurs frequently in English literature, but it also occurs frequently in other types of documents, so it is weighted up for its frequent appearance and penalized for its ubiquity across classes by the inverse document frequency BIBREF30.
Mutual information (MI) is a metric for the amount of information one variable conveys about another. MI evaluates how similar the joint distribution of the two variables is to the product of their marginal distributions, which makes MI more general than correlation: it is not limited to real-valued variables and can also be applied to binary, ordinal, and nominal values BIBREF31. Because mutual information compares distributions, it is less concerned with pairing the individual observations of X and Y than with their overall statistical distributions. This makes MI more useful for clustering purposes than for classification purposes BIBREF32.
The cosine similarity metric calculates the cosine of the angle between two vectors, evaluating the similarity of their directions rather than their magnitudes. The cosine similarity score equals 1 if the angle between the two vectors is zero, and the score is zero when the two vectors are perpendicular BIBREF33. If the two vectors point in opposite directions, the similarity score is -1. The cosine similarity metric is usually used in the positive space, which limits the scores to the interval [0,1].
The Jaccard index, or coefficient, is a metric that evaluates the similarity of two sets by comparing their members to identify the common elements versus the distinct ones. The main advantage of Jaccard similarity is that it ignores the default value, or null assumption, in the two vectors and only considers the non-default correct matches compared to the mismatches. This makes the metric immune to data imbalance. The Jaccard index is similar to cosine similarity in that it retains sparsity while still allowing discrimination of collinear vectors.
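To give a feel for how these association measures behave on the same data, the sketch below scores one hypothetical daily word-pair count vector against a binary event vector using several of the metrics discussed above; the counts, the spike threshold of 8, and the choice to compute mutual information on the binarized counts are all illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr
from sklearn.metrics import mutual_info_score

wp_counts = np.array([2, 3, 3, 4, 9, 3, 2, 3, 8, 3, 3, 1, 3, 9, 3, 1, 2, 4, 9, 1])
events    = np.array([0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0])

pearson_r, _ = pearsonr(wp_counts, events)
spearman_r, _ = spearmanr(wp_counts, events)
cosine = wp_counts @ events / (np.linalg.norm(wp_counts) * np.linalg.norm(events))

# Binarize the counts with a simple threshold before computing Jaccard and MI.
spikes = (wp_counts >= 8).astype(int)
intersection = np.sum((spikes == 1) & (events == 1))
union = np.sum((spikes == 1) | (events == 1))
jaccard = intersection / union
mi = mutual_info_score(events, spikes)

print(f"Pearson={pearson_r:.2f} Spearman={spearman_r:.2f} "
      f"cosine={cosine:.2f} Jaccard={jaccard:.2f} MI={mi:.3f}")
```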
Spike Matching Method
The proposed model extracts the word-pairs that are highly associated with event days, according to the distributional semantic hypothesis, and uses them to train the model that is later used for the binary classification task BIBREF34, as illustrated in Figure FIGREF10. The first step is data preparation: we load all the tweets for each day, exclude tweets containing URLs or unrelated topics, and clean each tweet by removing hashtags, non-Latin script, and stop words. We then lemmatize and stem each word in each tweet using the Lancaster stemmer. Finally, we extract the word-pairs in each tweet, where a word-pair is a list of n words co-occurring within the same tweet.
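A minimal sketch of this data-preparation step is shown below, assuming tweets are available as plain strings and that the usual NLTK tokenizer and stop-word resources are installed; the regular expressions, the pair size of two, and the omission of lemmatization are simplifications made for illustration.

```python
import re
from itertools import combinations

from nltk.corpus import stopwords
from nltk.stem import LancasterStemmer
from nltk.tokenize import word_tokenize

STEMMER = LancasterStemmer()
STOPWORDS = set(stopwords.words("english"))

def clean_tweet(text):
    # Drop URLs, hashtags and non-Latin characters, remove stop words, then stem.
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"#\w+", " ", text)
    text = re.sub(r"[^A-Za-z\s]", " ", text).lower()
    tokens = [t for t in word_tokenize(text) if t not in STOPWORDS]
    return [STEMMER.stem(t) for t in tokens]

def word_pairs(tokens, n=2):
    # All n-word combinations co-occurring in one tweet (n=2 gives word-pairs).
    return {tuple(sorted(c)) for c in combinations(set(tokens), n)}

tweet = "Protesters may be unmasked in wake of Coburg clash https://t.co/x (News) #melbourne"
print(word_pairs(clean_tweet(tweet)))
```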
The second step is to count the frequency of each word-pair per day; these counts are used as features to classify each day as either an event or a no-event day. The result is a matrix whose rows are word-pairs, whose columns are days, and whose values are the daily counts of each word-pair. The third step is to binarize the event count vector (ground truth) as well as the vector of each word-pair. The event vector is binarized by checking whether the count of events on each day is larger than zero. The word-pair count vectors are binarized by applying a temporal filter to the time series in order to identify the spikes, as explained in Equation EQREF11, where days with spikes are set to one and days without spikes are set to zero BIBREF35, BIBREF36. DISPLAYFORM0
where x is the count of the word-pair, INLINEFORM0 is the time variable, INLINEFORM1 is the time difference, and the threshold is the minimum height of the spike. Afterwards, we compare the binary vector of each word-pair with the ground truth binary vector using the Jaccard similarity index, as stated in Equation EQREF12 BIBREF4, BIBREF5. The word-pairs are then sorted in descending order of similarity score, and those with the highest scores are used as features for training the model in the fourth step. DISPLAYFORM0
where WP is the word-pair vector and GT is the ground truth vector.
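Since the exact form of Equation EQREF11 is not reproduced here, the sketch below gives one plausible reading of the temporal spike filter (a day is a spike when its count exceeds both neighbouring days by at least a threshold) together with the Jaccard score of Equation EQREF12; treat the filter definition, the toy counts, and the parameter values as assumptions rather than the authors' exact formula.

```python
import numpy as np

def binarize_spikes(counts, delta=1, threshold=3):
    # Mark day t as a spike when x(t) exceeds x(t-delta) and x(t+delta) by `threshold`.
    x = np.asarray(counts, dtype=float)
    spikes = np.zeros(len(x), dtype=int)
    for t in range(delta, len(x) - delta):
        if x[t] - x[t - delta] >= threshold and x[t] - x[t + delta] >= threshold:
            spikes[t] = 1
    return spikes

def jaccard(wp, gt):
    # Jaccard similarity between two binary vectors (spirit of Equation EQREF12).
    wp, gt = np.asarray(wp), np.asarray(gt)
    union = np.sum((wp == 1) | (gt == 1))
    return np.sum((wp == 1) & (gt == 1)) / union if union else 0.0

counts = [2, 3, 3, 4, 9, 3, 2, 3, 8, 3, 3, 1, 3, 9, 3, 1, 2, 4, 9, 1]
events = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0]
spikes = binarize_spikes(counts)
print(spikes, jaccard(spikes, events))
```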
Training and Prediction
Once we identify the best word-pairs to be used as features for classification, we split the time series vector of each word-pair into a training vector and a testing vector. We then use the training vectors of the selected word-pairs to train the model, as explained in Subsection SECREF13, and use the testing vectors of the same word-pairs to classify any day as an event or non-event day (SECREF16).
Training the Model
The next step is to train the model using the word-pair features selected above. We selected the Naive Bayes classifier as our classification technique for the following reasons: (1) the high bias of the NB classifier reduces the possibility of over-fitting, and our problem has a high risk of over-fitting due to the large number of features and the small number of observations; (2) the response variable is binary, so we do not need to regress a real value, only to identify the event class; and (3) the counts of the word-pairs as independent variables are limited to between 0 and 100 occurrences per day, which makes probabilistic approaches more effective than distance-based approaches.
The training process aims to calculate three prior probabilities that are later used to compute the posterior probabilities: (1) the probability of each word-pair count on a specific day given the status of the day as “event” or “non-event”; (2) the prior conditional probability of each word-pair given the event status INLINEFORM0; and (3) the probability of each event class as well as the probability of each word-pair, as stated in Equations EQREF15 and EQREF15. DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the word-pair, INLINEFORM1 is any class of event occurrence, and word-pair is the vector of counts for the word-pairs extracted from tweets.
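The sketch below computes the kinds of prior quantities listed above from a tiny binarized training matrix; the toy data, the variable names, and the Laplace smoothing constant are assumptions made for illustration, not the authors' exact formulation.

```python
import numpy as np

# Rows: days, columns: word-pairs (1 = the word-pair spiked on that day).
X_train = np.array([[1, 0, 1],
                    [0, 0, 0],
                    [1, 1, 1],
                    [0, 1, 0],
                    [1, 0, 1]])
y_train = np.array([1, 0, 1, 0, 1])   # 1 = event day, 0 = non-event day

classes = np.unique(y_train)
class_prior = {c: np.mean(y_train == c) for c in classes}     # P(class)
feature_prior = X_train.mean(axis=0)                          # P(word-pair)
# P(word-pair | class), with Laplace smoothing to avoid zero probabilities.
cond_prob = {c: (X_train[y_train == c].sum(axis=0) + 1) /
                ((y_train == c).sum() + 2)
             for c in classes}

print(class_prior, feature_prior, cond_prob)
```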
Predicting Civil Unrest
Once the prior probabilities are calculated using the training data, we use them to calculate the posterior probability of both classes, event day and non-event day, given the values of the word-pairs, using Equation EQREF17. DISPLAYFORM0
where INLINEFORM0 is the word-pair and INLINEFORM1 INLINEFORM2, as the word-pairs are assumed to be independent and their probabilities are known from the training step.
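For the prediction step, an off-the-shelf Bernoulli Naive Bayes reproduces the same kind of posterior computation when the features are binarized spikes (MultinomialNB would be the analogous choice for raw counts); the tiny matrices below are the same illustrative toy data as in the previous sketch, not the paper's data.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

X_train = np.array([[1, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1]])
y_train = np.array([1, 0, 1, 0, 1])          # 1 = event day, 0 = non-event day

model = BernoulliNB().fit(X_train, y_train)  # priors and conditionals fitted internally

x_new = np.array([[1, 0, 1]])                # binarized word-pair spikes for a new day
print(model.predict(x_new))                  # most probable class for the new day
print(model.predict_proba(x_new))            # posterior probability of each class
```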
Experiments and Results
The experiments are designed to detect civil unrest events in Melbourne on any specific day. We used all the tweets posted from Melbourne within a time frame of 640 days between December 2015 and September 2017. This time frame is split into 500 days for model training and 140 days for model testing, over multiple folds. The tweet location is determined using (1) the longitude and latitude meta-tag, (2) the tweet location meta-tag, (3) the profile location meta-tag, and (4) the time zone meta-tag. The total number of tweets exceeded 4 million daily. First, we cleaned the data of noisy signals and performed stemming and lemmatization; we then extracted the word-pairs from each tweet and counted each word-pair per day. Example 1 illustrates how each tweet is cleaned, prepared, and vectorized before being used to train the model. The steps are explained below:
As shown in Example 1, each word-pair is transformed from a vector of integer values into a vector of binary values, denoted INLINEFORM0. INLINEFORM1 is used to calculate the Jaccard similarity index between the binary vector and the binary vector of events. Each word-pair receives a similarity score according to the number of its spikes that match event days. This method uses the concept of distributional semantics, where co-occurring signals are likely to be semantically associated BIBREF34.
Example 1:
Original Tweet: Protesters may be unmasked in wake of Coburg clash https://t.co/djjVIfzO3e (News) #melbourne #victoria
Cleaned Tweet: protest unmask wake coburg clash news
List of two-word word-pairs: [`protest', `unmask'], [`protest', `wake'], [`protest', `coburg'], ..., [`unmask', `wake'], [`unmask', `coburg'], ..., [`clash', `news']
[`protest', `unmask'] training: INLINEFORM0
[`protest', `unmask'] testing: INLINEFORM1
Assuming a time frame of 20 days:
word-pair counts: [2,3,3,4,5,3,2,3,8,3,3,1,3,9,3,1,2,4,5,1]
Spikes (INLINEFORM2): [0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0]
Events (INLINEFORM3): [0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0]
INLINEFORM4
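Assuming INLINEFORM4 in Example 1 stands for the resulting Jaccard score, the Spikes and Events vectors above share 3 matching days out of 5 distinct spike-or-event days, giving 3/5 = 0.6; the snippet below reproduces that arithmetic.

```python
spikes = [0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0]
events = [0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0]

matches = sum(s and e for s, e in zip(spikes, events))   # spikes that fall on event days
union = sum(s or e for s, e in zip(spikes, events))      # days with a spike or an event
print(matches / union)                                   # 3 / 5 = 0.6
```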
Once we have selected the most informative word-pairs as features, we use their raw values to train the Naive Bayes classifier. The classifier is trained on 500 days selected randomly along the whole time frame, then used to predict the remaining 140 days. To ensure the robustness of our experiment, we applied 10-fold cross-validation, repeating the experiment 10 times using 10 different folds of randomly selected training and testing data. The prediction achieved an average area under the ROC curve of 90%, which is statistically significant, and an F-score of 91%, which is immune to data imbalance, as listed in Table TABREF18. Figure FIGREF25 shows the ROC curves for a single fold of Naive Bayes classification using the features extracted by each selection method. The classification results of the proposed method outperformed the benchmarks and the state of the art developed by Cui et al. (2017), Nguyen et al. (2017), Willer et al. (2016), and Adedoyin-Olowe et al. (2016), as illustrated in Table TABREF33 BIBREF12, BIBREF38, BIBREF39, BIBREF40, BIBREF41, BIBREF42.
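A schematic version of this evaluation protocol is sketched below with scikit-learn: 10 randomized splits of 500 training days and 140 test days, scored with ROC AUC and F-score. The feature matrix X and label vector y are random placeholders standing in for the selected word-pair counts and the daily event labels, so the printed numbers are meaningless; only the protocol is illustrated.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(640, 50))   # placeholder: daily counts of 50 selected word-pairs
y = rng.integers(0, 2, size=640)           # placeholder: 1 = event day, 0 = non-event day

splitter = StratifiedShuffleSplit(n_splits=10, train_size=500, test_size=140, random_state=0)
aucs, f1s = [], []
for train_idx, test_idx in splitter.split(X, y):
    model = MultinomialNB().fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], prob))
    f1s.append(f1_score(y[test_idx], model.predict(X[test_idx])))

print(f"mean AUC = {np.mean(aucs):.2f}, mean F-score = {np.mean(f1s):.2f}")
```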
The same experiment was applied to Sydney, Brisbane, and Perth in Australia over a time frame of 640 days, with 500 days of training data and 140 days of testing data, and the results were similar to the Melbourne results, with an average AUC of 0.91 and an average F-score of 0.79. To ensure that the proposed method is language independent, we used the same method to classify civil unrest days in Jakarta using the Indonesian language; the classification scores were lower than the average scores for English by 0.05, taking into consideration that we did not apply any NLP pre-processing, such as stemming and lemmatization, to the Indonesian tweets.
To verify the robustness of this feature selection method, we tested the selected features using multiple classifiers, including KNN, SVM, Naive Bayes, and decision trees. The results confirmed that the word-pairs selected using the spike-matching method achieve better AUC scores than those selected by the other correlation methods, as listed in Table TABREF19.
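This robustness check can be scripted in the same spirit: fit several standard classifiers on the same selected features and compare their cross-validated AUC scores. As before, X and y are random placeholders for the real word-pair counts and event labels, and the classifier list below is only a subset of the nine classifiers mentioned earlier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 100, size=(640, 50))    # placeholder feature matrix
y = rng.integers(0, 2, size=640)            # placeholder labels

classifiers = {
    "Naive Bayes": MultinomialNB(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "Decision tree": DecisionTreeClassifier(),
    "Logistic regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name:20s} mean AUC = {auc:.2f}")
```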
Conclusions
In this paper, we proposed a framework to detect civil unrest events by tracking the volume of each word-pair on Twitter. The main challenge in this model is to identify the word-pairs that are highly associated with the events and have predictive power. We used temporal filtering to detect the spikes within each time series vector and used Jaccard similarity to score each word-pair according to its similarity with the binary vector of event days. These scores are used to rank the word-pairs as features for prediction.
Once the word-pairs are identified, we trained a Naive Bayes classifier to classify any day in a specific region as an event or non-event day. We performed the experiment on both the Melbourne and Sydney regions in Australia and achieved a classification accuracy of 87%, with a precision of 77%, recall of 82%, area under the ROC curve of 91%, and F-score of 79%. All results were obtained using 10-fold randomized cross-validation, as listed in Table TABREF32.
The main contributions of this paper are (1) overcoming the Twitter challenges of acronyms, short text, ambiguity, and synonyms; (2) identifying the set of word-pairs to be used as features for live event detection; and (3) building an end-to-end framework that can detect events live according to the word counts. This work can be applied to similar problems where specific tweets can be associated with real-life events, such as disease outbreaks or stock market fluctuations. It can also be extended to predict future events one day in advance, using the same feature selection method together with time series analysis of the historical patterns of the word-pairs.
Acknowledgments
This research was fully supported by the School of Mathematical Sciences at the University of Adelaide. All the data, computation and technical framework were supported by Data-To-Decision-Collaborative-Research-Center (D2DCRC). | Yes |
858c51842fc3c1f3e6d2d7d853c94f6de27afade | 858c51842fc3c1f3e6d2d7d853c94f6de27afade_0 | Q: Which of the classifiers showed the best performance? | Logistic regression
7c9c73508da628d58aaadb258f3a9d4cc2a8a9b3 | 7c9c73508da628d58aaadb258f3a9d4cc2a8a9b3_0 | Q: Were any other word similar metrics, besides Jaccard metric, tested?
| Yes
7b2bf0c1a24a2aa01d49f3c7e1bdc7401162c116 | 7b2bf0c1a24a2aa01d49f3c7e1bdc7401162c116_0 | Q: How are the keywords associated with events such as protests selected?
Text: Introduction
Event detection is important for emergency services to react rapidly and minimize damage. For example, terrorist attacks, protests, or bushfires may require the presence of ambulances, firefighters, and police as soon as possible to save people. This research aims to detect events as soon as they occur and are reported via some Twitter user. The event detection process requires to know the keywords associated with each event and to assess the minimal count of each word to decide confidently that an event has occurred. In this research, we propose a novel method of spike matching to identify keywords, and use probabilistic classification to assess the probability of having an event given the volume of each word.
Event detection and prediction from social networks have been studied frequently in recent years. Most of the predictive frameworks use textual content such as likes, shares, and retweets, as features. The text is used as features either by tracking the temporal patterns of keywords, clustering words into topics, or by evaluating sentiment scores and polarity. The main challenge in keyword-based models is to determine which words to use in the first place, especially as people use words in a non-standard way, particularly on Twitter.
In this research, we aim for detecting large events as soon as they happen with near-live sensitivity. For example, When spontaneous protests occur just after recent news such as increasing taxes or decreasing budget, we need to have indicators to raise the flag of a happening protest. Identifying these indicators requires to select a set of words that are mostly associated with the events of interest such as protests. We then track the volume of these words and evaluate the probability of an event occurring given the current volume of each of the tracked features. The main challenge is to find this set of features that allow such probabilistic classification.
Using text as features in Twitter is challenging because of the informal nature of the tweets, the limited length of the tweet, platform-specific language, and multilingual nature of Twitter BIBREF0 , BIBREF1 , BIBREF2 . The main challenges for text analysis in Twitter are listed below:
We approached the first and second challenges by using a Bayesian approach to learn which terms were associated with events, regardless of whether they are standard language, acronyms, or even a made-up word, so long as they match the events of interest. The third and fourth challenges are approached by using word-pairs, where we extract all the pairs of co-occurring words within each tweet. This allows us to recognize the context of the word ('Messi','strike' ) is different than ('labour','strike').
According to the distributional semantic hypothesis, event-related words are likely to be used on the day of an event more frequently than any normal day before or after the event. This will form a spike in the keyword count magnitude along the timeline as illustrated in Figure FIGREF6 . To find the words most associated with events, we search for the words that achieve the highest number of spikes matching the days of events. We use the Jaccard similarity metric as it values the spikes matching events and penalizes spikes with no event and penalizes events without spikes. Separate words can be noisy due to the misuse of the term by people, especially in big data environments. So, we rather used the word-pairs as textual features in order to capture the context of the word. For example, this can differentiate between the multiple usages of the word “strike” within the contexts of “lightning strike”, “football strike” and “labour strike”
In this paper, we propose a method to find the best word-pairs to represent the events of interest. These word-pairs can be used for time series analysis to predict future events as indicated in Figure FIGREF1 . They can also be used as seeds for topic modelling, or to find related posts and word-pairs using dynamic query expansion. The proposed framework uses a temporal filter to identify the spikes within the word-pair signal to binarize the word-pair time series vector BIBREF3 . The binary vector of the word-pair is compared to the protest days vector using Jaccard similarity index BIBREF4 , BIBREF5 , where the word-pairs with highest similarity scores are the most associated word-pairs with protest days. This feature selection method is built upon the assumption that people discuss an event on the day of that event more than on any day before or after the event. This implies that word-pairs related to the event will form a spike on this specific day. Some of the spiking word-pairs are related to the nature of the event itself, such as “taxi protest” or “fair education”. These word-pairs will appear once or twice along the time frame. Meanwhile, more generic word-pairs such as “human rights” or “labour strike” will spike more frequently in the days of events regardless the protest nature.
To test our method, we developed two experiments using all the tweets in Melbourne and Sydney over a period of 640 days. The total number of tweets exceeded 4 million tweets per day, with a total word-pair count of 12 million different word-pairs per day, forming 6 billion word-pairs over the entire timeframe. The selected word-pairs from in each city are used as features to classify if there will be an event or not on a specific day in that city. We classified events from the extracted word-pairs using 9 classifiers including Naive Bayes, Decision Trees, KNN, SVM, and logistic regression.
In Section 2, we describe the event detection methods. Section 3 states the known statistical methods used for data association and feature selection. Section 4 describes the proposed feature selection method. Section 5 describes model training and prediction. Section 6 describes the experiment design, the data and the results. Section 7 summarizes the paper, discuss the research conclusion and explains future work.
Event Detection Methods
Analyzing social networks for event detection is approached from multiple perspectives depending on the research objective. This can be predicting election results, a contest winner, or predicting peoples' reaction to a government decision through protest. The main perspectives to analyze the social networks are (1) content analysis, where the textual content of each post is analyzed using natural language processing to identify the topic or the sentiment of the authors. (2) Network structure analysis, where the relation between the users are described in a tree structure for the follower-followee patterns, or in a graph structure for friendship and interaction patterns. These patterns can be used to know the political preference of people prior to elections. (3) Behavioural analysis of each user including sentiment, response, likes, retweets, location, to identify responses toward specific events. This might be useful to identify users with terrorist intentions. In this section, we will focus on textual content-based models, where text analysis and understanding can be achieved using keywords, topic modelling or sentiment analysis.
Keyword-based approaches
Keyword-based approaches focus on sequence analysis of the time series for each keyword. They also consider different forms for each keyword, including n-gram, skip-gram, and word-pairs BIBREF6 . The keyword-based approaches use the concept of the distributional semantics to group semantically-related words as synonyms to be used as a single feature BIBREF7 . In this approach, keywords are usually associated with events by correlation, entropy or distance metrics. Also, Hossny et al. proposed using SVD with K-Means to strengthen keyword signals, by grouping words having similar temporal patterns, then mapping them into one central word that has minimum distance to the other members of the cluster BIBREF8 .
Sayyadi et al. used co-occurring keywords in documents such as news articles to build a network of keywords. This network is used as a graph to feed a community detection algorithm in order to identify and classify events BIBREF9 . Takeshi et al. created a probabilistic spatio-temporal model to identify natural disasters events such as earthquakes and typhoons using multiple tweet-based features such as words counts per tweet, event-related keywords, and tweet context. They considered each Twitter user as a social sensor and applied both of the Kalman filter and particle filter for location estimation. This model could detect 96% of Japanese earthquakes BIBREF10 . Zhou et al. developed a named entity recognition model to find location names within tweets and use them as keyword-features for event detection, then estimated the impact of the detected events qualitatively BIBREF11 .
Weng et al. introduced “Event Detection by Clustering of Wavelet-based Signals” (EDCow). This model used wavelets to analyze the frequency of word signals, then calculated the autocorrelations of each word signal in order to filter outlier words. The remaining words were clustered using a modularity-based graph partitioning technique to form events BIBREF12 . Ning et al. proposed a model to identify evidence-based precursors and forecasts of future events. They used as a set of news articles to develop a nested multiple instance learning model to predict events across multiple countries. This model can identify the news articles that can be used as precursors for a protest BIBREF13 .
Topic modelling approaches
Topic modelling approaches focus on clustering related words according to their meaning, and indexing them using some similarity metric such as cosine similarity or Euclidean distance. The most recognized techniques are (1) Latent Semantic Indexing (LSI), where the observation matrix is decomposed using singular value decomposition and the data are clustered using K-Means BIBREF7 ,(2) Latent Dirichlet Allocation (LDA), where the words are clustered using Gaussian mixture models (GMM) according to the likelihood of term co-occurrence within the same context BIBREF14 , (3) Word2Vec, which uses a very large corpus to compute continuous vector representations, where we can apply standard vector operations to map one vector to another BIBREF15 .
Cheng et al. suggested using space-time scan statistics to detect events by looking for clusters within data across both time and space, regardless of the content of each individual tweet BIBREF16 . The clusters emerging during spatio-temporal relevant events are used as an indicator of a currently occurring event, as people tweet more often about event topics and news. Ritter et al. proposed a framework that uses the calendar date, cause and event type to describe any event in a way similar to the way Twitter users mention the important events. This framework used temporal resolution, POS tagging, an event tagger, and named entity recognition. Once features are extracted, the association between the combination of features and the events is measured in order to know what are the most important features and how significant the event will be BIBREF17 .
Zhou et al. introduced a graphical model to capture the information in the social data including time, content, and location, calling it location-time constrained topic (LTT). They measure the similarity between the tweets using KL divergence to assess media content uncertainty. Then, they measure the similarity between users using a “longest common subsequence” (LCS) metric. They aggregate the two measurements by augmenting weights as a measure for message similarity. They used the similarity between streaming posts in a social network to detect social events BIBREF18 .
Ifrim et al. presented another approach for topic detection that combines aggressive pre-processing of data with hierarchical clustering of tweets. The framework analyzes different factors affecting the quality of topic modelling results BIBREF19 , along with real-time data streams of live tweets to produce topic streams in close to real-time rate.
Xing et al. presented the mutually generative Latent Dirichlet Allocation model (MGE-LDA) that uses hashtags and topics, as they both are generated mutually by each other in tweets. This process models the relationship between topics and hashtags in tweets, and uses them both as features for event discovery BIBREF20 . Azzam et al. used deep learning and cosine similarity to understand short text posts in communities of question answering BIBREF21 , BIBREF22 . Also, Hossny et al. used inductive logic programming to understand short sentences from news for translation purposes BIBREF23
Sentiment analysis approaches
The third approach is to identify sentiment through the context of the post, which is another application for distributional semantics requiring a huge amount of training data to build the required understanding of the context. Sentiment analysis approaches focus on recognizing the feelings of the crowd and use the score of each feeling as a feature to calculate the probability of social events occurring. The sentiment can represent the emotion, attitude, or opinion of the user towards the subject of the post. One approach to identify sentiment is to find smiley faces such as emoticons and emojis within a tweet or a post. Another approach is to use a sentiment labelled dictionary such as SentiWordNet to assess the sentiment associated with each word.
Generally, sentiment analysis has not been used solely to predict civil unrest, especially as it still faces the challenges of sarcasm and understanding negation in ill-formed sentences. Meanwhile, it is used as an extra feature in combination with features from other approaches such as keywords and topic modelling. Paul et al. proposed a framework to predict the results of the presidential election in the United States in 2017. The proposed framework applied topic modelling to identify related topics in news, then used the topics as seeds for Word2Vec and LSTM to generate a set of enriched keywords. The generated keywords will be used to classify politics-related tweets, which are used to evaluate the sentiment towards each candidate. The sentiment score trend is used to predict the winning candidate BIBREF24 .
Feature Selection Methods
Keywords can be selected as features as a single term or a word-pair or a skip-grams, which can be used for classification using multiple methods such as mutual information, TF-IDF, INLINEFORM0 , or traditional statistical methods such as ANOVA or correlation. Our problem faces two challenges: the first is the huge number of word-pairs extracted from all tweets for the whole time frame concurrently, which make some techniques such as TF-IDF and INLINEFORM1 computationally unfeasible as they require the technique to be distributable on parallel processors on a cluster. The second challenge is the temporal nature of the data which require some techniques that can capture the distributional semantics of terms along with the ground truth vector. In this section, we describe briefly a set of data association methods used to find the best word-pairs to identify the event days.
Pearson correlation measures the linear dependency of the response variable on the independent variable with the maximum dependency of 1 and no dependency of zero. This technique needs to satisfy multiple assumptions to assess the dependency properly. These assumptions require the signals of the variables to be normally distributed, homoskedastic, stationary and have no outliers BIBREF25 , BIBREF26 . In social network and human-authored tweets, we cannot guarantee that the word-pairs signals throughout the timeframe will satisfy the required assumptions. Another drawback for Pearson correlation is that zero score does not necessarily imply no correlation, while no correlation implies zero score.
Spearman is a rank-based metric that evaluates the linear association between the rank variables for each of the independent and the response variables. It simply evaluates the linear correlation between the ranked variables of the original variables. Spearman correlation assumes the monotonicity of the variables but it relaxes the Pearson correlation requirements of the signal to be normal, homoskedastic and stationary. Although the text signals in the social network posts do not satisfy the monotonicity assumption, Spearman correlation can select some word-pairs to be used as predictive features for classification. Spearman correlation has the same drawback of Pearson correlation that zero score does not necessarily imply no correlation while no correlation implies zero score.
Distance correlation is introduced by Szekely et al . (2007) to measure the nonlinear association between two variables BIBREF27 . Distance correlation measures the statistical distance between probability distributions by dividing the Brownian covariance (distance covariance) between X and Y by the product of the distance standard deviations BIBREF28 , BIBREF29 .
TF-IDF is the short of term frequency-inverse document frequency technique that is used for word selection for classification problems. The concept of this technique is to give the words that occur frequently within a specific class high weight as a feature and to penalize the words that occur frequently among multiple classes. for example; the term “Shakespeare” is considered a useful feature to classify English literature documents as it occurs frequently in English literature and rarely occurs in any other kind of documents. Meanwhile, the term “act” will occur frequently in English literature, but it also occurs frequently in the other types of document, so this term will be weighted for its frequent appearance and it will be penalized for its publicity among the classes by what we call inverse-document-frequency BIBREF30 .
Mutual information is a metric for the amount of information one variable can tell the other one. MI evaluates how similar are the joint distributions of the two variables with the product of the marginal distributions of each individual variable, which makes MI more general than correlation as it is not limited by the real cardinal values, it can also be applied to binary, ordinal and nominal values BIBREF31 . As mutual information uses the similarity of the distribution, it is not concerned with pairing the individual observations of X and Y as much as it cares about the whole statistical distribution of X and Y. This makes MI very useful for clustering purposes rather than classification purposes BIBREF32 .
Cosine similarity metric calculates the cosine of the angle between two vectors. The cosine metric evaluates the direction similarity of the vectors rather the magnitude similarity. The cosine similarity score equals to 1 if the two vectors have the angle of zero between the directions of two vectors, and the score is set to zero when the two vectors are perpendicular BIBREF33 . if the two vectors are oriented to opposite directions, the similarity score is -1. Cosine similarity metric is usually used in the positive space, which makes the scores limited within the interval of [0,1].
Jaccard index or coefficient is a metric to evaluate the similarity of two sets by comparing their members to identify the common elements versus the distinct ones. The main advantage of Jaccard similarity is it ignores the default value or the null assumption in the two vectors and it only considers the non-default correct matches compared to the mismatches. This consideration makes the metric immune to the data imbalance. Jaccard index is similar to cosine-similarity as it retains the sparsity property and it also allows the discrimination of the collinear vectors.
Spike Matching Method:
The proposed model extracts the word-pairs having a high association with event days according to the distributional semantic hypothesis and use them for training the model that will be used later for the binary classification task BIBREF34 as illustrated in figure FIGREF10 . The first step is the data preparation where we load all the tweets for each day, then we exclude the tweets having URLs or unrelated topics, then we clean each tweet by removing the hashtags, non-Latin script and stopping words. Then we lemmatize and stem each word in each tweet using Lancaster stemmer. Finally, we extract the word-pairs in each tweet. The word-pair is the list of n words co-occurring together within the same tweet.
The second step is to count the frequency of each word-pair per each day, which are used as features to classify the day as either event or no-event day. The formulation is a matrix with rows as word-pairs and columns as days and values are daily counts of each word-pair. The third step is to binarize the event count vector (ground truth) as well as the vector of each word-pair. Binarizing the event vector is done by checking if the count of events in each day is larger than zero. The binarization of the word-pair count vectors is done by applying a temporal filter to the time series in order to identify the spikes as explained in equation EQREF11 , where the days with spikes are set to ones and days without spike are set to zeros BIBREF35 , BIBREF36 . DISPLAYFORM0
Where x is the count of the word-pair, INLINEFORM0 is the time variable, INLINEFORM1 is the time difference, the threshold is the minimum height of the spike. Afterwards, we compare the binary vector for each word-pair with the ground truth binary vector using the Jaccard similarity index as stated in equation EQREF12 BIBREF4 , BIBREF5 . The word-pairs are then sorted descendingly according to the similarity score. The word-pairs with the highest scores are used as a feature for training the model in the fourth step. DISPLAYFORM0
where WP is the word pair vector, GT is the ground truth vector
Training and Prediction
Once we identify the best word-pairs to be used as features for classification, we split the time series vector of each word-pair into a training vector and a testing vector. then we use the list of the training vectors of the selected word-pairs to train the model as explained in subsection SECREF13 and use the list of testing vectors for the same word-pairs to classify any day to event/nonevent day SECREF16 .
Training the model:
The third step is to train the model using the set of features generated in the first step. We selected the Naive Bayes classifier to be our classification technique for the following reasons: (1) the high bias of the NB classifier reduces the possibility of over-fitting, and our problem has a high probability of over-fitting due to the high number of features and the low number of observations, (2) the response variable is binary, so we do not need to regress the variable real value as much as we need to know the event-class, and (3) The counts of the word-pairs as independent variables are limited between 0 and 100 occurrences per each day, which make the probabilistic approaches more effective than distance based approaches.
The training process aims to calculate three priori probabilities to be used later in calculating the posterior probabilities: (1) the probability of each word-pair count in a specific day given the status of the day as “event” or “non-event”. (2) the priori conditional probability of each word-pair given event status INLINEFORM0 . (3) the probability of each event class as well as the probability of each word-pair as stated in equations EQREF15 and EQREF15 . DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 is the word-pair, INLINEFORM1 is any class for event occurrence and word-pair is the vector of counts for the word-pairs extracted from tweets
Predicting Civil Unrest
Once the priori probabilities are calculated using the training data, we use them to calculate the posterior probability of both classes of event-days and non-event-days given the values of the word-pairs using the equation EQREF17 . DISPLAYFORM0
where INLINEFORM0 is the word-pair, INLINEFORM1 INLINEFORM2 As the word-pairs are assumed to be independent and previously known from the training step.
Experiments and Results
The experiments are designed to detect civil unrest events in Melbourne on any specific day. In this experiment, we used all the tweets posted from Melbourne within a time frame of 640 days between December 2015 and September 2017. This time frame will be split into 500 days for model training and 140 days for model testing on multiple folds. The tweet location is specified using (1) longitude and latitude meta-tag, (2) tweet location meta-tag, (3) the profile location meta-tag, and (4) The time zone meta-tag. The total number of tweets exceeded 4 million tweets daily. Firstly, we cleaned the data from noisy signals, performed stemming and lemmatization then extracted the word-pairs from each tweet and count each word-pair per each day. Example 1 illustrates how each tweet is cleaned, prepared and vectorized before being used for training the model. The steps are explained below:
As explained in example 1, each word-pair will be transformed from a vector of integer values into a vector of binary values and denoted as INLINEFORM0 . INLINEFORM1 will be used to calculate the Jaccard similarity index of the binary vector with the events binary vector. Each word-pair will have a similarity score according to the number of word-pair spikes matching the event days. This method uses the concept of distributional semantic, where the co-occurring signals are likely to be semantically associated BIBREF34 .
Example 1: Original Tweet: Protesters may be unmasked in wake of Coburg clash https://t.co/djjVIfzO3e (News) #melbourne #victoria Cleaned Tweet: protest unmask wake coburg clash news List of two-words-word-pairs: [`protest', `unmask'], [`protest', `wake'], [`protest', `Coburg'], ..., [`unmask', `wake'], [`unmask', `coburg'],..., [`clash', `news'] [`protest', `unmask'] training : INLINEFORM0 [`protest', `unmask'] testing : INLINEFORM1 Assuming a time frame of 20 days word-pair: [2,3,3,4,5,3,2,3,8,3,3,1,3,9,3,1,2,4,5,1] Spikes ( INLINEFORM2 ): [0,0,0,0,1,0,0,0,1,0,0,0,0,1,0,0,0,0,1,0] Events( INLINEFORM3 ): [0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,1,0,1,0] INLINEFORM4
Once we selected the most informative word-pairs as features, we used their raw values to train the Naive Bayes classifier. The classifier is trained using 500 days selected randomly along the whole time frame, then it is used to predict the other 140 days. To ensure the robustness of our experiment, we applied 10-fold cross-validation, where we performed the same experiment 10 times using 10 different folds of randomly selected training and testing data. The prediction achieved an average area under the ROC curve of 90%, which is statistically significant, and an F-score of 91%, which is immune to data imbalance, as listed in table TABREF18 . Figure FIGREF25 shows the ROC curves for the results of a single fold of Naive Bayes classification that uses the features extracted by each selection method. The classification results of the proposed method outperformed the benchmarks and the state of the art developed by Cui et al. (2017), Nguyen et al. (2017), Willer et al. (2016), and Adedoyin-Olowe et al. (2016), as illustrated in table TABREF33 BIBREF12 , BIBREF38 , BIBREF39 , BIBREF40 , BIBREF41 , BIBREF42 .
The same experiment was applied to Sydney, Brisbane and Perth in Australia on a time frame of 640 days, with 500 days of training data and 140 days of testing data, and the results were similar to the Melbourne results, with an average AUC of 0.91 and an average F-score of 0.79. To ensure that the proposed method is language independent, we used the same method to classify civil unrest days in Jakarta using the Indonesian language. The classification scores were lower than the average scores for English by 0.05, taking into consideration that we did not apply any NLP pre-processing such as stemming and lemmatization to the Indonesian tweets.
To verify the robustness of this feature selection method, we tested the selected features using multiple classifiers such as KNN, SVM, Naive Bayes and decision trees. The results showed that the word-pairs selected using the spike-matching method achieve better AUC scores than those selected by the other correlation methods, as listed in table TABREF19 .
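A hedged sketch of this robustness check with scikit-learn is shown below; the feature matrix X (daily counts of the selected word-pairs), the labels y (event days), and the hyper-parameters are illustrative assumptions rather than the exact experimental setup:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y, folds=10):
    """Cross-validated AUC for each classifier on the selected word-pair features."""
    models = {
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "SVM": SVC(),
        "Naive Bayes": MultinomialNB(),
        "Decision tree": DecisionTreeClassifier(),
    }
    return {name: cross_val_score(model, X, y, cv=folds, scoring="roc_auc").mean()
            for name, model in models.items()}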
Conclusions
In this paper, we proposed a framework to detect civil unrest events by tracking the volume of each word-pair on Twitter. The main challenge with this model is to identify the word-pairs that are highly associated with the events and have predictive power. We used temporal filtering to detect the spikes within the time-series vector and used Jaccard similarity to calculate the score of each word-pair according to its similarity with the binary vector of event days. These scores are used to rank the word-pairs as features for prediction.
Once the word-pairs are identified, we trained a Naive Bayes classifier to classify any day in a specific region as an event or non-event day. We performed the experiment on both the Melbourne and Sydney regions in Australia, and we achieved a classification accuracy of 87% with a precision of 77%, a recall of 82%, an area under the ROC curve of 91% and an F-score of 79%. The results were all achieved after 10-fold randomized cross-validation, as listed in table TABREF32 .
The main contributions of this paper are (1) to overcome the Twitter challenges of acronyms, short text, ambiguity and synonyms, (2) to identify the set of word-pairs to be used as features for live event detection, and (3) to build an end-to-end framework that can detect events live according to the word counts. This work can be applied to similar problems where specific tweets can be associated with real-life events such as disease outbreaks or stock market fluctuations. This work can also be extended to predict future events one day in advance, where we will use the same method for feature selection in addition to time-series analysis of the historical patterns of the word-pairs.
Acknowledgments
This research was fully supported by the School of Mathematical Sciences at the University of Adelaide. All the data, computation and technical framework were supported by Data-To-Decision-Collaborative-Research-Center (D2DCRC). | By using a Bayesian approach and by using word-pairs, where they extract all the pairs of co-occurring words within each tweet. They search for the words that achieve the highest number of spikes matching the days of events. |
e09e89b3945b756609278dcffb5f89d8a52a02cd | e09e89b3945b756609278dcffb5f89d8a52a02cd_0 | Q: How many speeches are in the dataset?
Text: Introduction
As the world moves towards increasing forms of digitization, the creation of text corpora has become an important activity for NLP and other fields of research. Parliamentary data is a rich corpus of discourse on a wide array of topics. The Lok Sabha website provides access to all kinds of reports, debates and bills related to the proceedings of the house. Similarly, the Rajya Sabha website also contains debates, bills and reports introduced in the house. The Lok Sabha website also contains information about members of the parliament who are elected by the people and debate in the house. Since the data is unstructured, it cannot be computationally analyzed. There is a need to shape the data into a structured format for analysis. This data is important as it can be used to visualize person-, party- and agenda-level semantics in the house.
The data that we get from parliamentary proceedings contains sarcasm, interjections and allegations, which makes it difficult to apply standard NLP techniques BIBREF0 . Members of the parliament discuss various important aspects and there is a strong purpose behind every speech. We wanted to analyze this particular aspect. Traditional polar stances (for or against) do not do justice to the diplomatic intricacies in the speeches. We created this taxonomy to better understand the semantics, i.e. the pragmatics of the speeches, and to give enriched insights into members' responses in a speech. Pragmatics is the study of the speaker's meaning, focusing not on the phonetic or grammatical form of an utterance but on what the speaker's intentions and beliefs are. It is a sub-field of linguistics and semiotics that studies the ways in which context contributes to meaning.
After thorough investigation of many speeches we found that the statements made by members cannot be deemed strictly "for or against" a bill or government. A person may be appreciating a bill or the government's effort in one part of a speech while also drawing attention to other contentious issues. Similarly, a person criticizing the government for an irresponsible action could be giving some constructive suggestions elsewhere. A political discourse may not always be polar and might have a higher spectrum of meaning. After investigating and highlighting statements with different intentions, we came up with a minimal set of 4 mutually exclusive categories with different degrees of correlation with the traditional two polar categories (for and against). It is observed that any statement by a participating member will fall into one of these categories, namely Appreciation, Call for Action, Issue and Blaming.
For example, if the debate consists mostly of issues, one can infer that the bill is not serving its purpose well. Also, this preliminary step will lead to new areas of research, such as detection of appreciation and blame, along similar lines to argument mining, which has been evolving in recent years in the field of linguistics. We will quote portions of a few speeches which will give an idea of the data being presented:
This city has lost its place due to negligence of previous governments and almost all industries have migrated from here and lack of infrastructure facilities, business is also losing its grip. It is very unfortunate that previous UP Governments also did not do any justice to this city.
- Shri Devendra Singh Bhole, May 03, 2016
As evident, the speaker is clearly blaming the previous governments for negligence of the city. In this sense the data is very rich and a lot of linguistic research is possible. Researchers can work on different aspects such as detection of critique made by members, suggestions raised by members, etc. Given the data, it can be used for rhetoric, linguistic, historical, political and sociological research. Parliamentary data is a major source of socially relevant content. A new series of workshops is being conducted for the sole purpose of encouraging research on parliamentary debates, ParlClarin.
As a preliminary step, we created four major categories of the speeches spoken by the parliament members. The definitions and examples of the four categories are explained in the tables below. The examples are taken from a debate on the NABARD bill in Lok Sabha.
A speech can be labelled with multiple categories, as members can appreciate and raise issues in the same speech. The following points are the contributions of this paper :
Related Work
Many linguists around the globe are concentrating on the creation of parliamentary datasets. BIBREF1 gives an overview of the parliamentary records and corpora from countries with a focus on their availability through the Clarin infrastructure. A dataset of Japanese Local Assembly minutes was created and analyzed for statistical data such as the number of speakers, characters and words BIBREF2 . BIBREF3 created a highly multilingual parallel corpus of the European parliament and demonstrated that it is useful for statistical machine translation. Parliamentary debates are full of arguments. Ruling party members refute the claims made by opposition party members and vice versa. Members provide strong arguments for supporting their claim or refuting others' claims. Analyzing argumentation from a computational linguistics point of view has led very recently to a new field called argumentation mining BIBREF4 . One can perform argument mining on these debates and analyze the results. BIBREF5 worked on detecting perspectives in UK political debates using a Bayesian modelling approach. BIBREF6 worked on claim detection from UK political debates using both linguistic features from text and features from speech.
Stance classification is a relatively new and challenging approach to deepen opinion mining by classifying a user's stance in a debate, i.e. whether he is for or against the topic BIBREF7 . BIBREF8 addressed the question of whether opinion mining techniques can be used on Congressional debates or not. BIBREF9 worked on stance classification of posts in online debate forums using both structural and linguistic features. BIBREF10 trained an SVM BIBREF11 classifier with features of unigrams, bigrams and trigrams to predict whether a sentence is in agreement or disagreement and achieved an F-score of 0.55 for agreement and 0.81 for disagreement on the evaluation set. To our knowledge, no one has worked on classifying speeches based on their purpose; this is the first work towards this aspect.
DataSet
Our dataset consists of synopses of debates in the lower house of the Indian Parliament (Lok Sabha). The dataset consists of :
In Lok Sabha, a session refers to all the debates held in a particular cycle of sittings. There are 55 debate types identified by the Lok Sabha. Table 3 identifies some of the debate types we have considered and their frequency between the years 2014 and 2017. We left out debate types which do not occur regularly. Each debate type has its own style of proceedings. For example, in the debate type "Government Bills", a minister places a bill on the table and discussion is carried out on the bill, whereas in the debate type "Matter under 377", each speaker raises an issue he is concerned about but no discussion is held on the issues.
Creation
The creation of the dataset involved 3 steps. The first step was to scrape the pdf files from the Lok Sabha website. Each pdf file is a session. The second step was to convert the pdf files into text files for easy parsing. The challenge here was to convert this unstructured information into a structured format. The third step was to extract relevant data using pattern matching. We developed a software parser for extracting entities such as date, debate type, member name and speech. We used regex-based pattern-matching code to find patterns in the text files. For example, to segregate a speaker's name from his speech, we used :
re.split(":", line, maxsplit=1)
on each line, as the name of the speaker and his/her speech are separated by a colon. An example pdf can be accessed using this URL . Right now, member name and bill name need to be stored manually, which we plan to automate too. Sometimes the pattern matching fails due to irregularities in the pdf, as those were written by humans, though such failures were negligible. We stored the structured data into a Mongo database as different debate types have different schema. The database consists of the following tables :
Sessions : all the debates happened on a particular day with date, secretary general name.
Members : information about the members/speakers of the parliament i.e name and party affiliation.
Debates : contains the member id and the corresponding speeches, summaries and keywords.
Bills : the name of the bill.
Debate Type : the name of the debate type.
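A minimal sketch of the parse-and-store step described above is shown below; the helper names, the illustrative schema fields, and the assumption that db is an already-opened pymongo database handle are ours, not the exact parser code:

import re

def parse_speech_line(line):
    """Split one synopsis line into (speaker, speech); they are separated by a colon."""
    speaker, speech = re.split(":", line, maxsplit=1)
    return speaker.strip(), speech.strip()

def store_debate(db, debate_type, topic, lines):
    """Store one parsed debate in the debates collection (illustrative schema)."""
    speeches = {}
    for i, line in enumerate(lines, start=1):
        speaker, speech = parse_speech_line(line)
        member = db.members.find_one({"name": speaker})
        speeches[str(i)] = {
            "speech": speech,
            "memberId": member["_id"] if member else None,
        }
    return db.debates.insert_one({
        "topic": topic,
        "debateType": debate_type,
        "speeches": speeches,
    }).inserted_id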
The software parser developed is very generic. As new sessions are added on the Lok Sabha website, the software parser automatically identifies them, parses them and stores the structured data in the database. The database has been hosted on an online database hosting site, mLab. The mongo shell can be accessed using this command on any Linux machine which has mongo installed.
mongo ds235388.mlab.com:35388/synopsis -u public -p public
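For programmatic access, an equivalent read-only connection from Python might look like the sketch below; the connection URI is simply the pymongo form of the shell command above, and the example query fields follow the documents shown in the Examples section:

from pymongo import MongoClient

# Read-only access to the hosted synopsis database (same credentials as the shell command above).
client = MongoClient("mongodb://public:public@ds235388.mlab.com:35388/synopsis")
db = client["synopsis"]

print(db.list_collection_names())             # lists the collections described above
print(db.members.find_one({"party": "BJP"}))  # one member document, as in the Examples section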
Annotation
We have annotated 1201 speeches with the four categories mentioned above. We also annotated the stances of the speakers towards the bill/issue that is being debated. There are two stances: one is for and the other is against. The statistics of the annotated data are shown in Table 4.
Two humanities students were involved in the annotation of the four categories on the 1201 speeches. The annotator agreement is shown in Table 5 and is evaluated using two metrics: one is Cohen's Kappa BIBREF12 and the other is the inter-annotator agreement, which is the percentage of overlapping choices between the annotators.
The inter-annotator agreement for the stance categories was 0.92. The high inter-annotator scores clearly show how easy it was to delineate each category. They also signify that the definitions of the categories to be annotated were very clear.
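A small sketch of how these two agreement figures can be computed with scikit-learn is given below; the label lists are toy placeholders rather than the actual annotations:

from sklearn.metrics import cohen_kappa_score

# Toy placeholder annotations for one category (1 = category assigned, 0 = not assigned).
annotator_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
annotator_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
percent_agreement = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
print(kappa, percent_agreement)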
Keywords and Summarization
We have used TextRank, which is an extractive summarizer BIBREF13 , for summarizing the entire debate and for finding keywords in the debate. TextRank is a graph-based ranking model for text processing, specifically keyphrase extraction and sentence extraction. TextRank performs well on text summarization using graph-based techniques BIBREF14 . We added these two extra fields, i.e. the keywords extracted by TextRank and the summary created by TextRank, to the debates collection. An example summary is :
The last National Health Policy was framed in 2002. The Policy informs and prioritizes the role of the Government in shaping health systems in all its dimensions investment in health, organization and financing of health care services, prevention of diseases and promotion of good health through cross-sectoral action, access to technologies, developing human resources, encouraging medical pluralism, building the knowledge base required for better health, financial protection strategies and regulation and progressive assurance for health. The Policy aims for attainment of the highest possible level of health and well-being for all at all ages, through a preventive and promotive health care orientation in all developmental policies, and universal access to good quality health care services without anyone having to face financial hardship as a consequence. The Policy seeks to move away from Sick-Care to Wellness, with thrust on prevention and health care promotion. Before this, the Policy was for the Sick-Care Health Policy. Now we are making it Promotional and Preventive Health Policy. While the policy seeks to reorient and strengthen the public health systems, it also looks afresh at strategic purchasing from the private sector and leveraging their strengths to achieve national health goals. As a crucial component, the policy proposes raising public health expenditure to 2.5 per cent of the GDP in a time bound manner. The Policy has also assigned specific quantitative targets aimed at reduction of disease prevalence/incidence under three broad components viz., (a) health status and programme impact, (b) health system performance, and (c) health systems strengthening, aligned to the policy objectives. To improve and strengthen the regulatory environment, the policy seeks putting in place systems for setting standards and ensuring quality of health care. The policy advocates development of cadre of mid-level service providers, nurse practitioners, public health cadre to improve availability of appropriate health human resource. The policy also seeks to address health security and Make in India for drugs and devices. It also seeks to align other policies for medical devices and equipment with public health goals.
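A minimal sketch of producing these two fields with an off-the-shelf TextRank implementation is shown below; the open-source summa package is our assumed stand-in for the exact implementation used, and the compression ratio is an illustrative choice:

from summa import summarizer, keywords

def summarize_debate(debate_text, ratio=0.2):
    """Return the TextRank summary and keywords, the two extra fields stored in the debates collection."""
    return {
        "summary": summarizer.summarize(debate_text, ratio=ratio),
        "keywords": keywords.keywords(debate_text),
    }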
Detection of Polarity
To detect the polarity of each speech, we have used the VADER BIBREF15 sentiment analysis tool. The tool uses a simple rule-based model for general sentiment analysis and generalizes more favorably across contexts than many benchmarks such as LIWC and SentiWordNet. The tool takes a sentence as input and gives a score between -1 and 1. The polarity of a speech is calculated by taking the sum of the polarities of its sentences. If the sum is greater than zero, the speech is classified as positive; if it is less than zero, it is classified as negative; and if it is equal to zero, it is classified as neutral. The statistics of the data are presented in Table 6.
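The sentence-level scoring and sum-based speech polarity described above might look like the following sketch; the use of the vaderSentiment package's compound score and NLTK's sentence tokenizer is an assumption about tooling, not the authors' exact setup:

from nltk.tokenize import sent_tokenize  # may require nltk.download("punkt")
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def speech_polarity(speech):
    """Sum per-sentence compound scores and map the total to a polarity label."""
    total = sum(analyzer.polarity_scores(s)["compound"] for s in sent_tokenize(speech))
    if total > 0:
        return "Positive"
    if total < 0:
        return "Negative"
    return "Neutral"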
Examples
A Document in session collection.
{
"_id" : ObjectId("5a4255c789.."),
"indianDate" : "Vaisakha 9,1938(Saka)",
"debates" : {
"5999649837.." : ObjectId("5a425b5.."),
"5999644a37.." : ObjectId("5a425b06..")
},
"englishDate" : "Friday,April 29,2016",
"houseName" : "LOK SABHA",
"secretaryGeneralName" : "ANOOP MISHRA"
}
The _id is the unique key assigned by the mongo database. The keys in the debates key represent the debate types from the debate types collection. The values of the debates key refer to the corresponding debates in the debates collection.
A Document in member collection. The table consists of the name of the member who spoke, the house of the parliament and the party to which he is affiliated.
{
"_id" : ObjectId("59a8e0e983"),
"name" : "Dharambir Singh,Shri",
"house" : "Lok Sabha",
"party" : "BJP"
}
A Document in bill collection. The table consists of the bill name.
{
"_id" : ObjectId("59de525596..."),
"name" : "THE COMPENATION BILL, 2016"
}
A Document in debates collection of debate type Submission Members. The table consists of all the speeches made in a particular debate, in order, with the summary and keywords from TextRank.
{
"_id" : ObjectId("5a42539889.."),
"topic" : "Flood situation in ...",
"keywords" : "water state ... ",
"summary" : "...",
"speeches" : {
"1" : {
"speech" : "In Tamil Nadu and in...",
"memberId" : "59a92d88a0b4...",
"polarity" : "Negative"
},
"2" : {
"speech" : "We all have witness...",
"memberId" : "59cbc3ef6636...",
"polarity" : "Positive"
},
"3" : {
...
}
...
...
}
The memberId refers to the _id in the member's collection.
Experiment
In this section, we deal with two tasks: task one is the classification of the stances the speakers take and task two is the classification of categories based on purpose. Stance classification differs from sentiment analysis. For instance, of the 919 speeches annotated as for, only 719 were labelled as positive, and of the 282 speeches annotated as against, only 89 were labelled as negative. These statistics clearly indicate the difference between polarity detection and stance classification.
Text classification is a core task in many applications, like spam detection, sentiment analysis or smart replies. We used fastText and SVM BIBREF16 for preliminary experiments. We pre-processed the text by removing punctuation and lowering the case. Facebook developers have developed fastText BIBREF17 , which is a library for efficient learning of word representations and sentence classification. The reason we have used fastText is its promising results in BIBREF18 .
We divided our training and testing data in the ratio of 8:2 for classification. As mentioned above, we used fastText and SVM for both classification tasks. We report accuracy for each class as it is a multi-label classification problem. The results are shown in Table 7 and Table 8. Also, the parameters used for fastText are described in Table 9.
We have not used hs (hierarchical softmax) for binary classification; instead we used regular softmax, as it gave better results in fastText.
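A hedged sketch of the fastText one-vs-rest training for a single category is given below; the file names, label strings and hyper-parameter values are illustrative placeholders, since the actual settings are those listed in Table 9:

import fasttext

# train_appreciation.txt holds one pre-processed speech per line in fastText format,
# e.g. "__label__appreciation <speech text>" (file name and label string are illustrative).
model = fasttext.train_supervised(
    input="train_appreciation.txt",
    epoch=25,          # scores stopped improving after 25 epochs (Table 10)
    wordNgrams=2,      # illustrative placeholder
    loss="softmax",    # regular softmax rather than hierarchical softmax, as noted above
)
print(model.test("test_appreciation.txt"))                  # (N, precision@1, recall@1)
print(model.predict("the nabard bill will help farmers"))   # predicted label and probability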
For SVM, the features were word vectors trained using word2vec BIBREF19 with a dimension size of 300, whereas for fastText, the features were word vectors trained using character n-gram embeddings. We have achieved considerably good results. We plan to annotate more and check if the accuracy increases any further. The limitation that we feel is the number of annotations done so far. We approached the classification problem as a one-vs-rest classification problem. We performed the classification at the document level. Later we would like to analyze at the sentence level. The lowest accuracy was for the Issue category and the highest for the Blame category. This research will inspire researchers to take on further work on mining appreciation and blame from text, in line with the ongoing approaches to argument mining, hate speech detection, sarcasm generation, etc.
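The SVM setup described above might be sketched as follows, with each speech represented by the average of its 300-dimensional word2vec vectors; the mean-pooling step is our assumption, since the paper only states that word2vec features were used:

import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

def train_svm_on_word2vec(tokenized_speeches, labels):
    """Train word2vec on the speeches, average word vectors per speech, then fit an SVM."""
    w2v = Word2Vec(sentences=tokenized_speeches, vector_size=300, min_count=1)

    def embed(tokens):
        vectors = [w2v.wv[t] for t in tokens if t in w2v.wv]
        return np.mean(vectors, axis=0) if vectors else np.zeros(300)

    X = np.vstack([embed(tokens) for tokens in tokenized_speeches])
    clf = SVC()  # binary labels per category give the one-vs-rest setup described above
    clf.fit(X, labels)
    return w2v, clf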
As we increase the number of epochs in fastText, the scores also increase, as evident from Table 10, but the increase stops after 25 epochs.
Conclusion
In this paper, we presented a dataset of synopses of Indian parliamentary debates. We developed a generic software parser for the conversion of unstructured pdfs into a structured format, i.e. into a document database using the mongo database software. We analyzed the purpose of the speeches of the members of parliament, categorized them into 4 major categories and provided statistics of the categories. We also tried to identify the categories automatically using the fastText algorithm and provided the results. The analysis is done for understanding the purpose of the speeches in the parliament. We also presented our results on binary stance classification of the speeches, i.e. whether the member is in favour of the debate topic or not.
Future Work
In future, we would like to increase the size of the dataset by including sessions of previous years which are not yet digitized. Sessions before 2009 are yet to be digitized by the Lok Sabha editorial of India. We also plan to include Rajya Sabha debates in the dataset. We have used fastText for classifying categories. We plan to develop a set of features to increase the accuracy of the classification task, as we believe that features like party affiliation will have a greater impact, and to experiment with other machine learning approaches.
TextRank is used for summarization. We feel that for political debates, summarization should emphasize the arguments made by members, unlike TextRank. In the whole debate, a lot of themes are raised by the members. The debate revolves around these themes. So, developing a model for thematic summarization with arguments will capture the complete picture of the entire debate, unlike TextRank. We plan to do this as our future work on these debates. A short summary of the important themes discussed with its arguments will benefit journalists, newspaper editors, common people etc. | 5575 speeches |
0cf5132ac7904b7b81e17938d5815f70926a5180 | 0cf5132ac7904b7b81e17938d5815f70926a5180_0 | Q: What classification models were used?
fastText and SVM BIBREF16
1d860d7f615b9ca404c504f9df4231a702f840ef | 1d860d7f615b9ca404c504f9df4231a702f840ef_0 | Q: Do any speeches not fall in these categories?
Text: Introduction
As the world moves towards increasing forms of digitization, the creation of text corpora has become an important activity for NLP and other fields of research. Parliamentary data is a rich corpus of discourse on a wide array of topics. The Lok Sabha website provides access to all kinds of reports, debates, bills related to the proceedings of the house. Similarly, the Rajya Sabha website also contains debates, bills, reports introduced in the house. The Lok Sabha website also contains information about members of the parliament who are elected by the people and debate in the house. Since the data is unstructured , it cannot be computationally analyzed. There is a need to shape the data into a structured format for analysis. This data is important as it can be used to visualize person, party and agenda level semantics in the house.
The data that we get from parliamentary proceedings has presence of sarcasm, interjections and allegations which makes it difficult to apply standard NLP techniques BIBREF0 . Members of the parliament discuss various important aspects and there is a strong purpose behind every speech. We wanted to analyze this particular aspect. Traditional polar stances (for or against) do not justify for the diplomatic intricacies in the speeches. We created this taxonomy to better understand the semantics i.e the pragmatics of the speeches and to give enriched insights into member's responses in a speech. The study of the speaker's meaning, not focusing on the phonetic or grammatical form of an utterance, but instead on what the speaker's intentions and beliefs are is pragmatics. Pragmatics is a sub-field of linguistics and semiotics that studies the ways in which context contributes to meaning.
After thorough investigation of many speeches we found that the statements made by members cannot be deemed strictly "for or against" a bill or government. A person maybe appreciating a bill or government's effort in one part of a speech but also asking attention to other contentious issues. Similarly, a person criticizing government for an irresponsible action could be giving some constructive suggestions elsewhere. A political discourse may not always be polar and might have a higher spectrum of meaning. After investigating and highlighting statements with different intentions we came up with a minimal set of 4 mutually exclusive categories with different degrees of correlation with the traditional two polar categories (for and against). It is observed that any statement by a participating member will fall into one of these categories namely - Appreciation, Call for Action, Issue, Blaming.
For example, if the debate consists of more of issues, one can infer that the bill is not serving the its purpose in a well manner. Also, this preliminary step will lead to new areas of research such as detection of appreciation, blame in similar lines of argument mining which is evolving in the recent years in the field of linguistics. We will quote portions of a few speeches which will give an idea of the data being presented:
This city has lost its place due to negligence of previous governments and almost all industries have migrated from here and lack of infrastructure facilities, business is also losing its grip. It is very unfortunate that previous UP Governments also did not do any justice to this city.
- Shri Devendra Singh Bhole, May 03, 2016
As evident, the speaker is clearly blaming the previous governments for negligence on the city. In this sense the data is very rich and a lot of linguistic research is possible. Researchers can work on different aspects such as detection of critique made by members, suggestions raised by members etc. Given the data, it can be used for rhetoric, linguistic, historical, political and sociological research. Parliamentary data is a major source of socially relevant content. A new series of workshops are being conducted for the sole purpose of encouraging research in parliamentary debates ParlClarin.
As a preliminary step, we created four major categories of the speeches spoken by the parliament members. The definitions and examples of the four categories are explained in the below tables respectively. The examples are taken from a debate on NABARD bill in Lok Sabha.
A speech can be labelled with multiple categories as members can appreciate and raise issues in the same speech. The following points are the contributions of this paper :
Related Work
Many linguists around the globe are concentrating on creation of parliamentary datasets. BIBREF1 gives an overview of the parliamentary records and corpora from countries with a focus on their availability through Clarin infrastructure. A dataset of Japanese Local Assembly minutes was created and analyzed for statistical data such as number of speakers, characters and words BIBREF2 . BIBREF3 created a highly multilingual parallel corpus of European parliament and demonstrated that it is useful for statistical machine translation. Parliamentary debates are full of arguments. Ruling party members refute the claims made by opposition party members and vice versa. Members provide strong arguments for supporting their claim or refuting other's claim. Analyzing argumentation from a computational linguistics point of view has led very recently to a new field called argumentation mining BIBREF4 . One can perform argument mining on these debates and analyze the results. BIBREF5 worked on detecting perspectives in UK political debates using a Bayesian modelling approach. BIBREF6 worked on claim detection from UK political debates using both linguistic features text and features from speech.
Stance classification is a relatively new and challenging approach to deepen opinion mining by classifying a user's stance in a debate i.e whether he is for or against the topic. BIBREF7 . BIBREF8 addressed the question of whether opinion mining techniques can be used on Congressional debates or not. BIBREF9 worked on stance classification of posts in online debate forums using both structural and linguistic features. BIBREF10 trained a svm BIBREF11 classifier with features of unigrams, bigrams and trigrams to predict whether a sentence is in agreement or disagreement and achieved an F-score of 0.55 for agreement and 0.81 for disagreement on the evaluation set. No one has worked on classifying speeches based on their purpose. This is the first novel work towards this aspect.
DataSet
Our dataset consists of synopsis of debates in the lower house of the Indian Parliament (Lok Sabha). The dataset consists of :
In Lok Sabha, a session is referred to as all the debates held in a particular cycle of sitting. There are 55 debate types identified by the Lok Sabha. Table 3 identifies some of the debate types we have considered and their frequency between the years 2014 and 2017. We opted out debate types which do not occur regularly. Each debate type has its own style of proceedings. For example, in the debate type "Government Bills", a minister places a bill on the table and discussion is carried out on the bill where as in the debate type "Matter under 377", each speaker raises an issue of which he is concerned of but no discussion is done on the issues.
Creation
The creation of the dataset involved 3 steps. The first step was to scrap the pdf files from the Lok Sabha website. Each pdf file is a session. The second step was to convert the pdf files into text files for easy parsing. The challenge here was to convert this unstructured information into a structured format. The third step is to extract relevant data using pattern matching. We developed a software parser for extracting the entities such as date, debate type, member name and speech. We used regex, pattern matching code to find out patterns from the text file. For example to segregate a speaker's name from his speech, we used :
re.split(":")
as name of the speaker and his/her speech is separated by a colon. An example pdf can be accessed using this URL . Right now, member name and bill name are needed to be stored manually which we plan to automate too. Sometimes the pattern matching fails due to irregularities in the pdf as those were written by humans though they were negligible. We stored the structured data into a Mongo database as different debate types have different schema. The database consists of the following tables :
Sessions : all the debates happened on a particular day with date, secretary general name.
Members : information about the members/speakers of the parliament i.e name and party affiliation.
Debates : contains the member id and the corresponding speeches, summaries and keywords.
Bills : the name of the bill.
Debate Type : the name of the debate type.
The software parser developed is very generic. As new sessions are being added on the Lok Sabha website, the software parser automatically identifies them, parses it and stores the structured data in the database. The database has been hosted in a online database hosting site, mLab. The mongo shell can be accessed using this command in any linux machine which has mongo installed.
mongo ds235388.mlab.com:35388/synopsis -u public -p public
Annotation
We have annotated 1201 speeches with the four categories mentioned above, on the speeches. We also annotated stances of the speakers towards the bill/issue that is being debated on. There are two stances one is for and other is against. The statistics of the annotated data is shown in Table 4.
Two humanities students were involved in the annotation of the four categories on 1201 speeches. The annotator agreement is shown in Table 5 and is evaluated using two metrics, one is the Kohen's Kappa BIBREF12 and other is the inter annotator agreement which is the percentage of overlapping choices between the annotators.
The inter annotator agreement for the stance categories were 0.92. The high values of inter annotator scores clearly explain how easy it was to delineate each category. It also signifies that the definition of the category that needed to be annotated, were very clear.
Keywords and Summarization
We have used TextRank which is an extractive summariser BIBREF13 for summarizing the entire debate and for finding keywords in the debate. TextRank is a graph based ranking model for text processing specifically KeyPhrase Extraction and Sentence Extraction. TextRank performs better in text summarization using graph based techniques BIBREF14 . We added these two extra fields i.e the keywords extracted by TextRank and the summary created by TextRank in the debates collection. An example summary is :
The last National Health Policy was framed in 2002. The Policy informs and prioritizes the role of the Government in shaping health systems in all its dimensions investment in health, organization and financing of health care services, prevention of diseases and promotion of good health through cross-sectoral action, access to technologies, developing human resources, encouraging medical pluralism, building the knowledge base required for better health, financial protection strategies and regulation and progressive assurance for health. The Policy aims for attainment of the highest possible level of health and well-being for all at all ages, through a preventive and promotive health care orientation in all developmental policies, and universal access to good quality health care services without anyone having to face financial hardship as a consequence. The Policy seeks to move away from Sick-Care to Wellness, with thrust on prevention and health care promotion. Before this, the Policy was for the Sick-Care Health Policy. Now we are making it Promotional and Preventive Health Policy. While the policy seeks to reorient and strengthen the public health systems, it also looks afresh at strategic purchasing from the private sector and leveraging their strengths to achieve national health goals. As a crucial component, the policy proposes raising public health expenditure to 2.5 per cent of the GDP in a time bound manner. The Policy has also assigned specific quantitative targets aimed at reduction of disease prevalence/incidence under three broad components viz., (a) health status and programme impact, (b) health system performance, and (c) health systems strengthening, aligned to the policy objectives. To improve and strengthen the regulatory environment, the policy seeks putting in place systems for setting standards and ensuring quality of health care. The policy advocates development of cadre of mid-level service providers, nurse practitioners, public health cadre to improve availability of appropriate health human resource. The policy also seeks to address health security and Make in India for drugs and devices. It also seeks to align other policies for medical devices and equipment with public health goals.
Detection of Polarity
To detect the polarity of each speech, we used the VADER BIBREF15 sentiment analysis tool. VADER uses a simple rule-based model for general sentiment analysis and generalizes more favorably across contexts than many benchmarks such as LIWC and SentiWordNet. The tool takes a sentence as input and returns a score between -1 and 1. The polarity of a speech is calculated by summing the polarities of its sentences: if the sum is greater than zero, the speech is classified as positive; if it is less than zero, as negative; and if it is equal to zero, as neutral. The statistics of the data are presented in Table 6.
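A minimal sketch of this speech-level rule, assuming the vaderSentiment package and NLTK's sentence tokenizer (tooling assumptions on our side):
# Speech-level polarity as described above: sum the per-sentence compound scores.
# Requires the NLTK "punkt" tokenizer data to be downloaded.
from nltk.tokenize import sent_tokenize
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def speech_polarity(speech):
    total = sum(analyzer.polarity_scores(s)["compound"] for s in sent_tokenize(speech))
    if total > 0:
        return "Positive"
    if total < 0:
        return "Negative"
    return "Neutral"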
Examples
A Document in session collection.
[s]""blue[l]:black
{
    "_id" : ObjectId("5a4255c789.."),
    "indianDate" : "Vaisakha 9,1938(Saka)",
    "debates" : {
        "5999649837.." : ObjectId("5a425b5.."),
        "5999644a37.." : ObjectId("5a425b06..")
    },
    "englishDate" : "Friday,April 29,2016",
    "houseName" : "LOK SABHA",
    "secretaryGeneralName" : "ANOOP MISHRA"
}
The _id is the unique key assigned by the Mongo database. The keys inside debates represent the debate types from the debate types collection, and their values refer to the corresponding debates in the debates collection.
A Document in member collection. It contains the name of the member who spoke, the house of the parliament, and the party to which the member is affiliated.
[s]""blue[l]:black
{
    "_id" : ObjectId("59a8e0e983"),
    "name" : "Dharambir Singh,Shri",
    "house" : "Lok Sabha",
    "party" : "BJP"
}
A Document in bill collection. It contains the bill name.
[s]""blue[l]:black
{
    "_id" : ObjectId("59de525596..."),
    "name" : "THE COMPENATION BILL, 2016"
}
A Document in debates collection of debate type Submission Members. It contains all the speeches made in a particular debate, in order, together with the summary and keywords from TextRank.
[s]""blue[l]:black
{
    "_id" : ObjectId("5a42539889.."),
    "topic" : "Flood situation in ...",
    "keywords" : "water state ... ",
    "summary" : "...",
    "speeches" : {
        "1" : {
            "speech" : "In Tamil Nadu and in...",
            "memberId" : "59a92d88a0b4...",
            "polarity" : "Negative"
        },
        "2" : {
            "speech" : "We all have witness...",
            "memberId" : "59cbc3ef6636...",
            "polarity" : "Positive"
        },
        "3" : {
            ...
        },
        ...
    }
}
The memberId refers to the _id in the member's collection.
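For illustration, a speech's memberId can be resolved against the member collection as in the sketch below (the database and collection names are assumptions, not taken from the paper):
# Join a debate's speeches with the member collection via memberId;
# "parliament", "debates" and "members" are assumed names.
from bson import ObjectId
from pymongo import MongoClient

db = MongoClient()["parliament"]
debate = db.debates.find_one({"topic": {"$regex": "^Flood situation"}})
for order in sorted(debate["speeches"], key=int):
    entry = debate["speeches"][order]
    member = db.members.find_one({"_id": ObjectId(entry["memberId"])})
    print(order, member["name"], member["party"], entry["polarity"])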
Experiment
In this section, we deal with two tasks: task one is the classification of the stance a speaker takes, and task two is the classification of the categories based on purpose. Stance classification differs from sentiment analysis: for instance, of the 919 speeches annotated as for, only 719 were labelled positive, and of the 282 speeches annotated as against, only 89 were labelled negative. These statistics clearly indicate the difference between polarity detection and stance classification.
Text classification is a core task in many applications, like spam detection, sentiment analysis or smart replies. We used fastText and SVM BIBREF16 for preliminary experiments, pre-processing the text by removing punctuation and lower-casing it. fastText BIBREF17, developed at Facebook, is a library for efficient learning of word representations and sentence classification; we used it because of its promising results in BIBREF18.
We divided our training and testing data in the ratio of 8:2 for classification. As mentioned above, we used fastText and SVM for both classification tasks. We report accuracy for each class, as it is a multi-label classification problem. The results are shown in Table 7 and Table 8, and the parameters used for fastText are described in Table 9.
We did not use hierarchical softmax (hs) for binary classification; the regular softmax gave better results with fastText.
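A sketch of one such one-vs-rest fastText run is given below; the file names and hyper-parameter values are illustrative assumptions (the actual settings are those of Table 9):
# One-vs-rest fastText classifier for a single category (e.g. Blame vs. rest).
# Each line of train_blame.txt is a lower-cased, punctuation-stripped speech
# prefixed with __label__blame or __label__other.
import fasttext

model = fasttext.train_supervised(
    input="train_blame.txt",
    epoch=25,          # Table 10 suggests the gains level off around 25 epochs
    wordNgrams=2,      # illustrative choice; see Table 9 for the actual parameters
    loss="softmax",    # regular softmax rather than hierarchical softmax (hs)
)
n, precision, recall = model.test("test_blame.txt")
print(n, precision, recall)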
For SVM, the features were word vectors trained using word2vec BIBREF19 with a dimension of 300, whereas for fastText the features were word vectors trained using character n-gram embeddings. We achieved considerably good results. We plan to annotate more data and check whether the accuracy increases further; the main limitation we see is the number of annotations available. We approached the problem as a one-vs-rest classification at the document level; later, we would like to analyze it at the sentence level. The lowest accuracy was for the Issue category and the highest for the Blame category. We hope this research will inspire further work on mining appreciation and blame from text, in line with ongoing approaches to argument mining, hate speech detection, sarcasm generation, etc.
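The SVM baseline described above can be realized as in the sketch below; averaging the word vectors of a speech into a single document vector is our assumption, as the paper does not detail how the word-level vectors are pooled (the two example speeches are placeholders):
# SVM over 300-d word2vec features; pooling by averaging is an assumption.
# Assumes gensim >= 4 (which uses the `vector_size` argument).
import numpy as np
from gensim.models import Word2Vec
from sklearn.svm import SVC

train_speeches = ["we thank the government for the new health policy",      # placeholder data
                  "the state government has failed to handle the floods"]
train_labels = ["appreciation", "blame"]

tokenized = [s.lower().split() for s in train_speeches]
w2v = Word2Vec(sentences=tokenized, vector_size=300, window=5, min_count=1)

def doc_vector(tokens):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

X = np.stack([doc_vector(t) for t in tokenized])
clf = SVC(kernel="linear").fit(X, train_labels)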
As we increase the number of epochs in fastText, the scores also increase, as evident from Table 10, but the improvement stops after 25 epochs.
Conclusion
In this paper, we presented a dataset of synopses of Indian parliamentary debates. We developed a generic parser to convert the unstructured PDFs into a structured format, i.e., into a document database using MongoDB. We analyzed the purpose of the speeches of the members of parliament, categorized them into 4 major categories, and provided statistics for these categories. We also identified the categories automatically using the fastText algorithm and reported the results. The analysis is done to understand the purpose of the speeches in the parliament. We also presented our results on binary stance classification of the speeches, i.e., whether the member is in favour of the debate topic or not.
Future Work
In future, we would like to increase the size of the dataset by including sessions of previous years which are not yet digitized; sessions before 2009 are yet to be digitized by the Lok Sabha editorial of India. We also plan to include Rajya Sabha debates in the dataset. We used fastText for classifying the categories; we plan to develop a set of features to increase the accuracy of the classification task, as we believe that features like party affiliation will have a greater impact, and to experiment with other machine learning approaches.
TextRank is used for summarization. We feel that, for political debates, summarization should emphasize the arguments made by members, unlike TextRank. In the whole debate, a lot of themes are raised by the members, and the debate revolves around these themes. Developing a model for thematic summarization with arguments would therefore capture the complete picture of the entire debate, unlike TextRank. We plan to do this as future work on these debates. A short summary of the important themes discussed, with their arguments, will benefit journalists, newspaper editors, common people, etc. | Unanswerable |
ed7985e733066cd067b399c36a3f5b09e532c844 | ed7985e733066cd067b399c36a3f5b09e532c844_0 | Q: What is different in BERT-gen from standard BERT?
Text: Introduction
The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potentials of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text?
In the Artificial Intelligence community, several works have investigated the longstanding research question of whether textual representations encode visual information. On the one hand, a large body of research called language grounding considers that textual representations lack visual commonsense BIBREF5, and intend to ground the meaning of words BIBREF6, BIBREF7 and sentences BIBREF8, BIBREF9 in the perceptual world. In another body of work, textual representations have successfully been used to tackle multi-modal tasks BIBREF10 such as Zero-Shot Learning BIBREF11, Visual Question Answering BIBREF12 or Image Captioning BIBREF13. Following the latter line of research, in this paper we evaluate the potential of pre-trained language models to generalize in the context of Visual Question Generation (VQG) BIBREF14.
The Visual Question Generation task allows us to investigate the cross-modal capabilities of BERT: unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where input can be textual and/or visual. VQG data usually includes images and the associated captions, along with corresponding questions about the image; thus, different experimental setups can be designed to analyze the impact of each modality. Indeed, the questions can be generated using i) textual (the caption), ii) visual (the image), or iii) multi-modal (both the caption and the image) input.
From a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from captors and cameras), in order to refine their interpretation of the situation they are presented with. It could also allow systems relying on knowledge-bases to gain visual common sense and deal with the Human Reporting Bias BIBREF15, which states that the content of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text.
Recently, BERT-based Multi-Modal Language Models have been proposed BIBREF16, BIBREF17, BIBREF18, BIBREF19 to tackle multi-modal tasks, using different approaches to incorporate visual data within BERT. From these works, it is left to explore whether the cross-modal alignment is fully learned, or it is to some extent already encoded in the BERT abstractions. Therefore, in contrast with those approaches, we explicitly avoid using the following complex mechanisms:
Multi-modal supervision: all previous works exploit an explicit multi-modal supervision through a pre-training step; the models have access to text/image pairs as input, to align their representations. In contrast, our model can switch from text-only to image-only mode without any explicit alignment.
Image-specific losses: specific losses such as Masked RoI (Region of Interest) Classification with Linguistic Clues BIBREF19 or sentence-image prediction BIBREF18 have been reported helpful to align visual and text modalities. Instead, we only use the original MLM loss from BERT (and not its entailment loss).
Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of simple linear projection layer. This allows us to assess whether the representations encoded in BERT can transfer out-of-the-box to another modality.
Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied both in mono and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated to their corresponding positions (bounding boxes). We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image.
Our contributions can be summarized as follows:
we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono and multi-modal settings;
we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality;
we report state-of-the-art results on the VQG task;
we provide extensive ablation analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi- modal).
Related Work ::: Unsupervised Pre-trained Language Models
Learning unsupervised textual representations that can be applied to downstream tasks is a widely investigated topic in the literature. Text representations have been learned at different granularities: words with Word2vec BIBREF20, sentences with SkipThought BIBREF21, paragraphs with ParagraphVector BIBREF22 and contextualized word vectors with ELMo BIBREF23. Other methods leverage a transfer-learning approach by fine-tuning all parameters of a pre-trained model on a target task, a paradigm which has become mainstream since the introduction of BERT BIBREF0. BERT alleviates the problem of the uni-directionality of most language models (i.e. where the training objective aims at predicting the next word) by proposing a new objective called Masked Language Model (MLM). Under MLM, some words, that are randomly selected, are masked; the training objective aims at predicting them.
Related Work ::: Multi-modal Language Models
Following the successful application of BERT BIBREF0, and its derivatives, across a great majority of NLP tasks, several research efforts have focused on the design of multi-modal versions of BERT. VideoBERT BIBREF24, a joint video and text model, is pre-trained on a huge corpus of YouTube videos, and applied to action classification and video captioning tasks on the YouCook II dataset BIBREF25. The video is treated as a “visual sentence" (each frame being a “visual word") that is processed by the BERT Transformer.
Concerning models jointly treating information from images and text, visual features extracted from the image are used as “visual words", and a [SEP] special token is employed to separate textual and visual tokens. In the literature, visual features are object features extracted with a Faster R-CNN BIBREF26 – with the notable exception of BIBREF27 who used pooling layers from a CNN. A first body of work exploit single-stream Transformers in which visual features are incorporated in a BERT-like Transformer: this is the case for VisualBERT BIBREF18, VL-BERT BIBREF19, Unicoder-VL BIBREF28 and B2T2 BIBREF29. Other works, such as ViLBERT BIBREF16 and LXMERT BIBREF17 have investigated two-stream approaches: these models employ modality-specific encoders built on standard Transformer blocks, which are then fused into a cross-modal encoder. Interestingly, none of the aforementioned models have been used for generation tasks such as VQG, tackled in this work.
Related Work ::: Visual Question Generation
The text-based Question Generation task has been largely studied by the NLP community BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36. However, its visual counterpart, Visual Question Generation (VQG), has been comparatively less explored than standard well-known multi-modal tasks such as Visual Question Answering (VQA) BIBREF37, BIBREF38, BIBREF39, BIBREF40, Visual Dialog BIBREF41, BIBREF42, or Image Captioning BIBREF43, BIBREF44, BIBREF45.
The VQG task was first introduced by BIBREF46 in their Neural Self Talk model: the goal is to gain knowledge about an image by iteratively generating questions (VQG) and answering them (VQA). The authors tackle the task with a simple RNN conditioned on the image, following Image Captioning works such as BIBREF45.
Suitable data for the VQG task can come from standard image datasets on which questions have been manually annotated, such as $VQG_{COCO}$, $VQG_{Flickr}$, $VQG_{Bing}$ BIBREF14 , each consisting of 5000 images with 5 questions per image. Alternatively, VQG samples can be derived from Visual Question Answering datasets, such as $VQA1.0$ BIBREF47, by “reversing" them (taking images as inputs and questions as outputs).
A variety of approaches have been proposed. BIBREF14 use a standard Gated Recurrent Neural Network, i.e. a CNN encoder followed by a GRU decoder to generate questions. BIBREF48 aim at generating, for a given image, multiple visually grounded questions of varying types (what, when, where, etc.); similarly, BIBREF49 generate diverse questions using Variational Autoencoders. In BIBREF50, VQG is jointly tackled along its dual task (VQA), just as BIBREF46. In BIBREF51, BIBREF52, the image (processed by a CNN) and the caption (processed by a LSTM) are combined in a mixture module, followed by a LSTM decoder to generate the question, leading to state-of-the-art results on the VQG task on $VQA1.0$ data. More recently, BIBREF53 incorporate multiple cues – place information obtained from PlaceCNN BIBREF54, caption, tags – and combine them within a deep Bayesian framework where the contribution of each cue is weighted to predict a question, obtaining the current state-of-the-art results on $VQG_{COCO}$.
Model
In VQG, the objective is to generate a relevant question from an image and/or its caption. The caption $X_{txt}$ is composed of $M$ tokens $txt_1, ..., txt_M$; these tokens can be words or subwords (smaller than word) units depending on the tokenization strategy used. As BERT uses subword tokenization, throughout this paper we will refer to subwords as our tokenization units.
The proposed model is illustrated in Figure FIGREF11. In SECREF12, we detail how images are incorporated in the Transformer framework. In SECREF14, we present BERT-gen, a novel approach to use BERT for text generation.
Model ::: Representing an Image as Text
In this work, we treat textual and visual inputs similarly, by considering both as sequences. Since an image is not a priori sequential, we consider the image $X_{img}$ as a sequence of object regions $img_1, ..., img_N$, as described below.
The images are first processed as in BIBREF17: a Faster-RCNN BIBREF26, pre-trained on Visual Genome BIBREF55, detects the $N=36$ most salient regions (those likely to include an object) per image. The weights of the Faster-RCNN are fixed during training, as we use the precomputed representations made publicly available by BIBREF56. Each image is thus represented by a sequence of $N=36$ semantic embeddings $f_1, ... f_{N}$ (one for each object region) of dimension 2048, along with the corresponding bounding box coordinates $b_1, ... b_{N}$ of dimension 4. With this approach, the BERT attention can be computed at the level of objects or salient image regions; had we represented images with traditional CNN features, the attention would instead correspond to a uniform grid of image regions without particular semantics, as noted in BIBREF56. To build an object embedding $o_j$ encoding both the object region semantics and its location in the image, we concatenate $f_j$ and $b_j$ ($j\in [1,N]$). Hence, an image is seen as a sequence of $N=36$ visual representations (each corresponding to an object region) $o_1,..., o_N$. Object region representations $o_i$ are ordered by the relevance of the object detected, and the model has access to their relative location in the image through the vectors $b_i$.
To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052 \times 768$. The $N$ object regions detected in an image are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding.
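A minimal PyTorch sketch of this construction (the framework choice here is ours for illustration; the paper's models are implemented in PyText):
# Each object region = 2048-d Faster-RCNN feature concatenated with its 4-d
# bounding box, projected to BERT's 768-d embedding space by one linear layer W.
import torch
import torch.nn as nn

W = nn.Linear(2048 + 4, 768)                         # the only cross-modal parameters learned in step 2

def image_as_tokens(features, boxes):
    # features: (36, 2048) region embeddings; boxes: (36, 4) coordinates
    objects = torch.cat([features, boxes], dim=-1)   # o_j of dimension 2052
    return W(objects)                                # (36, 768), consumed by BERT like word embeddings

features, boxes = torch.randn(36, 2048), torch.rand(36, 4)
print(image_as_tokens(features, boxes).shape)        # torch.Size([36, 768])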
Model ::: BERT-gen: Text Generation with BERT
We cast the VQG task as a classic sequence-to-sequence BIBREF57 modeling framework:
$P(Y \mid X; \Theta, W) = \prod_{t=1}^{T} P(y_t \mid y_{<t}, X; \Theta, W)$
where the input $X=X_{txt}$ in caption-only mode, $X = X_{img}$ in image-only mode, and $X =X_{img} \oplus X_{txt}$ in a multi-modal setup; $Y = {y_1,..., y_T}$ is the question composed of $T$ tokens. $\Theta $ are the parameters of the BERT model; $W$ represents the weights of the linear layer used for projecting visual input to the BERT embedding layer.
As mentioned earlier, BERT is a Transformer BIBREF1 encoder pre-trained using the Masked Language Model (MLM) objective: tokens within the text are replaced with a [MASK] special token, and the model is trained to predict them. Since BERT was not trained with an unidirectional objective, its usage for text generation is not straightforward.
To generate text, BIBREF58 propose to stack a Transformer decoder, symmetric to BERT. However, the authors report training difficulties since the stacked decoder is not pre-trained, and propose a specific training regime, with the side-effect of doubling the number of parameters. BIBREF59 opt for an intermediate step of self-supervised training, introducing a unidirectional loss. As detailed below, we propose a relatively simpler, yet effective, method to use BERT out-of-the-box for text generation.
Model ::: BERT-gen: Text Generation with BERT ::: Decoder
We simply use the original BERT decoder as is, initially trained to generate the tokens masked during its pre-training phase. It consists in a feed-forward layer, followed by normalization, transposition of the embedding layer, and a softmax over the vocabulary.
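A compact sketch of such a head (mirroring BERT's masked-token prediction head; the layer sizes follow BERT-base and the code is an approximation, not the authors' implementation):
# BERT's MLM head reused as the decoder: feed-forward transform, layer norm,
# then a projection through the transposed input embedding matrix.
import torch
import torch.nn as nn

class BertGenDecoder(nn.Module):
    def __init__(self, embedding_weight, hidden=768):
        super().__init__()
        self.transform = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(), nn.LayerNorm(hidden))
        self.embedding_weight = embedding_weight            # (vocab_size, hidden), tied with the encoder

    def forward(self, hidden_states):
        h = self.transform(hidden_states)
        return torch.log_softmax(h @ self.embedding_weight.t(), dim=-1)

decoder = BertGenDecoder(torch.randn(30522, 768))
print(decoder(torch.randn(1, 5, 768)).shape)                 # (1, 5, 30522)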
Model ::: BERT-gen: Text Generation with BERT ::: Next Token Prediction
At inference time, to generate the first token of the question $y_1$, we concatenate [MASK] to the input tokens $X$, then encode $X \oplus \texttt {[MASK]}$ with the BERT encoder, and feed the output of the encoder to the decoder; $y_1$ is the output of the decoder for the [MASK] token. Subsequently, given $y_1$, we concatenate it to the input tokens and encode $X \oplus y_1 \oplus \texttt {[MASK]}$ to predict the next token $y_2$. This procedure is repeated until the generation of a special token [EOS] signaling the end of the sentence.
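The loop below sketches this procedure, with Hugging Face's BertForMaskedLM standing in for the fine-tuned BERT-gen model (an assumption — the paper's models are implemented in PyText) and [SEP] playing the role of [EOS]:
# Greedy next-token prediction: append [MASK], read the prediction at that position, repeat.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def generate(input_text, max_len=20):
    ids = tok.encode(input_text)                      # [CLS] x_1 ... x_M [SEP]
    out = []
    for _ in range(max_len):
        batch = torch.tensor([ids + [tok.mask_token_id]])
        with torch.no_grad():
            logits = model(batch).logits              # (1, len(ids)+1, vocab)
        next_id = int(logits[0, -1].argmax())         # decoder output at the [MASK] position
        if next_id == tok.sep_token_id:               # stand-in for the [EOS] token
            break
        ids.append(next_id)
        out.append(next_id)
    return tok.decode(out)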
Model ::: BERT-gen: Text Generation with BERT ::: Attention Trick
As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens.
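A small sketch of this mask as a boolean matrix (True meaning attention is allowed), for M input and T target tokens:
# Input rows attend only to input columns; target row t attends to all inputs
# plus the already generated targets y_1..y_t.
import torch

def bert_gen_attention_mask(num_input, num_target):
    size = num_input + num_target
    mask = torch.zeros(size, size, dtype=torch.bool)
    mask[:, :num_input] = True                               # every position sees the input tokens
    tgt = torch.arange(num_target)
    mask[num_input:, num_input:] = tgt.unsqueeze(0) <= tgt.unsqueeze(1)   # left-to-right over targets
    return mask

print(bert_gen_attention_mask(3, 4).int())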
This novel method allows the use of pre-trained encoders for text generation. In this work, we initialize our model with the parameters from BERT-base. Nonetheless, the methodology can be applied to any pre-trained Transformer encoder, such as RoBERTa BIBREF60 or Ernie BIBREF61.
Model ::: BERT-gen: Text Generation with BERT ::: Modality-specific setups
The proposed model can be used in either mono- or multi- modal setups. This is accomplished by activating or deactivating specific modules.
Experimental Protocol
Our main objective is to measure whether the textual knowledge encoded in pre-trained BERT can be beneficial in a cross-modal task. Thus, we define the three following experimental setups, which we refer to as Step 1, 2, and 3:
Experimental Protocol ::: 1. Caption only
Deactivating the Visual embedding module (see Figure FIGREF11), the model has only access to textual input, i.e. the caption. The model is initialized with the BERT weights and trained according to Equation DISPLAY_FORM15.
Experimental Protocol ::: 2. Image only
Conversely, deactivating the Textual embedding module (see Figure FIGREF11), the model has only access to the input image, not the caption. To indicate the position $t$ of $img_t$ in the sequence, we sum the BERT positional embedding of $t$ to the visual representation of $img_t$, just as we would do for a text token $txt_t$. The model is initialized with the weights learned during step 1. All BERT-gen $\Theta $ weights are frozen, and only the linear layer $W$ is learnable. Hence, if the model is able to learn to generate contextualized questions w.r.t. the image, it shows that a simple linear layer is enough to bridge the two modalities.
Experimental Protocol ::: 3. Image + Caption
The full model is given access to both image and caption inputs. In this setup, we separate the two different inputs by a special BERT token [SEP]. Thus, the input sequence for the model takes the form of $\texttt {[CLS]}, img_1,..., img_N, \texttt {[SEP]}, txt_1,..., txt_M$. In step 1, only BERT-gen $\Theta $ parameters are learned, as no image input was given. In step 2, $W$ is trained while keeping $\Theta $ frozen. Finally then, in step 3, we fine-tune the model using both image and text inputs: the model is initialized with the parameters $\Theta $ learned during step 1 and the $W$ learned during step 2, and we unfreeze all parameters.
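In practice, the parameter schedule across the three steps reduces to toggling requires_grad flags, as sketched below (BertModel is used as a stand-in for the BERT-gen parameters $\Theta$):
# Step 1: train the BERT weights (caption only). Step 2: freeze them, learn only W. Step 3: unfreeze all.
import torch.nn as nn
from transformers import BertModel

bert_gen = BertModel.from_pretrained("bert-base-uncased")    # stand-in for the BERT-gen parameters
W = nn.Linear(2052, 768)                                     # cross-modal projection

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

set_trainable(bert_gen, True)     # step 1 (W unused)
set_trainable(bert_gen, False)    # step 2: only W is learnable
set_trainable(W, True)
set_trainable(bert_gen, True)     # step 3: joint fine-tuning of all parameters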
Experimental Protocol ::: Ablations
Additionally, we report results obtained with: Image only (unfreeze), where the BERT-gen parameters $\Theta $ are not frozen; and Image+Caption (from scratch) where the model is learned without the intermediate steps 1 and 2: the BERT-gen parameters $\Theta $ are initialized with the weights from pre-trained BERT while $W$ is randomly initialized.
Experimental Protocol ::: Datasets
We conduct our experiments using two established datasets for Visual Question Generation:
Experimental Protocol ::: Datasets ::: $VQG_{COCO}$
Introduced by BIBREF14, it contains 2500 training images, 1250 validation images and 1250 test images from MS COCO BIBREF62; each image has 5 corresponding questions and 5 ground-truth captions.
Experimental Protocol ::: Datasets ::: $VQA$
The Visual Question Answering BIBREF47 dataset can be used to derive VQG data BIBREF50. The task is reversed: instead of answering the question based on the image (VQA), models are called to generate a relevant question given the image (VQG). Also based on MS COCO, it contains 82783 training images, 40504 validation images and 81434 testing images. In $VQA1.0$, each image has 3 associated questions. Since the test set of MS COCO does not contain ground-truth captions, we generated artificial captions for it using NeuralTalk2 BIBREF45: for fair comparison, we used exactly the same model as BIBREF52 (MDN-Joint).
Experimental Protocol ::: Baselines
We compare the proposed model to the following:
Experimental Protocol ::: Baselines ::: Sample
BIBREF46 Questions are generated by an RNN conditioned on the image: at each generation step, the distribution over the vocabulary is computed and used to sample the next generated word. This baseline makes it possible to generate diverse questions for the same image, as the word selection process is non-deterministic.
Experimental Protocol ::: Baselines ::: Max
BIBREF46 Using the above model, selecting words with maximum probability from the computed distribution.
Experimental Protocol ::: Baselines ::: MDN-Joint
BIBREF52 State-of-the-art model on $VQA1.0$, based on joint usage of caption and image information.
Experimental Protocol ::: Baselines ::: MC-SBN
BIBREF53 State-of-the-art on $VQG_{COCO}$. The model jointly leverages on multiple cues (the image, place information, caption, tags) to generate questions.
Experimental Protocol ::: Metrics
We report the following metrics for all experiments, consistently with previous works:
Experimental Protocol ::: Metrics ::: BLEU
BIBREF63 A precision-oriented metric, originally proposed to evaluate machine translation. It is based on the counts of overlapping n-grams between the generated sequences and the human references.
Experimental Protocol ::: Metrics ::: ROUGE
BIBREF64 The recall-oriented counterpart to BLEU metrics, again based on n-gram overlaps.
Experimental Protocol ::: Metrics ::: METEOR
BIBREF65 The harmonic mean between precision and recall w.r.t. unigrams. As opposed to the other metrics, it also accounts for stemming and synonymy matching.
Experimental Protocol ::: Metrics ::: CIDEr
BIBREF66 Originally designed for Image Captioning, it uses human consensus among the multiple references, favoring rare words and penalizing frequent words. This feature is particularly relevant for our task, as the automatically generated questions often follow similar patterns such as “What is the [...] ?". Indeed, we verify experimentally (cf Table and Table ) that the CIDEr metric is the most discriminant in our quantitative results.
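As a quick sanity check of such scores, corpus-level BLEU can also be computed with NLTK (an alternative tool, not the package the authors used):
# Corpus BLEU over tokenized hypotheses and their (possibly multiple) references.
from nltk.translate.bleu_score import SmoothingFunction, corpus_bleu

references = [[["what", "time", "is", "it", "?"]]]       # one list of references per sample
hypotheses = [["what", "is", "the", "time", "?"]]        # generated questions, tokenized
score = corpus_bleu(references, hypotheses, smoothing_function=SmoothingFunction().method1)
print(round(score, 4))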
Experimental Protocol ::: Implementation details
All models are implemented in PyText BIBREF67. For all our experiments we used a single NVIDIA RTX 2080 Ti GPU, a batch size of 128 and 5 epochs. We used the Adam optimizer with the recommended parameters for BERT: learning rate is set at $2e^{-5}$ with a warmup of $0.1$. The most computationally expensive experiment is the step 3 described above: for this model, completion of one epoch demands 30 seconds and 2 minutes for $VQG_{COCO}$ and $VQA$ datasets, respectively. Metrics were computed using the Python package released by BIBREF33.
Results
In Table , we report quantitative results for the VQG task on $VQA1.0$. The Caption only model already shows strong improvements for all metrics over state-of-the-art models. For this text-only model, the impressive performance can mostly be attributed to BERT, demonstrating once again the benefits obtained using pre-trained language models. In our second step (Image only), the BERT $\Theta $ parameters are frozen and only those of the cross-modal projection matrix $W$ are learned. Despite using a simple linear layer, the model is found to perform well, generating relevant questions given only visual inputs.
This suggests that the conceptual representations encoded in pre-trained language models such as BERT can effectively be used beyond text. Further, we report an additional Image only experiment, this time unfreezing the BERT parameters $\Theta $ – see Step 2 (unfreeze) in Table . As could be expected, since the model is allowed more flexibility, the performance is found to further improve.
Finally, in our third step (Image + Caption), we obtain the highest scores, for all metrics. This indicates that the model is able to effectively leverage the combination of textual and visual inputs. Indeed, complementary information from both modalities can be exploited by the self-attention mechanism, making visual and textual tokens interact to generate the output sequences. Again, we additionally report the results obtained bypassing the intermediate steps 1 and 2: for the model denoted as Step 3 (from scratch) (last row of Table ), $\Theta $ parameters are initialized with the original weights from pre-trained BERT, while the $W$ matrix is randomly initialized. Under this experimental condition, we observe lower performances, a finding that consolidates the importance of the multi-step training procedure we adopted.
In Table , we report quantitative VQG results on $VQG_{COCO}$. These are globally consistent with the ones above for $VQA1.0$. However, we observe two main differences. First, a bigger relative improvement over the state-of-the-art. As the efficacy of pre-trained models is boosted in small-data scenarios BIBREF68, this difference can be explained by the smaller size of $VQG_{COCO}$. Second, we note that the Caption only model globally outperforms all other models, especially on the discriminant CIDEr metric. This can be explained by the fact that, in $VQG_{COCO}$, the captions are human-written (whereas they are automatically generated for $VQA1.0$) and, thus, of higher quality; moreover, the smaller size of the dataset could play a role hindering the ability to adapt to the visual modality. Nonetheless, the strong performances obtained for Step 2 compared to the baselines highlight the effectiveness of our method to learn a cross-modal projection even with a relatively small number of training images.
Results ::: Human Evaluation
To get more in-depth understanding of our models, we report human assessment results in Table . We randomly sampled 50 images from the test set of $VQA1.0$. Each image is paired with its caption, the human-written question used as ground-truth, and the output for our three models: Caption only, Image only and Image+Caption. We asked 3 human annotators to assess the quality of each question using a Likert scale ranging from 1 to 5, for the following criteria: readability, measuring how well-written the question is; caption relevance, how relevant the question is w.r.t. to the caption; and, image relevance, how relevant the question is toward the image. For caption and image relevance, the annotators were presented with only the caption and only the image, respectively.
We observe that all evaluated models produce well-written sentences, as readability does not significantly differ from that of the human-written questions. Unsurprisingly, the Caption only model shows a higher score for caption relevance, while its relatively lower image relevance score can be explained by the automatically generated and thus imperfect captions in the $VQA1.0$ dataset. Comparatively, the Image only model obtains lower caption relevance and higher image relevance scores; this indicates that the cross-modal projection is sufficient to bridge modalities, allowing BERT to generate relevant questions toward the image. Finally, the Image + Caption model obtains the best image relevance among our models, consistent with the quantitative results reported in Tables and .
Model Discussion ::: What does the model look at?
To interpret the behavior of attention-based models, it is useful to look at which tokens are given higher attention BIBREF69. In Figure FIGREF44, we present two images $A$ and $B$, along with their captions and the three generated questions corresponding to our three experimental setups (Caption only, Image only and Image + Caption). For this analysis, we average the attention vectors of all the heads in the last layer, and highlight the textual and visual tokens most attended by the models.
For both images, the Caption only model attends to salient words in the caption. The Image only model remains at least as relevant: on image $A$, it generates a question about a table (with an unclear attention). Interestingly, for image $B$, the Image only model corrects a mistake from step 1: it is a woman holding an umbrella rather than a man, and the attention is indeed focused on the woman in the image. Finally, the Image + Caption model is able to generate fitting questions about the image, with relatively little relevance to the caption: for image $A$, the Image + Caption model generates “What time is it?" while paying attention to the clock; for image $B$, it generates “What is the color of the umbrella?", focusing the attention on the umbrella. Neither caption mentions clocks or umbrellas, further indicating effective alignment between visual and textual representations.
Model Discussion ::: Cross-modal alignment
We carry out an additional experiment to analyze the text/vision alignment for each model. Figure FIGREF46 shows the cross-modal similarity $X_{sim}$ for different model scenarios, computed at each BERT-base layer from 1 to 12. We define the cross-modal similarity $X_{sim}$ as the cosine similarity between the vector representations of both modalities. These vectors are the two continuous space representations from a model when given as input either i) an image, or ii) its corresponding caption. We represent these captions and images vectors with the special BERT token [CLS], following previous works BIBREF70 where [CLS] is used to represent the entire sequence.
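Once the two [CLS] vectors are extracted, the computation of $X_{sim}$ reduces to a cosine similarity, as in the sketch below (random tensors stand in for the model outputs):
# Cosine similarity between the [CLS] representation of a caption and of its image.
import torch
import torch.nn.functional as F

cls_caption = torch.randn(768)      # [CLS] output when the caption is fed to the model
cls_image = torch.randn(768)        # [CLS] output when the 36 projected visual tokens are fed
x_sim = F.cosine_similarity(cls_caption, cls_image, dim=0)
print(float(x_sim))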
The reported values correspond to the average cross-modal similarity calculated for all the examples of $VQG_{COCO}$ test set. In addition to the setups described in Section SECREF4 (Caption-only, Image-only and Image + Caption), we also report $X_{sim}$ for Random Transformer, a BERT architecture with random weights. As expected, its $X_{sim}$ is close to zero.
All the other models are based on BERT. As suggested by BIBREF71, the first layers in BERT tend to encode lower-level language information. This might explain why the models show similar $X_{sim}$ scores up to the 9th layer, and diverge afterwards: the weights for those layers remain very similar between our fine-tuned models.
For the last layer ($l=12$), we observe that $\textit {Caption only} < \textit {Image only} < \textit {Image + Caption}$. The Caption only model has never seen images during training, and therefore is not able to encode semantic information given only images as input. Still, its reported $X_{sim} > 0$ can be attributed to the fact that, when fine-tuned on VQG during Step 1, BERT-gen encodes task-specific information in the [CLS] token embedding (e.g. a question ends with a “?" and often begins with “What/Where/Who"). $\textit {Image only} > \textit {Caption only}$ can be explained by the learning of the cross-modal projection $W$. However, since BERT is not fine-tuned, the model learns a “contortion" allowing it to align text and vision. Finally, Image + Caption $>$ Image only can be attributed to BERT fine-tuning, contributing to an increase in the observed gap, and its emergence in earlier layers.
Conclusion and Perspectives
We investigated whether the abstractions encoded in a pre-trained BERT model can generalize beyond text. We proposed BERT-gen, a novel methodology that allows to directly generate text from out-of-the-box pre-trained encoders, either in mono- or multi- modal setups. Moreover, we applied BERT-gen to Visual Question Generation, obtaining state-of-the-art results on two established datasets. We showed how a simple linear projection is sufficient to effectively align visual and textual representations.
In future works, we plan to extend BERT-gen to other modalities, such as audio or video, exploring the potential interactions that can emerge in scenarios where more than two modalities are present. | They use a left-to-right attention mask so that the input tokens can only attend to other input tokens, and the target tokens can only attend to the input tokens and already generated target tokens. |
cd8de03eac49fd79b9d4c07b1b41a165197e1adb | cd8de03eac49fd79b9d4c07b1b41a165197e1adb_0 | Q: How are multimodal representations combined?
Text: Introduction
The BERT language model BIBREF0 is a Deep Bidirectional Transformer BIBREF1 pre-trained on textual corpora (BookCorpus and Wikipedia) using a Masked Language Model (MLM) objective – predicting some words that are randomly masked in the sentence, along with a sentence entailment loss. Recent research efforts BIBREF2 have shown how BERT encodes abstractions that generalize across languages, even when trained on monolingual data only. This contradicts the common belief BIBREF3, BIBREF4 that a shared vocabulary and joint training on multiple languages are essential to achieve cross-lingual generalization capabilities. In this work, we further investigate the generalization potentials of large pre-trained LMs, this time moving to a cross-modal setup: does BERT contain abstractions that generalize beyond text?
In the Artificial Intelligence community, several works have investigated the longstanding research question of whether textual representations encode visual information. On the one hand, a large body of research called language grounding considers that textual representations lack visual commonsense BIBREF5, and intend to ground the meaning of words BIBREF6, BIBREF7 and sentences BIBREF8, BIBREF9 in the perceptual world. In another body of work, textual representations have successfully been used to tackle multi-modal tasks BIBREF10 such as Zero-Shot Learning BIBREF11, Visual Question Answering BIBREF12 or Image Captioning BIBREF13. Following the latter line of research, in this paper we evaluate the potential of pre-trained language models to generalize in the context of Visual Question Generation (VQG) BIBREF14.
The Visual Question Generation task allows us to investigate the cross-modal capabilities of BERT: unlike Image Captioning (where the input is only visual) or VQA (where the input is visual and textual), VQG is a multi-modal task where input can be textual and/or visual. VQG data usually includes images and the associated captions, along with corresponding questions about the image; thus, different experimental setups can be designed to analyze the impact of each modality. Indeed, the questions can be generated using i) textual (the caption), ii) visual (the image), or iii) multi-modal (both the caption and the image) input.
From a practical standpoint, the VQG task has several applications: robots or AI assistants could ask questions rooted in multi-modal data (e.g. fusing conversational data with visual information from captors and cameras), in order to refine their interpretation of the situation they are presented with. It could also allow systems relying on knowledge-bases to gain visual common sense and deal with the Human Reporting Bias BIBREF15, which states that the content of images and text are intrinsically different, since visual common sense is rarely explicitly stated in text.
Recently, BERT-based Multi-Modal Language Models have been proposed BIBREF16, BIBREF17, BIBREF18, BIBREF19 to tackle multi-modal tasks, using different approaches to incorporate visual data within BERT. From these works, it is left to explore whether the cross-modal alignment is fully learned, or it is to some extent already encoded in the BERT abstractions. Therefore, in contrast with those approaches, we explicitly avoid using the following complex mechanisms:
Multi-modal supervision: all previous works exploit an explicit multi-modal supervision through a pre-training step; the models have access to text/image pairs as input, to align their representations. In contrast, our model can switch from text-only to image-only mode without any explicit alignment.
Image-specific losses: specific losses such as Masked RoI (Region of Interest) Classification with Linguistic Clues BIBREF19 or sentence-image prediction BIBREF18 have been reported helpful to align visual and text modalities. Instead, we only use the original MLM loss from BERT (and not its entailment loss).
Non-linearities: we explore a scenario in which the only learnable parameters, for aligning image representations to BERT, are those of simple linear projection layer. This allows us to assess whether the representations encoded in BERT can transfer out-of-the-box to another modality.
Furthermore, to the best of our knowledge, this paper is the first attempt to investigate multi-modal text generation using pre-trained language models. We introduce BERT-gen, a text generator based on BERT, that can be applied both in mono and multi-modal settings. We treat images similarly to text: while a sentence is seen as a sequence of (sub)word tokens, an image is seen as a sequence of objects associated to their corresponding positions (bounding boxes). We show how a simple linear mapping, projecting visual embeddings into the first layer, is enough to ground BERT in the visual realm: text and image object representations are found to be effectively aligned, and the attention over words transfers to attention over the relevant objects in the image.
Our contributions can be summarized as follows:
we introduce BERT-gen, a novel method for generating text using BERT, that can be applied in both mono and multi-modal settings;
we show that the semantic abstractions encoded in pre-trained BERT can generalize to another modality;
we report state-of-the-art results on the VQG task;
we provide extensive ablation analyses to interpret the behavior of BERT-gen under different configurations (mono- or multi- modal).
Related Work ::: Unsupervised Pre-trained Language Models
Learning unsupervised textual representations that can be applied to downstream tasks is a widely investigated topic in the literature. Text representations have been learned at different granularities: words with Word2vec BIBREF20, sentences with SkipThought BIBREF21, paragraphs with ParagraphVector BIBREF22 and contextualized word vectors with ELMo BIBREF23. Other methods leverage a transfer-learning approach by fine-tuning all parameters of a pre-trained model on a target task, a paradigm which has become mainstream since the introduction of BERT BIBREF0. BERT alleviates the problem of the uni-directionality of most language models (i.e. where the training objective aims at predicting the next word) by proposing a new objective called Masked Language Model (MLM). Under MLM, some words, that are randomly selected, are masked; the training objective aims at predicting them.
Related Work ::: Multi-modal Language Models
Following the successful application of BERT BIBREF0, and its derivatives, across a great majority of NLP tasks, several research efforts have focused on the design of multi-modal versions of BERT. VideoBERT BIBREF24, a joint video and text model, is pre-trained on a huge corpus of YouTube videos, and applied to action classification and video captioning tasks on the YouCook II dataset BIBREF25. The video is treated as a “visual sentence" (each frame being a “visual word") that is processed by the BERT Transformer.
Concerning models jointly treating information from images and text, visual features extracted from the image are used as “visual words", and a [SEP] special token is employed to separate textual and visual tokens. In the literature, visual features are object features extracted with a Faster R-CNN BIBREF26 – with the notable exception of BIBREF27 who used pooling layers from a CNN. A first body of work exploit single-stream Transformers in which visual features are incorporated in a BERT-like Transformer: this is the case for VisualBERT BIBREF18, VL-BERT BIBREF19, Unicoder-VL BIBREF28 and B2T2 BIBREF29. Other works, such as ViLBERT BIBREF16 and LXMERT BIBREF17 have investigated two-stream approaches: these models employ modality-specific encoders built on standard Transformer blocks, which are then fused into a cross-modal encoder. Interestingly, none of the aforementioned models have been used for generation tasks such as VQG, tackled in this work.
Related Work ::: Visual Question Generation
The text-based Question Generation task has been largely studied by the NLP community BIBREF30, BIBREF31, BIBREF32, BIBREF33, BIBREF34, BIBREF35, BIBREF36. However, its visual counterpart, Visual Question Generation (VQG), has been comparatively less explored than standard well-known multi-modal tasks such as Visual Question Answering (VQA) BIBREF37, BIBREF38, BIBREF39, BIBREF40, Visual Dialog BIBREF41, BIBREF42, or Image Captioning BIBREF43, BIBREF44, BIBREF45.
The VQG task was first introduced by BIBREF46 in their Neural Self Talk model: the goal is to gain knowledge about an image by iteratively generating questions (VQG) and answering them (VQA). The authors tackle the task with a simple RNN conditioned on the image, following Image Captioning works such as BIBREF45.
Suitable data for the VQG task can come from standard image datasets on which questions have been manually annotated, such as $VQG_{COCO}$, $VQG_{Flickr}$, $VQG_{Bing}$ BIBREF14 , each consisting of 5000 images with 5 questions per image. Alternatively, VQG samples can be derived from Visual Question Answering datasets, such as $VQA1.0$ BIBREF47, by “reversing" them (taking images as inputs and questions as outputs).
A variety of approaches have been proposed. BIBREF14 use a standard Gated Recurrent Neural Network, i.e. a CNN encoder followed by a GRU decoder to generate questions. BIBREF48 aim at generating, for a given image, multiple visually grounded questions of varying types (what, when, where, etc.); similarly, BIBREF49 generate diverse questions using Variational Autoencoders. In BIBREF50, VQG is jointly tackled along its dual task (VQA), just as BIBREF46. In BIBREF51, BIBREF52, the image (processed by a CNN) and the caption (processed by a LSTM) are combined in a mixture module, followed by a LSTM decoder to generate the question, leading to state-of-the-art results on the VQG task on $VQA1.0$ data. More recently, BIBREF53 incorporate multiple cues – place information obtained from PlaceCNN BIBREF54, caption, tags – and combine them within a deep Bayesian framework where the contribution of each cue is weighted to predict a question, obtaining the current state-of-the-art results on $VQG_{COCO}$.
Model
In VQG, the objective is to generate a relevant question from an image and/or its caption. The caption $X_{txt}$ is composed of $M$ tokens $txt_1, ..., txt_M$; these tokens can be words or subwords (smaller than word) units depending on the tokenization strategy used. As BERT uses subword tokenization, throughout this paper we will refer to subwords as our tokenization units.
The proposed model is illustrated in Figure FIGREF11. In SECREF12, we detail how images are incorporated in the Transformer framework. In SECREF14, we present BERT-gen, a novel approach to use BERT for text generation.
Model ::: Representing an Image as Text
In this work, we treat textual and visual inputs similarly, by considering both as sequences. Since an image is not a priori sequential, we consider the image $X_{img}$ as a sequence of object regions $img_1, ..., img_N$, as described below.
The images are first processed as in BIBREF17: a Faster-RCNN BIBREF26, pre-trained on Visual Genome BIBREF55, detects the $N=36$ most salient regions (those likely to include an object) per image. The weights of the Faster-RCNN are fixed during training, as we use the precomputed representations made publicly available by BIBREF56. Each image is thus represented by a sequence of $N=36$ semantic embeddings $f_1, ... f_{N}$ (one for each object region) of dimension 2048, along with the corresponding bounding box coordinates $b_1, ... b_{N}$ of dimension 4. With this approach, the BERT attention can be computed at the level of objects or salient image regions; had we represented images with traditional CNN features, the attention would instead correspond to a uniform grid of image regions without particular semantics, as noted in BIBREF56. To build an object embedding $o_j$ encoding both the object region semantics and its location in the image, we concatenate $f_j$ and $b_j$ ($j\in [1,N]$). Hence, an image is seen as a sequence of $N=36$ visual representations (each corresponding to an object region) $o_1,..., o_N$. Object region representations $o_i$ are ordered by the relevance of the object detected, and the model has access to their relative location in the image through the vectors $b_i$.
To investigate whether our BERT-based model can transfer knowledge beyond language, we consider image features as simple visual tokens that can be presented to the model analogously to textual token embeddings. In order to make the $o_j$ vectors (of dimension $2048+4=2052$) comparable to BERT embeddings (of dimension 768), we use a simple linear cross-modal projection layer $W$ of dimensions $2052 \times 768$. The $N$ object regions detected in an image are thus represented as $X_{img} = (W.o_1,...,W.o_N)$. Once mapped into the BERT embedding space with $W$, the image is seen by the rest of the model as a sequence of units with no explicit indication if it is a text or an image embedding.
Model ::: BERT-gen: Text Generation with BERT
We cast the VQG task as a classic sequence-to-sequence BIBREF57 modeling framework:
$P(Y \mid X; \Theta, W) = \prod_{t=1}^{T} P(y_t \mid y_{<t}, X; \Theta, W)$
where the input $X=X_{txt}$ in caption-only mode, $X = X_{img}$ in image-only mode, and $X =X_{img} \oplus X_{txt}$ in a multi-modal setup; $Y = {y_1,..., y_T}$ is the question composed of $T$ tokens. $\Theta $ are the parameters of the BERT model; $W$ represents the weights of the linear layer used for projecting visual input to the BERT embedding layer.
As mentioned earlier, BERT is a Transformer BIBREF1 encoder pre-trained using the Masked Language Model (MLM) objective: tokens within the text are replaced with a [MASK] special token, and the model is trained to predict them. Since BERT was not trained with an unidirectional objective, its usage for text generation is not straightforward.
To generate text, BIBREF58 propose to stack a Transformer decoder, symmetric to BERT. However, the authors report training difficulties since the stacked decoder is not pre-trained, and propose a specific training regime, with the side-effect of doubling the number of parameters. BIBREF59 opt for an intermediate step of self-supervised training, introducing a unidirectional loss. As detailed below, we propose a relatively simpler, yet effective, method to use BERT out-of-the-box for text generation.
Model ::: BERT-gen: Text Generation with BERT ::: Decoder
We simply use the original BERT decoder as is, initially trained to generate the tokens masked during its pre-training phase. It consists in a feed-forward layer, followed by normalization, transposition of the embedding layer, and a softmax over the vocabulary.
Model ::: BERT-gen: Text Generation with BERT ::: Next Token Prediction
At inference time, to generate the first token of the question $y_1$, we concatenate [MASK] to the input tokens $X$, then encode $X \oplus \texttt {[MASK]}$ with the BERT encoder, and feed the output of the encoder to the decoder; $y_1$ is the output of the decoder for the [MASK] token. Subsequently, given $y_1$, we concatenate it to the input tokens and encode $X \oplus y_1 \oplus \texttt {[MASK]}$ to predict the next token $y_2$. This procedure is repeated until the generation of a special token [EOS] signaling the end of the sentence.
Model ::: BERT-gen: Text Generation with BERT ::: Attention Trick
As we iteratively concatenate the generated tokens, the BERT bi-directional self-attention mechanism would impact, at every new token, the representations of the previous tokens. To counter that, we use a left-to-right attention mask, similar to the one employed in the original Transformer decoder BIBREF1. For the input tokens in $X$, we apply such mask to all the target tokens $Y$ that were concatenated to $X$, so that input tokens can only attend to the other input tokens. Conversely, for target tokens $y_t$, we put an attention mask on all tokens $y_{>t}$, allowing target tokens $y_t$ to attend only to the input tokens and the already generated target tokens.
This novel method allows the use of pre-trained encoders for text generation. In this work, we initialize our model with the parameters from BERT-base. Nonetheless, the methodology can be applied to any pre-trained Transformer encoder, such as RoBERTa BIBREF60 or Ernie BIBREF61.
Model ::: BERT-gen: Text Generation with BERT ::: Modality-specific setups
The proposed model can be used in either mono- or multi- modal setups. This is accomplished by activating or deactivating specific modules.
Experimental Protocol
Our main objective is to measure whether the textual knowledge encoded in pre-trained BERT can be beneficial in a cross-modal task. Thus, we define the three following experimental setups, which we refer to as Step 1, 2, and 3:
Experimental Protocol ::: 1. Caption only
Deactivating the Visual embedding module (see Figure FIGREF11), the model has only access to textual input, i.e. the caption. The model is initialized with the BERT weights and trained according to Equation DISPLAY_FORM15.
Experimental Protocol ::: 2. Image only
Conversely, deactivating the Textual embedding module (see Figure FIGREF11), the model has only access to the input image, not the caption. To indicate the position $t$ of $img_t$ in the sequence, we sum the BERT positional embedding of $t$ to the visual representation of $img_t$, just as we would do for a text token $txt_t$. The model is initialized with the weights learned during step 1. All BERT-gen $\Theta $ weights are frozen, and only the linear layer $W$ is learnable. Hence, if the model is able to learn to generate contextualized questions w.r.t. the image, it shows that a simple linear layer is enough to bridge the two modalities.
Experimental Protocol ::: 3. Image + Caption
The full model is given access to both image and caption inputs. In this setup, we separate the two different inputs by a special BERT token [SEP]. Thus, the input sequence for the model takes the form of $\texttt {[CLS]}, img_1,..., img_N, \texttt {[SEP]}, txt_1,..., txt_M$. In step 1, only BERT-gen $\Theta $ parameters are learned, as no image input was given. In step 2, $W$ is trained while keeping $\Theta $ frozen. Finally then, in step 3, we fine-tune the model using both image and text inputs: the model is initialized with the parameters $\Theta $ learned during step 1 and the $W$ learned during step 2, and we unfreeze all parameters.
Experimental Protocol ::: Ablations
Additionally, we report results obtained with: Image only (unfreeze), where the BERT-gen parameters $\Theta $ are not frozen; and Image+Caption (from scratch) where the model is learned without the intermediate steps 1 and 2: the BERT-gen parameters $\Theta $ are initialized with the weights from pre-trained BERT while $W$ is randomly initialized.
Experimental Protocol ::: Datasets
We conduct our experiments using two established datasets for Visual Question Generation:
Experimental Protocol ::: Datasets ::: @!START@$VQG_{COCO}$@!END@
Introduced by BIBREF14, it contains 2500 training images, 1250 validation images and 1250 test images from MS COCO BIBREF62; each image has 5 corresponding questions and 5 ground-truth captions.
Experimental Protocol ::: Datasets ::: @!START@$VQA$@!END@
The Visual Question Answering BIBREF47 dataset can be used to derive VQG data BIBREF50. The task is reversed: instead of answering the question based on the image (VQA), models are called to generate a relevant question given the image (VQG). Also based on MS COCO, it contains 82783 training images, 40504 validation images and 81434 testing images. In $VQA1.0$, each image has 3 associated questions. Since the test set of MS COCO does not contain ground-truth captions, we generated artificial captions for it using NeuralTalk2 BIBREF45: for fair comparison, we used exactly the same model as BIBREF52 (MDN-Joint).
Experimental Protocol ::: Baselines
We compare the proposed model to the following:
Experimental Protocol ::: Baselines ::: Sample
BIBREF46 Questions are generated by a RNN conditioned on the image: at each generation step, the distribution over the vocabulary is computed and used to sample the next generated word. This baseline enables to generate diverse questions over the same image, as the word selection process is non-deterministic.
Experimental Protocol ::: Baselines ::: Max
BIBREF46 Using the above model, selecting words with maximum probability from the computed distribution.
Experimental Protocol ::: Baselines ::: MDN-Joint
BIBREF52 State-of-the-art model on $VQA1.0$, based on joint usage of caption and image information.
Experimental Protocol ::: Baselines ::: MC-SBN
BIBREF53 State-of-the-art on $VQG_{COCO}$. The model jointly leverages on multiple cues (the image, place information, caption, tags) to generate questions.
Experimental Protocol ::: Metrics
We report the following metrics for all experiments, consistently with previous works:
Experimental Protocol ::: Metrics ::: BLEU
BIBREF63 A precision-oriented metric, originally proposed to evaluate machine translation. It is based on the counts of overlapping n-grams between the generated sequences and the human references.
Experimental Protocol ::: Metrics ::: ROUGE
BIBREF64 The recall-oriented counterpart to BLEU metrics, again based on n-gram overlaps.
Experimental Protocol ::: Metrics ::: METEOR
BIBREF65 The harmonic mean between precision and recall w.r.t. unigrams. As opposed to the other metrics, it also accounts for stemming and synonymy matching.
Experimental Protocol ::: Metrics ::: CIDEr
BIBREF66 Originally designed for Image Captioning, it uses human consensus among the multiple references, favoring rare words and penalizing frequent words. This feature is particularly relevant for our task, as the automatically generated questions often follow similar patterns such as “What is the [...] ?". Indeed, we verify experimentally (cf Table and Table ) that the CIDEr metric is the most discriminant in our quantitative results.
Experimental Protocol ::: Implementation details
All models are implemented in PyText BIBREF67. For all our experiments we used a single NVIDIA RTX 2080 Ti GPU, a batch size of 128 and 5 epochs. We used the Adam optimizer with the recommended parameters for BERT: learning rate is set at $2e^{-5}$ with a warmup of $0.1$. The most computationally expensive experiment is the step 3 described above: for this model, completion of one epoch demands 30 seconds and 2 minutes for $VQG_{COCO}$ and $VQA$ datasets, respectively. Metrics were computed using the Python package released by BIBREF33.
Results
In Table , we report quantitative results for the VQG task on $VQA1.0$. The Caption only model already shows strong improvements for all metrics over state-of-the-art models. For this text-only model, the impressive performance can mostly be attributed to BERT, demonstrating once again the benefits obtained using pre-trained language models. In our second step (Image only), the BERT $\Theta $ parameters are frozen and only those of the cross-modal projection matrix $W$ are learned. Despite using a simple linear layer, the model is found to perform well, generating relevant questions given only visual inputs.
This suggests that the conceptual representations encoded in pre-trained language models such as BERT can effectively be used beyond text. Further, we report an additional Image only experiment, this time unfreezing the BERT parameters $\Theta $ – see Step 2 (unfreeze) in Table . As could be expected, since the model is allowed more flexibility, the performance is found to further improve.
Finally, in our third step (Image + Caption), we obtain the highest scores, for all metrics. This indicates that the model is able to effectively leverage the combination of textual and visual inputs. Indeed, complementary information from both modalities can be exploited by the self-attention mechanism, making visual and textual tokens interact to generate the output sequences. Again, we additionally report the results obtained bypassing the intermediate steps 1 and 2: for the model denoted as Step 3 (from scratch) (last row of Table ), $\Theta $ parameters are initialized with the original weights from pre-trained BERT, while the $W$ matrix is randomly initialized. Under this experimental condition, we observe lower performances, a finding that consolidates the importance of the multi-step training procedure we adopted.
In Table , we report quantitative VQG results on $VQG_{COCO}$. These are globally consistent with the ones above for $VQA1.0$. However, we observe two main differences. First, a bigger relative improvement over the state-of-the-art. As the efficacy of pre-trained models is boosted in small-data scenarios BIBREF68, this difference can be explained by the smaller size of $VQG_{COCO}$. Second, we note that the Caption only model globally outperforms all other models, especially on the discriminant CIDEr metric. This can be explained by the fact that, in $VQG_{COCO}$, the captions are human-written (whereas they are automatically generated for $VQA1.0$) and, thus, of higher quality; moreover, the smaller size of the dataset could play a role hindering the ability to adapt to the visual modality. Nonetheless, the strong performances obtained for Step 2 compared to the baselines highlight the effectiveness of our method to learn a cross-modal projection even with a relatively small number of training images.
Results ::: Human Evaluation
To get a more in-depth understanding of our models, we report human assessment results in Table . We randomly sampled 50 images from the test set of $VQA1.0$. Each image is paired with its caption, the human-written question used as ground-truth, and the output for our three models: Caption only, Image only and Image+Caption. We asked 3 human annotators to assess the quality of each question using a Likert scale ranging from 1 to 5, for the following criteria: readability, measuring how well-written the question is; caption relevance, how relevant the question is w.r.t. the caption; and image relevance, how relevant the question is w.r.t. the image. For caption and image relevance, the annotators were presented with only the caption and only the image, respectively.
We observe that all evaluated models produce well-written sentences, as readability does not differ significantly from that of the human-written questions. Unsurprisingly, the Caption only model shows a higher score for caption relevance, while the relatively lower image relevance score can be explained by the automatically generated and thus imperfect captions in the $VQA1.0$ dataset. Comparatively, the Image only model obtains lower caption relevance and higher image relevance scores; this indicates that the cross-modal projection is sufficient to bridge modalities, allowing BERT to generate relevant questions about the image. Finally, the Image + Caption model obtains the best image relevance among our models, consistently with the quantitative results reported in Tables and .
Model Discussion ::: What does the model look at?
To interpret the behavior of attention-based models, it is useful to look at which tokens are given higher attention BIBREF69. In Figure FIGREF44, we present two images $A$ and $B$, along with their captions and the three generated questions corresponding to our three experimental setups (Caption only, Image only and Image + Caption). For this analysis, we average the attention vectors of all the heads in the last layer, and highlight the textual and visual tokens most attended by the models.
For both images, the Caption only model attends to salient words in the caption. The Image only model remains at least as relevant: on image $A$, it generates a question about a table (with an unclear attention). Interestingly, for image $B$, the Image only model corrects a mistake from step 1: it is a woman holding an umbrella rather than a man, and the attention is indeed focused on the woman in the image. Finally, the Image + Caption model is able to generate fitting questions about the image, with relatively little relevance to the caption: for image $A$, the Image + Caption model generates "What time is it?" while paying attention to the clock; for image $B$, it generates "What is the color of the umbrella?", focusing the attention on the umbrella. Neither caption mentions clocks or umbrellas, further indicating effective alignment between visual and textual representations.
Model Discussion ::: Cross-modal alignment
We carry out an additional experiment to analyze the text/vision alignment for each model. Figure FIGREF46 shows the cross-modal similarity $X_{sim}$ for different model scenarios, computed at each BERT-base layer from 1 to 12. We define the cross-modal similarity $X_{sim}$ as the cosine similarity between the vector representations of both modalities. These vectors are the two continuous space representations from a model when given as input either i) an image, or ii) its corresponding caption. We represent these captions and images vectors with the special BERT token [CLS], following previous works BIBREF70 where [CLS] is used to represent the entire sequence.
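The quantity $X_{sim}$ can be computed directly as a cosine similarity between the two [CLS] vectors. The snippet below illustrates the computation on random vectors; in the actual analysis these would be the layer-wise [CLS] representations produced for an image and for its caption.

import torch
import torch.nn.functional as F

def cross_modal_similarity(cls_image, cls_caption):
    # cosine similarity between the [CLS] representation obtained from the
    # image input and the one obtained from the corresponding caption
    return F.cosine_similarity(cls_image, cls_caption, dim=-1)

img_cls = torch.randn(4, 768)   # toy [CLS] vectors for 4 image/caption pairs
cap_cls = torch.randn(4, 768)
print(cross_modal_similarity(img_cls, cap_cls).mean())   # averaged over the examples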
The reported values correspond to the average cross-modal similarity calculated for all the examples of $VQG_{COCO}$ test set. In addition to the setups described in Section SECREF4 (Caption-only, Image-only and Image + Caption), we also report $X_{sim}$ for Random Transformer, a BERT architecture with random weights. As expected, its $X_{sim}$ is close to zero.
All the other models are based on BERT. As suggested by BIBREF71, the first layers in BERT tend to encode lower-level language information. This might explain why the models show similar $X_{sim}$ scores up to the 9th layer, and diverge afterwards: the weights for those layers remain very similar between our fine-tuned models.
For the last layer ($l=12$), we observe that $\textit {Caption only} < \textit {Image only} < \textit {Image + Caption}$. The Caption only model has never seen images during training, and therefore is not able to encode semantic information given only images as input. Still, its reported $X_{sim} > 0$ can be attributed to the fact that, when fine-tuned on VQG during Step 1, BERT-gen encodes task-specific information in the [CLS] token embedding (e.g. a question ends with a “?" and often begins with “What/Where/Who"). $\textit {Image only} > \textit {Caption only}$ can be explained by the learning of the cross-modal projection $W$. However, since BERT is not fine-tuned, the model learns a “contortion" allowing it to align text and vision. Finally, Image + Caption $>$ Image only can be attributed to BERT fine-tuning, contributing to an increase in the observed gap, and its emergence in earlier layers.
Conclusion and Perspectives
We investigated whether the abstractions encoded in a pre-trained BERT model can generalize beyond text. We proposed BERT-gen, a novel methodology that allows text to be generated directly from out-of-the-box pre-trained encoders, either in mono- or multi-modal setups. Moreover, we applied BERT-gen to Visual Question Generation, obtaining state-of-the-art results on two established datasets. We showed how a simple linear projection is sufficient to effectively align visual and textual representations.
In future works, we plan to extend BERT-gen to other modalities, such as audio or video, exploring the potential interactions that can emerge in scenarios where more than two modalities are present. | The image feature vectors are mapped into BERT embedding dimensions and treated like a text sequence afterwards. |
63850ac98a47ae49f0f49c1c1a6e45c6c447272c | 63850ac98a47ae49f0f49c1c1a6e45c6c447272c_0 | Q: What is the problem with existing metrics that they are trying to address?
Text: Building Extractive CNN/Daily Mail
In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider if the sentences cover the semantics of the facet regardless of their length. The reason is that we believe extractive summarization should focus on information coverage and once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$.
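For concreteness, a FAM can be stored as a mapping from each facet to its list of support groups, each group being a set of document sentence indices. The indices in the toy example below are made up for illustration.

fam_example = {
    0: [{3}, {7, 8}],   # facet 0: covered by sentence 3 alone, or jointly by sentences 7 and 8
    1: [{12}],          # facet 1: a single one-sentence support group
}

def facet_k(groups):
    # rounded average number of support sentences over the facet's N support groups
    return round(sum(len(g) for g in groups) / len(groups))

print({facet: facet_k(groups) for facet, groups in fam_example.items()})   # {0: 2, 1: 1}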
One may regard the procedure as creating extractive labels, which is widely used in extractive summarization since only abstractive references are available in existing datasets. The major differences are that 1) We label all the support sentences instead of just one or fixed number of sentences, i.e., we do not specify $\textrm {K}_n$. For example, we would put two sentences to one support group if they are complementary and only combining them can cover the facet. 2) We find multiple support groups ($N > 1$), as there could be more than one set of sentences that cover the same facet and extracting any one of them is acceptable. In contrast, there is no concept of support group in extractive labels as they inherently form one such group. We sampled 150 document-summary pairs from the test set of CNN/Daily Mail. 344 FAMs were created by three annotators with high agreement (pairwise Jaccard index 0.71) and further verified to reach consensus. We found that the facets can be divided into three categories based on their quality and degree of abstraction as follows.
Random: The facet is quite random, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary BIBREF2. Another possible reason is that the so-called “summaries” are in fact “story highlights”, which seems reasonable to contain details. We found that 41/150 (26%) samples have random facet(s), implying there are severe issues in the reference summaries of CNN/Daily Mail.
Low Abstraction: The facet can be mapped to its support sentences. We further divide this category by the (rounded) average number of support sentences K of $N$ support groups ($\textrm {K}=\frac{\sum _{n=1}^N |\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n} \rbrace |}{N})$. As in Table TABREF1, most facets (93%) in the reference summaries are paraphrases or compression of one to two sentences in the document without much abstraction.
High Abstraction: The facet cannot be mapped to its support sentences, which indicates that its writing requires deep understandings of the document rather than reorganizing several sentences. The proportion of this category (7%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail.
Surprisingly, we found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon ($\overline{N} = 1.56$) to detect multiple sentences with similar semantics (compared to multi-document summarization). In addition, most support groups only have one or two support sentences with large lexical overlap.
Revisit of State-of-the-art Methods
By utilizing the FAMs, we revisit extractive methods to see how well they perform on facet coverage. Specifically, we compare Lead-3, Refresh BIBREF3, FastRL(E) (E for extractive only) BIBREF0, UnifiedSum(E) BIBREF1, NeuSum BIBREF4, and BanditSum BIBREF5 using both ROUGE and FAMs. As these methods are facet-agnostic (i.e., their outputs are not organized by facets but flat extract sets), we consider one facet is covered as long as one of its support groups is extracted and measure the Facet-Aware Recall ($\textbf {FAR} = \frac{\textrm {\#covered}}{R}$). For a fair comparison, each method extracts three sentences since extracting all would result in a perfect FAR.
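Given FAMs in the format sketched earlier and the set of sentence indices a system extracts, FAR reduces to a few lines. This is an illustrative implementation, counting a facet as covered when at least one of its support groups is fully contained in the extract.

def facet_aware_recall(fams, extracted):
    # fams: {facet_index: [support_group_set, ...]}; extracted: set of sentence indices
    covered = sum(
        any(group <= extracted for group in groups)   # one fully extracted group suffices
        for groups in fams.values()
    )
    return covered / len(fams)                         # R = number of facets in the reference

fams = {0: [{3}, {7, 8}], 1: [{12}], 2: [{1, 2}]}
print(facet_aware_recall(fams, extracted={2, 3, 12}))  # 2/3: facets 0 and 1 are covered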
As shown in Table TABREF13, there is almost no discrimination among the last four methods under ROUGE-1 F1, and their rankings under ROUGE-1/2/L are quite different. In contrast, FAR shows that UnifiedSum(E) covers the most facets. Although FAR is supposed to be favored as FAMs are already manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, we rank UnifiedSum(E), NeuSum, and Lead-3 in Table TABREF15. The order of the 1st rank in the human evaluation coincides with FAR. FAR also has higher Spearman's coefficient $\rho $ than ROUGE (0.457 vs. 0.44, n=30, threshold=0.362 at 95% significance).
Another benefit of the FAMs is that one can employ the category breakdown for fine-grained analysis under any metrics of interest. Here we consider ROUGE and additionally evaluate several abstractive methods: Pointer-Generator (PG) BIBREF2, FastRL(E+A)(extractive+abstractive) BIBREF0, and UnifiedSum(E+A) BIBREF1. As depicted in Table TABREF16, not only extractive methods fail on high abstraction samples, but there is also a huge performance gap between low and high abstraction samples for abstractive methods, which suggests that existing methods achieve decent performance mainly by extraction rather than abstraction. We also found that all the compared methods perform much worse on the documents with “random” summaries, implying that the randomness in the reference summaries might introduce noise to both model training and evaluation. Despite the fact that the sample size is relatively small, we observed consistent results when analyzing different subsets of the data.
Analysis of Approximate Approaches to Mapping Generation
Although the FAMs only need to be annotated once, we investigate whether such human efforts can be further reduced by evaluating approximate approaches that generate extractive labels. Approximate approaches typically transform one abstractive summary to extractive labels heuristically using ROUGE. Previously one could only estimate the quality of these labels by evaluating the extractive models trained using such labels, i.e., comparing the extracted and reference summaries (also approximately via ROUGE). Now that the FAMs serve as ground-truth extractive labels, we can evaluate how well each approach performs accurately. Since the approximate approaches do not have the notion of support group, we flatten all the support sentences in one FAM to a label set.
Due to limited space, we leave the details of the approximate approaches (most of them are self-evident) to Appendix . The comparison results are shown in Table TABREF17. On the bright side, approximate approaches perform relatively well (e.g., 90.6% selected sentences of BIBREF3 indeed contain salient information). This is explainable as ROUGE is good at capturing lexical overlap and as we have shown, there are many copy-and-paste reference summaries in CNN/Daily Mail. On the other hand, these approaches are not perfect and the low recall suggests that simply mapping each facet with one support sentence would miss plenty of salient sentences, which could worsen the performance of extractive models trained on such labels. That said, how to find more than one support group for each facet or multiple support sentences in one support group automatically and accurately remains an open question.
Conclusions and Future Work
We presented the promising results towards the facet-aware evaluation for extractive summarization. In the future, we will conduct large-scale human annotations in a crowd-sourcing way on the whole test set of CNN/Daily Mail. We will also investigate benchmark multi-document summarization datasets such as DUC BIBREF8 and TAC BIBREF9 to see if the findings coincide and how we can leverage the multiple references provided for each document set in those datasets. | Answer with content missing: (whole introduction) However, recent
studies observe the limits of ROUGE and find in
some cases, it fails to reach consensus with human judgment (Paulus et al., 2017; Schluter, 2017).
313087c69caeab2f58e7abd62664d3bd93618e4e | 313087c69caeab2f58e7abd62664d3bd93618e4e_0 | Q: How do they evaluate their proposed metric?
Text: Building Extractive CNN/Daily Mail
In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider if the sentences cover the semantics of the facet regardless of their length. The reason is that we believe extractive summarization should focus on information coverage and once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$.
One may regard the procedure as creating extractive labels, which is widely used in extractive summarization since only abstractive references are available in existing datasets. The major differences are that 1) We label all the support sentences instead of just one or fixed number of sentences, i.e., we do not specify $\textrm {K}_n$. For example, we would put two sentences to one support group if they are complementary and only combining them can cover the facet. 2) We find multiple support groups ($N > 1$), as there could be more than one set of sentences that cover the same facet and extracting any one of them is acceptable. In contrast, there is no concept of support group in extractive labels as they inherently form one such group. We sampled 150 document-summary pairs from the test set of CNN/Daily Mail. 344 FAMs were created by three annotators with high agreement (pairwise Jaccard index 0.71) and further verified to reach consensus. We found that the facets can be divided into three categories based on their quality and degree of abstraction as follows.
Random: The facet is quite random, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary BIBREF2. Another possible reason is that the so-called “summaries” are in fact “story highlights”, which seems reasonable to contain details. We found that 41/150 (26%) samples have random facet(s), implying there are severe issues in the reference summaries of CNN/Daily Mail.
Low Abstraction: The facet can be mapped to its support sentences. We further divide this category by the (rounded) average number of support sentences K of $N$ support groups ($\textrm {K}=\frac{\sum _{n=1}^N |\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n} \rbrace |}{N})$. As in Table TABREF1, most facets (93%) in the reference summaries are paraphrases or compression of one to two sentences in the document without much abstraction.
High Abstraction: The facet cannot be mapped to its support sentences, which indicates that its writing requires deep understandings of the document rather than reorganizing several sentences. The proportion of this category (7%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail.
Surprisingly, we found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon ($\overline{N} = 1.56$) to detect multiple sentences with similar semantics (compared to multi-document summarization). In addition, most support groups only have one or two support sentences with large lexical overlap.
Revisit of State-of-the-art Methods
By utilizing the FAMs, we revisit extractive methods to see how well they perform on facet coverage. Specifically, we compare Lead-3, Refresh BIBREF3, FastRL(E) (E for extractive only) BIBREF0, UnifiedSum(E) BIBREF1, NeuSum BIBREF4, and BanditSum BIBREF5 using both ROUGE and FAMs. As these methods are facet-agnostic (i.e., their outputs are not organized by facets but flat extract sets), we consider one facet is covered as long as one of its support groups is extracted and measure the Facet-Aware Recall ($\textbf {FAR} = \frac{\textrm {\#covered}}{R}$). For a fair comparison, each method extracts three sentences since extracting all would result in a perfect FAR.
As shown in Table TABREF13, there is almost no discrimination among the last four methods under ROUGE-1 F1, and their rankings under ROUGE-1/2/L are quite different. In contrast, FAR shows that UnifiedSum(E) covers the most facets. Although FAR is supposed to be favored as FAMs are already manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, we rank UnifiedSum(E), NeuSum, and Lead-3 in Table TABREF15. The order of the 1st rank in the human evaluation coincides with FAR. FAR also has higher Spearman's coefficient $\rho $ than ROUGE (0.457 vs. 0.44, n=30, threshold=0.362 at 95% significance).
Another benefit of the FAMs is that one can employ the category breakdown for fine-grained analysis under any metrics of interest. Here we consider ROUGE and additionally evaluate several abstractive methods: Pointer-Generator (PG) BIBREF2, FastRL(E+A)(extractive+abstractive) BIBREF0, and UnifiedSum(E+A) BIBREF1. As depicted in Table TABREF16, not only extractive methods fail on high abstraction samples, but there is also a huge performance gap between low and high abstraction samples for abstractive methods, which suggests that existing methods achieve decent performance mainly by extraction rather than abstraction. We also found that all the compared methods perform much worse on the documents with “random” summaries, implying that the randomness in the reference summaries might introduce noise to both model training and evaluation. Despite the fact that the sample size is relatively small, we observed consistent results when analyzing different subsets of the data.
Analysis of Approximate Approaches to Mapping Generation
Although the FAMs only need to be annotated once, we investigate whether such human efforts can be further reduced by evaluating approximate approaches that generate extractive labels. Approximate approaches typically transform one abstractive summary to extractive labels heuristically using ROUGE. Previously one could only estimate the quality of these labels by evaluating the extractive models trained using such labels, i.e., comparing the extracted and reference summaries (also approximately via ROUGE). Now that the FAMs serve as ground-truth extractive labels, we can evaluate how well each approach performs accurately. Since the approximate approaches do not have the notion of support group, we flatten all the support sentences in one FAM to a label set.
Due to limited space, we leave the details of the approximate approaches (most of them are self-evident) to Appendix . The comparison results are shown in Table TABREF17. On the bright side, approximate approaches perform relatively well (e.g., 90.6% selected sentences of BIBREF3 indeed contain salient information). This is explainable as ROUGE is good at capturing lexical overlap and as we have shown, there are many copy-and-paste reference summaries in CNN/Daily Mail. On the other hand, these approaches are not perfect and the low recall suggests that simply mapping each facet with one support sentence would miss plenty of salient sentences, which could worsen the performance of extractive models trained on such labels. That said, how to find more than one support group for each facet or multiple support sentences in one support group automatically and accurately remains an open question.
Conclusions and Future Work
We presented the promising results towards the facet-aware evaluation for extractive summarization. In the future, we will conduct large-scale human annotations in a crowd-sourcing way on the whole test set of CNN/Daily Mail. We will also investigate benchmark multi-document summarization datasets such as DUC BIBREF8 and TAC BIBREF9 to see if the findings coincide and how we can leverage the multiple references provided for each document set in those datasets. | manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, |
8ec2ca6c7f60c46eedac1fe0530b5c4448800fec | 8ec2ca6c7f60c46eedac1fe0530b5c4448800fec_0 | Q: What is a facet?
Text: Building Extractive CNN/Daily Mail
In this section, we describe the procedure of annotating CNN/Daily Mail. For each facet (sentence) in the reference summary, we find all its support sentences in the document that can cover its meaning. Note that the support sentences are likely to be more verbose, but we only consider if the sentences cover the semantics of the facet regardless of their length. The reason is that we believe extractive summarization should focus on information coverage and once salient sentences are extracted, one can then compress them in an abstractive way BIBREF0, BIBREF1. Formally, we denote one document-summary pair as $\lbrace d, r\rbrace $, where $d = \lbrace d^j\rbrace _{j=1}^D$, $r = \lbrace r^j\rbrace _{j=1}^R$, and $D$, $R$ denote the number of sentences. We define one support group of facet $\mathcal {F}$ as a minimum set of sentences in the document that express the meaning of $\mathcal {F}$. For each $r^j$, we annotate a FAM $r^j \rightarrow \lbrace \lbrace d^{s_{j, 1}^k}\rbrace _{k=1}^{\textrm {K}_1}, \lbrace d^{s_{j, 2}^k}\rbrace _{k=1}^{\textrm {K}_2}, ..., \lbrace d^{s_{j, N}^k}\rbrace _{k=1}^{\textrm {K}_N}\rbrace $ in which each $\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n}$ is a support group and $s_{j, n}^k$ is the index of the $k$-th support sentence in group $n$.
One may regard the procedure as creating extractive labels, which is widely used in extractive summarization since only abstractive references are available in existing datasets. The major differences are that 1) We label all the support sentences instead of just one or fixed number of sentences, i.e., we do not specify $\textrm {K}_n$. For example, we would put two sentences to one support group if they are complementary and only combining them can cover the facet. 2) We find multiple support groups ($N > 1$), as there could be more than one set of sentences that cover the same facet and extracting any one of them is acceptable. In contrast, there is no concept of support group in extractive labels as they inherently form one such group. We sampled 150 document-summary pairs from the test set of CNN/Daily Mail. 344 FAMs were created by three annotators with high agreement (pairwise Jaccard index 0.71) and further verified to reach consensus. We found that the facets can be divided into three categories based on their quality and degree of abstraction as follows.
Random: The facet is quite random, either because the document itself is too hard to summarize (e.g., a report full of quotations) or the human editor was too subjective when writing the summary BIBREF2. Another possible reason is that the so-called “summaries” are in fact “story highlights”, which seems reasonable to contain details. We found that 41/150 (26%) samples have random facet(s), implying there are severe issues in the reference summaries of CNN/Daily Mail.
Low Abstraction: The facet can be mapped to its support sentences. We further divide this category by the (rounded) average number of support sentences K of $N$ support groups ($\textrm {K}=\frac{\sum _{n=1}^N |\lbrace d^{s_{j, n}^k}\rbrace _{k=1}^{\textrm {K}_n} \rbrace |}{N})$. As in Table TABREF1, most facets (93%) in the reference summaries are paraphrases or compression of one to two sentences in the document without much abstraction.
High Abstraction: The facet cannot be mapped to its support sentences, which indicates that its writing requires deep understandings of the document rather than reorganizing several sentences. The proportion of this category (7%) also indicates how often extractive methods would not work (well) on CNN/Daily Mail.
Surprisingly, we found it easier than previously believed to create the FAMs on CNN/Daily Mail, as it is uncommon ($\overline{N} = 1.56$) to detect multiple sentences with similar semantics (compared to multi-document summarization). In addition, most support groups only have one or two support sentences with large lexical overlap.
Revisit of State-of-the-art Methods
By utilizing the FAMs, we revisit extractive methods to see how well they perform on facet coverage. Specifically, we compare Lead-3, Refresh BIBREF3, FastRL(E) (E for extractive only) BIBREF0, UnifiedSum(E) BIBREF1, NeuSum BIBREF4, and BanditSum BIBREF5 using both ROUGE and FAMs. As these methods are facet-agnostic (i.e., their outputs are not organized by facets but flat extract sets), we consider one facet is covered as long as one of its support groups is extracted and measure the Facet-Aware Recall ($\textbf {FAR} = \frac{\textrm {\#covered}}{R}$). For a fair comparison, each method extracts three sentences since extracting all would result in a perfect FAR.
As shown in Table TABREF13, there is almost no discrimination among the last four methods under ROUGE-1 F1, and their rankings under ROUGE-1/2/L are quite different. In contrast, FAR shows that UnifiedSum(E) covers the most facets. Although FAR is supposed to be favored as FAMs are already manually labeled and tell exactly if one sentence should be extracted (assuming our annotations are in agreement), to further verify that FAR correlates with human preference, we rank UnifiedSum(E), NeuSum, and Lead-3 in Table TABREF15. The order of the 1st rank in the human evaluation coincides with FAR. FAR also has higher Spearman's coefficient $\rho $ than ROUGE (0.457 vs. 0.44, n=30, threshold=0.362 at 95% significance).
Another benefit of the FAMs is that one can employ the category breakdown for fine-grained analysis under any metrics of interest. Here we consider ROUGE and additionally evaluate several abstractive methods: Pointer-Generator (PG) BIBREF2, FastRL(E+A)(extractive+abstractive) BIBREF0, and UnifiedSum(E+A) BIBREF1. As depicted in Table TABREF16, not only extractive methods fail on high abstraction samples, but there is also a huge performance gap between low and high abstraction samples for abstractive methods, which suggests that existing methods achieve decent performance mainly by extraction rather than abstraction. We also found that all the compared methods perform much worse on the documents with “random” summaries, implying that the randomness in the reference summaries might introduce noise to both model training and evaluation. Despite the fact that the sample size is relatively small, we observed consistent results when analyzing different subsets of the data.
Analysis of Approximate Approaches to Mapping Generation
Although the FAMs only need to be annotated once, we investigate whether such human efforts can be further reduced by evaluating approximate approaches that generate extractive labels. Approximate approaches typically transform one abstractive summary to extractive labels heuristically using ROUGE. Previously one could only estimate the quality of these labels by evaluating the extractive models trained using such labels, i.e., comparing the extracted and reference summaries (also approximately via ROUGE). Now that the FAMs serve as ground-truth extractive labels, we can evaluate how well each approach performs accurately. Since the approximate approaches do not have the notion of support group, we flatten all the support sentences in one FAM to a label set.
Due to limited space, we leave the details of the approximate approaches (most of them are self-evident) to Appendix . The comparison results are shown in Table TABREF17. On the bright side, approximate approaches perform relatively well (e.g., 90.6% selected sentences of BIBREF3 indeed contain salient information). This is explainable as ROUGE is good at capturing lexical overlap and as we have shown, there are many copy-and-paste reference summaries in CNN/Daily Mail. On the other hand, these approaches are not perfect and the low recall suggests that simply mapping each facet with one support sentence would miss plenty of salient sentences, which could worsen the performance of extractive models trained on such labels. That said, how to find more than one support group for each facet or multiple support sentences in one support group automatically and accurately remains an open question.
Conclusions and Future Work
We presented the promising results towards the facet-aware evaluation for extractive summarization. In the future, we will conduct large-scale human annotations in a crowd-sourcing way on the whole test set of CNN/Daily Mail. We will also investigate benchmark multi-document summarization datasets such as DUC BIBREF8 and TAC BIBREF9 to see if the findings coincide and how we can leverage the multiple references provided for each document set in those datasets. | Unanswerable |
cfbccb51f0f8f8f125b40168ed66384e2a09762b | cfbccb51f0f8f8f125b40168ed66384e2a09762b_0 | Q: How are discourse embeddings analyzed?
Text: Introduction
Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level INLINEFORM0 -grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word INLINEFORM1 -grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.
Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA.
BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks.
In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically,
We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. We also show that SVMs, which can only use discourse probability vectors, neither produce a competitive performance (even with fine-tuning), nor generalize in using the discourse information effectively.
Background
Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).
The entity-grid model tracks the grammatical relation (subj, obj, etc.) that salient entities take on throughout a document as a way to capture local coherence . A salient entity is defined as a noun phrase that co-occurs at least twice in a document. Extensive literature has shown that subject and object relations are a strong signal for salience and it follows from the Centering Theory that you want to avoid rough shifts in the center BIBREF9 , BIBREF10 . B&L thus focus on whether a salient entity is a subject (s), object (o), other (x), or is not present (-) in a given sentence, as illustrated in Table TABREF1 . Every sentence in a document is encoded with the grammatical relation of all the salient entities, resulting in a grid similar to Table TABREF6 .
The local coherence of a document is then defined on the basis of local entity transitions. A local entity transition is the sequence of grammatical relations that an entity can assume across INLINEFORM0 consecutive sentences, resulting in {s,o,x,-} INLINEFORM1 possible transitions. Following B&L, F&H14 consider sequences of length INLINEFORM2 =2, that is, transitions between two consecutive sentences, resulting in INLINEFORM3 =16 possible transitions. The probability for each transition is then calculated as the frequency of the transition divided by the total number of transitions. This step results in a single probability vector for every document, as illustrated in Table TABREF2 .
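The transition probability vector can be computed by counting length-2 role transitions between consecutive sentences and normalizing. The small grid below is invented for illustration.

from collections import Counter
from itertools import product

ROLES = ["s", "o", "x", "-"]

def transition_probabilities(grid):
    # grid: list of sentences (rows), each a list of roles for the same entities;
    # returns a 16-dimensional probability vector over {s,o,x,-} x {s,o,x,-}
    counts = Counter()
    for prev_row, next_row in zip(grid, grid[1:]):
        for prev_role, next_role in zip(prev_row, next_row):
            counts[(prev_role, next_role)] += 1
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in product(ROLES, repeat=2)]

grid = [["s", "o"],   # 3 sentences x 2 entities
        ["-", "s"],
        ["o", "-"]]
vector = transition_probabilities(grid)
print(len(vector), sum(vector))   # 16 1.0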
B&L apply this model to a sentence ordering task, where the more coherent option, as evidenced by its transition probabilities, was chosen. In authorship attribution, texts are however assumed to already be coherent. F&H14 instead hypothesize that an author unconsciously employs the same methods for describing entities as the discourse unfolds, resulting in discernible transition probability patterns across multiple of their texts. Indeed, F&H14 find that adding the B&L vectors increases the accuracy of AA by almost 1% over a baseline lexico-syntactic model.
RST discourse relations. F15 extends the notion of tracking salient entities to RST. Instead of using grammatical relations in the grid, RST discourse relations are specified. An RST discourse relation defines the relationship between two or more elementary discourse units (EDUs), which are spans of text that typically correspond to syntactic clauses. In a relation, an EDU can function as a nucleus (e.g., result.N) or as a satellite (e.g., summary.S). All the relations in a document then form a tree as in Figure FIGREF8 .
F15 finds that RST relations are more effective for AA than grammatical relations. In our paper, we populate the entity-grid in the same way as F15's “Shallow RST-style” encoding, but use fine-grained instead of coarse-grained RST relations, and do not distinguish between intra-sentential and multi-sentential RST relations, or salient and non-salient entities. We explore various featurization techniques using the coding scheme.
CNN model. shrestha2017 propose a convolutional neural network formulation for AA tasks (detailed in Section SECREF3 ). They report state-of-the-art performance on a corpus of Twitter data BIBREF11 , and compare their models with alternative architectures proposed in the literature: (i) SCH: an SVM that also uses character n-grams, among other stylometric features BIBREF11 ; (ii) LSTM-2: an LSTM trained on bigrams BIBREF12 ; (iii) CHAR: a Logistic Regression model that takes character n-grams BIBREF13 ; (iv) CNN-W: a CNN trained on word embeddings BIBREF14 . The authors show that the model CNN2 produces the best performance overall. Ruder:16 apply character INLINEFORM0 -gram CNNs to a wide range of datasets, providing strong empirical evidence that the architecture generalizes well. Further, they find that including word INLINEFORM1 -grams in addition to character INLINEFORM2 -grams reduces performance, which is in agreement with BIBREF5 's findings.
Models
Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.
CNN2. CNN2 is the baseline model with no discourse features. Illustrated in Figure FIGREF10 (center), it consists of (i) an embedding layer, (ii) a convolution layer, (iii) a max-pooling layer, and (iv) a softmax layer. We briefly sketch the processing procedure and refer the reader to BIBREF4 for mathematical details.
The network takes a sequence of character bigrams INLINEFORM0 as input, and outputs a multinomial INLINEFORM1 over class labels as the prediction. The model first looks up the embedding matrix to produce a sequence of embeddings for INLINEFORM2 (i.e., the matrix INLINEFORM3 ), then pushes the embedding sequence through convolutional filters of three bigram-window sizes INLINEFORM4 , each yielding INLINEFORM5 feature maps. We then apply the max-over-time pooling BIBREF15 to the feature maps from each filter, and concatenate the resulting vectors to obtain a single vector INLINEFORM6 , which then goes through the softmax layer to produce predictions.
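A rough PyTorch rendering of this architecture is given below; the filter widths, embedding size and number of feature maps are illustrative placeholders rather than the exact hyperparameters of BIBREF4.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN2(nn.Module):
    # embed char-bigrams, convolve with several window sizes, max-pool over
    # time, concatenate the resulting vectors and classify with a softmax
    def __init__(self, vocab_size, n_authors, emb_dim=50, n_filters=100, windows=(3, 4, 5)):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(nn.Conv1d(emb_dim, n_filters, w) for w in windows)
        self.out = nn.Linear(n_filters * len(windows), n_authors)

    def forward(self, bigram_ids):                               # (batch, seq_len)
        x = self.emb(bigram_ids).transpose(1, 2)                 # (batch, emb_dim, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        z = torch.cat(pooled, dim=1)                             # the pooling vector
        return F.log_softmax(self.out(z), dim=1)

model = CNN2(vocab_size=5000, n_authors=9)
print(model(torch.randint(0, 5000, (2, 200))).shape)             # torch.Size([2, 9])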
CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 . Denoting such a feature vector as INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer.
CNN2-DE. In this model (Figure FIGREF10 , center+right), we embed discourse features in high-dimensional space (similar to char-bigram embeddings). Let INLINEFORM0 be a sequence of discourse features, we treat it in a similar fashion to the char-bigram sequence INLINEFORM1 , i.e. feeding it through a “parallel” convolutional net (Figure FIGREF10 right). The operation results in a pooling vector INLINEFORM2 . We concatenate INLINEFORM3 to the pooling vector INLINEFORM4 (which is constructed from INLINEFORM5 ) then feed INLINEFORM6 to the softmax layer for the final prediction.
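The sketch below illustrates the two parallel branches and the concatenation of their pooling vectors before the softmax layer; a single filter width per branch is used for brevity, and all sizes are illustrative. For CNN2-PV, the discourse branch would simply be replaced by the precomputed probability vector concatenated to the char-bigram pooling vector.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvPool(nn.Module):
    # one branch: embed a token sequence, convolve, max-pool over time
    def __init__(self, vocab, emb_dim, n_filters=50, window=3):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, window)

    def forward(self, ids):
        x = self.emb(ids).transpose(1, 2)
        return F.relu(self.conv(x)).max(dim=2).values

class CNN2DE(nn.Module):
    # char-bigram branch and discourse-feature branch run in parallel;
    # their pooling vectors are concatenated and fed to the softmax layer
    def __init__(self, bigram_vocab=5000, disc_vocab=60, n_authors=9):
        super().__init__()
        self.char_branch = ConvPool(bigram_vocab, emb_dim=50)
        self.disc_branch = ConvPool(disc_vocab, emb_dim=20)
        self.out = nn.Linear(50 + 50, n_authors)

    def forward(self, bigram_ids, discourse_ids):
        z = torch.cat([self.char_branch(bigram_ids),
                       self.disc_branch(discourse_ids)], dim=1)
        return F.log_softmax(self.out(z), dim=1)

model = CNN2DE()
out = model(torch.randint(0, 5000, (2, 200)), torch.randint(0, 60, (2, 40)))
print(out.shape)   # torch.Size([2, 9])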
Experiments and Results
We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 ).
Datasets
The statistics for the three datasets used in the experiments are summarized in Table TABREF16 .
novel-9. This dataset was compiled by F&H14: a collection of 19 novels by 9 nineteenth century British and American authors in the Project Gutenberg. To compare to F&H14, we apply the same resampling method (F&H14, Section 4.2) to correct the imbalance in authors by oversampling the texts of less-represented authors.
novel-50. This dataset extends novel-9, compiling the works of 50 randomly selected authors of the same period. For each author, we randomly select 5 novels for a total 250 novels.
IMDB62. IMDB62 consists of 62K movie reviews from 62 users (1,000 each) from the Internet Movie dataset, compiled by Seroussi:11. Unlike the novel datasets, the reviews are considerably shorter, with a mean of 349 words per text.
Featurization
As described in Section SECREF2 , in both the GR and RST variants, from each input entry we start by obtaining an entity grid.
CNN2-PV. We collect the probabilities of entity role transitions (in GR) or discourse relations (in RST) for the entries. Each entry corresponds to a probability distribution vector.
CNN2-DE. We employ two schema for creating discourse feature sequences from an entity grid. While we always read the grid by column (by a salient entity), we vary whether we track the entity across a number of sentences (n rows at a time) or across the entire document (one entire column at a time), denoted as local and global reading respectively.
For the GR discourse features, in the case of local reading, we process the entity roles one sentence pair at a time (Figure FIGREF18 , left). For example, in processing the pair INLINEFORM0 , we find the first non-empty role INLINEFORM1 for entity INLINEFORM2 in INLINEFORM3 . If INLINEFORM4 also has a non-empty role INLINEFORM5 in the INLINEFORM6 , we collect the entity role transition INLINEFORM7 . We then proceed to the following entity INLINEFORM8 , until we process all the entities in the grid and move to the next sentence pair. For the global reading, we instead read the entity roles by traversing one column of the entire document at a time (Figure FIGREF18 , right). The entity roles in all the sentences are read for one entity: we collect transitions for all the non-empty roles (e.g., INLINEFORM9 , but not INLINEFORM10 ).
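One way to realize the two reading schemes is sketched below, under the assumption that the global scheme skips empty cells and links consecutive mentions of an entity across the document. Rows are sentences, columns are entities, "-" marks absence; the example grid is invented for illustration, and the resulting sequences are what the discourse-embedding branch consumes.

def local_reading(grid):
    # sentence-pair by sentence-pair: for each consecutive pair, collect the
    # transition of every entity whose role is non-empty in both sentences
    seq = []
    for row_a, row_b in zip(grid, grid[1:]):
        for role_a, role_b in zip(row_a, row_b):
            if role_a != "-" and role_b != "-":
                seq.append(role_a + role_b)
    return seq

def global_reading(grid):
    # entity by entity (one column at a time): collect transitions between
    # consecutive non-empty roles of that entity across the whole document
    seq = []
    for column in zip(*grid):
        roles = [r for r in column if r != "-"]
        seq.extend(a + b for a, b in zip(roles, roles[1:]))
    return seq

grid = [["s", "o"],
        ["-", "s"],
        ["o", "-"]]
print(local_reading(grid))    # ['os']        (only entity 2 is non-empty in sentences 1-2)
print(global_reading(grid))   # ['so', 'os']  (entity 1: s->o across the gap; entity 2: o->s)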
For the RST discourse features, we process non-empty discourse relations also through either local or global reading. In the local reading, we read all the discourse relations in a sentence (a row) then move on to the next sentence. In the global reading, we read in discourse relations for one entity at a time. This results in sequences of discourse relations for the input entries.
Experiments
Baseline-dataset experiments. All the baseline-dataset experiments are evaluated on novel-9. As a comparison to previous work (F15), we evaluate our models using a pairwise classification task with GR discourse features. In her model, novels are partitioned into 1000-word chunks, and the model is evaluated with accuracy. Surpassing F15's SVM model by a large margin, we then further evaluate the more difficult multi-class task, i.e., all-class prediction simultaneously, with both GR and RST discourse features and the more robust F1 evaluation. In this multi-class task, we implement two SVMs to extend F15's SVM models: (i) SVM2: a linear-kernel SVM which takes char-bigrams as input, as our CNNs, and (ii) SVM2-PV: an updated SVM2 which takes also probability vector features.
Further, we are interested in finding a performance threshold on the minimally-required input text length for discourse information to “kick in”. To this end, we chunk the novels into different sizes: 200-2000 words, at 200-word intervals, and evaluate our CNNs in the multi-class condition.
Generalization-dataset experiments. To confirm that our models generalize, we pick the best models from the baseline-dataset experiments and evaluate on the novel-50 and IMDB62 datasets. For novel-50, the chunking size applied is 2000-word as per the baseline-dataset experiment results, and for IMDB62, texts are not chunked (i.e., we feed the models with the original reviews directly). For model comparison, we also run the SVMs (i.e., SVM2 and SVM2-PV) used in the baseline-dataset experiment. All the experiments conducted here are multi-class classification with macro-averaged F1 evaluation.
Model configurations. Following F15, we perform 5-fold cross-validation. The embedding sizes are tuned on novel-9 (multi-class condition): 50 for char-bigrams; 20 for discourse features. The learning rate is 0.001 using the Adam Optimizer BIBREF18 . For all models, we apply dropout regularization of 0.75 BIBREF19 , and run 50 epochs (batch size 32). The SVMs in the baseline-dataset experiments use default settings, following F15. For the SVMs in the generalization-dataset experiments, we tuned the hyperparameters on novel-9 with a grid search, and found the optimal setting as: stopping condition tol is 1e-5, at a max-iteration of 1,500.
Results
Baseline-dataset experiments. The results of the baseline-dataset experiments are reported in Table TABREF24 , TABREF25 and Figure FIGREF27 . In Table TABREF24 , Baseline denotes the dumb baseline model which always predicts the more-represented author of the pair. Both SVMs are from F15, and we report her results. SVM (LexSyn) takes character and word bi/trigrams and POS tags. SVM (LexSyn-PV) additionally includes probability vectors, similar to our CNN2-PV. In this part of the experiment, while the CNNs clear a large margin over SVMs, adding discourse in CNN2-PV brings only a small performance gain.
Table TABREF25 reports the results from the multi-class classification task, the more difficult task. Here, probability vector features (i.e., PV) again fail to contribute much. The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). In contrast, the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. In general, RST features work better than GR features.
The results of the varying-sizes experiments are plotted in Figure FIGREF27 . Again, we observe the overall pattern that discourse features improve the F1 score, and RST features procure superior performance. Crucially, however, we note there is no performance boost below the chunk size of 1000 for GR features, and below 600 for RST features. Where discourse features do help, the GR-based models achieve, on average, 1 extra point on F1, and the RST-based models around 2.
Generalization-dataset experiments. Table TABREF28 summarizes the results of the generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively).
Analysis
General analysis. Overall, we have shown that discourse information can improve authorship attribution, but only when properly encoded. This result is critical in demonstrating the particular value of discourse information, because typical stylometric features such as word INLINEFORM0 -grams and POS tags do not add additional performance improvements BIBREF3 , BIBREF5 .
In addition, the type of discourse information and the way in which it is featurized are central to this performance improvement: RST features provide overall stronger improvement, and the global reading scheme for discourse embedding works better than the local one. The discourse embedding proves to be a superior featurization technique, as evidenced by the generally higher performance of CNN2-DE models over CNN2-PV models. With an SVM, where this option is not available, we are only able to use relation probability vectors to obtain a very modest performance improvement.
Further, we found a minimum input length required for the discourse features to help (Section SECREF26 ). Not surprisingly, discourse does not contribute on shorter texts. Many of the feature grids are empty for these shorter texts: either there are no coreference chains, or they are not correctly resolved. Currently we only have empirical results on short novel chunks and movie reviews, but we believe the finding would generalize to Twitter or blog posts.
Discourse embeddings. It does not come as a surprise that discourse embedding-based models perform better than their relation probability-based peers. The former (i) leverages the weight learning of the entire computational graph of the CNN rather than only the softmax layer, as the PV models do, and (ii) provides a more fine-grained featurization of the discourse information. Rather than merely taking a probability over grammatical relation transitions (in GR) or discourse relation types (in RST), in DE-based models we learn the dependency between grammatical relation transitions/discourse relations through the INLINEFORM0 -sized filter sweeps.
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.
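The inspection described here can be sketched as below; `embeddings` and `relation_labels` are placeholders for the learned discourse-embedding matrix of CNN2-DE (global) and its row labels, and the random data only keeps the snippet runnable.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.neighbors import NearestNeighbors

embeddings = np.random.rand(40, 20)                 # placeholder for the learned (num_relations, 20) matrix
relation_labels = [f'rel_{i}' for i in range(40)]   # placeholder row labels

points = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
# `points` can be scatter-plotted for a 2-D view of the embedding space.

nn = NearestNeighbors(n_neighbors=3).fit(embeddings)
_, idx = nn.kneighbors(embeddings)                  # first neighbor of each item is itself
for row in idx[:5]:
    print([relation_labels[j] for j in row])
```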
Global vs. Local featurization. As described in Section SECREF17 , the global reading processes all the discourse features for one entity at a time, while the local approach reads one sentence (or one sentence pair) at a time. In all the relevant experiments, global featurization showed a clear performance advantage (on average 1 point gain in F1). Recall that the creation of the grids (both GR and RST) depend on coreference chains of entities (Section SECREF2 ), and only the global reading scheme takes advantage of the coreference pattern whereas the local reading breaks the chains. To find out whether coreference pattern is the key to the performance difference, we further ran a probe experiment where we read RST discourse relations in the order in which EDUs are arranged in the RST tree (i.e., left-to-right), and evaluated this model on novel-50 and IMDB62 with the same hyperparameter setting. The F1 scores turned out to be very close to the CNN2-DE (local) model, at 97.5 and 90.9. Based on this finding, we tentatively confirm the importance of the coreference pattern, and intend to further investigate how exactly it matters for the classification performance.
GR vs. RST. RST features in general effect higher performance gains than GR features (Table TABREF28 ). The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST).
Conclusion
We have conducted an in-depth investigation of techniques that (i) featurize discourse information, and (ii) effectively integrate discourse features into the state-of-the-art character-bigram CNN classifier for AA. Beyond confirming the overall superiority of RST features over GR features in larger and more difficult datasets, we present a discourse embedding technique that is unavailable for previously proposed discourse-enhanced models. The new technique enabled us to raise the current performance ceiling by a large margin.
Admittedly, in using the RST features with entity-grids, we lose the valuable RST tree structure. In future work, we intend to adopt more sophisticated methods such as RecNN, as per Ji:17, to retain more information from the RST trees while reducing the parameter size. Further, we aim to understand how discourse embeddings contribute to AA tasks, and find alternatives to coreference chains for shorter texts. | They perform t-SNE clustering to analyze discourse embeddings |
feb4e92ff1609f3a5e22588da66532ff689f3bcc | feb4e92ff1609f3a5e22588da66532ff689f3bcc_0 | Q: What was the previous state-of-the-art?
Text: Introduction
Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level INLINEFORM0 -grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word INLINEFORM1 -grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.
Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA.
BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks.
In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically,
We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. We also show that SVMs, which can only use discourse probability vectors, neither produce a competitive performance (even with fine-tuning), nor generalize in using the discourse information effectively.
Background
Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).
The entity-grid model tracks the grammatical relation (subj, obj, etc.) that salient entities take on throughout a document as a way to capture local coherence. A salient entity is defined as a noun phrase that occurs at least twice in a document. Extensive literature has shown that subject and object relations are a strong signal for salience, and it follows from Centering Theory that rough shifts in the center should be avoided BIBREF9 , BIBREF10 . B&L thus focus on whether a salient entity is a subject (s), object (o), other (x), or is not present (-) in a given sentence, as illustrated in Table TABREF1 . Every sentence in a document is encoded with the grammatical relation of all the salient entities, resulting in a grid similar to Table TABREF6 .
The local coherence of a document is then defined on the basis of local entity transitions. A local entity transition is the sequence of grammatical relations that an entity can assume across INLINEFORM0 consecutive sentences, resulting in {s,o,x,-} INLINEFORM1 possible transitions. Following B&L, F&H14 consider sequences of length INLINEFORM2 =2, that is, transitions between two consecutive sentences, resulting in INLINEFORM3 =16 possible transitions. The probability for each transition is then calculated as the frequency of the transition divided by the total number of transitions. This step results in a single probability vector for every document, as illustrated in Table TABREF2 .
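A small sketch of this computation for transitions of length 2; the grid representation (one row per sentence, one column per salient entity, '-' for absence) follows the grid illustrated above, and the function name is ours.

```python
from collections import Counter
from itertools import product

ROLES = ['s', 'o', 'x', '-']
TRANSITIONS = list(product(ROLES, repeat=2))   # 16 possible length-2 transitions

def transition_probabilities(grid):
    """grid[i][e]: role of salient entity e in sentence i; returns the
    16-dimensional probability vector over transitions."""
    counts = Counter()
    for i in range(len(grid) - 1):
        for e in range(len(grid[i])):
            counts[(grid[i][e], grid[i + 1][e])] += 1
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in TRANSITIONS]
```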
B&L apply this model to a sentence ordering task, where the more coherent option, as evidenced by its transition probabilities, was chosen. In authorship attribution, texts are however assumed to already be coherent. F&H14 instead hypothesize that an author unconsciously employs the same methods for describing entities as the discourse unfolds, resulting in discernible transition probability patterns across multiple of their texts. Indeed, F&H14 find that adding the B&L vectors increases the accuracy of AA by almost 1% over a baseline lexico-syntactic model.
RST discourse relations. F15 extends the notion of tracking salient entities to RST. Instead of using grammatical relations in the grid, RST discourse relations are specified. An RST discourse relation defines the relationship between two or more elementary discourse units (EDUs), which are spans of text that typically correspond to syntactic clauses. In a relation, an EDU can function as a nucleus (e.g., result.N) or as a satellite (e.g., summary.S). All the relations in a document then form a tree as in Figure FIGREF8 .
F15 finds that RST relations are more effective for AA than grammatical relations. In our paper, we populate the entity-grid in the same way as F15's “Shallow RST-style” encoding, but use fine-grained instead of coarse-grained RST relations, and do not distinguish between intra-sentential and multi-sentential RST relations, or salient and non-salient entities. We explore various featurization techniques using the coding scheme.
CNN model. shrestha2017 propose a convolutional neural network formulation for AA tasks (detailed in Section SECREF3 ). They report state-of-the-art performance on a corpus of Twitter data BIBREF11 , and compare their models with alternative architectures proposed in the literature: (i) SCH: an SVM that also uses character n-grams, among other stylometric features BIBREF11 ; (ii) LSTM-2: an LSTM trained on bigrams BIBREF12 ; (iii) CHAR: a Logistic Regression model that takes character n-grams BIBREF13 ; (iv) CNN-W: a CNN trained on word embeddings BIBREF14 . The authors show that the model CNN2 produces the best performance overall. Ruder:16 apply character INLINEFORM0 -gram CNNs to a wide range of datasets, providing strong empirical evidence that the architecture generalizes well. Further, they find that including word INLINEFORM1 -grams in addition to character INLINEFORM2 -grams reduces performance, which is in agreement with BIBREF5 's findings.
Models
Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.
CNN2. CNN2 is the baseline model with no discourse features. Illustrated in Figure FIGREF10 (center), it consists of (i) an embedding layer, (ii) a convolution layer, (iii) a max-pooling layer, and (iv) a softmax layer. We briefly sketch the processing procedure and refer the reader to BIBREF4 for mathematical details.
The network takes a sequence of character bigrams INLINEFORM0 as input, and outputs a multinomial INLINEFORM1 over class labels as the prediction. The model first looks up the embedding matrix to produce a sequence of embeddings for INLINEFORM2 (i.e., the matrix INLINEFORM3 ), then pushes the embedding sequence through convolutional filters of three bigram-window sizes INLINEFORM4 , each yielding INLINEFORM5 feature maps. We then apply the max-over-time pooling BIBREF15 to the feature maps from each filter, and concatenate the resulting vectors to obtain a single vector INLINEFORM6 , which then goes through the softmax layer to produce predictions.
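A compact PyTorch sketch of this pipeline, written for illustration rather than as the authors' implementation: the window sizes, the number of feature maps, and the dropout placement are assumptions, while the 50-dimensional bigram embeddings match the configuration reported later.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNN2(nn.Module):
    """Illustrative char-bigram CNN: embed -> parallel convolutions -> max-over-time -> softmax."""
    def __init__(self, vocab_size, num_classes, emb_dim=50,
                 windows=(3, 4, 5), feature_maps=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, feature_maps, w) for w in windows])
        self.drop = nn.Dropout(0.25)   # the paper reports "dropout of 0.75", read here as a keep rate
        self.out = nn.Linear(feature_maps * len(windows), num_classes)

    def forward(self, bigram_ids):                       # (batch, seq_len) bigram ids
        x = self.embed(bigram_ids).transpose(1, 2)       # (batch, emb_dim, seq_len)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        y = torch.cat(pooled, dim=1)                     # pooling vector for the bigrams
        return self.out(self.drop(y))                    # logits; softmax applied in the loss
```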
CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 . Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer.
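Reusing the CNN2 sketch above, the only architectural change in CNN2-PV is concatenating the relation-probability vector to the pooled bigram vector before the output layer; the class below is again illustrative, not the original code.

```python
class CNN2PV(CNN2):
    """CNN2 with a relation-probability vector appended before the softmax layer."""
    def __init__(self, vocab_size, num_classes, pv_dim, **kwargs):
        super().__init__(vocab_size, num_classes, **kwargs)
        self.out = nn.Linear(self.out.in_features + pv_dim, num_classes)

    def forward(self, bigram_ids, pv):                   # pv: (batch, pv_dim) probability vectors
        x = self.embed(bigram_ids).transpose(1, 2)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        y = torch.cat(pooled + [pv], dim=1)              # concatenate the discourse vector
        return self.out(self.drop(y))
```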
CNN2-DE. In this model (Figure FIGREF10 , center+right), we embed discourse features in high-dimensional space (similar to char-bigram embeddings). Let INLINEFORM0 be a sequence of discourse features, we treat it in a similar fashion to the char-bigram sequence INLINEFORM1 , i.e. feeding it through a “parallel” convolutional net (Figure FIGREF10 right). The operation results in a pooling vector INLINEFORM2 . We concatenate INLINEFORM3 to the pooling vector INLINEFORM4 (which is constructed from INLINEFORM5 ) then feed INLINEFORM6 to the softmax layer for the final prediction.
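In the same illustrative style, CNN2-DE adds a parallel convolutional branch over the discourse-feature sequence; the 20-dimensional discourse embeddings match the reported configuration, while the branch's window sizes and feature-map count are assumptions.

```python
class CNN2DE(CNN2):
    """CNN2 with a parallel convolutional branch over discourse-feature ids."""
    def __init__(self, vocab_size, num_classes, disc_vocab_size,
                 disc_emb_dim=20, **kwargs):
        super().__init__(vocab_size, num_classes, **kwargs)
        self.disc_embed = nn.Embedding(disc_vocab_size, disc_emb_dim)
        self.disc_convs = nn.ModuleList(
            [nn.Conv1d(disc_emb_dim, 100, w) for w in (3, 4, 5)])
        self.out = nn.Linear(self.out.in_features + 100 * 3, num_classes)

    def forward(self, bigram_ids, disc_ids):
        x = self.embed(bigram_ids).transpose(1, 2)
        d = self.disc_embed(disc_ids).transpose(1, 2)
        pooled = [F.relu(c(x)).max(dim=2).values for c in self.convs]
        pooled += [F.relu(c(d)).max(dim=2).values for c in self.disc_convs]
        return self.out(self.drop(torch.cat(pooled, dim=1)))
```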
Experiments and Results
We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 ).
Datasets
The statistics for the three datasets used in the experiments are summarized in Table TABREF16 .
novel-9. This dataset was compiled by F&H14: a collection of 19 novels by 9 nineteenth century British and American authors in the Project Gutenberg. To compare to F&H14, we apply the same resampling method (F&H14, Section 4.2) to correct the imbalance in authors by oversampling the texts of less-represented authors.
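A generic oversampling sketch in the spirit of this resampling step; it is not guaranteed to be identical to F&H14's procedure, and the function name is ours.

```python
import random
from collections import defaultdict

def oversample(chunks, authors, seed=0):
    """Duplicate randomly chosen chunks of under-represented authors until every
    author has as many chunks as the most-represented one."""
    random.seed(seed)
    by_author = defaultdict(list)
    for chunk, author in zip(chunks, authors):
        by_author[author].append(chunk)
    target = max(len(items) for items in by_author.values())
    balanced_chunks, balanced_authors = [], []
    for author, items in by_author.items():
        extra = [random.choice(items) for _ in range(target - len(items))]
        balanced_chunks += items + extra
        balanced_authors += [author] * target
    return balanced_chunks, balanced_authors
```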
novel-50. This dataset extends novel-9, compiling the works of 50 randomly selected authors of the same period. For each author, we randomly select 5 novels for a total 250 novels.
IMDB62. IMDB62 consists of 62K movie reviews from 62 users (1,000 each) from the Internet Movie dataset, compiled by Seroussi:11. Unlike the novel datasets, the reviews are considerably shorter, with a mean of 349 words per text.
Featurization
As described in Section SECREF2 , in both the GR and RST variants, from each input entry we start by obtaining an entity grid.
CNN2-PV. We collect the probabilities of entity role transitions (in GR) or discourse relations (in RST) for the entries. Each entry corresponds to a probability distribution vector.
CNN2-DE. We employ two schema for creating discourse feature sequences from an entity grid. While we always read the grid by column (by a salient entity), we vary whether we track the entity across a number of sentences (n rows at a time) or across the entire document (one entire column at a time), denoted as local and global reading respectively.
For the GR discourse features, in the case of local reading, we process the entity roles one sentence pair at a time (Figure FIGREF18 , left). For example, in processing the pair INLINEFORM0 , we find the first non-empty role INLINEFORM1 for entity INLINEFORM2 in INLINEFORM3 . If INLINEFORM4 also has a non-empty role INLINEFORM5 in the INLINEFORM6 , we collect the entity role transition INLINEFORM7 . We then proceed to the following entity INLINEFORM8 , until we process all the entities in the grid and move to the next sentence pair. For the global reading, we instead read the entity roles by traversing one column of the entire document at a time (Figure FIGREF18 , right). The entity roles in all the sentences are read for one entity: we collect transitions for all the non-empty roles (e.g., INLINEFORM9 , but not INLINEFORM10 ).
For the RST discourse features, we process non-empty discourse relations also through either local or global reading. In the local reading, we read all the discourse relations in a sentence (a row) then move on to the next sentence. In the global reading, we read in discourse relations for one entity at a time. This results in sequences of discourse relations for the input entries.
Experiments
Baseline-dataset experiments. All the baseline-dataset experiments are evaluated on novel-9. As a comparison to previous work (F15), we evaluate our models using a pairwise classification task with GR discourse features. In her model, novels are partitioned into 1000-word chunks, and the model is evaluated with accuracy. Having surpassed F15's SVM model by a large margin in this setting, we then evaluate the more difficult multi-class task, i.e., predicting all classes simultaneously, with both GR and RST discourse features and the more robust F1 evaluation. In this multi-class task, we implement two SVMs to extend F15's SVM models: (i) SVM2: a linear-kernel SVM which takes char-bigrams as input, as our CNNs do, and (ii) SVM2-PV: an updated SVM2 which also takes probability vector features.
Further, we are interested in finding a performance threshold on the minimally-required input text length for discourse information to “kick in”. To this end, we chunk the novels into different sizes: 200-2000 words, at 200-word intervals, and evaluate our CNNs in the multi-class condition.
Generalization-dataset experiments. To confirm that our models generalize, we pick the best models from the baseline-dataset experiments and evaluate on the novel-50 and IMDB62 datasets. For novel-50, the chunking size applied is 2000-word as per the baseline-dataset experiment results, and for IMDB62, texts are not chunked (i.e., we feed the models with the original reviews directly). For model comparison, we also run the SVMs (i.e., SVM2 and SVM2-PV) used in the baseline-dataset experiment. All the experiments conducted here are multi-class classification with macro-averaged F1 evaluation.
Model configurations. Following F15, we perform 5-fold cross-validation. The embedding sizes are tuned on novel-9 (multi-class condition): 50 for char-bigrams; 20 for discourse features. The learning rate is 0.001 using the Adam Optimizer BIBREF18 . For all models, we apply dropout regularization of 0.75 BIBREF19 , and run 50 epochs (batch size 32). The SVMs in the baseline-dataset experiments use default settings, following F15. For the SVMs in the generalization-dataset experiments, we tuned the hyperparameters on novel-9 with a grid search, and found the optimal setting as: stopping condition tol is 1e-5, at a max-iteration of 1,500.
Results
Baseline-dataset experiments. The results of the baseline-dataset experiments are reported in Table TABREF24 , TABREF25 and Figure FIGREF27 . In Table TABREF24 , Baseline denotes the dumb baseline model which always predicts the more-represented author of the pair. Both SVMs are from F15, and we report her results. SVM (LexSyn) takes character and word bi/trigrams and POS tags. SVM (LexSyn-PV) additionally includes probability vectors, similar to our CNN2-PV. In this part of the experiment, while the CNNs clear a large margin over SVMs, adding discourse in CNN2-PV brings only a small performance gain.
Table TABREF25 reports the results from the multi-class classification task, the more difficult task. Here, probability vector features (i.e., PV) again fail to contribute much. The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). In contrast, the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. In general, RST features work better than GR features.
The results of the varying-sizes experiments are plotted in Figure FIGREF27 . Again, we observe the overall pattern that discourse features improve the F1 score, and RST features procure superior performance. Crucially, however, we note there is no performance boost below the chunk size of 1000 for GR features, and below 600 for RST features. Where discourse features do help, the GR-based models achieve, on average, 1 extra point on F1, and the RST-based models around 2.
Generalization-dataset experiments. Table TABREF28 summarizes the results of the generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively).
Analysis
General analysis. Overall, we have shown that discourse information can improve authorship attribution, but only when properly encoded. This result is critical in demonstrating the particular value of discourse information, because typical stylometric features such as word INLINEFORM0 -grams and POS tags do not add additional performance improvements BIBREF3 , BIBREF5 .
In addition, the type of discourse information and the way in which it is featurized are central to this performance improvement: RST features provide overall stronger improvement, and the global reading scheme for discourse embedding works better than the local one. The discourse embedding proves to be a superior featurization technique, as evidenced by the generally higher performance of CNN2-DE models over CNN2-PV models. With an SVM, where this option is not available, we are only able to use relation probability vectors to obtain a very modest performance improvement.
Further, we found a minimum input length required for the discourse features to help (Section SECREF26 ). Not surprisingly, discourse does not contribute on shorter texts. Many of the feature grids are empty for these shorter texts: either there are no coreference chains, or they are not correctly resolved. Currently we only have empirical results on short novel chunks and movie reviews, but we believe the finding would generalize to Twitter or blog posts.
Discourse embeddings. It does not come as a surprise that discourse embedding-based models perform better than their relation probability-based peers. The former (i) leverages the weight learning of the entire computational graph of the CNN rather than only the softmax layer, as the PV models do, and (ii) provides a more fine-grained featurization of the discourse information. Rather than merely taking a probability over grammatical relation transitions (in GR) or discourse relation types (in RST), in DE-based models we learn the dependency between grammatical relation transitions/discourse relations through the INLINEFORM0 -sized filter sweeps.
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.
Global vs. Local featurization. As described in Section SECREF17 , the global reading processes all the discourse features for one entity at a time, while the local approach reads one sentence (or one sentence pair) at a time. In all the relevant experiments, global featurization showed a clear performance advantage (on average 1 point gain in F1). Recall that the creation of the grids (both GR and RST) depend on coreference chains of entities (Section SECREF2 ), and only the global reading scheme takes advantage of the coreference pattern whereas the local reading breaks the chains. To find out whether coreference pattern is the key to the performance difference, we further ran a probe experiment where we read RST discourse relations in the order in which EDUs are arranged in the RST tree (i.e., left-to-right), and evaluated this model on novel-50 and IMDB62 with the same hyperparameter setting. The F1 scores turned out to be very close to the CNN2-DE (local) model, at 97.5 and 90.9. Based on this finding, we tentatively confirm the importance of the coreference pattern, and intend to further investigate how exactly it matters for the classification performance.
GR vs. RST. RST features in general effect higher performance gains than GR features (Table TABREF28 ). The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST).
Conclusion
We have conducted an in-depth investigation of techniques that (i) featurize discourse information, and (ii) effectively integrate discourse features into the state-of-the-art character-bigram CNN classifier for AA. Beyond confirming the overall superiority of RST features over GR features in larger and more difficult datasets, we present a discourse embedding technique that is unavailable for previously proposed discourse-enhanced models. The new technique enabled us to raise the current performance ceiling by a large margin.
Admittedly, in using the RST features with entity-grids, we lose the valuable RST tree structure. In future work, we intend to adopt more sophisticated methods such as RecNN, as per Ji:17, to retain more information from the RST trees while reducing the parameter size. Further, we aim to understand how discourse embeddings contribute to AA tasks, and find alternatives to coreference chains for shorter texts. | character bigram CNN classifier |
f10325d022e3f95223f79ab00f8b42e3bb7ca040 | f10325d022e3f95223f79ab00f8b42e3bb7ca040_0 | Q: How are discourse features incorporated into the model?
Text: Introduction
Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level INLINEFORM0 -grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word INLINEFORM1 -grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.
Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA.
BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks.
In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically,
We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. We also show that SVMs, which can only use discourse probability vectors, neither produce a competitive performance (even with fine-tuning), nor generalize in using the discourse information effectively.
Background
Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).
The entity-grid model tracks the grammatical relation (subj, obj, etc.) that salient entities take on throughout a document as a way to capture local coherence. A salient entity is defined as a noun phrase that occurs at least twice in a document. Extensive literature has shown that subject and object relations are a strong signal for salience, and it follows from Centering Theory that rough shifts in the center should be avoided BIBREF9 , BIBREF10 . B&L thus focus on whether a salient entity is a subject (s), object (o), other (x), or is not present (-) in a given sentence, as illustrated in Table TABREF1 . Every sentence in a document is encoded with the grammatical relation of all the salient entities, resulting in a grid similar to Table TABREF6 .
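A toy sketch of how such a grid can be assembled once coreference resolution and grammatical roles are available; the per-sentence role dictionaries and the example entities are invented for illustration, and the parsing/coreference pipeline itself is outside the snippet.

```python
def build_entity_grid(sent_roles, min_occurrences=2):
    """sent_roles: one dict per sentence mapping a coreference-resolved entity
    to its role 's', 'o', or 'x'. Returns the salient entities and the grid,
    with '-' marking sentences where an entity is absent."""
    counts = {}
    for roles in sent_roles:
        for entity in roles:
            counts[entity] = counts.get(entity, 0) + 1
    salient = [e for e, c in counts.items() if c >= min_occurrences]
    grid = [[roles.get(e, '-') for e in salient] for roles in sent_roles]
    return salient, grid

sent_roles = [{'Microsoft': 's', 'market': 'o'},
              {'Microsoft': 'o'},
              {'market': 's', 'earnings': 'x'}]
print(build_entity_grid(sent_roles))
# (['Microsoft', 'market'], [['s', 'o'], ['o', '-'], ['-', 's']])
```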
The local coherence of a document is then defined on the basis of local entity transitions. A local entity transition is the sequence of grammatical relations that an entity can assume across INLINEFORM0 consecutive sentences, resulting in {s,o,x,-} INLINEFORM1 possible transitions. Following B&L, F&H14 consider sequences of length INLINEFORM2 =2, that is, transitions between two consecutive sentences, resulting in INLINEFORM3 =16 possible transitions. The probability for each transition is then calculated as the frequency of the transition divided by the total number of transitions. This step results in a single probability vector for every document, as illustrated in Table TABREF2 .
B&L apply this model to a sentence ordering task, where the more coherent option, as evidenced by its transition probabilities, was chosen. In authorship attribution, texts are however assumed to already be coherent. F&H14 instead hypothesize that an author unconsciously employs the same methods for describing entities as the discourse unfolds, resulting in discernible transition probability patterns across multiple of their texts. Indeed, F&H14 find that adding the B&L vectors increases the accuracy of AA by almost 1% over a baseline lexico-syntactic model.
RST discourse relations. F15 extends the notion of tracking salient entities to RST. Instead of using grammatical relations in the grid, RST discourse relations are specified. An RST discourse relation defines the relationship between two or more elementary discourse units (EDUs), which are spans of text that typically correspond to syntactic clauses. In a relation, an EDU can function as a nucleus (e.g., result.N) or as a satellite (e.g., summary.S). All the relations in a document then form a tree as in Figure FIGREF8 .
F15 finds that RST relations are more effective for AA than grammatical relations. In our paper, we populate the entity-grid in the same way as F15's “Shallow RST-style” encoding, but use fine-grained instead of coarse-grained RST relations, and do not distinguish between intra-sentential and multi-sentential RST relations, or salient and non-salient entities. We explore various featurization techniques using the coding scheme.
CNN model. shrestha2017 propose a convolutional neural network formulation for AA tasks (detailed in Section SECREF3 ). They report state-of-the-art performance on a corpus of Twitter data BIBREF11 , and compare their models with alternative architectures proposed in the literature: (i) SCH: an SVM that also uses character n-grams, among other stylometric features BIBREF11 ; (ii) LSTM-2: an LSTM trained on bigrams BIBREF12 ; (iii) CHAR: a Logistic Regression model that takes character n-grams BIBREF13 ; (iv) CNN-W: a CNN trained on word embeddings BIBREF14 . The authors show that the model CNN2 produces the best performance overall. Ruder:16 apply character INLINEFORM0 -gram CNNs to a wide range of datasets, providing strong empirical evidence that the architecture generalizes well. Further, they find that including word INLINEFORM1 -grams in addition to character INLINEFORM2 -grams reduces performance, which is in agreement with BIBREF5 's findings.
Models
Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.
CNN2. CNN2 is the baseline model with no discourse features. Illustrated in Figure FIGREF10 (center), it consists of (i) an embedding layer, (ii) a convolution layer, (iii) a max-pooling layer, and (iv) a softmax layer. We briefly sketch the processing procedure and refer the reader to BIBREF4 for mathematical details.
The network takes a sequence of character bigrams INLINEFORM0 as input, and outputs a multinomial INLINEFORM1 over class labels as the prediction. The model first looks up the embedding matrix to produce a sequence of embeddings for INLINEFORM2 (i.e., the matrix INLINEFORM3 ), then pushes the embedding sequence through convolutional filters of three bigram-window sizes INLINEFORM4 , each yielding INLINEFORM5 feature maps. We then apply the max-over-time pooling BIBREF15 to the feature maps from each filter, and concatenate the resulting vectors to obtain a single vector INLINEFORM6 , which then goes through the softmax layer to produce predictions.
CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 . Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer.
CNN2-DE. In this model (Figure FIGREF10 , center+right), we embed discourse features in high-dimensional space (similar to char-bigram embeddings). Let INLINEFORM0 be a sequence of discourse features, we treat it in a similar fashion to the char-bigram sequence INLINEFORM1 , i.e. feeding it through a “parallel” convolutional net (Figure FIGREF10 right). The operation results in a pooling vector INLINEFORM2 . We concatenate INLINEFORM3 to the pooling vector INLINEFORM4 (which is constructed from INLINEFORM5 ) then feed INLINEFORM6 to the softmax layer for the final prediction.
Experiments and Results
We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 ).
Datasets
The statistics for the three datasets used in the experiments are summarized in Table TABREF16 .
novel-9. This dataset was compiled by F&H14: a collection of 19 novels by 9 nineteenth century British and American authors in the Project Gutenberg. To compare to F&H14, we apply the same resampling method (F&H14, Section 4.2) to correct the imbalance in authors by oversampling the texts of less-represented authors.
novel-50. This dataset extends novel-9, compiling the works of 50 randomly selected authors of the same period. For each author, we randomly select 5 novels for a total 250 novels.
IMDB62. IMDB62 consists of 62K movie reviews from 62 users (1,000 each) from the Internet Movie dataset, compiled by Seroussi:11. Unlike the novel datasets, the reviews are considerably shorter, with a mean of 349 words per text.
Featurization
As described in Section SECREF2 , in both the GR and RST variants, from each input entry we start by obtaining an entity grid.
CNN2-PV. We collect the probabilities of entity role transitions (in GR) or discourse relations (in RST) for the entries. Each entry corresponds to a probability distribution vector.
CNN2-DE. We employ two schema for creating discourse feature sequences from an entity grid. While we always read the grid by column (by a salient entity), we vary whether we track the entity across a number of sentences (n rows at a time) or across the entire document (one entire column at a time), denoted as local and global reading respectively.
For the GR discourse features, in the case of local reading, we process the entity roles one sentence pair at a time (Figure FIGREF18 , left). For example, in processing the pair INLINEFORM0 , we find the first non-empty role INLINEFORM1 for entity INLINEFORM2 in INLINEFORM3 . If INLINEFORM4 also has a non-empty role INLINEFORM5 in the INLINEFORM6 , we collect the entity role transition INLINEFORM7 . We then proceed to the following entity INLINEFORM8 , until we process all the entities in the grid and move to the next sentence pair. For the global reading, we instead read the entity roles by traversing one column of the entire document at a time (Figure FIGREF18 , right). The entity roles in all the sentences are read for one entity: we collect transitions for all the non-empty roles (e.g., INLINEFORM9 , but not INLINEFORM10 ).
For the RST discourse features, we process non-empty discourse relations also through either local or global reading. In the local reading, we read all the discourse relations in a sentence (a row) then move on to the next sentence. In the global reading, we read in discourse relations for one entity at a time. This results in sequences of discourse relations for the input entries.
Experiments
Baseline-dataset experiments. All the baseline-dataset experiments are evaluated on novel-9. As a comparison to previous work (F15), we evaluate our models using a pairwise classification task with GR discourse features. In her model, novels are partitioned into 1000-word chunks, and the model is evaluated with accuracy. Having surpassed F15's SVM model by a large margin in this setting, we then evaluate the more difficult multi-class task, i.e., predicting all classes simultaneously, with both GR and RST discourse features and the more robust F1 evaluation. In this multi-class task, we implement two SVMs to extend F15's SVM models: (i) SVM2: a linear-kernel SVM which takes char-bigrams as input, as our CNNs do, and (ii) SVM2-PV: an updated SVM2 which also takes probability vector features.
Further, we are interested in finding a performance threshold on the minimally-required input text length for discourse information to “kick in”. To this end, we chunk the novels into different sizes: 200-2000 words, at 200-word intervals, and evaluate our CNNs in the multi-class condition.
Generalization-dataset experiments. To confirm that our models generalize, we pick the best models from the baseline-dataset experiments and evaluate on the novel-50 and IMDB62 datasets. For novel-50, the chunking size applied is 2000-word as per the baseline-dataset experiment results, and for IMDB62, texts are not chunked (i.e., we feed the models with the original reviews directly). For model comparison, we also run the SVMs (i.e., SVM2 and SVM2-PV) used in the baseline-dataset experiment. All the experiments conducted here are multi-class classification with macro-averaged F1 evaluation.
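The evaluation protocol can be sketched with scikit-learn as below; `train_and_predict` is a placeholder for fitting a model on the training fold and predicting the test fold, and the stratified split is our assumption.

```python
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

def evaluate(X, y, train_and_predict, folds=5):
    """Macro-averaged F1 over stratified folds; X, y are lists of texts and author labels."""
    scores = []
    for train_idx, test_idx in StratifiedKFold(n_splits=folds).split(X, y):
        preds = train_and_predict([X[i] for i in train_idx],
                                  [y[i] for i in train_idx],
                                  [X[i] for i in test_idx])
        scores.append(f1_score([y[i] for i in test_idx], preds, average='macro'))
    return sum(scores) / len(scores)
```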
Model configurations. Following F15, we perform 5-fold cross-validation. The embedding sizes are tuned on novel-9 (multi-class condition): 50 for char-bigrams; 20 for discourse features. The learning rate is 0.001 using the Adam Optimizer BIBREF18 . For all models, we apply dropout regularization of 0.75 BIBREF19 , and run 50 epochs (batch size 32). The SVMs in the baseline-dataset experiments use default settings, following F15. For the SVMs in the generalization-dataset experiments, we tuned the hyperparameters on novel-9 with a grid search, and found the optimal setting as: stopping condition tol is 1e-5, at a max-iteration of 1,500.
Results
Baseline-dataset experiments. The results of the baseline-dataset experiments are reported in Table TABREF24 , TABREF25 and Figure FIGREF27 . In Table TABREF24 , Baseline denotes the dumb baseline model which always predicts the more-represented author of the pair. Both SVMs are from F15, and we report her results. SVM (LexSyn) takes character and word bi/trigrams and POS tags. SVM (LexSyn-PV) additionally includes probability vectors, similar to our CNN2-PV. In this part of the experiment, while the CNNs clear a large margin over SVMs, adding discourse in CNN2-PV brings only a small performance gain.
Table TABREF25 reports the results from the multi-class classification task, the more difficult task. Here, probability vector features (i.e., PV) again fail to contribute much. The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). In contrast, the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. In general, RST features work better than GR features.
The results of the varying-sizes experiments are plotted in Figure FIGREF27 . Again, we observe the overall pattern that discourse features improve the F1 score, and RST features procure superior performance. Crucially, however, we note there is no performance boost below the chunk size of 1000 for GR features, and below 600 for RST features. Where discourse features do help, the GR-based models achieve, on average, 1 extra point on F1, and the RST-based models around 2.
Generalization-dataset experiments. Table TABREF28 summarizes the results of the generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively).
Analysis
General analysis. Overall, we have shown that discourse information can improve authorship attribution, but only when properly encoded. This result is critical in demonstrating the particular value of discourse information, because typical stylometric features such as word INLINEFORM0 -grams and POS tags do not add additional performance improvements BIBREF3 , BIBREF5 .
In addition, the type of discourse information and the way in which it is featurized are central to this performance improvement: RST features provide overall stronger improvement, and the global reading scheme for discourse embedding works better than the local one. The discourse embedding proves to be a superior featurization technique, as evidenced by the generally higher performance of CNN2-DE models over CNN2-PV models. With an SVM, where this option is not available, we are only able to use relation probability vectors to obtain a very modest performance improvement.
Further, we found a minimum input length required for the discourse features to help (Section SECREF26 ). Not surprisingly, discourse does not contribute on shorter texts. Many of the feature grids are empty for these shorter texts: either there are no coreference chains, or they are not correctly resolved. Currently we only have empirical results on short novel chunks and movie reviews, but we believe the finding would generalize to Twitter or blog posts.
Discourse embeddings. It does not come as a surprise that discourse embedding-based models perform better than their relation probability-based peers. The former (i) leverages the weight learning of the entire computational graph of the CNN rather than only the softmax layer, as the PV models do, and (ii) provides a more fine-grained featurization of the discourse information. Rather than merely taking a probability over grammatical relation transitions (in GR) or discourse relation types (in RST), in DE-based models we learn the dependency between grammatical relation transitions/discourse relations through the INLINEFORM0 -sized filter sweeps.
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.
Global vs. Local featurization. As described in Section SECREF17 , the global reading processes all the discourse features for one entity at a time, while the local approach reads one sentence (or one sentence pair) at a time. In all the relevant experiments, global featurization showed a clear performance advantage (on average 1 point gain in F1). Recall that the creation of the grids (both GR and RST) depend on coreference chains of entities (Section SECREF2 ), and only the global reading scheme takes advantage of the coreference pattern whereas the local reading breaks the chains. To find out whether coreference pattern is the key to the performance difference, we further ran a probe experiment where we read RST discourse relations in the order in which EDUs are arranged in the RST tree (i.e., left-to-right), and evaluated this model on novel-50 and IMDB62 with the same hyperparameter setting. The F1 scores turned out to be very close to the CNN2-DE (local) model, at 97.5 and 90.9. Based on this finding, we tentatively confirm the importance of the coreference pattern, and intend to further investigate how exactly it matters for the classification performance.
GR vs. RST. RST features in general effect higher performance gains than GR features (Table TABREF28 ). The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST).
Conclusion
We have conducted an in-depth investigation of techniques that (i) featurize discourse information, and (ii) effectively integrate discourse features into the state-of-the-art character-bigram CNN classifier for AA. Beyond confirming the overall superiority of RST features over GR features in larger and more difficult datasets, we present a discourse embedding technique that is unavailable for previously proposed discourse-enhanced models. The new technique enabled us to raise the current performance ceiling by a large margin.
Admittedly, in using the RST features with entity-grids, we lose the valuable RST tree structure. In future work, we intend to adopt more sophisticated methods such as RecNN, as per Ji:17, to retain more information from the RST trees while reducing the parameter size. Further, we aim to understand how discourse embeddings contribute to AA tasks, and find alternatives to coreference chains for shorter texts. | They derive entity grid with grammatical relations and RST discourse relations and concatenate them with pooling vector for the char-bigrams before feeding to the resulting vector to the softmax layer. |
5e65bb0481f3f5826291c7cc3e30436ab4314c61 | 5e65bb0481f3f5826291c7cc3e30436ab4314c61_0 | Q: What discourse features are used?
Text: Introduction
Authorship attribution (AA) is the task of identifying the author of a text, given a set of author-labeled training texts. This task typically makes use of stylometric cues at the surface lexical and syntactic level BIBREF0 , although BIBREF1 and BIBREF2 go beyond the sentence level, showing that discourse information can help. However, they achieve limited performance gains and lack an in-depth analysis of discourse featurization techniques. More recently, convolutional neural networks (CNNs) have demonstrated considerable success on AA relying only on character-level n-grams BIBREF3 , BIBREF4 . The strength of these models is evidenced by findings that traditional stylometric features such as word n-grams and POS-tags do not improve, and can sometimes even hurt performance BIBREF3 , BIBREF5 . However, none of these CNN models make use of discourse.
Our work builds upon these prior studies by exploring an effective method to (i) featurize the discourse information, and (ii) integrate discourse features into the best text classifier (i.e., CNN-based models), in the expectation of achieving state-of-the-art results in AA.
BIBREF1 (henceforth F&H14) made the first comprehensive attempt at using discourse information for AA. They employ an entity-grid model, an approach introduced by BIBREF6 for the task of ordering sentences. This model tracks how the grammatical relations of salient entities (e.g., subj, obj, etc.) change between pairs of sentences in a document, thus capturing a form of discourse coherence. The grid is summarized into a vector of transition probabilities. However, because the model only records the transition between two consecutive sentences at a time, the coherence is local. BIBREF2 (henceforth F15) further extends the entity-grid model by replacing grammatical relations with discourse relations from Rhetorical Structure Theory BIBREF7 . Their study uses a linear-kernel SVM to perform pairwise author classifications, where a non-discourse model captures lexical and syntactic features. They find that adding the entity-grid with grammatical relations enhances the non-discourse model by almost 1% in accuracy, and using RST relations provides an improvement of 3%. The study, however, works with only one small dataset and their models produce overall unremarkable performance ( INLINEFORM0 85%). BIBREF8 propose an advanced Recursive Neural Network (RecNN) architecture to work with RST in the more general area of text categorization and present impressive results. However, we suspect that the massive number of parameters of RecNNs would likely cause overfitting when working with smaller datasets, as is often the case in AA tasks.
In our paper, we opt for a state-of-the-art character bigram CNN classifier BIBREF4 , and investigate various ways in which the discourse information can be featurized and integrated into the CNN. Specifically,
We explore these questions using two approaches to represent salient entities: grammatical relations, and RST discourse relations. We apply these models to datasets of varying sizes and genres, and find that adding any discourse information improves AA consistently on longer documents, but has mixed results on shorter documents. Further, embedding the discourse features in a parallel CNN at the input end yields better performance than concatenating them to the output layer as a feature vector (Section SECREF3 ). The global featurization is more effective than the local one. We also show that SVMs, which can only use discourse probability vectors, neither produce a competitive performance (even with fine-tuning), nor generalize in using the discourse information effectively.
Background
Entity-grid model. Typical lexical features for AA are relatively superficial and restricted to within the same sentence. F&H14 hypothesize that discourse features beyond the sentence level also help authorship attribution. In particular, they propose an author has a particular style for representing entities across a discourse. Their work is based on the entity-grid model of BIBREF6 (henceforth B&L).
The entity-grid model tracks the grammatical relation (subj, obj, etc.) that salient entities take on throughout a document as a way to capture local coherence . A salient entity is defined as a noun phrase that co-occurs at least twice in a document. Extensive literature has shown that subject and object relations are a strong signal for salience and it follows from the Centering Theory that you want to avoid rough shifts in the center BIBREF9 , BIBREF10 . B&L thus focus on whether a salient entity is a subject (s), object (o), other (x), or is not present (-) in a given sentence, as illustrated in Table TABREF1 . Every sentence in a document is encoded with the grammatical relation of all the salient entities, resulting in a grid similar to Table TABREF6 .
The local coherence of a document is then defined on the basis of local entity transitions. A local entity transition is the sequence of grammatical relations that an entity can assume across n consecutive sentences, resulting in {s,o,x,-}^n possible transitions. Following B&L, F&H14 consider sequences of length n=2, that is, transitions between two consecutive sentences, resulting in 4^2=16 possible transitions. The probability for each transition is then calculated as the frequency of the transition divided by the total number of transitions. This step results in a single probability vector for every document, as illustrated in Table TABREF2 .
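To make this featurization concrete, the following is a minimal Python sketch of the step described above (it is not taken from the paper; the toy grid and its list-of-rows encoding are assumed purely for illustration):

from collections import Counter
from itertools import product

ROLES = ["s", "o", "x", "-"]  # subject, object, other, absent
TRANSITIONS = ["".join(p) for p in product(ROLES, repeat=2)]  # the 16 length-2 transitions

def transition_probabilities(grid):
    # grid: one row per sentence; each row lists the role of every salient entity
    counts = Counter()
    for prev_row, next_row in zip(grid, grid[1:]):
        for prev_role, next_role in zip(prev_row, next_row):
            counts[prev_role + next_role] += 1
    total = sum(counts.values()) or 1
    return [counts[t] / total for t in TRANSITIONS]

# toy document: three sentences, two salient entities
example_grid = [["s", "o"], ["o", "-"], ["-", "x"]]
print(transition_probabilities(example_grid))  # a single 16-dimensional probability vector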
B&L apply this model to a sentence ordering task, where the more coherent option, as evidenced by its transition probabilities, was chosen. In authorship attribution, texts are however assumed to already be coherent. F&H14 instead hypothesize that an author unconsciously employs the same methods for describing entities as the discourse unfolds, resulting in discernible transition probability patterns across multiple of their texts. Indeed, F&H14 find that adding the B&L vectors increases the accuracy of AA by almost 1% over a baseline lexico-syntactic model.
RST discourse relations. F15 extends the notion of tracking salient entities to RST. Instead of using grammatical relations in the grid, RST discourse relations are specified. An RST discourse relation defines the relationship between two or more elementary discourse units (EDUs), which are spans of text that typically correspond to syntactic clauses. In a relation, an EDU can function as a nucleus (e.g., result.N) or as a satellite (e.g., summary.S). All the relations in a document then form a tree as in Figure FIGREF8 .
F15 finds that RST relations are more effective for AA than grammatical relations. In our paper, we populate the entity-grid in the same way as F15's “Shallow RST-style” encoding, but use fine-grained instead of coarse-grained RST relations, and do not distinguish between intra-sentential and multi-sentential RST relations, or salient and non-salient entities. We explore various featurization techniques using the coding scheme.
CNN model. shrestha2017 propose a convolutional neural network formulation for AA tasks (detailed in Section SECREF3 ). They report state-of-the-art performance on a corpus of Twitter data BIBREF11 , and compare their models with alternative architectures proposed in the literature: (i) SCH: an SVM that also uses character n-grams, among other stylometric features BIBREF11 ; (ii) LSTM-2: an LSTM trained on bigrams BIBREF12 ; (iii) CHAR: a Logistic Regression model that takes character n-grams BIBREF13 ; (iv) CNN-W: a CNN trained on word embeddings BIBREF14 . The authors show that the model CNN2 produces the best performance overall. Ruder:16 apply character n-gram CNNs to a wide range of datasets, providing strong empirical evidence that the architecture generalizes well. Further, they find that including word n-grams in addition to character n-grams reduces performance, which is in agreement with BIBREF5 's findings.
Models
Building on shrestha2017's work, we employ their character-bigram CNN (CNN2), and propose two extensions which utilize discourse information: (i) CNN2 enhanced with relation probability vectors (CNN2-PV), and (ii) CNN2 enhanced with discourse embeddings (CNN2-DE). The CNN2-PV allows us to conduct a comparison with F&H14 and F15, which also use relation probability vectors.
CNN2. CNN2 is the baseline model with no discourse features. Illustrated in Figure FIGREF10 (center), it consists of (i) an embedding layer, (ii) a convolution layer, (iii) a max-pooling layer, and (iv) a softmax layer. We briefly sketch the processing procedure and refer the reader to BIBREF4 for mathematical details.
The network takes a sequence of character bigrams INLINEFORM0 as input, and outputs a multinomial INLINEFORM1 over class labels as the prediction. The model first looks up the embedding matrix to produce a sequence of embeddings for INLINEFORM2 (i.e., the matrix INLINEFORM3 ), then pushes the embedding sequence through convolutional filters of three bigram-window sizes INLINEFORM4 , each yielding INLINEFORM5 feature maps. We then apply the max-over-time pooling BIBREF15 to the feature maps from each filter, and concatenate the resulting vectors to obtain a single vector INLINEFORM6 , which then goes through the softmax layer to produce predictions.
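The following PyTorch sketch illustrates the architecture just described. It is an approximation rather than the authors' implementation: the filter window sizes (3, 4, 5) and the number of feature maps (100) are placeholders of my own choosing, and only the 50-dimensional char-bigram embeddings follow the configuration reported later under Model configurations.

import torch
import torch.nn as nn

class CNN2(nn.Module):
    def __init__(self, vocab_size, num_authors, emb_dim=50,
                 windows=(3, 4, 5), feature_maps=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # char-bigram embeddings
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, feature_maps, w) for w in windows])
        self.out = nn.Linear(feature_maps * len(windows), num_authors)

    def forward(self, bigram_ids):  # (batch, seq_len) of char-bigram indices
        x = self.embed(bigram_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        p = torch.cat(pooled, dim=1)  # max-over-time pooling vector
        return self.out(p)  # softmax is applied inside the cross-entropy loss

model = CNN2(vocab_size=5000, num_authors=9)
logits = model(torch.randint(0, 5000, (2, 200)))  # a toy batch of two documents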
CNN2-PV. This model (Figure FIGREF10 , left+center) featurizes discourse information into a vector of relation probabilities. In order to derive the discourse features, an entity grid is constructed by feeding the document through an NLP pipeline to identify salient entities. Two flavors of discourse features are created by populating the entity grid with either (i) grammatical relations (GR) or (ii) RST discourse relations (RST). The GR features are represented as grammatical relation transitions derived from the entity grid, e.g., INLINEFORM0 . The RST features are represented as RST discourse relations with their nuclearity, e.g., INLINEFORM1 . The probability vectors are then distributions over relation types. For GR, the vector is a distribution over all the entity role transitions, i.e., INLINEFORM2 (see Table TABREF2 ). For RST, the vector is a distribution over all the RST discourse relations, i.e., INLINEFORM3 Denoting a feature as such with INLINEFORM4 , we construct the pooling vector INLINEFORM5 for the char-bigrams, and concatenate INLINEFORM6 to INLINEFORM7 before feeding the resulting vector to the softmax layer.
CNN2-DE. In this model (Figure FIGREF10 , center+right), we embed discourse features in high-dimensional space (similar to char-bigram embeddings). Let INLINEFORM0 be a sequence of discourse features, we treat it in a similar fashion to the char-bigram sequence INLINEFORM1 , i.e. feeding it through a “parallel” convolutional net (Figure FIGREF10 right). The operation results in a pooling vector INLINEFORM2 . We concatenate INLINEFORM3 to the pooling vector INLINEFORM4 (which is constructed from INLINEFORM5 ) then feed INLINEFORM6 to the softmax layer for the final prediction.
Experiments and Results
We begin by introducing the datasets (Section SECREF15 ), followed by detailing the featurization methods (Section SECREF17 ), the experiments (Section SECREF22 ), and finally reporting results (Section SECREF26 ).
Datasets
The statistics for the three datasets used in the experiments are summarized in Table TABREF16 .
novel-9. This dataset was compiled by F&H14: a collection of 19 novels by 9 nineteenth century British and American authors from Project Gutenberg. To compare to F&H14, we apply the same resampling method (F&H14, Section 4.2) to correct the imbalance in authors by oversampling the texts of less-represented authors.
novel-50. This dataset extends novel-9, compiling the works of 50 randomly selected authors of the same period. For each author, we randomly select 5 novels for a total of 250 novels.
IMDB62. IMDB62 consists of 62K movie reviews from 62 users (1,000 each) from the Internet Movie Database (IMDb), compiled by Seroussi:11. Unlike the novel datasets, the reviews are considerably shorter, with a mean of 349 words per text.
Featurization
As described in Section SECREF2 , in both the GR and RST variants, from each input entry we start by obtaining an entity grid.
CNN2-PV. We collect the probabilities of entity role transitions (in GR) or discourse relations (in RST) for the entries. Each entry corresponds to a probability distribution vector.
CNN2-DE. We employ two schemes for creating discourse feature sequences from an entity grid. While we always read the grid by column (by a salient entity), we vary whether we track the entity across a number of sentences (n rows at a time) or across the entire document (one entire column at a time), denoted as local and global reading respectively.
For the GR discourse features, in the case of local reading, we process the entity roles one sentence pair at a time (Figure FIGREF18 , left). For example, in processing the pair INLINEFORM0 , we find the first non-empty role INLINEFORM1 for entity INLINEFORM2 in INLINEFORM3 . If INLINEFORM4 also has a non-empty role INLINEFORM5 in the INLINEFORM6 , we collect the entity role transition INLINEFORM7 . We then proceed to the following entity INLINEFORM8 , until we process all the entities in the grid and move to the next sentence pair. For the global reading, we instead read the entity roles by traversing one column of the entire document at a time (Figure FIGREF18 , right). The entity roles in all the sentences are read for one entity: we collect transitions for all the non-empty roles (e.g., INLINEFORM9 , but not INLINEFORM10 ).
For the RST discourse features, we process non-empty discourse relations also through either local or global reading. In the local reading, we read all the discourse relations in a sentence (a row) then move on to the next sentence. In the global reading, we read in discourse relations for one entity at a time. This results in sequences of discourse relations for the input entries.
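A rough Python sketch of the two reading schemes over a GR entity grid is given below; the grid encoding and helper names are my own and not the paper's:

def local_gr_sequence(grid):
    # read one sentence pair at a time, collecting a transition only when an
    # entity has a non-empty role in both sentences of the pair
    seq = []
    for i in range(len(grid) - 1):
        for role_now, role_next in zip(grid[i], grid[i + 1]):
            if role_now != "-" and role_next != "-":
                seq.append(role_now + role_next)
    return seq

def global_gr_sequence(grid):
    # read one entity (one column) at a time across the whole document,
    # keeping its coreference chain intact and skipping empty roles
    seq = []
    for e in range(len(grid[0])):
        chain = [row[e] for row in grid if row[e] != "-"]
        seq.extend(a + b for a, b in zip(chain, chain[1:]))
    return seq

grid = [["s", "o"], ["-", "x"], ["o", "-"]]
print(local_gr_sequence(grid))   # ['ox']
print(global_gr_sequence(grid))  # ['so', 'ox']: entity 0 is tracked across the gap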
Experiments
Baseline-dataset experiments. All the baseline-dataset experiments are evaluated on novel-9. As a comparison to previous work (F15), we evaluate our models using a pairwise classification task with GR discourse features. In her model, novels are partitioned into 1000-word chunks, and the model is evaluated with accuracy. Surpassing F15's SVM model by a large margin, we then further evaluate the more difficult multi-class task, i.e., all-class prediction simultaneously, with both GR and RST discourse features and the more robust F1 evaluation. In this multi-class task, we implement two SVMs to extend F15's SVM models: (i) SVM2: a linear-kernel SVM which takes char-bigrams as input, as our CNNs, and (ii) SVM2-PV: an updated SVM2 which takes also probability vector features.
Further, we are interested in finding a performance threshold on the minimally-required input text length for discourse information to “kick in”. To this end, we chunk the novels into different sizes: 200-2000 words, at 200-word intervals, and evaluate our CNNs in the multi-class condition.
Generalization-dataset experiments. To confirm that our models generalize, we pick the best models from the baseline-dataset experiments and evaluate on the novel-50 and IMDB62 datasets. For novel-50, the chunking size applied is 2000-word as per the baseline-dataset experiment results, and for IMDB62, texts are not chunked (i.e., we feed the models with the original reviews directly). For model comparison, we also run the SVMs (i.e., SVM2 and SVM2-PV) used in the baseline-dataset experiment. All the experiments conducted here are multi-class classification with macro-averaged F1 evaluation.
Model configurations. Following F15, we perform 5-fold cross-validation. The embedding sizes are tuned on novel-9 (multi-class condition): 50 for char-bigrams; 20 for discourse features. The learning rate is 0.001 using the Adam Optimizer BIBREF18 . For all models, we apply dropout regularization of 0.75 BIBREF19 , and run 50 epochs (batch size 32). The SVMs in the baseline-dataset experiments use default settings, following F15. For the SVMs in the generalization-dataset experiments, we tuned the hyperparameters on novel-9 with a grid search, and found the optimal setting as: stopping condition tol is 1e-5, at a max-iteration of 1,500.
Results
Baseline-dataset experiments. The results of the baseline-dataset experiments are reported in Table TABREF24 , TABREF25 and Figure FIGREF27 . In Table TABREF24 , Baseline denotes the dumb baseline model which always predicts the more-represented author of the pair. Both SVMs are from F15, and we report her results. SVM (LexSyn) takes character and word bi/trigrams and POS tags. SVM (LexSyn-PV) additionally includes probability vectors, similar to our CNN2-PV. In this part of the experiment, while the CNNs clear a large margin over SVMs, adding discourse in CNN2-PV brings only a small performance gain.
Table TABREF25 reports the results from the multi-class classification task, the more difficult task. Here, probability vector features (i.e., PV) again fail to contribute much. The discourse embedding features, on the other hand, manage to increase the F1 score by a noticeable amount, with the maximal improvement seen in the CNN2-DE (global) model with RST features (by 2.6 points). In contrast, the discourse-enhanced SVM2-PVs increase F1 by about 1 point, with overall much lower scores in comparison to the CNNs. In general, RST features work better than GR features.
The results of the varying-sizes experiments are plotted in Figure FIGREF27 . Again, we observe the overall pattern that discourse features improve the F1 score, and RST features procure superior performance. Crucially, however, we note there is no performance boost below the chunk size of 1000 for GR features, and below 600 for RST features. Where discourse features do help, the GR-based models achieve, on average, 1 extra point on F1, and the RST-based models around 2.
Generalization-dataset experiments. Table TABREF28 summarizes the results of the generalization-dataset experiments. On novel-50, most discourse-enhanced models improve the performance of the baseline non-discourse CNN2 to varying degrees. The clear pattern again emerges that RST features work better, with the best F1 score evidenced in the CNN2-DE (global) model (3.5 improvement in F1). On IMDB62, as expected with short text inputs (mean=349 words/review), the discourse features in general do not add further contribution. Even the best model CNN2-DE brings only marginal improvement, confirming our findings from varying the chunk size on novel-9, where discourse features did not help at this input size. Equipped with discourse features, SVM2-PV performs slightly better than SVM2 on novel-50 (by 0.4 with GR, 0.9 with RST features). On IMDB62, the same pattern persists for the SVMs: discourse features do not make noticeable improvements (by 0.0 and 0.5 with GR and RST respectively).
Analysis
General analysis. Overall, we have shown that discourse information can improve authorship attribution, but only when properly encoded. This result is critical in demonstrating the particular value of discourse information, because typical stylometric features such as word n-grams and POS tags do not add additional performance improvements BIBREF3 , BIBREF5 .
In addition, the type of discourse information and the way in which it is featurized are central to this performance improvement: RST features provide overall stronger improvement, and the global reading scheme for discourse embedding works better than the local one. The discourse embedding proves to be a superior featurization technique, as evidenced by the generally higher performance of CNN2-DE models over CNN2-PV models. With an SVM, where the option is not available, we are only able to use relation probability vectors to obtain a very modest performance improvement.
Further, we found an input-length threshold for the discourse features to help (Section SECREF26 ). Not surprisingly, discourse does not contribute on shorter texts. Many of the feature grids are empty for these shorter texts– either there are no coreference chains or they are not correctly resolved. Currently we only have empirical results on short novel chunks and movie reviews, but believe the finding would generalize to Twitter or blog posts.
Discourse embeddings. It does not come as a surprise that discourse embedding-based models perform better than their relation probability-based peers. The former (i) leverages the weight learning of the entire computational graph of the CNN rather than only the softmax layer, as the PV models do, and (ii) provides a more fine-grained featurization of the discourse information. Rather than merely taking a probability over grammatical relation transitions (in GR) or discourse relation types (in RST), in DE-based models we learn the dependency between grammatical relation transitions/discourse relations through the INLINEFORM0 -sized filter sweeps.
To further study the information encoded in the discourse embeddings, we perform t-SNE clustering BIBREF20 on them, using the best performing model CNN2-DE (global). We examine the closest neighbors of each embedding, and observe that similar discourse relations tend to go together (e.g., explanation and interpretation; consequence and result). Some examples are given in Table TABREF29 . However, it is unclear how this pattern helps improve classification performance. We intend to investigate this question in future work.
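As an illustration of this kind of inspection, the sketch below assumes the learned discourse-relation embeddings have been exported as a NumPy matrix with a parallel list of relation labels; both file names are hypothetical:

import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics.pairwise import cosine_similarity

embeddings = np.load("discourse_embeddings.npy")  # (num_relations, emb_dim)
relation_names = [line.strip() for line in open("relation_names.txt")]

# 2-D projection for visual clustering (perplexity must stay below the number of relations)
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(embeddings)

# closest neighbours of each relation in the original embedding space
sims = cosine_similarity(embeddings)
for i, name in enumerate(relation_names):
    nearest = np.argsort(-sims[i])[1:4]  # skip the relation itself
    print(name, "->", [relation_names[j] for j in nearest])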
Global vs. Local featurization. As described in Section SECREF17 , the global reading processes all the discourse features for one entity at a time, while the local approach reads one sentence (or one sentence pair) at a time. In all the relevant experiments, global featurization showed a clear performance advantage (on average 1 point gain in F1). Recall that the creation of the grids (both GR and RST) depend on coreference chains of entities (Section SECREF2 ), and only the global reading scheme takes advantage of the coreference pattern whereas the local reading breaks the chains. To find out whether coreference pattern is the key to the performance difference, we further ran a probe experiment where we read RST discourse relations in the order in which EDUs are arranged in the RST tree (i.e., left-to-right), and evaluated this model on novel-50 and IMDB62 with the same hyperparameter setting. The F1 scores turned out to be very close to the CNN2-DE (local) model, at 97.5 and 90.9. Based on this finding, we tentatively confirm the importance of the coreference pattern, and intend to further investigate how exactly it matters for the classification performance.
GR vs. RST. RST features in general effect higher performance gains than GR features (Table TABREF28 ). The RST parser produces a tree of discourse relations for the input text, thus introducing a “global view.” The GR features, on the other hand, are more restricted to a “local view” on entities between consecutive sentences. While a deeper empirical investigation is needed, one can intuitively imagine that identifying authorship by focusing on the local transitions between grammatical relations (as in GR) is more difficult than observing how the entire text is organized (as in RST).
Conclusion
We have conducted an in-depth investigation of techniques that (i) featurize discourse information, and (ii) effectively integrate discourse features into the state-of-the-art character-bigram CNN classifier for AA. Beyond confirming the overall superiority of RST features over GR features in larger and more difficult datasets, we present a discourse embedding technique that is unavailable for previously proposed discourse-enhanced models. The new technique enabled us to push the envelope of the current performance ceiling by a large margin.
Admittedly, in using the RST features with entity-grids, we lose the valuable RST tree structure. In future work, we intend to adopt more sophisticated methods such as RecNN, as per Ji:17, to retain more information from the RST trees while reducing the parameter size. Further, we aim to understand how discourse embeddings contribute to AA tasks, and find alternatives to coreference chains for shorter texts. | Entity grid with grammatical relations and RST discourse relations. |
848ab388703c24faad79d83d254e4fd88ab27e2a | 848ab388703c24faad79d83d254e4fd88ab27e2a_0 | Q: How are proof scores calculated?
Text: Introduction
Recent advancements in deep learning intensified the long-standing interests in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementing properties of these systems. Symbolic reasoning models offer interpretability, efficient generalisation from a small number of examples, and the ability to leverage knowledge provided by an expert. However, these systems are unable to handle ambiguous and noisy high-dimensional data such as sensory inputs BIBREF5 . On the other hand, representation learning models exhibit robustness to noise and ambiguity, can learn task-specific representations, and achieve state-of-the-art results on a wide variety of tasks BIBREF6 . However, being universal function approximators, these models require vast amounts of training data and are treated as non-interpretable black boxes.
One way of integrating the symbolic and sub-symbolic models is by continuously relaxing discrete operations and implementing them in a connectionist framework. Recent approaches in this direction focused on learning algorithmic behaviour without the explicit symbolic representations of a program BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , and consequently with it BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . In the inductive logic programming setting, two new models, NTP BIBREF0 and Differentiable Inductive Logic Programming ( $\partial $ ILP) BIBREF16 successfully combined the interpretability and data efficiency of a logic programming system with the expressiveness and robustness of neural networks.
In this paper, we focus on the NTP model proposed by BIBREF0 . Akin to recent neural-symbolic models, NTP rely on a continuous relaxation of a discrete algorithm, operating over the sub-symbolic representations. In this case, the algorithm is an analogue to Prolog's backward chaining with a relaxed unification operator. The backward chaining algorithm constructs neural networks, which model continuously relaxed proof paths using sub-symbolic representations. These representations are learned end-to-end by maximising the proof scores of facts in the KB, while minimising the score of facts not in the KB, in a link prediction setting BIBREF17 . However, while the symbolic unification checks whether two terms can represent the same structure, the relaxed unification measures the similarity between their sub-symbolic representations.
This continuous relaxation is at the crux of NTP' inability to scale to large datasets. During both training and inference, NTP need to compute all possible proof trees needed for proving a query, relying on the continuous unification of the query with all the rules and facts in the KB. This procedure quickly becomes infeasible for large datasets, as the number of nodes of the resulting computation graph grows exponentially.
Our insight is that we can radically reduce the computational complexity of inference and learning by generating only the most promising proof paths. In particular, we show that the problem of finding the facts in the KB that best explain a query can be reduced to a $k$ -nearest neighbour problem, for which efficient exact and approximate solutions exist BIBREF18 . This enables us to apply NTP to previously unreachable real-world datasets, such as WordNet.
Background
In NTP, the neural network structure is built recursively, and its construction is defined in terms of modules similarly to dynamic neural module networks BIBREF19 . Each module, given a goal, a KB, and a current proof state as inputs, produces a list of new proof states, where the proof states are neural networks representing partial proof success scores.
Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In discrete unification, non-variable symbols are checked for equality, and the proof fails if the symbols differ. In NTP, rather than comparing symbols, their embedding representations are compared by means of a RBF kernel. This allows matching different symbols with similar semantics, such as matching relations like ${grandFatherOf}$ and ${grandpaOf}$ . Given a proof state $S = (\psi , \rho )$ , where $\psi $ and $\rho $ denote a substitution set and a proof score, respectively, unification is computed as follows:
1. unify([], [], S) = S
2. unify([], G, S) = FAIL
3. unify(H, [], S) = FAIL
4. unify(h::H, g::G, S) = unify(H, G, S')
with S' = ( $\psi '$ , $\rho '$ ) where:
$\psi ' = \psi \cup \lbrace h/g\rbrace $ if $h \in V$ ; $\psi \cup \lbrace g/h\rbrace $ if $g \in V, h \notin V$ ; $\psi $ otherwise
$\rho ' = \min (\rho , \operatorname{k}(\theta _{h:}, \theta _{g:}))$ if $h \notin V, g \notin V$ ; $\min (\rho , 1)$ otherwise
Here $V$ denotes the set of variable symbols, and $\theta _{h:}$ and $\theta _{g:}$ denote the embedding representations of $h$ and $g$ , respectively.
OR Module. This module attempts to apply rules in a KB. The name of this module stems from the fact that a KB can be seen as a large disjunction of rules and facts. In backward chaining reasoning systems, the OR module is used for unifying a goal with all facts and rules in a KB: if the goal unifies with the head of the rule, then a series of goals is derived from the body of such a rule. In NTP, we calculate the similarity between the rule and the facts via the unify operator. Upon calculating the continuous unification scores, OR calls AND to prove all sub-goals in the body of the rule.
or(G, d, S) = [ S' | S' $\in $ and(B, d, unify(H, G, S)), for each rule H :– B in the KB ]
AND Module. This module is used for proving a conjunction of sub-goals derived from a rule body. It first applies substitutions to the first atom, which is afterwards proven by calling the OR module. Remaining sub-goals are proven by recursively calling the AND module.
1. and(_, _, FAIL) = FAIL
2. and(_, 0, _) = FAIL
3. and([], _, S) = S
4. and(G:G, d, S) = [ S'' | S'' $\in $ and(G, d, S'), S' $\in $ or(substitute(G, $\psi $ ), d-1, S) ]
For further details on NTPs and the particular implementation of these modules, see BIBREF0
After building all the proof states, NTPs define the final success score of proving a query as the maximum over all the generated valid proof scores (neural networks).
Assume a KB $\mathcal {K}$ , composed of $|\mathcal {K}|$ facts and no rules, for brevity. Note that $|\mathcal {K}|$ can be impractical within the scope of NTP. For instance, Freebase BIBREF20 is composed of approximately 637 million facts, while YAGO3 BIBREF21 is composed of approximately 9 million facts. Given a query $g \triangleq [{grandpaOf}, {abe}, {bart}]$ , NTP compares its embedding representation – given by the embedding vectors of ${grandpaOf}$ , ${abe}$ , and ${bart}$ – with the representation of each of the $|\mathcal {K}|$ facts.
The resulting proof score of $g$ is given by:
$$ \begin{aligned} \max _{f \in \mathcal {K}} & \; {unify}_{\theta }(g, [f_{p}, f_{s}, f_{o}], (\emptyset , \rho )) \\ & = \max _{f \in \mathcal {K}} \; \min \big \lbrace \rho , \operatorname{k}(\theta _{{grandpaOf}:}, \theta _{f_{p}:}),\\ &\qquad \qquad \qquad \operatorname{k}(\theta _{{abe}:}, \theta _{f_{s}:}), \operatorname{k}(\theta _{{bart}:}, \theta _{f_{o}:}) \big \rbrace , \end{aligned}$$ (Eq. 3)
where $f \triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $\theta _{s:}$ is the embedding representation of a symbol $s$ , $\rho $ denotes the initial proof score, and $\operatorname{k}({}\cdot {}, {}\cdot {})$ denotes the RBF kernel. Note that the maximum proof score is given by the fact $f \in \mathcal {K}$ that maximises the similarity between its components and the goal $g$ : solving the maximisation problem in eq:inference can be equivalently stated as a nearest neighbour search problem. In this work, we use ANNS during the forward pass for considering only the most promising proof paths during the construction of the neural network.
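A small NumPy sketch of this computation follows; the embeddings, the kernel bandwidth, and the toy KB are invented for illustration only:

import numpy as np

def rbf(a, b, mu=1.0):
    # RBF kernel used as the soft unification score between two embeddings
    return np.exp(-np.sum((a - b) ** 2) / (2 * mu ** 2))

def proof_score(goal, facts, emb, rho=1.0):
    # goal and each fact are (predicate, subject, object) triples of symbols
    best = -np.inf
    for p, s, o in facts:
        score = min(rho,
                    rbf(emb[goal[0]], emb[p]),
                    rbf(emb[goal[1]], emb[s]),
                    rbf(emb[goal[2]], emb[o]))
        best = max(best, score)  # the max over facts is the final proof score
    return best

rng = np.random.default_rng(0)
symbols = ["grandpaOf", "grandFatherOf", "abe", "bart", "homer"]
emb = {s: rng.normal(size=10) for s in symbols}
facts = [("grandFatherOf", "abe", "bart"), ("grandFatherOf", "abe", "homer")]
print(proof_score(("grandpaOf", "abe", "bart"), facts, emb))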
Nearest Neighbourhood Search
From ex:inference, we can see that the inference problem can be reduced to a nearest neighbour search problem. Given a query $g$ , the problem is finding the fact(s) in $\mathcal {K}$ that maximise the unification score. This represents a computational bottleneck, since it is very costly to find the exact nearest neighbour in high-dimensional Euclidean spaces, due to the curse of dimensionality BIBREF22 . Exact methods are rarely more efficient than brute-force linear scan methods when the dimensionality is high BIBREF23 , BIBREF24 . A practical solution consists in ANNS algorithms, which relax the condition of the exact search by allowing a small number of mistakes. Several families of ANNS algorithms exist, such as LSH BIBREF25 , PQ BIBREF26 , and PG BIBREF27 . In this work we use HNSW BIBREF24 , BIBREF28 , a graph-based incremental ANNS structure which can offer much better logarithmic complexity scaling in comparison with other approaches.
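One way to realize the retrieval step with an off-the-shelf HNSW implementation (the hnswlib package) is sketched below; concatenating the three symbol embeddings of a fact into a single search key is a simplification assumed here for illustration, not a detail specified in the text:

import numpy as np
import hnswlib

dim = 30  # e.g. three concatenated 10-dimensional symbol embeddings
fact_keys = np.random.rand(10000, dim).astype(np.float32)  # one key per KB fact (toy numbers)

index = hnswlib.Index(space="l2", dim=dim)
index.init_index(max_elements=fact_keys.shape[0], ef_construction=200, M=16)
index.add_items(fact_keys, np.arange(fact_keys.shape[0]))
index.set_ef(50)  # query-time accuracy/speed trade-off

goal_key = np.random.rand(1, dim).astype(np.float32)
labels, distances = index.knn_query(goal_key, k=1)
# the returned fact ids are then used to compute the exact min-of-kernels proof scores
print(labels, distances)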
Related Work
Many machine learning methods rely on efficient nearest neighbour search for solving specific sub-problems. Given the computational complexity of nearest neighbour search, approximate methods, driven by advanced index structures, hash or even graph-based approaches are used to speed up the bottleneck of costly comparison. ANNS algorithms have been used to speed up various sorts of machine learning models, including mixture model clustering BIBREF29 , case-based reasoning BIBREF30 to Gaussian process regression BIBREF31 , among others. Similarly to this work, BIBREF32 also rely on approximate nearest neighbours to speed up Memory-Augmented neural networks. Similarly to our work, they apply ANNS to query the external memory (in our case the KB memory) for $k$ closest words. They present drastic savings in speed and memory usage. Though as of this moment, our speed savings are not as drastic, the memory savings we achieve are sufficient so that we can train on WordNet, a dataset previously considered out of reach of NTP.
Experiments
We compared results obtained by our model, which we refer to as NTP 2.0, with those obtained by the original NTP proposed by BIBREF0 . Results on several smaller datasets – namely Countries, Nations, Kinship, and UMLS – are shown in tab:results. When unifying goals with facts in the KB, for each goal, we use ANNS for retrieving the $k$ most similar (in embedding space) facts, and use those for computing the final proof scores. We report results for $k = 1$ , as we did not observe noticeable differences for $k \in \lbrace 2, 5, 10 \rbrace $ . However, we did notice improvements in the case of Countries, and an overall decrease in performance in UMLS. A possible explanation is that ANNS (with $k = 1$ ), due to its inherently approximate nature, does not always retrieve the closest fact(s) exactly. This behaviour may be a problem in some datasets where exact nearest neighbour search is crucial for correctly answering queries. We also evaluated NTP 2.0 on WordNet BIBREF33 , a KB encoding lexical knowledge about the English language. In particular, we use the WordNet used by BIBREF34 for their experiments. This dataset is significantly larger than the other datasets used by BIBREF0 : it is composed of 38,696 entities and 11 relations, and the training set contains 112,581 facts. In WordNet, the accuracies on the validation and test sets were 65.29% and 65.72%, respectively – which is on par with the Distance Model, a Neural Link Predictor discussed by BIBREF34 , which achieves a test accuracy of 68.3%. However, we did not consider a full hyper-parameter sweep, and did not regularise the model using Neural Link Predictors, which considerably improves the NTPs' predictive accuracy BIBREF0 . A subset of the induced rules is shown in tab:rules.
Conclusions
We proposed a way to substantially scale up NTP by reducing parts of their inference steps to ANNS problems, for which very efficient and scalable solutions exist in the literature. | $\rho ' = \min (\rho , \operatorname{k}(\theta _{h:}, \theta _{g:}))$ if $h \notin V, g \notin V$ , and $\min (\rho , 1)$ otherwise, where $\theta _{h:}$ and $\theta _{g:}$ denote the embedding representations of $h$ and $g$ , respectively. |
68794289ed6078b49760dc5fdf88618290e94993 | 68794289ed6078b49760dc5fdf88618290e94993_0 | Q: What are proof paths?
Text: Introduction
Recent advancements in deep learning intensified the long-standing interests in integrating symbolic reasoning with connectionist models BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 . The attraction of said integration stems from the complementing properties of these systems. Symbolic reasoning models offer interpretability, efficient generalisation from a small number of examples, and the ability to leverage knowledge provided by an expert. However, these systems are unable to handle ambiguous and noisy high-dimensional data such as sensory inputs BIBREF5 . On the other hand, representation learning models exhibit robustness to noise and ambiguity, can learn task-specific representations, and achieve state-of-the-art results on a wide variety of tasks BIBREF6 . However, being universal function approximators, these models require vast amounts of training data and are treated as non-interpretable black boxes.
One way of integrating the symbolic and sub-symbolic models is by continuously relaxing discrete operations and implementing them in a connectionist framework. Recent approaches in this direction focused on learning algorithmic behaviour without the explicit symbolic representations of a program BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 , and consequently with it BIBREF12 , BIBREF13 , BIBREF14 , BIBREF15 . In the inductive logic programming setting, two new models, NTP BIBREF0 and Differentiable Inductive Logic Programming ( $\partial $ ILP) BIBREF16 successfully combined the interpretability and data efficiency of a logic programming system with the expressiveness and robustness of neural networks.
In this paper, we focus on the NTP model proposed by BIBREF0 . Akin to recent neural-symbolic models, NTP rely on a continuous relaxation of a discrete algorithm, operating over the sub-symbolic representations. In this case, the algorithm is an analogue to Prolog's backward chaining with a relaxed unification operator. The backward chaining algorithm constructs neural networks, which model continuously relaxed proof paths using sub-symbolic representations. These representations are learned end-to-end by maximising the proof scores of facts in the KB, while minimising the score of facts not in the KB, in a link prediction setting BIBREF17 . However, while the symbolic unification checks whether two terms can represent the same structure, the relaxed unification measures the similarity between their sub-symbolic representations.
This continuous relaxation is at the crux of NTP' inability to scale to large datasets. During both training and inference, NTP need to compute all possible proof trees needed for proving a query, relying on the continuous unification of the query with all the rules and facts in the KB. This procedure quickly becomes infeasible for large datasets, as the number of nodes of the resulting computation graph grows exponentially.
Our insight is that we can radically reduce the computational complexity of inference and learning by generating only the most promising proof paths. In particular, we show that the problem of finding the facts in the KB that best explain a query can be reduced to a $k$ -nearest neighbour problem, for which efficient exact and approximate solutions exist BIBREF18 . This enables us to apply NTP to previously unreachable real-world datasets, such as WordNet.
Background
In NTP, the neural network structure is built recursively, and its construction is defined in terms of modules similarly to dynamic neural module networks BIBREF19 . Each module, given a goal, a KB, and a current proof state as inputs, produces a list of new proof states, where the proof states are neural networks representing partial proof success scores.
Unification Module. In backward chaining, unification between two atoms is used for checking whether they can represent the same structure. In discrete unification, non-variable symbols are checked for equality, and the proof fails if the symbols differ. In NTP, rather than comparing symbols, their embedding representations are compared by means of a RBF kernel. This allows matching different symbols with similar semantics, such as matching relations like ${grandFatherOf}$ and ${grandpaOf}$ . Given a proof state $S = (\psi , \rho )$ , where $\psi $ and $\rho $ denote a substitution set and a proof score, respectively, unification is computed as follows:
1. unify([], [], S) = S
2. unify([], G, S) = FAIL
3. unify(H, [], S) = FAIL
4. unify(h::H, g::G, S) = unify(H, G, S')
with S' = ( $\psi '$ , $\rho '$ ) where:
$\psi ' = \psi \cup \lbrace h/g\rbrace $ if $h \in V$ ; $\psi \cup \lbrace g/h\rbrace $ if $g \in V, h \notin V$ ; $\psi $ otherwise
$\rho ' = \min (\rho , \operatorname{k}(\theta _{h:}, \theta _{g:}))$ if $h \notin V, g \notin V$ ; $\min (\rho , 1)$ otherwise
Here $V$ denotes the set of variable symbols, and $\theta _{h:}$ and $\theta _{g:}$ denote the embedding representations of $h$ and $g$ , respectively.
OR Module. This module attempts to apply rules in a KB. The name of this module stems from the fact that a KB can be seen as a large disjunction of rules and facts. In backward chaining reasoning systems, the OR module is used for unifying a goal with all facts and rules in a KB: if the goal unifies with the head of the rule, then a series of goals is derived from the body of such a rule. In NTP, we calculate the similarity between the rule and the facts via the unify operator. Upon calculating the continuous unification scores, OR calls AND to prove all sub-goals in the body of the rule.
or(G, d, S) = [ S' | S' $\in $ and(B, d, unify(H, G, S)), for each rule H :– B in the KB ]
AND Module. This module is used for proving a conjunction of sub-goals derived from a rule body. It first applies substitutions to the first atom, which is afterwards proven by calling the OR module. Remaining sub-goals are proven by recursively calling the AND module.
1. and(_, _, FAIL) = FAIL
2. and(_, 0, _) = FAIL
3. and([], _, S) = S
4. and(G:G, d, S) = [ S'' | S'' $\in $ and(G, d, S'), S' $\in $ or(substitute(G, $\psi $ ), d-1, S) ]
For further details on NTPs and the particular implementation of these modules, see BIBREF0
After building all the proof states, NTPs define the final success score of proving a query as the maximum over all the generated valid proof scores (neural networks).
Assume a KB $\mathcal {K}$ , composed of $|\mathcal {K}|$ facts and no rules, for brevity. Note that $|\mathcal {K}|$ can be impractical within the scope of NTP. For instance, Freebase BIBREF20 is composed of approximately 637 million facts, while YAGO3 BIBREF21 is composed of approximately 9 million facts. Given a query $g \triangleq [{grandpaOf}, {abe}, {bart}]$ , NTP compares its embedding representation – given by the embedding vectors of ${grandpaOf}$ , ${abe}$ , and ${bart}$ – with the representation of each of the $|\mathcal {K}|$ facts.
The resulting proof score of $g$ is given by:
$$ \begin{aligned} \max _{f \in \mathcal {K}} & \; {unify}_{\theta }(g, [f_{p}, f_{s}, f_{o}], (\emptyset , \rho )) \\ & = \max _{f \in \mathcal {K}} \; \min \big \lbrace \rho , \operatorname{k}(\theta _{{grandpaOf}:}, \theta _{f_{p}:}),\\ &\qquad \qquad \qquad \operatorname{k}(\theta _{{abe}:}, \theta _{f_{s}:}), \operatorname{k}(\theta _{{bart}:}, \theta _{f_{o}:}) \big \rbrace , \end{aligned}$$ (Eq. 3)
where $f \triangleq [f_{p}, f_{s}, f_{o}]$ is a fact in $\mathcal {K}$ denoting a relationship of type $f_{p}$ between $f_{s}$ and $f_{o}$ , $\theta _{s:}$ is the embedding representation of a symbol $s$ , $\rho $ denotes the initial proof score, and $\operatorname{k}({}\cdot {}, {}\cdot {})$ denotes the RBF kernel. Note that the maximum proof score is given by the fact $f \in \mathcal {K}$ that maximises the similarity between its components and the goal $g$ : solving the maximisation problem in eq:inference can be equivalently stated as a nearest neighbour search problem. In this work, we use ANNS during the forward pass for considering only the most promising proof paths during the construction of the neural network.
Nearest Neighbourhood Search
From ex:inference, we can see that the inference problem can be reduced to a nearest neighbour search problem. Given a query $g$ , the problem is finding the fact(s) in $\mathcal {K}$ that maximise the unification score. This represents a computational bottleneck, since it is very costly to find the exact nearest neighbour in high-dimensional Euclidean spaces, due to the curse of dimensionality BIBREF22 . Exact methods are rarely more efficient than brute-force linear scan methods when the dimensionality is high BIBREF23 , BIBREF24 . A practical solution consists in ANNS algorithms, which relax the condition of the exact search by allowing a small number of mistakes. Several families of ANNS algorithms exist, such as LSH BIBREF25 , PQ BIBREF26 , and PG BIBREF27 . In this work we use HNSW BIBREF24 , BIBREF28 , a graph-based incremental ANNS structure which can offer much better logarithmic complexity scaling in comparison with other approaches.
Related Work
Many machine learning methods rely on efficient nearest neighbour search for solving specific sub-problems. Given the computational complexity of nearest neighbour search, approximate methods, driven by advanced index structures, hash or even graph-based approaches are used to speed up the bottleneck of costly comparison. ANNS algorithms have been used to speed up various sorts of machine learning models, including mixture model clustering BIBREF29 , case-based reasoning BIBREF30 to Gaussian process regression BIBREF31 , among others. Similarly to this work, BIBREF32 also rely on approximate nearest neighbours to speed up Memory-Augmented neural networks. Similarly to our work, they apply ANNS to query the external memory (in our case the KB memory) for $k$ closest words. They present drastic savings in speed and memory usage. Though as of this moment, our speed savings are not as drastic, the memory savings we achieve are sufficient so that we can train on WordNet, a dataset previously considered out of reach of NTP.
Experiments
We compared results obtained by our model, which we refer to as NTP 2.0, with those obtained by the original NTP proposed by BIBREF0 . Results on several smaller datasets – namely Countries, Nations, Kinship, and UMLS – are shown in tab:results. When unifying goals with facts in the KB, for each goal, we use ANNS for retrieving the $k$ most similar (in embedding space) facts, and use those for computing the final proof scores. We report results for $k = 1$ , as we did not observe noticeable differences for $k \in \lbrace 2, 5, 10 \rbrace $ . However, we did notice improvements in the case of Countries, and an overall decrease in performance in UMLS. A possible explanation is that ANNS (with $k = 1$ ), due to its inherently approximate nature, does not always retrieve the closest fact(s) exactly. This behaviour may be a problem in some datasets where exact nearest neighbour search is crucial for correctly answering queries. We also evaluated NTP 2.0 on WordNet BIBREF33 , a KB encoding lexical knowledge about the English language. In particular, we use the WordNet used by BIBREF34 for their experiments. This dataset is significantly larger than the other datasets used by BIBREF0 : it is composed of 38,696 entities and 11 relations, and the training set contains 112,581 facts. In WordNet, the accuracies on the validation and test sets were 65.29% and 65.72%, respectively – which is on par with the Distance Model, a Neural Link Predictor discussed by BIBREF34 , which achieves a test accuracy of 68.3%. However, we did not consider a full hyper-parameter sweep, and did not regularise the model using Neural Link Predictors, which considerably improves the NTPs' predictive accuracy BIBREF0 . A subset of the induced rules is shown in tab:rules.
Conclusions
We proposed a way to substantially scale up NTP by reducing parts of their inference steps to ANNS problems, for which very efficient and scalable solutions exist in the literature. | A sequence of logical statements represented in a computational graph |
62048ea0aab61abe21fb30d70c4a1bc5fb946137 | 62048ea0aab61abe21fb30d70c4a1bc5fb946137_0 | Q: What is the size of the model?
Text: Introduction
There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statistical models.
With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙角(corner)” BIBREF0 , which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams BIBREF6 , BIBREF1 and words BIBREF2 , BIBREF5 have also been shown to improve segmentation accuracies.
With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on five-character windows BIBREF0 , BIBREF6 , BIBREF1 , BIBREF7 , as well as LSTMs on characters BIBREF3 , BIBREF8 and words BIBREF2 , BIBREF4 , BIBREF5 . For structured learning and inference, CRF has been used for character sequence labelling models BIBREF1 , BIBREF3 and structural beam search has been used for word-based segmentors BIBREF4 , BIBREF5 .
Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing BIBREF9 . Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become a standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning, by collecting clues from raw texts more thoroughly such as mutual information and punctuation BIBREF10 , BIBREF11 , and making use of self-predictions BIBREF12 , BIBREF13 . It has also utilised heterogeneous annotations such as POS BIBREF14 , BIBREF15 and segmentation under different standards BIBREF16 . To our knowledge, such rich external information has not been systematically investigated for neural segmentation.
We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub-module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as an auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor.
Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. Our implementation is based on LibN3L BIBREF20 . Code and models can be downloaded from http://gitHub.com/jiesutd/RichWordSegmentor
Related Work
Work on statistical word segmentation dates back to the 1990s BIBREF21 . State-of-the-art approaches include character sequence labeling models BIBREF22 using CRFs BIBREF23 , BIBREF24 and max-margin structured models leveraging word features BIBREF25 , BIBREF26 , BIBREF27 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF28 . Our work belongs to recent neural word segmentation.
To our knowledge, there has been no work in the literature systematically investigating rich external resources for neural word segmentation training. Closest in spirit to our work, BIBREF11 empirically studied the use of various external resources for enhancing a statistical segmentor, including character mutual information, access variety information, punctuation and other statistical information. Their baseline is similar to ours in the sense that both character and word contexts are considered. On the other hand, their model is statistical while ours is neural. Consequently, they integrate external knowledge as features, while we integrate it by shared network parameters. Our results show a similar degree of error reduction compared to theirs by using external data.
Our model inherits from previous findings on context representations, such as character windows BIBREF6 , BIBREF1 , BIBREF7 and LSTMs BIBREF3 , BIBREF8 . Similar to BIBREF5 and BIBREF4 , we use word context on top of character context. However, words play a relatively less important role in our model, and we find that word LSTM, which has been used by all previous neural segmentation work, is unnecessary for our model. Our model is conceptually simpler and more modularised compared with BIBREF5 and BIBREF4 , allowing a central sub module, namely a five-character context window, to be pretrained.
Model
Our segmentor works incrementally from left to right, as in the example shown in Table TABREF1 . At each step, the state consists of a sequence of words that have been fully recognized, denoted as INLINEFORM0 , a current partially recognized word INLINEFORM1 , and a sequence of next incoming characters, denoted as INLINEFORM2 , as shown in Figure FIGREF4 . Given an input sentence, INLINEFORM3 and INLINEFORM4 are initialized to INLINEFORM5 and INLINEFORM6 , respectively, and INLINEFORM7 contains all the input characters. At each step, a decision is made on INLINEFORM8 , either appending it as a part of INLINEFORM9 , or separating it as the beginning of a new word. The incremental process repeats until INLINEFORM10 is empty and INLINEFORM11 is null again ( INLINEFORM12 , INLINEFORM13 ). Formally, the process can be regarded as a state-transition process, where a state is a tuple INLINEFORM14 , and the transition actions include Sep (separate) and App (append), as shown by the deduction system in Figure FIGREF7 .
In the figure, INLINEFORM0 denotes the score of a state, given by a neural network model. The score of the initial state (i.e. axiom) is 0, and the score of a non-axiom state is the sum of scores of all incremental decisions resulting in the state. Similar to BIBREF5 and BIBREF4 , our model is a global structural model, using the overall score to disambiguate states, which correspond to sequences of inter-dependent transition actions.
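A schematic greedy version of this transition process is sketched below; the actual model scores whole action sequences and uses beam search, and score_action here is only a stand-in for the neural scorer:

def segment(chars, score_action):
    # score_action(state, action) -> float, with state = (words, partial, remaining)
    words, partial = [], ""
    for i, c in enumerate(chars):
        state = (words, partial, chars[i:])
        if partial and score_action(state, "App") >= score_action(state, "Sep"):
            partial += c  # App: append c to the partial word
        else:
            if partial:
                words.append(partial)  # Sep: finish the partial word, start a new one with c
            partial = c
    if partial:
        words.append(partial)
    return words

# toy scorer that always separates, i.e. one character per word
print(segment("狗蹲在墙角", lambda state, action: 1.0 if action == "Sep" else 0.0))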
Different from previous work, the structure of our scoring network is shown in Figure FIGREF4 . It consists of three main layers. On the bottom is a representation layer, which derives dense representations INLINEFORM0 and INLINEFORM1 for INLINEFORM2 and INLINEFORM3 , respectively. We compare various distributed representations and neural network structures for learning INLINEFORM4 and INLINEFORM5 , detailed in Section SECREF8 . On top of the representation layer, we use a hidden layer to merge INLINEFORM6 and INLINEFORM7 into a single vector DISPLAYFORM0
The hidden feature vector INLINEFORM0 is used to represent the state INLINEFORM1 , for calculating the scores of the next action. In particular, a linear output layer with two nodes is employed: DISPLAYFORM0
The first and second node of INLINEFORM0 represent the scores of Sep and App given INLINEFORM1 , namely INLINEFORM2 , INLINEFORM3 respectively.
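A PyTorch sketch of these two layers is given below; the nonlinearity and the dimensions are placeholders, since the display equations themselves are not recoverable from the text above:

import torch
import torch.nn as nn

class StateScorer(nn.Module):
    def __init__(self, word_repr_dim, char_repr_dim, hidden_dim=100):
        super().__init__()
        self.hidden = nn.Linear(word_repr_dim + char_repr_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, 2)  # node 0: Sep score, node 1: App score

    def forward(self, word_repr, char_repr):
        h = torch.tanh(self.hidden(torch.cat([word_repr, char_repr], dim=-1)))
        return self.output(h)  # unnormalized scores of the two actions

scorer = StateScorer(word_repr_dim=100, char_repr_dim=150)
scores = scorer(torch.randn(1, 100), torch.randn(1, 150))
sep_score, app_score = scores[0, 0], scores[0, 1]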
Representation Learning
Characters. We investigate two different approaches to encode incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF22 , BIBREF1 , using five-character window INLINEFORM0 to represent incoming characters. Shown in Figure FIGREF13 , a multi-layer perceptron (MLP) is employed to derive a five-character window vector INLINEFORM1 from single-character vector representations INLINEFORM2 . DISPLAYFORM0
For the latter, we follow recent work BIBREF3, BIBREF5, using a bi-directional LSTM to encode the input character sequence. In particular, the bi-directional LSTM hidden vector INLINEFORM0 of the next incoming character INLINEFORM1 is used to represent the coming characters INLINEFORM2 given a state. Intuitively, a five-character window provides a local context from which the meaning of the middle character can be better disambiguated. An LSTM, on the other hand, captures larger contexts, which can contain more useful clues for disambiguation but also irrelevant information. It is therefore interesting to investigate a combination of their strengths, by first deriving a locally disambiguated version of INLINEFORM3, and then feeding it to an LSTM for a globally disambiguated representation.
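The combination can be sketched as follows, again as an assumption-laden illustration (dimensions and names are ours): locally disambiguated window vectors, one per character, are passed through a bi-directional LSTM, and the output at the next incoming character is used as the character context.

    import torch.nn as nn

    # Sketch of the combined character context: window vectors are re-encoded by
    # a bi-directional LSTM; the output at the next incoming character is used.
    class WindowThenBiLSTM(nn.Module):
        def __init__(self, window_dim=150, hidden_dim=75):
            super().__init__()
            self.bilstm = nn.LSTM(window_dim, hidden_dim,
                                  bidirectional=True, batch_first=True)

        def forward(self, window_vectors, next_char_index):
            # window_vectors: (batch, sentence_length, window_dim)
            outputs, _ = self.bilstm(window_vectors)  # (batch, sentence_length, 2*hidden_dim)
            return outputs[:, next_char_index, :]     # context at the next incoming character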
Now with regard to the single-character vector representation INLINEFORM0 , we follow previous work and consider both character embedding INLINEFORM1 and character-bigram embedding INLINEFORM2 , investigating the effect of each on the accuracies. When both INLINEFORM3 and INLINEFORM4 are utilized, the concatenated vector is taken as INLINEFORM5 .
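A sketch of the five-character window module with concatenated character and character-bigram embeddings follows; the vocabulary sizes and dimensions are placeholders rather than the values used in the paper.

    import torch
    import torch.nn as nn

    # Sketch of the five-character window network: each of the five positions
    # contributes a character embedding concatenated with a character-bigram
    # embedding, and an MLP maps the flattened window to one context vector.
    class CharWindow(nn.Module):
        def __init__(self, n_chars, n_bigrams, char_dim=50, bichar_dim=50, out_dim=150):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.bichar_emb = nn.Embedding(n_bigrams, bichar_dim)
            self.mlp = nn.Linear(5 * (char_dim + bichar_dim), out_dim)

        def forward(self, char_ids, bichar_ids):
            # char_ids, bichar_ids: LongTensors of shape (batch, 5)
            x = torch.cat([self.char_emb(char_ids), self.bichar_emb(bichar_ids)], dim=-1)
            return torch.tanh(self.mlp(x.flatten(start_dim=1)))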
Partial Word. We take a very simple approach to representing the partial word INLINEFORM0, using the embedding vectors of its first and last characters, as well as the embedding of its length. Length embeddings are randomly initialized and then tuned in model training. INLINEFORM1 has relatively little influence on the empirical segmentation accuracies. DISPLAYFORM0
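A corresponding sketch of the partial-word representation (first character, last character and length embeddings; sizes and the maximum length are assumptions):

    import torch
    import torch.nn as nn

    # Sketch of the partial-word representation: embeddings of the first and last
    # characters plus a length embedding; the maximum length is an assumption.
    class PartialWordRep(nn.Module):
        def __init__(self, n_chars, max_len=30, char_dim=50, len_dim=20):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim)
            self.len_emb = nn.Embedding(max_len + 1, len_dim)  # randomly initialized, tuned

        def forward(self, first_char_id, last_char_id, length):
            length = torch.clamp(length, max=self.len_emb.num_embeddings - 1)
            return torch.cat([self.char_emb(first_char_id),
                              self.char_emb(last_char_id),
                              self.len_emb(length)], dim=-1)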
Word. Similar to the character case, we investigate two different approaches to encoding recognized words, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF25, BIBREF27, using the two-word window INLINEFORM0 to represent recognized words. A hidden layer is employed to derive a two-word vector INLINEFORM1 from single-word embeddings INLINEFORM2 and INLINEFORM3. DISPLAYFORM0
For the latter, we follow BIBREF5 and BIBREF4, using a uni-directional LSTM over the words that have been recognized.
Pretraining
Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed semantic information from large raw texts to reduce sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as a unit, learning the MLP parameters together with the character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.
Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as the mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11. For neural segmentation, such distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuation BIBREF10.
Punctuation can serve as a type of explicit mark-up BIBREF30, indicating that the two characters on its left and right belong to two different words. We leverage this source of information by extracting character five-grams, excluding punctuation, from raw sentences, and using them as inputs to classify whether there is punctuation before the middle character. Denoting the resulting five-character window as INLINEFORM0, the MLP in Figure FIGREF13 is used to derive its representation INLINEFORM1, which is then fed to a softmax layer for binary classification: DISPLAYFORM0
Here INLINEFORM0 indicates the probability of a punctuation mark existing before INLINEFORM1 . Standard backpropagation training of the MLP in Figure FIGREF13 can be done jointly with the training of INLINEFORM2 and INLINEFORM3 . After such training, the embedding INLINEFORM4 and MLP values can be used to initialize the corresponding parameters for INLINEFORM5 in the main segmentor, before its training.
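For concreteness, the extraction of such punctuation-classification instances from raw text could look as follows; the punctuation set and function names are our own assumptions, not the exact list used in the paper.

    # Extract five-character windows (punctuation removed) from a raw sentence,
    # labelled by whether a punctuation mark occurred immediately before the
    # middle character. The punctuation set is an illustrative assumption.
    PUNCT = set("，。、；：？！“”‘’（）《》")

    def punctuation_instances(sentence, pad="<PAD>"):
        chars, before_punct = [], []
        prev_was_punct = False
        for ch in sentence:
            if ch in PUNCT:
                prev_was_punct = True
                continue
            chars.append(ch)
            before_punct.append(prev_was_punct)
            prev_was_punct = False
        padded = [pad] * 2 + chars + [pad] * 2
        for i, label in enumerate(before_punct):
            window = padded[i:i + 5]     # five-character window centred on chars[i]
            yield window, int(label)     # 1 if punctuation occurred before this character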
Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or for deriving statistical features BIBREF12. We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0, INLINEFORM1 is derived using the MLP in Figure FIGREF13, and then used to classify the segmentation of INLINEFORM2 into B (beginning) / M (middle) / E (end) / S (single-character word) labels. DISPLAYFORM0
Here INLINEFORM0 and INLINEFORM1 are model parameters. Training can be done in the same way as training with punctuation.
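The silver B/M/E/S targets can be derived from an automatically segmented sentence as follows (a straightforward sketch):

    # Convert a segmented sentence (a list of words) into per-character B/M/E/S
    # labels, used as silver targets for pretraining the window network.
    def bmes_labels(words):
        labels = []
        for word in words:
            if len(word) == 1:
                labels.append("S")
            else:
                labels.extend(["B"] + ["M"] * (len(word) - 2) + ["E"])
        return labels

    # Example: bmes_labels(["中国", "人民", "银行"]) yields
    # ["B", "E", "B", "E", "B", "E"].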
Heterogeneous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation into leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16. We try to utilize heterogeneous treebanks by taking an external treebank as labeled data and training a B/M/E/S classifier for the character window network. DISPLAYFORM0
POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14, BIBREF15. We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS of each character, according to the character window representation INLINEFORM0. In particular, given INLINEFORM1, the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0
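Deriving the per-character POS targets from word-level annotations is equally direct; the sketch below assumes the input is a list of (word, POS) pairs.

    # Each character inherits the POS tag of the word it belongs to, giving the
    # per-character targets for the POS pretraining task.
    def char_pos_labels(tagged_words):
        labels = []
        for word, pos in tagged_words:
            labels.extend([pos] * len(word))
        return labels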
Multitask Learning. While each type of external training data offers one source of segmentation information, different external data can be complementary to each other. We aim to inject all sources of information into the character window representation INLINEFORM0 by using it as a shared representation for different classification tasks. Neural models have been shown to be capable of multi-task learning via parameter sharing BIBREF19. As shown in Figure FIGREF13, in our case the output layer for each task is independent, but the hidden layer INLINEFORM1 and all layers below INLINEFORM2 are shared.
For training with all of the sources above, we randomly sample sentences from the Punc./Auto-seg/Heter./POS sources with a ratio of 10/1/1/1. For each sentence in the punctuation corpus, we take only 2 characters (the character before and the character after the punctuation) as input instances, as sketched below.
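The sampling scheme can be sketched as follows; the task names and data structures are ours, and only the 10/1/1/1 ratio is taken from the description above.

    import random

    # Sample pretraining sentences from the four auxiliary sources with a
    # 10/1/1/1 ratio; `sources` maps each task name to a list of its sentences.
    def sample_pretraining_stream(sources, n_samples, seed=0):
        rng = random.Random(seed)
        tasks = ["punct"] * 10 + ["autoseg", "heter", "pos"]
        for _ in range(n_samples):
            task = rng.choice(tasks)
            yield task, rng.choice(sources[task])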
Algorithm (online learning with beam search and early update)
Input: a training example with its gold-standard action sequence; Output: updated parameters; Parameters: model parameters
Process:
    agenda <- { initial state }
    for j in [0 : Len(input)]:
        beam = []
        for each state s in agenda:
            s' = Action(s, Sep);  Add(s', beam)
            s'' = Action(s, App); Add(s'', beam)
        agenda <- Top(beam, B)
        if the gold-standard hypothesis falls out of agenda:    (early update)
            best = BestIn(agenda)
            Update(parameters, best, gold)
            return
    best = BestIn(agenda)                                       (final update)
    Update(parameters, best, gold)
    return
Training
Decoding and Training
To train the main segmentor, we adopt the global transition-based learning and beam-search strategy of BIBREF31. For decoding, standard beam search is used, where the B best partial output hypotheses at each step are maintained in an agenda. Initially, the agenda contains only the start state. At each step, all hypotheses in the agenda are expanded by applying all possible actions, and the B highest-scored resulting hypotheses are used as the agenda for the next step.
For training, the same decoding process is applied to each training example INLINEFORM0. At step INLINEFORM1, if the gold-standard sequence of transition actions INLINEFORM2 falls out of the agenda, a max-margin update is performed by taking the current best hypothesis INLINEFORM3 in the beam as a negative example and INLINEFORM4 as a positive example. The loss function is DISPLAYFORM0
where INLINEFORM0 is the number of incorrect local decisions in INLINEFORM1 , and INLINEFORM2 controls the score margin.
The strategy above is early-update BIBREF32 . On the other hand, if the gold-standard hypothesis does not fall out of the agenda until the full sentence has been segmented, a final update is made between the highest scored hypothesis INLINEFORM0 (non-gold standard) in the agenda and the gold-standard INLINEFORM1 , using exactly the same loss function. Pseudocode for the online learning algorithm is shown in Algorithm SECREF14 .
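Under our reading of this description, the update criterion can be sketched as the following hinge loss, where the margin is scaled by the number of incorrect local decisions; the margin coefficient value below is a placeholder, not the paper's tuned setting.

    import torch

    # Max-margin loss between the best non-gold hypothesis in the beam and the
    # gold hypothesis; scores are scalar tensors, and eta is a placeholder value.
    def max_margin_loss(score_neg, score_gold, n_incorrect_decisions, eta=0.2):
        margin = eta * n_incorrect_decisions
        return torch.clamp(score_neg + margin - score_gold, min=0.0)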
We use Adagrad BIBREF33 to optimize the model parameters, with an initial learning rate INLINEFORM0. INLINEFORM1 regularization and dropout BIBREF34 on the input are used to reduce overfitting, with a INLINEFORM2 weight INLINEFORM3 and a dropout rate INLINEFORM4. All the parameters in our model are randomly initialized to a value INLINEFORM5, where INLINEFORM6 BIBREF35. We fine-tune character and character-bigram embeddings, but not word embeddings, according to BIBREF5.
Experimental Settings
Data. We use Chinese Treebank 6.0 (CTB6) BIBREF36 as our main dataset. Training, development and test set splits follow previous work BIBREF37. In order to verify the robustness of our model, we additionally use the SIGHAN 2005 bake-off BIBREF38 and the NLPCC 2016 shared task for Weibo segmentation BIBREF39 as test datasets, where the standard splits are used. For pretraining embeddings of words, characters and character bigrams, we use Chinese Gigaword (simplified Chinese sections), automatically segmented using ZPar 0.6 off the shelf BIBREF25; the statistics are shown in Table TABREF24.
For pretraining character representations, we extract punctuation classification data from the Gigaword corpus, and use the word-based ZPar and a standard character-based CRF model BIBREF40 to obtain automatic segmentation results. We compare pretraining using ZPar results only with using results that both segmentors agree on. For the heterogeneous segmentation corpus and the POS data, we use five months of the People's Daily corpus. Statistics are listed in Table TABREF24.
Evaluation. The standard word precision, recall and F1 measure BIBREF38 are used to evaluate segmentation performance.
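Concretely, word-level precision, recall and F1 can be computed by comparing predicted and gold word spans, e.g.:

    # Standard word segmentation evaluation: convert each segmentation into a set
    # of (start, end) character spans and compare predicted spans against gold.
    def word_spans(words):
        spans, start = set(), 0
        for w in words:
            spans.add((start, start + len(w)))
            start += len(w)
        return spans

    def prf(gold_words, pred_words):
        gold, pred = word_spans(gold_words), word_spans(pred_words)
        correct = len(gold & pred)
        p = correct / len(pred) if pred else 0.0
        r = correct / len(gold) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f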
Hyper-parameter Values. We adopt commonly used values for most hyper-parameters, but tune the sizes of the hidden layers on the development set. The values are summarized in Table TABREF20.
Development Experiments
We perform development experiments to verify the usefulness of various context representations, network configurations and pretraining methods.
The influence of character and word context representations is empirically studied by varying the network structures for INLINEFORM0 and INLINEFORM1 in Figure FIGREF4, respectively. All the experiments in this section are performed using a beam size of 8.
Character Context. We fix the word representation INLINEFORM0 to a 2-word window and compare different character context representations. The results are shown in Table TABREF27 , where “no char” represents our model without INLINEFORM1 , “5-char window” represents a five-character window context, “char LSTM” represents character LSTM context and “5-char window + LSTM” represents a combination, detailed in Section SECREF8 . “-char emb” and “-bichar emb” represent the combined window and LSTM context without character and character-bigram information, respectively.
As can be seen from the table, without character information, the F-score is 84.62%, demonstrating the necessity of character contexts. Using window and LSTM representations, the F-scores increase to 95.41% and 95.51%, respectively. A combination of the two leads to further improvement, showing that local and global character contexts are indeed complementary, as hypothesized in Section SECREF8. Finally, by removing character and character-bigram embeddings, the F-score decreases to 95.20% and 94.27%, respectively, which suggests that character bigrams are more useful than character unigrams. This is likely because they contain more distinct tokens and hence offer a larger parameter space.
Word Context. The influence of various word contexts is shown in Table TABREF28. Without using word information, our segmentor gives an F-score of 95.66% on the development data. Using a context of only INLINEFORM0 (1-word window), the F-measure increases to 95.78%. This shows that word contexts are far less important in our model compared to character contexts, and also compared to word contexts in previous word-based segmentors BIBREF5, BIBREF4. This is likely due to the difference in our neural network structures, and to the fact that we fine-tune both character and character-bigram embeddings, which significantly enlarges the adjustable parameter space as compared with BIBREF5. The fact that word contexts contribute relatively little compared with characters is also not surprising, given that word-based neural segmentors do not outperform the best character-based models by large margins. Moreover, since the character context is what we pretrain, our model relies more heavily on it.
With both INLINEFORM0 and INLINEFORM1 being used for the context, the F-score further increases to 95.86%, showing that a 2-word window is useful by offering more contextual information. On the other hand, when INLINEFORM2 is also considered, the F-score does not improve further. This is consistent with previous findings of statistical word segmentation BIBREF25 , which adopt a 2-word context. Interestingly, using a word LSTM does not bring further improvements, even when it is combined with a window context. This suggests that global word contexts may not offer crucial additional information compared with local word contexts. Intuitively, words are significantly less polysemous compared with characters, and hence can serve as effective contexts even if used locally, to supplement a more crucial character context.
We verify the effectiveness of structured learning and inference by measuring the influence of beam size on the baseline segmentor. Figure FIGREF30 shows the F-scores against different numbers of training iterations with beam sizes 1, 2, 4, 8 and 16, respectively. When the beam size is 1, the inference is local and greedy. As the size of the beam increases, more global structural ambiguities can be resolved, since learning is designed to guide search. The contrast between beam sizes 1 and 2 demonstrates the usefulness of structured learning and inference. As the beam size increases, the gain from doubling the beam size decreases. We choose a beam size of 8 for the remaining experiments as a tradeoff between speed and accuracy.
Table TABREF31 shows the effectiveness of rich pretraining of INLINEFORM0 on the development set. In particular, by using punctuation information, the F-score increases from 95.86% to 96.25%, with a relative error reduction of 9.4%. This is consistent with the observation of BIBREF11, who show that punctuation is more effective than mutual information and access variety as semi-supervised data for a statistical word segmentation model. With automatically segmented data, heterogeneous segmentation and POS information, the F-score increases to 96.26%, 96.27% and 96.22%, respectively, showing the relevance of all information sources to neural segmentation, which is consistent with observations made for statistical word segmentation BIBREF16, BIBREF12, BIBREF28. Finally, by integrating all of the above information via multi-task learning, the F-score is further improved to 96.48%, with a 15.0% relative error reduction.
Both our model and BIBREF5 use global learning and beam search, but our networks are different. BIBREF5 utilizes the action history with an LSTM encoder, while we use partial-word rather than action information. In addition, the character and character-bigram embeddings are fine-tuned in our model, while BIBREF5 keep the embeddings fixed during training. We study the F-measure distribution with respect to sentence length on our baseline model, our multitask pretraining model and BIBREF5. In particular, we cluster the sentences in the development dataset into 6 categories based on their length and evaluate the F1-value for each category. As shown in Figure FIGREF35, the models give different error distributions, with our models being more robust to sentence length compared with BIBREF5. Their model is better on very short sentences, but worse in all other cases. This shows the relative advantages of our model.
Final Results
Our final results on CTB6 are shown in Table TABREF38, which lists the results of several current state-of-the-art methods. Without multitask pretraining, our model gives an F-score of 95.44%, which is higher than that of the neural segmentor of BIBREF5, which gives the best accuracies among pure neural segmentors on this dataset. By using multitask pretraining, the result increases to 96.21%, with a relative error reduction of 16.9%. In comparison, BIBREF11 investigated heterogeneous semi-supervised learning on a state-of-the-art statistical model, obtaining a relative error reduction of 13.8%. Our findings show that external data can be as useful for neural segmentation as for statistical segmentation.
Our final results compare favourably to the best statistical models, including those using semi-supervised learning BIBREF11, BIBREF12, and those leveraging joint POS and syntactic information BIBREF37. They also outperform the best neural models, in particular BIBREF5*, which is a hybrid neural and statistical model that integrates manual discrete features into a word-based neural model. We achieve the best reported F-score on this dataset. To our knowledge, this is the first time a pure neural network model outperforms all existing methods on this dataset when the use of external data is allowed. We also evaluate our model pretrained only on punctuation and auto-segmented data, which do not require additional manual labels. The results on the CTB test data show accuracies of 95.8% and 95.7%, respectively, which are comparable with those of the statistical semi-supervised methods BIBREF11, BIBREF12. They are also among the top-performing methods in Table TABREF38. Compared with discrete semi-supervised methods BIBREF11, BIBREF12, our semi-supervised model is free from hand-crafted features.
In addition to CTB6, which has been the most commonly adopted dataset in recent segmentation research, we also evaluate our results on the SIGHAN 2005 bakeoff and Weibo datasets to examine cross-domain robustness. The state-of-the-art methods for which results are recorded on these datasets are listed in Table TABREF40. Most neural models report results only on the PKU and MSR datasets of the bakeoff test sets, which are in simplified Chinese. The AS and CityU corpora are in traditional Chinese, sourced from Taiwan and Hong Kong corpora, respectively. We map them into simplified Chinese before segmentation. The Weibo corpus is in yet another genre, being social media text. BIBREF41 achieved the best results on this dataset by using a statistical model with features learned using external lexicons, the CTB7 corpus and the People's Daily corpus. Similar to Table TABREF38, our method gives the best accuracies on all corpora except for MSR, where it underperforms the hybrid model of BIBREF5 by 0.2%. To our knowledge, we are the first to report results for a neural segmentor on more than 3 datasets, with consistently competitive results. This verifies that knowledge learned from a certain set of resources can be used to enhance cross-domain robustness when training a neural segmentor for different datasets, which is of practical importance.
Conclusion
We investigated rich external resources for enhancing neural word segmentation, by building a globally optimised beam-search model that leverages both character and word contexts. Taking each type of external resource as an auxiliary classification task, we use neural multi-task learning to pre-train a set of shared parameters for character contexts. Results show that rich pretraining leads to a 15.4% relative error reduction, and our model gives results highly competitive with the best systems on six different benchmarks.
Acknowledgments
We thank the anonymous reviewers for their insightful comments and acknowledge the support of NSFC 61572245. We would like to thank Meishan Zhang for insightful discussions and assistance with coding. Yue Zhang is the corresponding author. | Unanswerable |
25e4dbc7e211a1ebe02ee8dff675b846fb18fdc5 | 25e4dbc7e211a1ebe02ee8dff675b846fb18fdc5_0 | Q: What external sources are used?
Text: Introduction
There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4, BIBREF5. Neural network models have been exploited due to their strength in non-sparse representation learning and their non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given accuracies comparable to those of the best statistical models.
With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙角(corner)” BIBREF0 , which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams BIBREF6 , BIBREF1 and words BIBREF2 , BIBREF5 have also been shown to improve segmentation accuracies.
With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on five-character windows BIBREF0 , BIBREF6 , BIBREF1 , BIBREF7 , as well as LSTMs on characters BIBREF3 , BIBREF8 and words BIBREF2 , BIBREF4 , BIBREF5 . For structured learning and inference, CRF has been used for character sequence labelling models BIBREF1 , BIBREF3 and structural beam search has been used for word-based segmentors BIBREF4 , BIBREF5 .
Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing BIBREF9. Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning more thoroughly, by collecting clues such as mutual information and punctuation BIBREF10, BIBREF11, and by making use of self-predictions BIBREF12, BIBREF13. It has also utilised heterogeneous annotations such as POS BIBREF14, BIBREF15 and segmentation under different standards BIBREF16. To our knowledge, such rich external information has not been systematically investigated for neural segmentation.
We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5, we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9, BIBREF17, BIBREF18, which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important submodule, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19, casting each external source of information as an auxiliary classification task that shares the five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor.
Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. Our implementation is based on LibN3L BIBREF20 . Code and models can be downloaded from http://gitHub.com/jiesutd/RichWordSegmentor
Related Work
Work on statistical word segmentation dates back to the 1990s BIBREF21 . State-of-the-art approaches include character sequence labeling models BIBREF22 using CRFs BIBREF23 , BIBREF24 and max-margin structured models leveraging word features BIBREF25 , BIBREF26 , BIBREF27 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF28 . Our work belongs to recent neural word segmentation.
| Raw data from Gigaword, Automatically segmented text from Gigaword, Heterogenous training data from People's Daily, POS data from People's Daily |
9893c5f36f9d503678749cb0466eeaa0cfc9413f | 9893c5f36f9d503678749cb0466eeaa0cfc9413f_0 | Q: What submodules does the model consist of?
Text: Introduction
There has been a recent shift of research attention in the word segmentation literature from statistical methods to deep learning BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 . Neural network models have been exploited due to their strength in non-sparse representation learning and non-linear power in feature combination, which have led to advances in many NLP tasks. So far, neural word segmentors have given comparable accuracies to the best statictical models.
With respect to non-sparse representation, character embeddings have been exploited as a foundation of neural word segmentors. They serve to reduce sparsity of character ngrams, allowing, for example, “猫(cat) 躺(lie) 在(in) 墙角(corner)” to be connected with “狗(dog) 蹲(sit) 在(in) 墙角(corner)” BIBREF0 , which is infeasible by using sparse one-hot character features. In addition to character embeddings, distributed representations of character bigrams BIBREF6 , BIBREF1 and words BIBREF2 , BIBREF5 have also been shown to improve segmentation accuracies.
With respect to non-linear modeling power, various network structures have been exploited to represent contexts for segmentation disambiguation, including multi-layer perceptrons on five-character windows BIBREF0 , BIBREF6 , BIBREF1 , BIBREF7 , as well as LSTMs on characters BIBREF3 , BIBREF8 and words BIBREF2 , BIBREF4 , BIBREF5 . For structured learning and inference, CRF has been used for character sequence labelling models BIBREF1 , BIBREF3 and structural beam search has been used for word-based segmentors BIBREF4 , BIBREF5 .
Previous research has shown that segmentation accuracies can be improved by pretraining character and word embeddings over large Chinese texts, which is consistent with findings on other NLP tasks, such as parsing BIBREF9 . Pretraining can be regarded as one way of leveraging external resources to improve accuracies, which is practically highly useful and has become a standard practice in neural NLP. On the other hand, statistical segmentation research has exploited raw texts for semi-supervised learning, by collecting clues from raw texts more thoroughly such as mutual information and punctuation BIBREF10 , BIBREF11 , and making use of self-predictions BIBREF12 , BIBREF13 . It has also utilised heterogenous annotations such as POS BIBREF14 , BIBREF15 and segmentation under different standards BIBREF16 . To our knowledge, such rich external information has not been systematically investigated for neural segmentation.
We fill this gap by investigating rich external pretraining for neural segmentation. Following BIBREF4 and BIBREF5 , we adopt a globally optimised beam-search framework for neural structured prediction BIBREF9 , BIBREF17 , BIBREF18 , which allows word information to be modelled explicitly. Different from previous work, we make our model conceptually simple and modular, so that the most important sub module, namely a five-character window context, can be pretrained using external data. We adopt a multi-task learning strategy BIBREF19 , casting each external source of information as a auxiliary classification task, sharing a five-character window network. After pretraining, the character window network is used to initialize the corresponding module in our segmentor.
Results on 6 different benchmarks show that our method outperforms the best statistical and neural segmentation models consistently, giving the best reported results on 5 datasets in different domains and genres. Our implementation is based on LibN3L BIBREF20 . Code and models can be downloaded from http://gitHub.com/jiesutd/RichWordSegmentor
Related Work
Work on statistical word segmentation dates back to the 1990s BIBREF21 . State-of-the-art approaches include character sequence labeling models BIBREF22 using CRFs BIBREF23 , BIBREF24 and max-margin structured models leveraging word features BIBREF25 , BIBREF26 , BIBREF27 . Semi-supervised methods have been applied to both character-based and word-based models, exploring external training data for better segmentation BIBREF11 , BIBREF12 , BIBREF13 , BIBREF28 . Our work belongs to recent neural word segmentation.
To our knowledge, there has been no work in the literature systematically investigating rich external resources for neural word segmentation training. Closest in spirit to our work, BIBREF11 empirically studied the use of various external resources for enhancing a statistical segmentor, including character mutual information, access variety information, punctuation and other statistical information. Their baseline is similar to ours in the sense that both character and word contexts are considered. On the other hand, their model is statistical while ours is neural. Consequently, they integrate external knowledge as features, while we integrate it by shared network parameters. Our results show a similar degree of error reduction compared to theirs by using external data.
Our model inherits from previous findings on context representations, such as character windows BIBREF6 , BIBREF1 , BIBREF7 and LSTMs BIBREF3 , BIBREF8 . Similar to BIBREF5 and BIBREF4 , we use word context on top of character context. However, words play a relatively less important role in our model, and we find that word LSTM, which has been used by all previous neural segmentation work, is unnecessary for our model. Our model is conceptually simpler and more modularised compared with BIBREF5 and BIBREF4 , allowing a central sub module, namely a five-character context window, to be pretrained.
Model
Our segmentor works incrementally from left to right, as the example shown in Table TABREF1 . At each step, the state consists of a sequence of words that have been fully recognized, denoted as INLINEFORM0 , a current partially recognized word INLINEFORM1 , and a sequence of next incoming characters, denoted as INLINEFORM2 , as shown in Figure FIGREF4 . Given an input sentence, INLINEFORM3 and INLINEFORM4 are initialized to INLINEFORM5 and INLINEFORM6 , respectively, and INLINEFORM7 contains all the input characters. At each step, a decision is made on INLINEFORM8 , either appending it as a part of INLINEFORM9 , or seperating it as the beginning of a new word. The incremental process repeats until INLINEFORM10 is empty and INLINEFORM11 is null again ( INLINEFORM12 , INLINEFORM13 ). Formally, the process can be regarded as a state-transition process, where a state is a tuple INLINEFORM14 , and the transition actions include Sep (seperate) and App (append), as shown by the deduction system in Figure FIGREF7 .
In the figure, INLINEFORM0 denotes the score of a state, given by a neural network model. The score of the initial state (i.e. axiom) is 0, and the score of a non-axiom state is the sum of scores of all incremental decisions resulting in the state. Similar to BIBREF5 and BIBREF4 , our model is a global structural model, using the overall score to disambiguate states, which correspond to sequences of inter-dependent transition actions.
Different from previous work, the structure of our scoring network is shown in Figure FIGREF4 . It consists of three main layers. On the bottom is a representation layer, which derives dense representations INLINEFORM0 and INLINEFORM1 for INLINEFORM2 and INLINEFORM3 , respectively. We compare various distributed representations and neural network structures for learning INLINEFORM4 and INLINEFORM5 , detailed in Section SECREF8 . On top of the representation layer, we use a hidden layer to merge INLINEFORM6 and INLINEFORM7 into a single vector DISPLAYFORM0
The hidden feature vector INLINEFORM0 is used to represent the state INLINEFORM1 , for calculating the scores of the next action. In particular, a linear output layer with two nodes is employed: DISPLAYFORM0
The first and second nodes of INLINEFORM0 represent the scores of Sep and App given INLINEFORM1 , namely INLINEFORM2 and INLINEFORM3 , respectively.
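A minimal NumPy sketch of this two-layer scorer is given below (the tanh nonlinearity, the dimensions and the parameter names are our assumptions; the paper specifies only a hidden layer followed by a linear output layer with two nodes):

    import numpy as np

    def action_scores(r_c, r_w, W_h, b_h, W_o, b_o):
        h = np.tanh(W_h @ np.concatenate([r_c, r_w]) + b_h)  # hidden vector for the state
        o = W_o @ h + b_o                                    # two nodes: Sep and App scores
        return {"SEP": float(o[0]), "APP": float(o[1])}

    # Example with illustrative sizes: r_c and r_w of dim 50 each, hidden layer of dim 100.
    rng = np.random.default_rng(0)
    W_h, b_h = rng.normal(size=(100, 100)), np.zeros(100)
    W_o, b_o = rng.normal(size=(2, 100)), np.zeros(2)
    print(action_scores(rng.normal(size=50), rng.normal(size=50), W_h, b_h, W_o, b_o))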
Representation Learning
Characters. We investigate two different approaches to encoding incoming characters, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF22 , BIBREF1 , using a five-character window INLINEFORM0 to represent incoming characters. As shown in Figure FIGREF13 , a multi-layer perceptron (MLP) is employed to derive a five-character window vector INLINEFORM1 from single-character vector representations INLINEFORM2 . DISPLAYFORM0
For the latter, we follow recent work BIBREF3 , BIBREF5 , using a bi-directional LSTM to encode the input character sequence. In particular, the bi-directional LSTM hidden vector INLINEFORM0 of the next incoming character INLINEFORM1 is used to represent the coming characters INLINEFORM2 given a state. Intuitively, a five-character window provides a local context from which the meaning of the middle character can be better disambiguated. The LSTM, on the other hand, captures larger contexts, which can contain more useful clues for disambiguation but also irrelevant information. It is therefore interesting to investigate a combination of their strengths, by first deriving a locally disambiguated version of INLINEFORM3 , and then feeding it to the LSTM for a globally disambiguated representation.
Now with regard to the single-character vector representation INLINEFORM0 , we follow previous work and consider both character embedding INLINEFORM1 and character-bigram embedding INLINEFORM2 , investigating the effect of each on the accuracies. When both INLINEFORM3 and INLINEFORM4 are utilized, the concatenated vector is taken as INLINEFORM5 .
Partial Word. We take a very simple approach to representing the partial word INLINEFORM0 , using the embedding vectors of its first and last characters, as well as the embedding of its length. Length embeddings are randomly initialized and then tuned in model training. We find that INLINEFORM1 has relatively little influence on empirical segmentation accuracy. DISPLAYFORM0
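A small sketch of this partial-word representation (the concatenation and the cap on the length index are our assumptions, not specified by the paper):

    import numpy as np

    def partial_word_repr(partial, char_emb, len_emb, max_len=5):
        # char_emb: dict mapping character -> vector; len_emb: dict mapping length -> vector.
        first = char_emb[partial[0]]
        last = char_emb[partial[-1]]
        length = len_emb[min(len(partial), max_len)]  # lengths beyond max_len share one embedding
        return np.concatenate([first, last, length])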
Word. Similar to the character case, we investigate two different approaches to encoding the words that have already been recognized, namely a window approach and an LSTM approach. For the former, we follow prior methods BIBREF25 , BIBREF27 , using the two-word window INLINEFORM0 to represent recognized words. A hidden layer is employed to derive a two-word vector INLINEFORM1 from single word embeddings INLINEFORM2 and INLINEFORM3 . DISPLAYFORM0
For the latter, we follow BIBREF5 and BIBREF4 , using a uni-directional LSTM on words that have been recognized.
Pretraining
Neural network models for NLP benefit from pretraining of word/character embeddings, learning distributed semantic information from large raw texts to reduce sparsity. The three basic elements in our neural segmentor, namely characters, character bigrams and words, can all be pretrained over large unsegmented data. We pretrain the five-character window network in Figure FIGREF13 as a unit, learning the MLP parameters together with character and bigram embeddings. We consider four types of commonly explored external data to this end, all of which have been studied for statistical word segmentation, but not for neural network segmentors.
Raw Text. Although raw texts do not contain explicit word boundary information, statistics such as mutual information between consecutive characters can be useful features for guiding segmentation BIBREF11 . For neural segmentation, these distributional statistics can be implicitly learned by pretraining character embeddings. We therefore consider a more explicit clue for pretraining our character window network, namely punctuation BIBREF10 .
Punctuation can serve as a type of explicit mark-up BIBREF30 , indicating that the two characters on its left and right belong to two different words. We leverage this source of information by extracting character five-grams excluding punctuation from raw sentences, using them as inputs to classify whether there is punctuation before the middle character. Denoting the resulting five-character window as INLINEFORM0 , the MLP in Figure FIGREF13 is used to derive its representation INLINEFORM1 , which is then fed to a softmax layer for binary classification: DISPLAYFORM0
Here INLINEFORM0 indicates the probability of a punctuation mark existing before INLINEFORM1 . Standard backpropagation training of the MLP in Figure FIGREF13 can be done jointly with the training of INLINEFORM2 and INLINEFORM3 . After such training, the embedding INLINEFORM4 and MLP values can be used to initialize the corresponding parameters for INLINEFORM5 in the main segmentor, before its training.
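As an illustration of this data construction, the following sketch extracts (five-character window, punctuation-before-middle-character) instances from raw text; the punctuation set and the handling of consecutive punctuation marks are our own assumptions:

    PUNC = set("，。！？；：、,.!?;:()")

    def punctuation_examples(sentence, window=5):
        chars, punc_before, seen_punc = [], [], False
        for ch in sentence:
            if ch in PUNC:
                seen_punc = True
            else:
                chars.append(ch)
                punc_before.append(seen_punc)
                seen_punc = False
        half = window // 2
        # each example pairs a five-character window with whether punctuation
        # occurred immediately before its middle character in the raw text
        return [("".join(chars[i - half:i + half + 1]), punc_before[i])
                for i in range(half, len(chars) - half)]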
Automatically Segmented Text. Large texts automatically segmented by a baseline segmentor can be used for self-training BIBREF13 or deriving statistical features BIBREF12 . We adopt a simple strategy, taking automatically segmented text as silver data to pretrain the five-character window network. Given INLINEFORM0 , INLINEFORM1 is derived using the MLP in Figure FIGREF13 , and then used to classify the segmentation of INLINEFORM2 into B(beginning)/M(middle)/E(end)/S(single-character word) labels. DISPLAYFORM0
Here INLINEFORM0 and INLINEFORM1 are model parameters. Training can be done in the same way as training with punctuation.
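As a sketch, such silver B/M/E/S labels can be derived from an automatically segmented sentence as follows (illustration only):

    def bmes_labels(words):
        labels = []
        for w in words:
            if len(w) == 1:
                labels.append("S")
            else:
                labels.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
        return labels

    # e.g. bmes_labels(["中国", "人民"]) -> ["B", "E", "B", "E"]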
Heterogeneous Training Data. Multiple segmentation corpora exist for Chinese, with different segmentation granularities. There has been investigation into leveraging two corpora under different annotation standards to improve statistical segmentation BIBREF16 . We try to utilize heterogeneous treebanks by taking an external treebank as labeled data, training a B/M/E/S classifier for the character window network. DISPLAYFORM0
POS Data. Previous research has shown that POS information is closely related to segmentation BIBREF14 , BIBREF15 . We verify the utility of POS information for our segmentor by pretraining a classifier that predicts the POS tag at each character, according to the character window representation INLINEFORM0 . In particular, given INLINEFORM1 , the POS of the word that INLINEFORM2 belongs to is used as the output. DISPLAYFORM0
Multitask Learning. While each type of external training data can offer one source of segmentation information, different external data can be complementary to each other. We aim to inject all sources of information into the character window representation INLINEFORM0 by using it as a shared representation for different classification tasks. Neural models have been shown to be capable of multi-task learning via parameter sharing BIBREF19 . As shown in Figure FIGREF13 , in our case, the output layer for each task is independent, but the hidden layer INLINEFORM1 and all layers below INLINEFORM2 are shared.
For training with all sources above, we randomly sample sentences from the Punc./Auto-seg/Heter./POS sources with a ratio of 10/1/1/1; for each sentence in the punctuation corpus, we take only two characters (the character before and the character after the punctuation) as input instances.
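This sampling scheme can be sketched as a simple weighted stream over the four sources (source names and corpus contents here are placeholders of our own):

    import random

    def multitask_stream(corpora, rng=random):
        # corpora: dict mapping source name -> list of training instances.
        weights = {"punc": 10, "auto_seg": 1, "heter": 1, "pos": 1}
        pool = [name for name, w in weights.items() for _ in range(w)]
        while True:
            source = rng.choice(pool)
            yield source, rng.choice(corpora[source])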
Algorithm (online learning with beam search and early update)
Input: a training example with character sequence x and gold action sequence y
Output: updated model parameters
Process:
    agenda <- { initial state }
    for j in [0 : Len(x)):
        beam = []
        for each state s in agenda:
            s' = Action(s, Sep); Add(s', beam)
            s' = Action(s, App); Add(s', beam)
        agenda <- Top(beam, B)
        if the gold partial action sequence y[0 : j+1] is not in agenda:   # early update
            best = BestIn(agenda)
            Update(best, y[0 : j+1], parameters)
            return
    best = BestIn(agenda)
    Update(best, y, parameters)   # final update
    return
Training
Decoding and Training
To train the main segmentor, we adopt the global transition-based learning and beam-search strategy of BIBREF31 . For decoding, standard beam search is used, where the B best partial output hypotheses at each step are maintained in an agenda. Initially, the agenda contains only the start state. At each step, all hypotheses in the agenda are expanded by applying all possible actions, and the B highest-scoring resulting hypotheses are used as the agenda for the next step.
For training, the same decoding process is applied to each training example INLINEFORM0 . At step INLINEFORM1 , if the gold-standard sequence of transition actions INLINEFORM2 falls out of the agenda, max-margin update is performed by taking the current best hypothesis INLINEFORM3 in the beam as a negative example, and INLINEFORM4 as a positive example. The loss function is DISPLAYFORM0
where INLINEFORM0 is the number of incorrect local decisions in INLINEFORM1 , and INLINEFORM2 controls the score margin.
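One common form of such a loss, consistent with the description above (the exact formulation in the paper may differ), is a hinge whose margin grows with the number of incorrect local decisions:

    def max_margin_loss(score_neg, score_gold, num_wrong_decisions, eta=0.2):
        # eta controls the score margin; its value here is illustrative only.
        return max(0.0, score_neg + eta * num_wrong_decisions - score_gold)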
The strategy above is early-update BIBREF32 . On the other hand, if the gold-standard hypothesis does not fall out of the agenda until the full sentence has been segmented, a final update is made between the highest scored hypothesis INLINEFORM0 (non-gold standard) in the agenda and the gold-standard INLINEFORM1 , using exactly the same loss function. Pseudocode for the online learning algorithm is shown in Algorithm SECREF14 .
We use Adagrad BIBREF33 to optimize model parameters, with an initial learning rate INLINEFORM0 . INLINEFORM1 regularization and dropout BIBREF34 on input are used to reduce overfitting, with a INLINEFORM2 weight INLINEFORM3 and a dropout rate INLINEFORM4 . All the parameters in our model are randomly initialized to a value INLINEFORM5 , where INLINEFORM6 BIBREF35 . We fine-tune character and character bigram embeddings, but not word embeddings, according to BIBREF5 .
Experimental Settings
Data. We use Chinese Treebank 6.0 (CTB6) BIBREF36 as our main dataset. Training, development and test set splits follow previous work BIBREF37 . In order to verify the robustness of our model, we additionally use the SIGHAN 2005 bake-off BIBREF38 and the NLPCC 2016 shared task for Weibo segmentation BIBREF39 as test datasets, where the standard splits are used. For pretraining embeddings of words, characters and character bigrams, we use Chinese Gigaword (simplified Chinese sections), automatically segmented using ZPar 0.6 off-the-shelf BIBREF25 , the statistics of which are shown in Table TABREF24 .
For pretraining character representations, we extract punctuation classification data from the Gigaword corpus, and use the word-based ZPar and a standard character-based CRF model BIBREF40 to obtain automatic segmentation results. We compare pretraining using ZPar results only and using results that both segmentors agree on. For the heterogeneous segmentation corpus and POS data, we use five months of the People's Daily corpus. Statistics are listed in Table TABREF24 .
Evaluation. The standard word precision, recall and F1 measure BIBREF38 are used to evaluate segmentation performances.
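For reference, word-level precision, recall and F1 can be computed by matching word spans between the gold and predicted segmentations, as in the following sketch (our own illustration):

    def to_spans(words):
        spans, start = set(), 0
        for w in words:
            spans.add((start, start + len(w)))
            start += len(w)
        return spans

    def prf(gold_words, pred_words):
        gold, pred = to_spans(gold_words), to_spans(pred_words)
        tp = len(gold & pred)
        p = tp / len(pred) if pred else 0.0
        r = tp / len(gold) if gold else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f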
Hyper-parameter Values. We adopt commonly used values for most hyperparameters, but tune the sizes of the hidden layers on the development set. The values are summarized in Table TABREF20 .
Development Experiments
We perform development experiments to verify the usefulness of various context representations, network configurations and different pretraining methods, respectively.
The influence of character and word context representations is empirically studied by varying the network structures for INLINEFORM0 and INLINEFORM1 in Figure FIGREF4 , respectively. All the experiments in this section are performed using a beam size of 8.
Character Context. We fix the word representation INLINEFORM0 to a 2-word window and compare different character context representations. The results are shown in Table TABREF27 , where “no char” represents our model without INLINEFORM1 , “5-char window” represents a five-character window context, “char LSTM” represents character LSTM context and “5-char window + LSTM” represents a combination, detailed in Section SECREF8 . “-char emb” and “-bichar emb” represent the combined window and LSTM context without character and character-bigram information, respectively.
As can be seen from the table, without character information, the F-score is 84.62%, demonstrating the necessity of character contexts. Using window and LSTM representations, the F-scores increase to 95.41% and 95.51%, respectively. A combination of the two lead to further improvement, showing that local and global character contexts are indeed complementary, as hypothesized in Section SECREF8 . Finally, by removing character and character-bigram embeddings, the F-score decreases to 95.20% and 94.27%, respectively, which suggests that character bigrams are more useful compared to character unigrams. This is likely because they contain more distinct tokens and hence offer a larger parameter space.
Word Context. The influence of various word contexts is shown in Table TABREF28 . Without using word information, our segmentor gives an F-score of 95.66% on the development data. Using a context of only INLINEFORM0 (1-word window), the F-measure increases to 95.78%. This shows that word contexts are far less important in our model compared to character contexts, and also compared to word contexts in previous word-based segmentors BIBREF5 , BIBREF4 . This is likely due to the difference in our neural network structures, and to the fact that we fine-tune both character and character bigram embeddings, which significantly enlarges the adjustable parameter space as compared with BIBREF5 . The fact that word contexts contribute relatively less than character contexts is also not surprising, given that word-based neural segmentors do not outperform the best character-based models by large margins. Since the character context is what we pretrain, our model relies more heavily on it.
With both INLINEFORM0 and INLINEFORM1 being used for the context, the F-score further increases to 95.86%, showing that a 2-word window is useful by offering more contextual information. On the other hand, when INLINEFORM2 is also considered, the F-score does not improve further. This is consistent with previous findings of statistical word segmentation BIBREF25 , which adopt a 2-word context. Interestingly, using a word LSTM does not bring further improvements, even when it is combined with a window context. This suggests that global word contexts may not offer crucial additional information compared with local word contexts. Intuitively, words are significantly less polysemous compared with characters, and hence can serve as effective contexts even if used locally, to supplement a more crucial character context.
We verify the effectiveness of structured learning and inference by measuring the influence of beam size on the baseline segmentor. Figure FIGREF30 shows the F-scores against different numbers of training iterations with beam size 1,2,4,8 and 16, respectively. When the beam size is 1, the inference is local and greedy. As the size of the beam increases, more global structural ambiguities can be resolved since learning is designed to guide search. A contrast between beam sizes 1 and 2 demonstrates the usefulness of structured learning and inference. As the beam size increases, the gain by doubling the beam size decreases. We choose a beam size of 8 for the remaining experiments for a tradeoff between speed and accuracy.
Table TABREF31 shows the effectiveness of rich pretraining of INLINEFORM0 on the development set. In particular, by using punctuation information, the F-score increases from 95.86% to 96.25%, with a relative error reduction of 9.4%. This is consistent with the observation of BIBREF11 , who show that punctuation is more effective compared with mutual information and access variety as semi-supervised data for a statistical word segmentation model. With automatically-segmented data, heterogenous segmentation and POS information, the F-score increases to 96.26%, 96.27% and 96.22%, respectively, showing the relevance of all information sources to neural segmentation, which is consistent with observations made for statistical word segmentation BIBREF16 , BIBREF12 , BIBREF28 . Finally, by integrating all above information via multi-task learning, the F-score is further improved to 96.48%, with a 15.0% relative error reduction.
Both our model and BIBREF5 use global learning and beam search, but our network is different. BIBREF5 utilizes the action history with LSTM encoder, while we use partial word rather than action information. Besides, the character and character bigram embeddings are fine-tuned in our model while BIBREF5 set the embeddings fixed during training. We study the F-measure distribution with respect to sentence length on our baseline model, multitask pretraining model and BIBREF5 . In particular, we cluster the sentences in the development dataset into 6 categories based on their length and evaluate their F1-values, respectively. As shown in Figure FIGREF35 , the models give different error distributions, with our models being more robust to the sentence length compared with BIBREF5 . Their model is better on very short sentences, but worse on all other cases. This shows the relative advantages of our model.
Final Results
Our final results on CTB6 are shown in Table TABREF38 , which lists the results of several current state-of-the-art methods. Without multitask pretraining, our model gives an F-score of 95.44%, which is higher than the neural segmentor of BIBREF5 , which gives the best accuracies among pure neural segmentors on this dataset. By using multitask pretraining, the result increases to 96.21%, with a relative error reduction of 16.9%. In comparison, BIBREF11 investigated heterogeneous semi-supervised learning on a state-of-the-art statistical model, obtaining a relative error reduction of 13.8%. Our findings show that external data can be as useful for neural segmentation as for statistical segmentation.
Our final results compare favourably to the best statistical models, including those using semi-supervised learning BIBREF11 , BIBREF12 , and those leveraging joint POS and syntactic information BIBREF37 . Our model also outperforms the best neural models, in particular BIBREF5 *, which is a hybrid neural and statistical model integrating manual discrete features into a word-based neural model. We achieve the best reported F-score on this dataset. To our knowledge, this is the first time a pure neural network model outperforms all existing methods on this dataset, allowing the use of external data. We also evaluate our model pretrained only on punctuation and auto-segmented data, which do not include additional manual labels. The results on the CTB test data show accuracies of 95.8% and 95.7%, respectively, which are comparable with those of statistical semi-supervised methods BIBREF11 , BIBREF12 . They are also among the top-performing methods in Table TABREF38 . Compared with discrete semi-supervised methods BIBREF11 , BIBREF12 , our semi-supervised model is free from hand-crafted features.
In addition to CTB6, which has been the most commonly adopted by recent segmentation research, we additionally evaluate our results on the SIGHAN 2005 bakeoff and Weibo datasets, to examine cross domain robustness. Different state-of-the-art methods for which results are recorded on these datasets are listed in Table TABREF40 . Most neural models reported results only on the PKU and MSR datasets of the bakeoff test sets, which are in simplified Chinese. The AS and CityU corpora are in traditional Chinese, sourced from Taiwan and Hong Kong corpora, respectively. We map them into simplified Chinese before segmentation. The Weibo corpus is in a yet different genre, being social media text. BIBREF41 achieved the best results on this dataset by using a statistical model with features learned using external lexicons, the CTB7 corpus and the People Daily corpus. Similar to Table TABREF38 , our method gives the best accuracies on all corpora except for MSR, where it underperforms the hybrid model of BIBREF5 by 0.2%. To our knowledge, we are the first to report results for a neural segmentor on more than 3 datasets, with competitive results consistently. It verifies that knowledge learned from a certain set of resources can be used to enhance cross-domain robustness in training a neural segmentor for different datasets, which is of practical importance.
Conclusion
We investigated rich external resources for enhancing neural word segmentation, by building a globally optimised beam-search model that leverages both character and word contexts. Taking each type of external resource as an auxiliary classification task, we use neural multi-task learning to pre-train a set of shared parameters for character contexts. Results show that rich pretraining leads to a 15.4% relative error reduction, and our model gives results highly competitive with the best systems on six different benchmarks.
Acknowledgments
We thank the anonymous reviewers for their insightful comments and the support of NSFC 61572245. We would like to thank Meishan Zhang for his insightful discussion and assisting coding. Yue Zhang is the corresponding author. | five-character window context |
5d85d7d4d013293b4405beb4b53fa79ac7c03401 | 5d85d7d4d013293b4405beb4b53fa79ac7c03401_0 | Q: How they add human prefference annotation to fine-tuning process?
Text: Introduction
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.
We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
We conduct experiments on both story generation task and open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal model and our approach helps alleviate this problem.
Related Work
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assigns a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. Recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.
Another related work of our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It is first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by competing generators against discriminators. Their approach is an approximation of skill rating as the original skill rating system requires game played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, thus can not distinguish cases where the discriminator is confident enough or not. More importantly, their approach is only designed for evaluating GANs while our approach can be used for any NLG models.
Methodology
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
Methodology ::: Learning to Compare
The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint are of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iteration and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the margin between the quality two selected version of the model, the easier for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with larger margin (i.e. more training iterations between two selected checkpoints) during initial training stage and gradually decrease the margin to let the model gradually learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
The comparative evaluator is trained with maximum likelihood estimation (MLE) objective, as described in eq DISPLAY_FORM6
where $\mathcal {X}$ is the set of pairwise training examples contructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator, the architecture of the resulting comparative evaluator is illustrated by Figure 1. Note that the compared sample A and B are based on the same context, which ensures that they are comparable.
Methodology ::: Skill Rating
In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator.
Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We come up with a simple rule which increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the changes in its skill rating when it wins/loses if it draws with another player with higher/lower skill rating. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill rating of selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the order of skill ratings of compared models keeps the same after each model is selected at least 50 times. While the sampling procedure can be optimized by bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
Experiments
We set up experiments in order to answer the following research questions:
RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models?
RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better?
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping?
RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem?
Experiments ::: Experimental Settings ::: Datasets
We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics are thus critical. For open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises of 13k dialogues with an average of 7.9 turns per dialog.
Experiments ::: Experimental Settings ::: Compared Models and Metrics
As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.
Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for choosing hyperparameter by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparison between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the wining rate of latest checkpoint keeps being smaller than its losing rate for 5 iterations.
Experiments ::: Experimental Settings ::: Detail of Parameterized Evaluators
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform ablation study by evaluating three variants of the comparative evaluator where it is trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT.
Experiments ::: Experimental Settings ::: Human Evaluation Procedure
As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics for performing hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations for performing hyperparameter selecting and early-stopping with five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) of the 20 evaluation groups. We invite 20 human annotators which are all graduate students with good English language proficiency to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scales from 1 to 5, higher score indicates better overall sample quality. According to experimental results from BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inner-annotator agreement score, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored to 1-5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score $\kappa =0.53$ for directly scoring and $\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator.
Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation
To test the correlation of different automated metrics with respect to human preference, we employ different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For comparative evaluator, as the evaluation is performed pairwisely and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references by the comparative evaluator. A sample gets 3 points when beats a reference, 1 point when draws with the reference, and get 0 point when loses, 2) we adopt skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating for each sample as its score. To ensure the computational budget to be roughly the same, we fix the number of plays in skill rating to 10,000.
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing it with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising as the employed skill rating is able to handle the inherent variance of players (i.e. NLG models). As this variance does not exist when we regard a sample as a model which always generates the same sample.
Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation
As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For comparative evaluator, we propose three different approaches to get an absolute score for each sample: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as model-level score, 2) we use the proposed skill rating system to get a model-level skill rating for each compared model.
Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation between conventional evaluation metrics including BLEU and perplexity demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation.
Experiments ::: Experimental Designs & Results ::: RQ3&4: Automated Metrics for Model Training
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) they succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and the average human-annotated score for their selected models.
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and result in sub-optimal result when performing hyperparameter search and selecting the best performing checkpoints. Converting evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-neglectable improvements without changing model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and that the proposed method can, to some extent, alleviate this problem.
Experiments ::: Qualitative Analysis
We present several comparison examples in the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (i.e. “I don't know”) should be considered as of worse quality. The second example suggests that our approach handles the diversity in possible responses well, as it regards both positive response and negative response as valid responses. Hopefully, these examples may provide us with some insights about why the proposed metric correlates better with human preference.
Experiments ::: Ablation Study
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models.
w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.
w/o human preference annotation Training without human annotated preference data (i.e. only with strong and weak supervision).
w/o tie option The variant of comparative evaluator where the model must select the better sample rather than able to admit its uncertainty.
w/o BERT The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective as our model correlates much better than adversarial evaluator. The tie option is also very important as it can prevent the comparative evaluator from making uncertain decision and model the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference without training with human preference annotation, this is very important in practice as human annotations are not always available. Finally, we find that transferring the natural language understanding ability from BERT to be very important for the final performance.
Discussion and Conclusion
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. Our model allows the model to admit its uncertainty with the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. | human preference annotation is available, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair |
6dc9960f046ec6bd280a721724458f66d5a9a585 | 6dc9960f046ec6bd280a721724458f66d5a9a585_0 | Q: What previous automated evalution approaches authors mention?
Text: Introduction
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.
We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
We conduct experiments on both the story generation task and the open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal models, and that our approach helps alleviate this problem.
Related Work
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.
Another line of work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by competing generators against discriminators. Their approach is an approximation of skill rating, as the original skill rating system requires games played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option and thus can not distinguish cases where the discriminator is confident from those where it is not. More importantly, their approach is designed only for evaluating GANs, while our approach can be used for any NLG model.
Methodology
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
Methodology ::: Learning to Compare
The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in terms of quality when two compared samples are both machine-generated or both human written references. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or both from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, which enhances the generalization ability and introduces more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
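To make this construction concrete, the following minimal Python sketch builds such pairs; the function and variable names are ours and are not taken from the authors' code, and both orderings of a real/generated pair are included as a simple form of augmentation.

import itertools
import random

def build_strong_supervision_pairs(real_samples, generated_samples):
    # Strong supervision: a human reference is assumed to beat a generated
    # sample; two samples from the same source are labelled indistinguishable.
    # Generated samples are assumed to come from the same checkpoint of the
    # same model so that the tie label is reasonable in expectation.
    pairs = []
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, ">"))
            pairs.append((s_neg, s_pos, "<"))
    for source in (real_samples, generated_samples):
        for s_i, s_j in itertools.combinations(source, 2):
            pairs.append((s_i, s_j, "~"))
    random.shuffle(pairs)
    return pairs

real = ["a human written story ...", "another human reference ..."]
generated = ["a sampled story ...", "another generated story ..."]
print(len(build_strong_supervision_pairs(real, generated)))  # 10 pairs for n = 2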
One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the margin between the quality of the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with a larger margin (i.e. more training iterations between two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model gradually learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
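A possible implementation of the weak-supervision construction with the curriculum ordering is sketched below; the checkpoint bookkeeping and names are our own simplifications rather than the authors' code.

def build_weak_supervision_pairs(samples_by_iteration, total_iterations, min_margin=0.1):
    # samples_by_iteration: dict mapping a checkpoint's training iteration to a
    # list of samples it generated. A later checkpoint is (weakly) assumed to be
    # better; pairs whose iteration gap is below min_margin * total_iterations
    # are skipped because the quality difference is unreliable.
    iterations = sorted(samples_by_iteration)
    pairs = []
    for i, early in enumerate(iterations):
        for late in iterations[i + 1:]:
            gap = late - early
            if gap < min_margin * total_iterations:
                continue
            for s_old in samples_by_iteration[early]:
                for s_new in samples_by_iteration[late]:
                    pairs.append((s_new, s_old, ">", gap))
                    pairs.append((s_old, s_new, "<", gap))
    # Curriculum learning: present pairs with a larger gap (easier cases) first.
    pairs.sort(key=lambda item: -item[3])
    return [(a, b, label) for a, b, label, _ in pairs]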
The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective:

$\mathcal {L}(\phi ) = - \sum _{(x_1, x_2) \in \mathcal {X}} \log D_\phi ^{Q(x_1, x_2)}(x_1, x_2)$

where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator, the architecture of the resulting comparative evaluator is illustrated by Figure 1. Note that the compared sample A and B are based on the same context, which ensures that they are comparable.
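As an illustration, a comparative evaluator along these lines could be set up with the Hugging Face transformers library roughly as follows; this is a simplified sketch rather than the authors' implementation, and the label mapping and function names are our own choices.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = {">": 0, "<": 1, "~": 2}  # better, worse, indistinguishable
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=3)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(batch_pairs, batch_labels):
    # batch_pairs: list of (sample_a, sample_b) strings that share the same context.
    enc = tokenizer([a for a, _ in batch_pairs], [b for _, b in batch_pairs],
                    padding=True, truncation=True, return_tensors="pt")
    out = model(**enc, labels=torch.tensor([LABELS[l] for l in batch_labels]))
    out.loss.backward()  # cross-entropy over the three relations, i.e. the MLE objective above
    optimizer.step()
    optimizer.zero_grad()
    return out.loss.item()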
Methodology ::: Skill Rating
In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator.
Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We come up with a simple rule which increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the changes in its skill rating when it wins/loses if it draws with another player with higher/lower skill rating. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill rating of selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the order of skill ratings of compared models keeps the same after each model is selected at least 50 times. While the sampling procedure can be optimized by bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
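For concreteness, the sketch below implements the rating loop with a simplified Elo-style update standing in for full Glicko2, together with the tie rule described above; the model and evaluator interfaces are hypothetical placeholders, and a fixed number of games replaces the convergence check.

import random

def expected_score(r_a, r_b):
    # Probability that player A beats player B under an Elo-style model.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def skill_rating(models, contexts, compare, n_games=10000, k=16, tie_ratio=0.1):
    # models: dict name -> callable taking a context and returning one sample.
    # compare: the trained comparative evaluator, returning ">", "<" or "~".
    ratings = {name: 1500.0 for name in models}
    for _ in range(n_games):
        a, b = random.sample(sorted(models), 2)
        context = random.choice(contexts)
        outcome = compare(models[a](context), models[b](context))
        e_a = expected_score(ratings[a], ratings[b])
        if outcome == ">":
            delta = k * (1.0 - e_a)
        elif outcome == "<":
            delta = k * (0.0 - e_a)
        else:
            # Tie rule: move the lower-rated player up (and the higher-rated one
            # down) by tie_ratio of the change a win/loss would have produced.
            delta = tie_ratio * (k * (1.0 - e_a) if e_a < 0.5 else k * (0.0 - e_a))
        ratings[a] += delta
        ratings[b] -= delta
    return ratings

# Toy usage with dummy models and a random evaluator, just to show the interface.
toy_models = {"lstm": lambda c: c + " a", "transformer": lambda c: c + " b"}
toy_compare = lambda x, y: random.choice([">", "<", "~"])
print(skill_rating(toy_models, ["hello"], toy_compare, n_games=100))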
Experiments
We set up experiments in order to answer the following research questions:
RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models?
RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better?
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping?
RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem?
Experiments ::: Experimental Settings ::: Datasets
We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive; better automated metrics are thus critical. For the open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialogue.
Experiments ::: Experimental Settings ::: Compared Models and Metrics
As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.
Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparisons between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the winning rate of the latest checkpoint keeps being smaller than its losing rate for 5 iterations.
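The early-stopping rule can be sketched as follows; this is our own simplified formulation, where compare stands for the trained comparative evaluator.

import random

def comparison_based_early_stopping(checkpoints, compare, n=1000, k=2, patience=5):
    # checkpoints: list of sample lists, one per saved checkpoint, in training order.
    # compare(a, b) is the trained comparative evaluator returning ">", "<" or "~".
    # Stop once the latest checkpoint's winning rate stays below its losing rate
    # against the previous k checkpoints for `patience` consecutive checks.
    bad_rounds = 0
    for t in range(k, len(checkpoints)):
        latest = checkpoints[t]
        previous = [s for c in checkpoints[t - k:t] for s in c]
        wins = losses = 0
        for _ in range(n):
            outcome = compare(random.choice(latest), random.choice(previous))
            wins += outcome == ">"
            losses += outcome == "<"
        bad_rounds = bad_rounds + 1 if wins < losses else 0
        if bad_rounds >= patience:
            return t  # index of the checkpoint at which to stop
    return None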
Experiments ::: Experimental Settings ::: Detail of Parameterized Evaluators
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator that are trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT.
Experiments ::: Experimental Settings ::: Human Evaluation Procedure
As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics to guide hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations each for hyperparameter selection and for early-stopping with the five compared automated metrics. In each evaluation group, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with the other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) in each of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English language proficiency, to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scales from 1 to 5; a higher score indicates better overall sample quality. Following the experimental results of BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 directly and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored from 1 to 5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score of $\kappa =0.53$ for direct scoring and $\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator.
Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation
To test the correlation of different automated metrics with respect to human preference, we employ different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For the comparative evaluator, as the evaluation is performed pairwisely and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references by the comparative evaluator. A sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating of each sample as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000.
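The first, reference-based scoring scheme amounts to the following small routine (a sketch with our own names, not the authors' code):

def reference_based_score(sample, references, compare):
    # Turn pairwise judgments into an absolute score by comparing the sample
    # against a fixed set of references: 3 points per win, 1 per tie, 0 per loss.
    points = {">": 3, "~": 1, "<": 0}
    return sum(points[compare(sample, reference)] for reference in references)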
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising, as the employed skill rating is designed to handle the inherent variance of players (i.e. NLG models), and this variance does not exist when we regard a sample as a model which always generates the same sample.
Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation
As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we propose three different approaches to get a model-level score: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as the model-level score, and 3) we use the proposed skill rating system to get a model-level skill rating for each compared model.
Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison with pairwise sample-level evaluation. In addition, the poor correlation of conventional evaluation metrics such as BLEU and perplexity with human judgment demonstrates the necessity of better automated evaluation metrics in open domain NLG evaluation.
Experiments ::: Experimental Designs & Results ::: RQ3&4: Automated Metrics for Model Training
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used to perform hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) they succeeded in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4) and the average human-annotated score for their selected models.
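The bookkeeping for this protocol is straightforward; a sketch of how the success counts and average human scores could be computed is given below (names and data layout are our own assumptions):

def metric_reliability(selected, human_best, human_scores):
    # selected[run][metric] is the model chosen by that metric in that run,
    # human_best[run] is the human-preferred model, and human_scores[run][model]
    # is the average human score of each candidate model in that run.
    metrics = selected[0].keys()
    successes = {m: sum(selected[r][m] == human_best[r] for r in range(len(selected)))
                 for m in metrics}
    avg_scores = {m: sum(human_scores[r][selected[r][m]] for r in range(len(selected))) / len(selected)
                  for m in metrics}
    return successes, avg_scores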
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when performing hyperparameter search and selecting the best performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator can yield non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate the existence of this problem and show that the proposed method can, to some extent, alleviate it.
Experiments ::: Qualitative Analysis
We present several comparison examples in the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (i.e. “I don't know”) should be considered as of worse quality. The second example suggests that our approach handles the diversity in possible responses well, as it regards both positive response and negative response as valid responses. Hopefully, these examples may provide us with some insights about why the proposed metric correlates better with human preference.
Experiments ::: Ablation Study
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models.
w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.
w/o human preference annotation: Training without human annotated preference data (i.e. only with strong and weak supervision).
w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.
w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach may contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotation, which is very important in practice as human annotations are not always available. Finally, we find transferring the natural language understanding ability from BERT to be very important for the final performance.
Discussion and Conclusion
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. The “tie” option allows our model to explicitly admit its uncertainty. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. | Text Overlap Metrics, including BLEU, Perplexity, Parameterized Metrics |
75b69eef4a38ec16df63d60be9708a3c44a79c56 | 75b69eef4a38ec16df63d60be9708a3c44a79c56_0 | Q: How much better performance is achieved in human evaluation when the model is trained considering the proposed metric?
Text: Introduction
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure the progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than the perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard evaluation, however, it does not scale well as it is generally expensive and time-consuming to conduct human evaluation.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.
We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
We conduct experiments on both the story generation task and the open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal models, and that our approach helps alleviate this problem.
Related Work
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding similarity based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.
Another line of work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by competing generators against discriminators. Their approach is an approximation of skill rating, as the original skill rating system requires games played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option and thus can not distinguish cases where the discriminator is confident from those where it is not. More importantly, their approach is designed only for evaluating GANs, while our approach can be used for any NLG model.
Methodology
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
Methodology ::: Learning to Compare
The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in terms of quality when two compared samples are both machine-generated or both human written references. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or both from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, which enhances the generalization ability and introduces more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the margin between the quality of the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator with sample pairs with a larger margin (i.e. more training iterations between two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model gradually learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
The comparative evaluator is trained with a maximum likelihood estimation (MLE) objective:

$\mathcal {L}(\phi ) = - \sum _{(x_1, x_2) \in \mathcal {X}} \log D_\phi ^{Q(x_1, x_2)}(x_1, x_2)$

where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator's prediction being $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator, the architecture of the resulting comparative evaluator is illustrated by Figure 1. Note that the compared sample A and B are based on the same context, which ensures that they are comparable.
Methodology ::: Skill Rating
In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator.
Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player’s skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We come up with a simple rule which increases/decreases the skill rating of one player by a ratio (e.g. 0.1) of the changes in its skill rating when it wins/loses if it draws with another player with higher/lower skill rating. In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill rating of selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the order of skill ratings of compared models keeps the same after each model is selected at least 50 times. While the sampling procedure can be optimized by bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
Experiments
We set up experiments in order to answer the following research questions:
RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models?
RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better?
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping?
RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem?
Experiments ::: Experimental Settings ::: Datasets
We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking writing prompts as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive; better automated metrics are thus critical. For the open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialogue.
Experiments ::: Experimental Settings ::: Compared Models and Metrics
As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.
Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparisons between the samples generated by the latest checkpoint and the previous k (e.g. 2) checkpoints and stop training when the winning rate of the latest checkpoint keeps being smaller than its losing rate for 5 iterations.
Experiments ::: Experimental Settings ::: Detail of Parameterized Evaluators
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator that are trained without strong supervision examples, without weak supervision examples, without fine-tuning with human preference annotations, and without transferring from BERT.
Experiments ::: Experimental Settings ::: Human Evaluation Procedure
As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics to guide hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations each for hyperparameter selection and for early-stopping with the five compared automated metrics. In each evaluation group, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with the other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) in each of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English language proficiency, to score these samples. Each annotator scores one sample from each model, such that each model is uniformly evaluated. The score scales from 1 to 5; a higher score indicates better overall sample quality. Following the experimental results of BIBREF14, we do not ask annotators to provide specific scores for fluency or informativeness. To test the inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples, of which 20 samples are scored from 1 to 5 directly and another 20 are evaluated based on pairwise comparison with 4 other generated samples and scored from 1 to 5 based on how many times they are considered to be better than a reference sample. We get an inter-annotator agreement score of $\kappa =0.53$ for direct scoring and $\kappa =0.76$ with pairwise comparison, which validates our intuition that evaluation by comparison may be more accurate. These additional human annotations are used as training data for ADEM and the comparative evaluator.
Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation
To test the correlation of different automated metrics with respect to human preference, we employ different metrics to score the collected 2000 samples and calculate their Pearson and Spearman correlation with human scores. For the comparative evaluator, as the evaluation is performed pairwisely and no absolute score is available, we use two different approaches to get an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references by the comparative evaluator. A sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model which always outputs the same sample and use the skill rating of each sample as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000.
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected samples or using sample-level skill rating performs almost equally well. This is not surprising, as the employed skill rating is designed to handle the inherent variance of players (i.e. NLG models), and this variance does not exist when we regard a sample as a model which always generates the same sample.
Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation
As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we propose three different approaches to get a model-level score: 1) we calculate the average reference-based score (method 1 for sample-level comparison) of each sample as the model-level score, 2) we calculate the average skill rating of each sample obtained in the experiments of RQ1 as the model-level score, and 3) we use the proposed skill rating system to get a model-level skill rating for each compared model.
Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison based on pairwise sample-level evaluation. In addition, the poor correlation of conventional evaluation metrics such as BLEU and perplexity with human judgment demonstrates the necessity of better automated evaluation metrics for open domain NLG.
Experiments ::: Experimental Designs & Results ::: RQ3&4: Automated Metrics for Model Training
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used for hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) it succeeds in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4), and by the average human-annotated score of its selected models.
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when used for hyperparameter search and for selecting the best performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator yields non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate that this problem exists and that the proposed method can, to some extent, alleviate it.
Experiments ::: Qualitative Analysis
We present several comparison examples from the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (e.g., “I don't know”) should be considered to be of lower quality. The second example suggests that our approach handles the diversity of possible responses well, as it regards both the positive and the negative response as valid. Hopefully, these examples provide some insight into why the proposed metric correlates better with human preference.
Experiments ::: Ablation Study
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models.
w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.
w/o human preference annotation: Training without human-annotated preference data (i.e. only with strong and weak supervision).
w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.
w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better with human judgment than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for the different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotations, which is important in practice as human annotations are not always available. Finally, we find transferring the natural language understanding ability from BERT to be very important for the final performance.
Discussion and Conclusion
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. The model is allowed to admit its uncertainty through the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. | Pearson correlation to human judgement - proposed vs next best metric
Sample level comparison:
- Story generation: 0.387 vs 0.148
- Dialogue: 0.472 vs 0.341
Model level comparison:
- Story generation: 0.631 vs 0.302
- Dialogue: 0.783 vs 0.553 |
7488855f09b97eb6a027212fb7ace1d338f36a2b | 7488855f09b97eb6a027212fb7ace1d338f36a2b_0 | Q: Do the authors suggest that proposed metric replace human evaluation on this task?
Text: Introduction
Recent advances in sequence-to-sequence learning architecture BIBREF0 and the transformer model BIBREF1 have raised increasing interest in natural language generation (NLG) tasks, including story generation BIBREF2, open-domain dialogue response generation BIBREF3 and abstractive summarization BIBREF4. Despite the fast advances of models, there remains a huge gap in the evaluation of NLG models and it is hard to measure progress due to the lack of good evaluation metrics. While perplexity is a good measure of how well a model fits some data, it does not measure performance at the desired task. Word overlap based metrics such as BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7 capture quality better than perplexity and are useful in translation and summarization. However, they still correlate poorly with human evaluation BIBREF8 in open domain text generation tasks including story generation and dialogue response generation because two equally good generated texts may have no n-gram overlap. Human evaluation is generally considered to be the gold standard; however, it does not scale well, as it is expensive and time-consuming to conduct.
Apart from measuring relative progress between different models, automated evaluation metrics also play an important role in the training stage of NLG models. It is a common practice to tune the model hyperparameter, detect convergence, perform early-stopping, and select the best checkpoints based on the model's performance on automated evaluation metrics. While acceptable for tasks where automated metrics correlate well with human evaluations, including machine translation and text summarization, this can be erroneous and result in sub-optimal training in open domain NLG tasks because available automated metrics correlate poorly with human evaluation, as demonstrated in the experimental section of this paper.
To tackle the aforementioned problems, in this paper, we propose a self-supervised approach with transfer learning to learn to compare the quality of two samples as an automated comparative Turing test. The motivation of our approach is that we can better assess the quality of generated samples or trained NLG model by comparing it with another one. Our model is a text pair classification model trained to compare the task-specific quality of two samples, which is then used to evaluate the quality of trained NLG models. As human preference annotation is generally expensive, our model is designed to be able to perform self-supervised training using only generated samples and gold reference samples without human preference annotation. When human preference annotation is available, our model can be further fine-tuned to better imitate human judgment. To evaluate the model-level quality of NLG models based on pairwise comparison in sample-level, we adopt the skill rating system similar to ELO BIBREF9 and Trueskill BIBREF10, which is a method for assigning a numerical skill to players in a player-vs-player game, given a win-loss record of games played. In our scenario, the players are NLG models to be evaluated and a higher rating indicates a better model. The skill rating system makes it possible to evaluate all n models without needing to run $n^{2}$ matches and is able to take into account the amount of new information each comparison provides.
The contribution of this paper is threefold:
We propose a “learning to compare” model to better assess the quality of text generated by NLG models based on pairwise comparison. Our model is able to transfer natural language understanding knowledge from BERT by fine-tuning in a self-supervised way while also able to be further fine-tuned with human preference annotation. Once trained, our model is able to perform inter-model comparison without the need for gold references, which greatly enlarges the potentially available test set and reduces the potential risk of overfitting the reference in the test set.
We propose to use the skill rating system to perform model-level evaluation based on the sample-level evaluation information provided by our pairwise comparison model. The skill rating system is more efficient and accurate than several baseline approaches.
We conduct experiments on both the story generation task and the open domain dialogue response generation task. Experimental results show that our approach correlates better with human evaluation on both datasets. Moreover, we show that using automated metrics such as BLEU to perform hyperparameter tuning and early-stopping results in sub-optimal models, and that our approach helps alleviate this problem.
Related Work
Evaluation of NLG models has been a long-standing open problem. While human evaluation may be ideal, it is generally expensive to conduct and does not scale well. Various automated evaluation approaches are proposed to facilitate the development and evaluation of NLG models. We summarize these evaluation approaches below.
Text Overlap Metrics, including BLEU BIBREF5, METEOR BIBREF6 and ROUGE BIBREF7, are the most popular metrics employed in the evaluation of NLG models. They evaluate generated text by comparing the similarity between the generated text and human written references. While this works well in tasks where the diversity of acceptable output is limited, such as machine translation and text summarization, text overlap metrics are shown to have weak or no correlation with human judgments in open domain natural language generation tasks BIBREF8. There are two major drawbacks in these metrics. First, text overlap metrics can not distinguish minor variations in a generated text which may make the sentence not equally grammatically correct or semantically meaningful. Second, there may exist multiple equally good outputs for the given input and comparing against one gold reference can be erroneous.
Perplexity is commonly used to evaluate the quality of a language model. It measures how well a probability distribution predicts a sample and captures the degree of uncertainty in the model. It is used to evaluate models in open-domain NLG tasks such as story generation BIBREF2 and open domain dialogue systems. However, “how likely a sentence is generated by a given model” may not be comparable across different models and does not indicate the quality of the sentence.
Parameterized Metrics learn a parameterized model to evaluate generated text. Adversarial evaluation models BIBREF11, BIBREF12 assign a score based on how easy it is to distinguish the dialogue model responses from human responses. However, training such a discriminator can be difficult as the binary classification task can be easily over-fitted and leads to poor generalizability BIBREF11. Moreover, the information we get from the discriminator accuracy is limited as we can not compare the quality of two generated sentences when they both succeed or fail in fooling the discriminator. A recent study shows that the discriminator accuracy does not correlate well with human preference BIBREF13. Automated Dialogue Evaluation Model (ADEM) BIBREF14 is another parameterized metric proposed for dialogue system evaluation. It learns to score a generated dialogue response based on the context and the human written reference. However, it requires human-annotated scores for generated sentences. It is generally hard to design appropriate questions for crowdsourcing these scores, which makes the annotation very expensive to get, and the inter-annotator agreement score is only moderate BIBREF14. As a result, the training data is limited and noisy, which makes the scoring task even harder. It can be problematic when comparing models with similar quality. In addition, this model is designed only for evaluating dialogue response generation models. More recently, embedding-similarity-based metrics such as HUSE BIBREF15 and BERTScore BIBREF16 have been proposed. These metrics alleviate the first problem of text overlap metrics by modeling semantic similarity better. However, they can not address the response diversity problem and thus are only suitable for machine translation and text summarization.
Another line of research on NLG evaluation is to unify human evaluation with statistical evaluation BIBREF17, BIBREF18. These works are orthogonal to our paper as they mainly focus on the combination of human evaluation and automated evaluation.
Another line of work related to our research is the skill rating system, which evaluates players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. It was first adopted to evaluate GANs BIBREF19 for synthesizing images BIBREF20 by pitting generators against discriminators. Their approach is an approximation of skill rating, as the original skill rating system requires games played by two symmetric players, while in their system the players are asymmetric. Their approach does not include the “tie” option, and thus cannot distinguish cases where the discriminator is confident from those where it is not. More importantly, their approach is only designed for evaluating GANs, while our approach can be used for any NLG model.
Methodology
We present the proposed approach in this section. We begin with the sample-level pairwise comparison model. Afterwards, we introduce how to adopt the skill rating system to perform model-level evaluation of NLG models.
Methodology ::: Learning to Compare
The proposed comparative evaluator is a text pair relation classifier which is trained to compare the task-specific quality of two samples. The motivation of evaluating one sample by comparing it with another sample is drawn from the insight learned when conducting human evaluation for NLG models. We find that when comparing two NLG models, instead of asking human annotator to assign scores separately for samples generated by different models, which resembles the case in the ADEM model BIBREF14, it is much easier for human annotators to directly compare one sample generated by the first model against another sample from the second model pairwisely and compute the win/loss rate. The comparison-based evaluation may also be more accurate, which is demonstrated by a higher inter-annotator agreement score in our preliminary experiments.
The comparative evaluator learns a total order of sample quality by classifying whether the first compared sample is better ($>$), worse ($<$), or indistinguishable ($\approx $) in terms of its quality compared with another sample. In this way, our model encodes the inductive bias that sometimes two samples can have similar quality and it is hard and unreliable to choose the better sample. By giving our model the third “tie” option, it can explicitly express its uncertainty and choose its preference only when being confident enough. This design choice is motivated by the practice that adding the “tie” option for human annotator when performing pairwise human evaluation can often make the comparison easier and more reliable. For a text sample, our comparative evaluator can provide a more informative assessment than the binary discriminative evaluator because one evaluated sample can receive multiple feedback from the comparative evaluator by comparing it with multiple other samples. In contrast, the discriminative evaluator can only evaluate a sample once, which is more likely to suffer from the inherent uncertainty of the evaluator.
We propose two approaches to construct pairwise training examples for training a comparative evaluator. The first approach generates strong supervision examples. It is based on the intuition that human written references are generally of better quality than machine-generated samples, and it is hard to tell the difference in term of the quality when two compared samples are both generated by machines or human written reference. We denote $S_{+}$$/$$S_{-}$ as the set of real/generated samples. For a real sample $s_{+}\in S_{+}$ and a generated sample $s_{-}\in S_{-}$, we assign the label “better ($>$)” to the pair ($s_+$, $s_-$) and “worse ($<$)” to ($s_-$, $s_+$). For two samples both from real data or from the generated samples, we assign the label “indistinguishable ($\approx $)” to such pairs (i.e., ($s_+^i$, $s_+^j$) and ($s_-^i$, $s_-^j$)). For a training set with $n$ real samples and $n$ generated samples, we can construct $\binom{2n}{2}$ pairwise training examples for the comparative evaluator, allowing to enhance the generalization ability and introduce more informative learning signals than the standard real/fake binary discriminative evaluator. Note that when constructing a sample pair ($s_-^i$, $s_-^j$), $s_-^i$ and $s_-^j$ are sampled from the same checkpoint of the same model in order to ensure that they are of similar quality in expectation.
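A minimal sketch of the strong-supervision pair construction described above is given below; the function name and label symbols are illustrative, with "~" standing for "indistinguishable".

```python
from itertools import combinations

def build_strong_supervision_pairs(real_samples, generated_samples):
    # Real samples are assumed better than generated ones; same-source pairs
    # are labeled indistinguishable ("~").
    pairs = []
    for s_pos in real_samples:
        for s_neg in generated_samples:
            pairs.append((s_pos, s_neg, ">"))
            pairs.append((s_neg, s_pos, "<"))
    for a, b in combinations(real_samples, 2):
        pairs.append((a, b, "~"))
    # Generated samples in a "~" pair should come from the same checkpoint of the
    # same model so that they are of similar quality in expectation.
    for a, b in combinations(generated_samples, 2):
        pairs.append((a, b, "~"))
    return pairs
```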
One problem of the strong supervision approach is that it always labels two generated samples as indistinguishable. However, during inference, the input of the comparative evaluator is a pair of two generated samples from different models. Thus it requires the model to capture the quality relation in training examples and generalize well to successfully compare two samples rather than simply classifying them as indistinguishable, which provides relatively less information for evaluating NLG models.
To tackle this problem, we propose an approach to construct weak supervision examples for training the comparative evaluator. The intuition of our weak supervision approach is that during training, the quality of the NLG model keeps improving until convergence. Given two checkpoints of the same model, we can thus consider samples generated by the more recent checkpoint to be of better quality compared with samples generated by the earlier version of the same model. This approach is considered to be weak supervision because the model quality may not improve monotonically and sometimes it is hard to decide whether the model begins to overfit the training data and its quality starts to decline. To minimize the noise introduced by these problems, we empirically set the minimal margin between two selected checkpoints to be $10\%$ of the total training iterations and do not select two “almost converged” checkpoints. The construction of training samples is similar to the first approach. In addition, motivated by the fact that the larger the quality margin between the two selected versions of the model, the easier it is for the comparative evaluator to learn to distinguish the training examples, we propose to use curriculum learning BIBREF21 by feeding the comparative evaluator sample pairs with a larger margin (i.e. more training iterations between the two selected checkpoints) during the initial training stage and gradually decreasing the margin to let the model learn to capture smaller quality differences. Moreover, when human preference annotation is available, we can additionally fine-tune the comparative evaluator with human annotations.
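The weak-supervision construction and its curriculum schedule could be sketched as follows; the data layout (a `checkpoints` mapping from training iteration to generated samples) and the margin handling are our assumptions, not the authors' code.

```python
def build_weak_supervision_pairs(checkpoints, total_iters, min_margin=0.1):
    # checkpoints: dict mapping training iteration -> samples generated at that point.
    # Samples from a later checkpoint are treated as better, provided the two
    # checkpoints are at least min_margin * total_iters apart.
    pairs = []
    iters = sorted(checkpoints)
    for i, early in enumerate(iters):
        for late in iters[i + 1:]:
            margin = late - early
            if margin < min_margin * total_iters:
                continue
            for s_early, s_late in zip(checkpoints[early], checkpoints[late]):
                pairs.append((s_late, s_early, ">", margin))
                pairs.append((s_early, s_late, "<", margin))
    # Curriculum: present large-margin (easier) pairs first, smaller margins later.
    pairs.sort(key=lambda p: -p[3])
    return pairs
```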
The comparative evaluator is trained with the maximum likelihood estimation (MLE) objective
$$\mathcal {L}(\phi )=-\sum _{(x_1,x_2)\in \mathcal {X}}\log D_\phi ^{Q(x_1,x_2)}(x_1,x_2)$$
where $\mathcal {X}$ is the set of pairwise training examples constructed as described above, $Q(x_1, x_2) \in \lbrace >,<,\approx \rbrace $ is the true label for the pair ($x_1$, $x_2$), and $D_\phi ^q(x_1, x_2)$ is the probability of the comparative discriminator predicting $q$ ($q \in \lbrace >,<,\approx \rbrace $) for the pair ($x_1$, $x_2$).
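Schematically, this objective is a three-way cross-entropy loss over pair representations. The sketch below assumes a pooled pair encoding (in the paper, produced by fine-tuned BERT over the two compared samples) and uses placeholder dimensions and names.

```python
import torch
import torch.nn as nn

LABELS = {">": 0, "<": 1, "~": 2}  # "~" denotes the indistinguishable class

class ComparativeHead(nn.Module):
    # Classification head applied to a pooled representation of the compared pair;
    # in the paper this sits on top of a fine-tuned BERT encoder.
    def __init__(self, hidden_size=768, num_classes=3):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, pooled_pair_repr):
        return self.classifier(pooled_pair_repr)

def mle_loss(logits, label_strings):
    # Negative log-likelihood of the true pairwise label, i.e. the MLE objective above.
    targets = torch.tensor([LABELS[s] for s in label_strings], device=logits.device)
    return nn.functional.cross_entropy(logits, targets)
```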
As comparing the quality of generated text requires good natural language understanding ability and our comparative evaluator is formulated as a sentence pair classification model, we propose to fine-tune BERT BIBREF22 as the comparative evaluator; the architecture of the resulting comparative evaluator is illustrated in Figure 1. Note that the compared samples A and B are based on the same context, which ensures that they are comparable.
Methodology ::: Skill Rating
In player-vs-player games such as chess or tennis, skill rating systems such as Elo BIBREF9 or Glicko2 BIBREF23 evaluate players by observing a record of wins and losses of multiple players and inferring the value of a latent, unobserved skill variable for each player that explains the records of wins and losses. We adopt the skill rating system for model-level evaluation of NLG models. By taking the trained comparative evaluator as the “playground” and NLG models as “player”, the “player-vs-player” game is played by sampling one output sample from each NLG model conditioning on the same input and the game output is decided by the comparative evaluator.
Following previous work BIBREF20, in our paper, we use the Glicko2 system BIBREF23. The employed system can be summarized as follows: each player's skill rating is represented as a Gaussian distribution, with a mean and standard deviation, representing the current state of the evidence about their “true” skill rating. As we evaluate frozen snapshots of NLG models, we disabled an irrelevant feature of Glicko2 that increases uncertainty about a human player's skill when they have not participated in a match for some time. Another difference is that conventional skill rating systems do not support the “tie” option, which is important for the system to be stable and reliable in our case because the evaluator is not perfect. To incorporate this feature, we follow the intuition that a player's skill rating should be increased when it draws with another player with a higher skill rating and vice versa. We use a simple rule: when a player draws with a higher-rated (lower-rated) opponent, its skill rating is increased (decreased) by a fraction (e.g. 0.1) of the change it would receive from a win (loss). In our experiments, the skill rating is performed by randomly sampling two compared models, simulating a “game” between the two selected models by sampling one sample from each model and comparing them with the comparative evaluator, and then updating the skill ratings of the selected models according to the outcome. This procedure is performed iteratively until convergence, which is defined as the ordering of the compared models' skill ratings remaining unchanged after each model has been selected at least 50 times. While the sampling procedure can be optimized by Bayesian optimization BIBREF24 or multi-armed bandit algorithms BIBREF25, we choose to keep the method as simple as possible and use random sampling.
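To make the match loop and the tie rule concrete, here is a simplified Elo-style sketch; the paper uses the Glicko2 system, so the update formula, the K-factor, and the function names below are stand-in assumptions that only illustrate the rating-with-ties idea.

```python
import random

def expected(r_a, r_b):
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, a, b, outcome, k=32, tie_ratio=0.1):
    # outcome: 1.0 if a wins, 0.0 if a loses, 0.5 for a tie.
    r_a, r_b = ratings[a], ratings[b]
    e_a, e_b = expected(r_a, r_b), expected(r_b, r_a)
    if outcome == 0.5:
        # Tie rule: a draw against a higher-rated opponent grants a fraction of the
        # win update; against a lower-rated one, a fraction of the loss update.
        ratings[a] += tie_ratio * k * ((1.0 - e_a) if r_b > r_a else (0.0 - e_a))
        ratings[b] += tie_ratio * k * ((1.0 - e_b) if r_a > r_b else (0.0 - e_b))
    else:
        ratings[a] += k * (outcome - e_a)
        ratings[b] += k * ((1.0 - outcome) - e_b)

def skill_rating(models, play_game, n_games=10000, start=1500.0):
    # models: list of model identifiers. play_game(m1, m2) samples one output from
    # each model on the same input, runs the comparative evaluator, and returns
    # 1.0 / 0.0 / 0.5 for win / loss / tie of m1.
    ratings = {m: start for m in models}
    for _ in range(n_games):
        a, b = random.sample(models, 2)
        update(ratings, a, b, play_game(a, b))
    return ratings
```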
Experiments
We set up experiments in order to answer the following research questions:
RQ1: Can the comparative evaluator correlate better with human preference in sample-level than previous automated metrics when evaluating open domain NLG models?
RQ2: Can the comparative evaluator correlate better with human preference in model-level, so that our approach can measure the progress on open domain NLG better?
RQ3: As existing approaches fail to correlate well with human preference, whether and to what extent this problem affects the quality of the final NLG model when performing hyperparameter search and early-stopping?
RQ4: If the previous problem exists, can proposed comparative evaluator reduce this problem?
Experiments ::: Experimental Settings ::: Datasets
We evaluate the effectiveness of the proposed approach on two open domain natural language generation tasks: story generation and open domain dialogue response generation. For story generation, we use the WritingPrompts dataset released by BIBREF2. The WritingPrompts dataset is a large dataset of 303,358 human-generated stories paired with writing prompts from an online forum. NLG models are trained by taking a writing prompt as input and generating the whole story. The average length of prompts is 28.4 words and the average length of stories is 734.5 words, which makes human evaluation very expensive and better automated metrics thus critical. For the open domain dialogue response generation task, we use the Dailydialog dataset BIBREF26, which consists of dialogues that resemble daily conversations across multiple topics. It comprises 13k dialogues with an average of 7.9 turns per dialogue.
Experiments ::: Experimental Settings ::: Compared Models and Metrics
As our objective is to evaluate the evaluators rather than comparing state-of-the-art models, we choose three representative sequence-to-sequence architectures: LSTM BIBREF27 seq2seq, Convolutional seq2seq BIBREF28, and transformer BIBREF1 model. We compare models with different architectures, hyperparameter choices, and early-stopping criteria with different automated metrics, as well as human evaluation.
Regarding the evaluation metric (and criteria for choosing hyperparameter choice and early-stopping), we compare the proposed approach with the discriminative evaluator, BLEU score (average of 2-, 3-, 4-grams), perplexity, and ADEM. When evaluating generated stories, we cut off the story at the nearest sentence for stories longer than 250 words.
The proposed comparative evaluator is employed for choosing hyperparameters by performing skill rating among all models trained with different hyperparameter choices. For early-stopping, as incrementally performing skill rating is computationally expensive, we propose to perform n (e.g. 1000) pairwise comparisons between samples generated by the latest checkpoint and by the previous k (e.g. 2) checkpoints, and to stop training when the winning rate of the latest checkpoint stays below its losing rate for 5 consecutive iterations.
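A minimal sketch of this early-stopping rule follows; the function names and data layout are ours, and `compare` again stands for the trained comparative evaluator.

```python
import random

def checkpoint_vs_previous(latest_samples, previous_sample_sets, compare, n=1000):
    # Run n pairwise comparisons between samples from the latest checkpoint and
    # samples from the previous k checkpoints, using the comparative evaluator.
    wins = losses = 0
    for _ in range(n):
        prev = random.choice(previous_sample_sets)
        result = compare(random.choice(latest_samples), random.choice(prev))
        wins += result == ">"
        losses += result == "<"
    return wins, losses

def should_stop(history, patience=5):
    # history: list of (wins, losses) per evaluation round; stop when the winning
    # rate stays below the losing rate for `patience` consecutive rounds.
    recent = history[-patience:]
    return len(recent) == patience and all(w < l for w, l in recent)
```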
Experiments ::: Experimental Settings ::: Detail of Parameterized Evaluators
The proposed comparative evaluator is trained by fine-tuning BERT-large as a sentence-pair classifier. To ensure fair evaluation, we also train the discriminative evaluator by fine-tuning BERT. For ADEM, we adopt its original implementation as its architecture is relatively complicated. In addition, we perform an ablation study by evaluating variants of the comparative evaluator that are trained without strong supervision examples, without weak supervision examples, without fine-tuning on human preference annotations, or without transferring from BERT.
Experiments ::: Experimental Settings ::: Human Evaluation Procedure
As human evaluation is expensive, sample-level evaluation is performed jointly with model-level evaluation, which is also used for evaluating the ability of different metrics for performing hyperparameter search and early-stopping. Concretely, we perform 10 groups of evaluations for performing hyperparameter selecting and early-stopping with five compared automated metrics. In each evaluation, each of the five compared metrics is used to select the best hyperparameter combination or early-stopping checkpoint with other variants fixed.
We choose to perform score-based human evaluation for four reasons: 1) the ADEM baseline requires human-annotated score as training examples, 2) we can construct up to $\binom{2n}{2}$ training examples for our comparative evaluator with $n$ human-annotated scores, 3) score-based human evaluation facilitates the evaluation of correlation scores, and 4) as all other metrics do not perform pairwise comparison, using pairwise human evaluation will likely be biased toward our approach.
We sample 20 generated samples from each model (out of 5) in each of the 20 evaluation groups. We invite 20 human annotators, all graduate students with good English proficiency, to score these samples. Each annotator scores one sample from each model, so that every model is evaluated uniformly. Scores range from 1 to 5, with a higher score indicating better overall sample quality. Following the experimental findings of BIBREF14, we do not ask annotators to provide separate scores for fluency or informativeness. To measure inter-annotator agreement, we additionally ask them to evaluate another 40 generated samples: 20 are scored directly from 1 to 5, and the other 20 are each compared pairwise with 4 other generated samples and scored 1-5 according to how many times they are judged better than the compared sample. We obtain an inter-annotator agreement of $\kappa =0.53$ for direct scoring and $\kappa =0.76$ for pairwise comparison, which supports our intuition that evaluation by comparison is more reliable. These additional human annotations are used as training data for ADEM and the comparative evaluator.
Experiments ::: Experimental Designs & Results ::: RQ1: Sample-Level Correlation
To test the correlation of different automated metrics with human preference, we employ each metric to score the collected 2000 samples and calculate its Pearson and Spearman correlation with the human scores. For the comparative evaluator, as the evaluation is performed pairwise and no absolute score is available, we use two different approaches to obtain an absolute score for each sample: 1) we sample 50 common references from machine-generated samples for each task and compare each sample with all references using the comparative evaluator; a sample gets 3 points when it beats a reference, 1 point when it draws with the reference, and 0 points when it loses; 2) we adopt the skill rating system by regarding each sample as an NLG model that always outputs the same sample, and use its skill rating as its score. To keep the computational budget roughly the same, we fix the number of plays in skill rating to 10,000.
The experimental results are summarized in Table 1. We can see that the proposed comparative evaluator correlates far better with human judgment than BLEU and perplexity. When compared with recently proposed parameterized metrics, including the adversarial evaluator and ADEM, our model consistently outperforms them by a large margin, which demonstrates that our comparison-based evaluation metric is able to evaluate sample quality more accurately. In addition, we find that evaluating generated samples by comparing them with a set of randomly selected references or by sample-level skill rating performs almost equally well. This is not surprising, as the employed skill rating is designed to handle the inherent variance of players (i.e., NLG models), and this variance does not exist when we regard a sample as a model that always generates the same sample.
Experiments ::: Experimental Designs & Results ::: RQ2: Model-Level Correlation
As for model-level evaluation, we employ the average score of the evaluated 100 samples as each model's score and calculate their correlation with human scores. For the comparative evaluator, we use three different approaches to obtain a model-level score: 1) we average the reference-based scores (method 1 in the sample-level experiments) of a model's samples, 2) we average the sample-level skill ratings obtained in the experiments for RQ1, 3) we use the proposed skill rating system to compute a model-level skill rating for each compared model.
Results are shown in Table 2. We can see that the proposed comparative evaluator with skill rating significantly outperforms all compared baselines, including the comparative evaluator with averaged sample-level scores. This demonstrates the effectiveness of the skill rating system for performing model-level comparison based on pairwise sample-level evaluation. In addition, the poor correlation of conventional evaluation metrics such as BLEU and perplexity with human judgment demonstrates the necessity of better automated evaluation metrics for open domain NLG.
Experiments ::: Experimental Designs & Results ::: RQ3&4: Automated Metrics for Model Training
We further investigate the impact of imperfect metrics on training NLG models. As described in the human evaluation procedure, we perform 10 runs to test the reliability of each metric when used for hyperparameter tuning and early-stopping respectively. In each run, we select the best hyperparameter combination or early-stopping checkpoint based on each of the five compared metrics. Human evaluation is then employed to identify the best choice. We evaluate the performance of each metric by how many times (out of 10) it succeeds in selecting the best hyperparameter combination or early-stopping checkpoint (out of 4), and by the average human-annotated score of its selected models.
The results are shown in Table 3. We can see that conventional automated metrics perform poorly and lead to sub-optimal results when used for hyperparameter search and for selecting the best performing checkpoints. Switching the evaluation metric from BLEU or perplexity to the proposed comparative evaluator yields non-negligible improvements without changing the model architecture or training objective. While previous work on NLG evaluation mostly focuses on the evaluation stage and does not explore the influence of imperfect metrics during model training, our experiments demonstrate that this problem exists and that the proposed method can, to some extent, alleviate it.
Experiments ::: Qualitative Analysis
We present several comparison examples from the Dailydialog dataset for qualitative analysis of the proposed comparative evaluator. From the first example, we can see that the comparative evaluator is capable of identifying that generic and dull responses (e.g., “I don't know”) should be considered to be of lower quality. The second example suggests that our approach handles the diversity of possible responses well, as it regards both the positive and the negative response as valid. Hopefully, these examples provide some insight into why the proposed metric correlates better with human preference.
Experiments ::: Ablation Study
To better understand the proposed comparative evaluator and analyze the relative importance of its different components, we conduct an ablation study with several variants of the proposed model:
w/o comparison: Evaluating generated samples without comparison, which degrades to the adversarial evaluation method.
w/o strong supervision: Training the comparative evaluator without “strong supervision”, which models the inductive bias that human written reference samples are generally of better quality compared with that generated by NLG models.
w/o weak supervision: Training without “weak supervision”, which models the inductive bias that the quality of NLG models generally improves during training.
w/o human preference annotation: Training without human-annotated preference data (i.e. only with strong and weak supervision).
w/o tie option: The variant of the comparative evaluator where the model must select the better sample rather than being able to admit its uncertainty.
w/o BERT: The variant where the model is trained from scratch instead of fine-tuning BERT.
We evaluate these model variants on the Dailydialog dataset. Results are presented in Table 5. We can see that comparison-based evaluation is very effective, as our model correlates much better with human judgment than the adversarial evaluator. The tie option is also very important, as it prevents the comparative evaluator from making uncertain decisions and models the inductive bias that samples generated by the same model are generally of similar quality, which may help our model generalize better. As for the different sources of training examples, we find that human preference annotation is the most important, which is not surprising. In addition, we find that the proposed weak supervision also helps, but is of smaller relative importance compared with strong supervision. This may be due to the fact that examples constructed by the weak supervision approach contain a lot of noise. We can also see that our model correlates well with human preference even without training on human preference annotations, which is important in practice as human annotations are not always available. Finally, we find transferring the natural language understanding ability from BERT to be very important for the final performance.
Discussion and Conclusion
In this paper, we present a novel comparison-based parameterized automated evaluation metric for evaluating open domain NLG models. The proposed model is based on the intuition that we can better evaluate the quality of a sample by comparing it with other samples. The model is allowed to admit its uncertainty through the “tie” option. We adopt the skill rating system to perform model-level evaluation based on sample-level pairwise comparison.
By transferring pretrained natural language understanding knowledge from BERT and fine-tuning with strong and weak supervision examples and human preference annotations, our model correlates better with human judgment than other compared metrics. In addition, we find that when used as evaluation metrics, conventional metrics such as BLEU and perplexity may affect the training stage of NLG models as they may lead to sub-optimal hyperparameter choice and checkpoint selection. Our model, in contrast, is much more reliable when performing these choices. | No |
1083ec9a2a33f7fe2b6b51bbcebd2d9aec4b4de2 | 1083ec9a2a33f7fe2b6b51bbcebd2d9aec4b4de2_0 | Q: What is the training objective of their pair-to-sequence model?
Text: Introduction
Extractive reading comprehension BIBREF0 , BIBREF1 obtains great attentions from both research and industry in recent years. End-to-end neural models BIBREF2 , BIBREF3 , BIBREF4 have achieved remarkable performance on the task if answers are assumed to be in the given paragraph. Nonetheless, the current systems are still not good at deciding whether no answer is presented in the context BIBREF5 . For unanswerable questions, the systems are supposed to abstain from answering rather than making unreliable guesses, which is an embodiment of language understanding ability.
We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6 . In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model.
To create training data for unanswerable question generation, we use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. As shown in Figure 1 , the answerable and unanswerable questions of a paragraph are aligned through the text span “Victoria Department of Education” for being both the answer and plausible answer. These two questions are lexically similar and both asked with the same answer type in mind. In this way, we obtain the data with which the models can learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc. Consequently, we can generate a mass of unanswerable questions with existing large-scale machine reading comprehension datasets.
Inspired by the neural reading comprehension models BIBREF7 , BIBREF8 , we introduce a pair-to-sequence model to better capture the interactions between questions and paragraphs. The proposed model first encodes input question and paragraph separately, and then conducts attention-based matching to make them aware of each other. Finally, the context-aware representations are used to generate outputs. To facilitate the use of context words during the generation process, we also incorporate the copy mechanism BIBREF9 , BIBREF10 .
Experimental results on the unanswerable question generation task show that the pair-to-sequence model consistently generates better questions than the sequence-to-sequence baseline and performs better with long paragraphs than with short answer sentences. Further experimental results show that the generated unanswerable questions can improve multiple machine reading comprehension models. Even with BERT fine-tuning as a strong reading comprehension model, we can still obtain a $1.9$ % absolute improvement of F1 score with the BERT-base model and a $1.7$ % absolute F1 improvement with the BERT-large model.
Related Work
Machine Reading Comprehension (MRC) Various large-scale datasets BIBREF0 , BIBREF1 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 have spurred rapid progress on machine reading comprehension in recent years. SQuAD BIBREF1 is an extractive benchmark whose questions and answers spans are annotated by humans. Neural reading comprehension systems BIBREF14 , BIBREF2 , BIBREF3 , BIBREF15 , BIBREF8 , BIBREF16 , BIBREF4 , BIBREF17 have outperformed humans on this task in terms of automatic metrics. The SQuAD 2.0 dataset BIBREF5 extends SQuAD with more than $50,000$ crowdsourced unanswerable questions. So far, neural reading comprehension models still fall behind humans on SQuAD 2.0. Abstaining from answering when no answer can be inferred from the given document does require more understanding than barely extracting an answer.
Question Generation for MRC In recent years, there has been an increasing interest in generating questions for reading comprehension. BIBREF18 show that neural models based on the encoder-decoder framework can generate significantly better questions than rule-based systems BIBREF19 . To generate answer-focused questions, one can simply indicate the answer positions in the context with extra features BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . BIBREF25 and BIBREF26 separate answer representations for further matching. BIBREF27 introduce a latent variable for capturing variability and an observed variable for controlling question types. In summary, the above mentioned systems aim to generate answerable questions with certain context. On the contrary, our goal is to generate unanswerable questions.
Adversarial Examples for MRC To evaluate the language understanding ability of pre-trained systems, BIBREF28 construct adversarial examples by adding distractor sentences that do not contradict question answering for humans to the paragraph. BIBREF29 and BIBREF30 use questions to retrieve paragraphs that do not contain the answer as adversarial examples. BIBREF5 create unanswerable questions through rigid rules, which swap entities, numbers and antonyms of answerable questions. It has been shown that adversarial examples generated by rule-based systems are much easier to detect than ones in the SQuAD 2.0 dataset.
Data Augmentation for MRC Several attempts have been made to augment training data for machine reading comprehension. We categorize these work according to the type of the augmentation data: external data source, paragraphs or questions. BIBREF31 fine-tune BERT on the SQuAD dataset jointly with another dataset TriviaQA BIBREF12 . BIBREF4 paraphrase paragraphs with backtranslation. Another line of work adheres to generate answerable questions. BIBREF32 propose to generate questions based on the unlabeled text for semi-supervised question answering. BIBREF33 propose a rule-based system to generate multiple-choice questions with candidate options upon the paragraphs. We aim at generating unanswerable questions as a means of data augmentation.
Problem Formulation
Given an answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ , we aim to generate unanswerable questions $\tilde{q}$ that fulfills certain requirements. First, it cannot be answered by paragraph $p$ . Second, it must be relevant to both answerable question $q$ and paragraph $p$ , which refrains from producing irrelevant questions. Third, it should ask for something of the same type as answer $a$ .
As shown in Figure 2 , we investigate two simple neural models built upon the encoder-decoder architecture BIBREF34 , BIBREF35 to generate unanswerable questions. A sequence-to-sequence model takes the concatenated paragraph and question as input, and encodes the input in a sequential manner. A pair-to-sequence model is further introduced to capture the interactions between the inputs. The decoders of both models generate unanswerable questions sequentially. We factorize the probability of generating the unanswerable question $P(\tilde{q}|q,p,a)$ as:
$$P(\tilde{q}|q,p,a)=\prod _{t=1}^{|\tilde{q}|}P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)$$
where $\tilde{q}_{<t}=\tilde{q}_1 \dots \tilde{q}_{t-1}$ .
Sequence-to-Sequence Model
In the sequence-to-sequence model, paragraph and question pairs are packed into an ordered sequence $x$ with a special separator in between. To indicate answers in paragraphs, we introduce token type embeddings which can also be used to distinguish questions from paragraphs in sequence-to-sequence model. As we can see in Figure 2 , the token type can be answer (A), paragraph (P), or question (Q). For a given token, we construct the input representation $\mathbf {e}_i$ by summing the corresponding word embeddings, character embeddings and token type embeddings. Here characters are embedded by an embedding matrix followed by a max pooling layer.
We apply a single-layer bi-directional recurrent neural network with long short-term memory units (LSTM; BIBREF36 ) to produce encoder hidden states $\mathbf {h}_i=\textrm {LSTM}(\mathbf {h}_{i-1}, \mathbf {e}_i)$ . On each decoding step $t$ , the hidden states of the decoder (a single-layer unidirectional LSTM network) are computed by $\mathbf {s}_t=\textrm {LSTM}(\mathbf {s}_{t-1}, [\mathbf {y}_{t-1}; \mathbf {c}_{t-1}])$ , where $\mathbf {y}_{t-1}$ is the word embedding of the previously predicted token and $\mathbf {c}_{t-1}$ is the encoder context vector of the previous step. Besides, we use an attention mechanism to summarize the encoder-side information into $\mathbf {c}_{t}$ for the current step. The attention distribution $\gamma _t$ over source words is computed as in BIBREF37 :
$$score(\mathbf {h}_i, \mathbf {s}_t)=\mathbf {h}_i^{T}\mathbf {W}_\gamma \mathbf {s}_t$$
$$\gamma _{i,t}=\exp (score(\mathbf {h}_i,\mathbf {s}_t)) / Z_t$$
$$\mathbf {c}_t=\sum _{i}^{|x|}\gamma _{i,t} \mathbf {h}_i$$
where $Z_t = {\sum _{k}^{|x|}\exp (score(\mathbf {h}_k,\mathbf {s}_t))}$ , and $\mathbf {W}_\gamma $ in the score function is a learnable parameter.
Next, $\mathbf {s}_t$ is concatenated with $\mathbf {c}_t$ to produce the vocabulary distribution $P_{v}$ :
$$P_{v}=\textrm {softmax}(\mathbf {W}_v[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{v})$$ (Eq. 4)
where $\mathbf {W}_v$ and $\mathbf {b}_{v}$ are learnable parameters. Copy mechanism BIBREF10 is incorporated to directly copy words from inputs, because words in paragraphs or source questions are of great value for unanswerable question generation. Specifically, we use $\mathbf {s}_t$ and $\mathbf {c}_t$ to produce a gating probability $g_t$ :
$$g_t=\sigma (\mathbf {W}_g[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{g})$$ (Eq. 5)
where $\mathbf {W}_g$ and $\mathbf {b}_{g}$ are learnable parameters. The gate $g_t$ determines whether generating a word from the vocabulary or copying a word from inputs. Finally, we obtain the probability of generating $\tilde{q}_t$ by:
$$P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)=g_t P_{v}(\tilde{q}_t) + (1-g_t)\sum _{i \in \zeta _{\tilde{q}_t}}\hat{\gamma }_{i,t} \nonumber $$ (Eq. 6)
where $\zeta _{\tilde{q}_t}$ denotes all the occurrences of $\tilde{q}_t$ in the inputs, and the copying score $\hat{\gamma }_t$ is computed in the same way as the attention scores $\gamma _t$ (see the attention equations in the sequence-to-sequence model) but with different parameters.
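The mixing of the generation and copy distributions (Eq. 4-6) can be sketched as follows; the tensor shapes and batching convention are our assumptions rather than details from the paper.

```python
import torch

def final_distribution(p_vocab, gate, copy_attn, src_ids):
    # p_vocab:   (batch, vocab_size) generation distribution from Eq. 4
    # gate:      (batch, 1)          generation probability g_t from Eq. 5
    # copy_attn: (batch, src_len)    normalized copy scores over source tokens
    # src_ids:   (batch, src_len)    vocabulary ids of source tokens (LongTensor)
    copy_dist = torch.zeros_like(p_vocab)
    copy_dist.scatter_add_(1, src_ids, copy_attn)  # merge repeated source tokens (Eq. 6)
    return gate * p_vocab + (1.0 - gate) * copy_dist
```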
Pair-to-Sequence Model
Paragraph and question interactions play a vitally important role in machine reading comprehension. The interactions make the paragraph and question aware of each other and help to predict the answer more precisely. Therefore we propose a pair-to-sequence model, which conducts attention-based interactions in the encoder and subsequently decodes from the two resulting series of representations.
In the pair-to-sequence model, the paragraph and question are embedded as in the sequence-to-sequence model, but encoded separately by weight-shared bi-directional LSTM networks, yielding $\mathbf {h}_i^p=\textrm {LSTM}(\mathbf {h}_{i-1}^p, \mathbf {e}_{i}^p)$ as paragraph encodings and $\mathbf {h}_j^q=\textrm {LSTM}(\mathbf {h}_{j-1}^q, \mathbf {e}_{j}^q)$ as question encodings. The same attention mechanism as in the sequence-to-sequence model is used in the following interaction layer to produce question-aware paragraph representations $\hat{\mathbf {h}}_i^p$:
$$\alpha _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_i$$
$$\tilde{\mathbf {h}}_i^p=\sum _{j=1}^{|q|}\alpha _{i,j}\mathbf {h}_j^q$$
$$\hat{\mathbf {h}}_i^p=\tanh (\mathbf {W}_p[\mathbf {h}_i^p;\tilde{\mathbf {h}}_i^p] + \mathbf {b}_p)$$
where $Z_i=\sum _{k=1}^{|q|}\exp (score(\mathbf {h}_i^p,\mathbf {h}_k^q))$ , and $\mathbf {W}_p$ and $\mathbf {b}_p$ are learnable parameters. Similarly, the paragraph-aware question representations $\hat{\mathbf {h}}_j^q$ are produced by:
$$\beta _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_j$$
$$\tilde{\mathbf {h}}_j^q=\sum _{i=1}^{|p|}\beta _{i,j}\mathbf {h}_i^p$$
$$\hat{\mathbf {h}}_j^q=\tanh (\mathbf {W}_q[\mathbf {h}_j^q;\tilde{\mathbf {h}}_j^q] + \mathbf {b}_q)$$
where $Z_j=\sum _{k=1}^{|p|}\exp (score(\mathbf {h}_k^p,\mathbf {h}_j^q))$ , and $\mathbf {W}_q$ and $\mathbf {b}_q$ are learnable parameters.
Accordingly, the decoder now takes the paragraph context $\mathbf {c}^p_{t-1}$ and the question context $\mathbf {c}^q_{t-1}$ as encoder context, computed in the same way as $\mathbf {c}_t$ in the sequence-to-sequence model, to update the decoder hidden states $\mathbf {s}_t=\textrm {LSTM}(\mathbf {s}_{t-1},[\mathbf {y}_{t-1};\mathbf {c}^p_{t-1};\mathbf {c}^q_{t-1}])$ and predict tokens. The copy mechanism is also adopted as described before, and copying words from both the paragraph and the question is viable.
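A sketch of this interaction layer is given below; the tanh activation, hidden size, and single-example (unbatched) shapes are assumptions we make for illustration, not details confirmed by the paper.

```python
import torch
import torch.nn as nn

class InteractionLayer(nn.Module):
    # Bi-directional attention between paragraph and question encodings, mirroring
    # the interaction equations above; tanh and the hidden size are assumptions.
    def __init__(self, hidden=300):
        super().__init__()
        self.w_score = nn.Parameter(torch.randn(hidden, hidden) * 0.01)
        self.proj_p = nn.Linear(2 * hidden, hidden)
        self.proj_q = nn.Linear(2 * hidden, hidden)

    def forward(self, h_p, h_q):
        # h_p: (p_len, hidden) paragraph encodings; h_q: (q_len, hidden) question encodings
        scores = h_p @ self.w_score @ h_q.t()   # score(h_i^p, h_j^q)
        alpha = torch.softmax(scores, dim=1)    # normalize over question positions
        beta = torch.softmax(scores, dim=0)     # normalize over paragraph positions
        q_aware_p = torch.tanh(self.proj_p(torch.cat([h_p, alpha @ h_q], dim=-1)))
        p_aware_q = torch.tanh(self.proj_q(torch.cat([h_q, beta.t() @ h_p], dim=-1)))
        return q_aware_p, p_aware_q
```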
Training and Inference
The training objective is to minimize the negative log-likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$:
$$\mathcal {L}=-\sum _{(\tilde{q},q,p,a)\in \mathcal {D}}\log P(\tilde{q}|q,p,a;\theta )$$
where $\mathcal {D}$ is the training corpus and $\theta $ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective.
During inference, the unanswerable question for question answering pair $(q,p,a)$ is obtained via $\textrm {argmax}_{q^{\prime }}P(q^{\prime }|q,p,a)$ , where $q^{\prime }$ represents candidate outputs. Beam search is used to avoid iterating over all possible outputs.
Experiments
We conduct experiments on the SQuAD 2.0 dataset BIBREF5 . The extractive machine reading benchmark contains about $100,000$ answerable questions and over $50,000$ crowdsourced unanswerable questions towards Wikipedia paragraphs. Crowdworkers are requested to craft unanswerable questions that are relevant to the given paragraph. Moreover, for each unanswerable question, a plausible answer span is annotated, which indicates the incorrect answer obtained by only relying on type-matching heuristics. Both answers and plausible answers are text spans in the paragraphs.
Unanswerable Question Generation
We use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. An aligned pair is shown in Figure 1 . As to the spans that correspond to multiple answerable and unanswerable questions, we sort the pairs by Levenshtein distance BIBREF38 and keep the pair with the minimum distance, and make sure that each question is only paired once.
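The alignment procedure can be sketched as follows; the data structures and helper names are ours, and the edit distance is computed at the token level as an assumption.

```python
def levenshtein(a, b):
    # Token-level edit distance via the standard dynamic program.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def align_pairs(answerable, unanswerable):
    # Both arguments map (paragraph_id, answer_span) -> lists of tokenized questions.
    candidates, pairs, used_a, used_u = [], [], set(), set()
    for key in answerable.keys() & unanswerable.keys():
        for qa in answerable[key]:
            for qu in unanswerable[key]:
                candidates.append((levenshtein(qa, qu), key, tuple(qa), tuple(qu)))
    for dist, key, qa, qu in sorted(candidates, key=lambda c: c[0]):
        if (key, qa) in used_a or (key, qu) in used_u:
            continue  # each question is paired at most once
        used_a.add((key, qa))
        used_u.add((key, qu))
        pairs.append((key, list(qa), list(qu), dist))
    return pairs
```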
We obtain $20,240$ aligned pairs from the SQuAD 2.0 dataset in total. The Levenshtein distance between the answerable and unanswerable questions in these pairs is $3.5$ on average. Specifically, the $17,475$ pairs extracted from the SQuAD 2.0 training set are used to train generation models. Since the SQuAD 2.0 test set is hidden, we randomly sample 46 articles from the SQuAD 2.0 training set with $1,805$ ( $\sim $ 10%) pairs as a holdout set and evaluate generation models with the $2,765$ pairs extracted from the SQuAD 2.0 development set.
We implement generation models upon OpenNMT BIBREF39 . We preprocess the corpus with the spaCy toolkit for tokenization and sentence segmentation. We lowercase tokens and build the vocabulary on the SQuAD 2.0 training set with a word frequency threshold of 9 to remove most noisy tokens introduced in data collection and tokenization. We set the word, character and token type embedding dimensions to 300. We use the glove.840B.300d pre-trained embeddings BIBREF40 to initialize word embeddings, and update them further during training. Both encoder and decoder share the same vocabulary and word embeddings. The hidden state size of the LSTM network is 150. Dropout probability is set to $0.2$ . The data are shuffled and split into mini-batches of size 32 for training. The model is optimized with Adagrad BIBREF41 with an initial learning rate of $0.15$ . During inference, the beam size is 5. We prohibit producing unknown words by setting the score of the <unk> token to -inf. We filter out beam outputs that make no difference to the input question.
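For concreteness, a minimal preprocessing sketch is shown below; the specific spaCy pipeline name and the inclusive treatment of the frequency threshold are our assumptions.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # an English pipeline; the exact model choice is ours

def tokenize_and_build_vocab(texts, min_freq=9):
    tokenized, counts = [], Counter()
    for text in texts:
        tokens = [tok.text.lower() for tok in nlp(text)]  # spaCy tokenization, lowercased
        tokenized.append(tokens)
        counts.update(tokens)
    # Keep words at or above the frequency threshold to drop noisy tokens.
    vocab = {"<unk>": 0, "<s>": 1, "</s>": 2}
    for word, freq in counts.most_common():
        if freq >= min_freq:
            vocab.setdefault(word, len(vocab))
    return tokenized, vocab
```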
The generation quality is evaluated using three automatic evaluation metrics: BLEU BIBREF42 , ROUGE BIBREF43 and GLEU BIBREF44 . BLEU is a commonly used metric in machine translation that computes n-gram precisions over references. Recall-oriented ROUGE metric is widely adopted in summarization, and ROUGE-L measures longest common subsequence between system outputs and references. GLEU is a variant of BLEU with the modification that penalizes system output n-grams that present in input but absent from the reference. This makes GLEU a preferable metric for tasks with subtle but critical differences in a monolingual setting as in our unanswerable question generation task.
We also conduct human evaluation on 100 samples in three criteria: (1) unanswerability, which indicates whether the question is unanswerable or not; (2) relatedness, which measures semantic relatedness between the generated question and input question answering pair; (3) readability, which indicates the grammaticality and fluency. We ask three raters to score the generated questions in terms of relatedness and readability on a 1-3 scale (3 for the best) and determine the answerability in binary (1 for unanswerable). The raters are not aware of the question generation methods in advance.
Results of the automatic evaluation are shown in Table 1 . We find that the proposed pair-to-sequence model, which captures interactions between the paragraph and question, performs consistently better than the sequence-to-sequence model. Moreover, replacing the input paragraph with the answer sentence hurts model performance, which indicates that using the whole paragraph as context provides more helpful information for unanswerable question generation. We also try to generate unanswerable questions by only relying on answerable questions (see “-Paragraph”), or only on the paragraph (see “-Question”). Unsurprisingly, both ablation models obtain worse performance compared with the full model. These two ablation results also show that the input answerable question contributes more to performance than the input paragraph. We argue that the original answerable question provides more direct information, given that the average edit distance between the example pairs is $3.5$ . At last, we remove the copy mechanism, which restricts predicted tokens to the vocabulary. The results indicate the necessity of copying tokens from answerable questions and paragraphs to the outputs, which relieves the out-of-vocabulary problem.
Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TfIdf, which uses the input answerable question to retrieve similar questions towards other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair. Notice that being relevant is demonstrated to be important for data augmentation in further experiments on machine reading comprehension. Here pair-to-sequence model still outperforms sequence-to-sequence model in terms of all three metrics. But the differences in human evaluation are not as notable as in the automatic metrics.
As shown in Table 4 , we further randomly sample 100 system outputs to analyze the types of generated unanswerable questions. We borrow the types defined in BIBREF5 for SQuAD 2.0. We categorize the outputs with grammatical errors that make them hard to understand into Other. Samples that fall into Impossible Condition are mainly produced by non-entity substitution. We can see that models tend to generate unanswerable questions by inserting negation and swapping entities. These two types are also most commonly used when crowdworkers pose unanswerable questions according to answerable ones. We also find that the current models still have difficulties in utilizing antonyms and exclusion conditions, which could be improved by incorporating external resources.
In Figure 3 , we present a sample paragraph and its corresponding answerable questions and generated unanswerable questions. In the first example, the two models generate unanswerable questions by swapping the location entity “Victoria” with “texas” and inserting the negation word “never”, respectively. In the second example, the sequence-to-sequence model omits the condition “in Victoria” and yields an answerable question. The pair-to-sequence model inserts the negation “no longer” properly, which is not mentioned in the paragraph. In the third example, grammatical errors are found in one of the model outputs. The last example shows that inserting negation words in different positions (“n't public” versus “not in victoria”) can express different meanings. Such cases are critical for generated questions' answerability, which is hard to handle in a rule-based system.
Data Augmentation for Machine Reading Comprehension
We apply our automatically generated unanswerable questions as augmentation data to the following reading comprehension models:
BiDAF BIBREF2 is a benchmark model on extractive machine reading comprehension. Based on BiDAF, BIBREF45 propose the BiDAF-No-Answer model to predict the distribution of answer candidates and the probability of a question being unanswerable at the same time.
BIBREF29 propose the DocQA model to address document-level reading comprehension. The no-answer probability is also predicted jointly.
BERT BIBREF31 is the state-of-the-art model on unanswerable machine reading comprehension. We adopt its uncased version for fine-tuning. The batch sizes of BERT-base and BERT-large are set to 12 and 24 respectively. The remaining hyperparameters are kept unchanged, following the official instructions for fine-tuning BERT-Large on SQuAD 2.0.
We first generate unanswerable questions using the trained generation model. Specifically, we use the answerable questions in the SQuAD 2.0 training set, besides the ones aligned before, to generate unanswerable questions. Then we use the paragraph and answers of answerable questions along with the generated questions to construct training examples. In the end, we obtain augmentation data containing $69,090$ unanswerable examples.
We train question answering models with augmentation data in two separate phases. In the first phase, we train the models by combining the augmentation data and all $86,821$ SQuAD 2.0 answerable examples. Subsequently, we use the original SQuAD 2.0 training data alone to further fine-tune model parameters.
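A minimal sketch of this two-phase schedule is given below; the `train_epoch` callback, epoch counts and seed are placeholders, since the paper does not prescribe a specific training loop.

```python
import random

def two_phase_training(model, answerable_examples, generated_unanswerable,
                       original_squad2_train, train_epoch,
                       epochs_phase1=2, epochs_phase2=2, seed=13):
    """Phase 1: answerable SQuAD 2.0 examples + generated unanswerable ones.
    Phase 2: fine-tune on the original SQuAD 2.0 training data alone."""
    rng = random.Random(seed)

    phase1 = list(answerable_examples) + list(generated_unanswerable)
    for _ in range(epochs_phase1):
        rng.shuffle(phase1)
        train_epoch(model, phase1)

    phase2 = list(original_squad2_train)
    for _ in range(epochs_phase2):
        rng.shuffle(phase2)
        train_epoch(model, phase2)
    return model

if __name__ == "__main__":
    log = []
    two_phase_training(model=None,
                       answerable_examples=[{"id": 1}],
                       generated_unanswerable=[{"id": 2}],
                       original_squad2_train=[{"id": 1}, {"id": 3}],
                       train_epoch=lambda m, data: log.append(len(data)))
    print(log)  # [2, 2, 2, 2]
```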
Exact Match (EM) and F1 are two metrics used to evaluate model performance. EM measures the percentage of predictions that match ground truth answers exactly. F1 measures the word overlap between the prediction and ground truth answers. We use pair-to-sequence model with answerable questions and paragraphs for data augmentation by default.
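For reference, a simplified version of the two metrics can be written as follows. The normalization mirrors the official SQuAD evaluation script, but this sketch handles a single reference answer per question only.

```python
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    pred, gold = normalize(prediction).split(), normalize(ground_truth).split()
    if not pred or not gold:
        return float(pred == gold)
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Victoria Department", "Victoria Department"),
      round(f1_score("the Victoria Department of Education", "Victoria Department"), 3))
```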
Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding $1.9$ absolute F1 improvement with BERT-base model and $1.7$ absolute F1 improvement with BERT-large model. Our submitted model obtains an EM score of $80.75$ and an F1 score of $83.85$ on the hidden test set.
As shown in Table 5 , the pair-to-sequence model proves to be a better option for generating augmentation data than the other three methods. Besides the sequence-to-sequence model, we use answerable questions to retrieve questions from other articles with TfIdf. The retrieved questions are of little help to improve the model, because they are less relevant to the paragraph, as shown in Table 3 . We refer to the rule-based method BIBREF28 that swaps entities and replaces words with antonyms as Rule. In comparison to the above methods, the pair-to-sequence model yields the largest improvement.
Results in Table 6 show that enlarging the size of augmentation data can further improve model performance, especially with the BERT-base model. We conduct experiments using two and three times the size of the base augmentation data (i.e., $69,090$ unanswerable questions). We generate multiple unanswerable questions for each answerable question by using beam search. Because we only generate unanswerable questions, the resulting data imbalance could limit the improvement gained from incorporating more augmentation data.
Conclusions
In this paper, we propose to generate unanswerable questions as a means of data augmentation for machine reading comprehension. We produce relevant unanswerable questions by editing answerable questions and conditioning on the corresponding paragraph. A pair-to-sequence model is introduced in order to capture the interactions between question and paragraph. We also present a way to construct training data for unanswerable question generation models. Both automatic and human evaluations show that the proposed model consistently outperforms the sequence-to-sequence baseline. The results on the SQuAD 2.0 dataset show that our generated unanswerable questions can help to improve multiple reading comprehension models. As for future work, we would like to enhance the ability to utilize antonyms for unanswerable question generation by leveraging external resources.
Acknowledgments
We thank anonymous reviewers for their helpful comments. Qin and Liu were supported by National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772156. | is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer |
58a00ca123d67b9be55021493384c0acef4c568d | 58a00ca123d67b9be55021493384c0acef4c568d_0 | Q: How do they ensure the generated questions are unanswerable?
Text: Introduction
Extractive reading comprehension BIBREF0 , BIBREF1 has attracted great attention from both research and industry in recent years. End-to-end neural models BIBREF2 , BIBREF3 , BIBREF4 have achieved remarkable performance on the task if answers are assumed to be in the given paragraph. Nonetheless, the current systems are still not good at deciding whether no answer is present in the context BIBREF5 . For unanswerable questions, the systems are supposed to abstain from answering rather than making unreliable guesses, which is an embodiment of language understanding ability.
We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6 . In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model.
To create training data for unanswerable question generation, we use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. As shown in Figure 1 , the answerable and unanswerable questions of a paragraph are aligned through the text span “Victoria Department of Education” for being both the answer and plausible answer. These two questions are lexically similar and both asked with the same answer type in mind. In this way, we obtain the data with which the models can learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc. Consequently, we can generate a mass of unanswerable questions with existing large-scale machine reading comprehension datasets.
Inspired by the neural reading comprehension models BIBREF7 , BIBREF8 , we introduce a pair-to-sequence model to better capture the interactions between questions and paragraphs. The proposed model first encodes input question and paragraph separately, and then conducts attention-based matching to make them aware of each other. Finally, the context-aware representations are used to generate outputs. To facilitate the use of context words during the generation process, we also incorporate the copy mechanism BIBREF9 , BIBREF10 .
Experimental results on the unanswerable question generation task show that the pair-to-sequence model generates consistently better results over the sequence-to-sequence baseline and performs better with long paragraphs than with short answer sentences. Further experimental results show that the generated unanswerable questions can improve multiple machine reading comprehension models. Even using BERT fine-tuning as a strong reading comprehension model, we can still obtain a $1.9$ % absolute improvement of F1 score with the BERT-base model and $1.7$ % absolute F1 improvement with the BERT-large model.
Related Work
Machine Reading Comprehension (MRC) Various large-scale datasets BIBREF0 , BIBREF1 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 have spurred rapid progress on machine reading comprehension in recent years. SQuAD BIBREF1 is an extractive benchmark whose questions and answers spans are annotated by humans. Neural reading comprehension systems BIBREF14 , BIBREF2 , BIBREF3 , BIBREF15 , BIBREF8 , BIBREF16 , BIBREF4 , BIBREF17 have outperformed humans on this task in terms of automatic metrics. The SQuAD 2.0 dataset BIBREF5 extends SQuAD with more than $50,000$ crowdsourced unanswerable questions. So far, neural reading comprehension models still fall behind humans on SQuAD 2.0. Abstaining from answering when no answer can be inferred from the given document does require more understanding than barely extracting an answer.
Question Generation for MRC In recent years, there has been an increasing interest in generating questions for reading comprehension. BIBREF18 show that neural models based on the encoder-decoder framework can generate significantly better questions than rule-based systems BIBREF19 . To generate answer-focused questions, one can simply indicate the answer positions in the context with extra features BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . BIBREF25 and BIBREF26 separate answer representations for further matching. BIBREF27 introduce a latent variable for capturing variability and an observed variable for controlling question types. In summary, the above mentioned systems aim to generate answerable questions with certain context. On the contrary, our goal is to generate unanswerable questions.
Adversarial Examples for MRC To evaluate the language understanding ability of pre-trained systems, BIBREF28 construct adversarial examples by adding distractor sentences that do not contradict question answering for humans to the paragraph. BIBREF29 and BIBREF30 use questions to retrieve paragraphs that do not contain the answer as adversarial examples. BIBREF5 create unanswerable questions through rigid rules, which swap entities, numbers and antonyms of answerable questions. It has been shown that adversarial examples generated by rule-based systems are much easier to detect than ones in the SQuAD 2.0 dataset.
Data Augmentation for MRC Several attempts have been made to augment training data for machine reading comprehension. We categorize these works according to the type of the augmentation data: external data source, paragraphs or questions. BIBREF31 fine-tune BERT on the SQuAD dataset jointly with another dataset, TriviaQA BIBREF12 . BIBREF4 paraphrase paragraphs with backtranslation. Another line of work focuses on generating answerable questions. BIBREF32 propose to generate questions based on unlabeled text for semi-supervised question answering. BIBREF33 propose a rule-based system to generate multiple-choice questions with candidate options upon the paragraphs. We aim at generating unanswerable questions as a means of data augmentation.
Problem Formulation
Given an answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ , we aim to generate unanswerable questions $\tilde{q}$ that fulfill certain requirements. First, it cannot be answered by paragraph $p$ . Second, it must be relevant to both the answerable question $q$ and paragraph $p$ , which prevents producing irrelevant questions. Third, it should ask for something of the same type as answer $a$ .
As shown in Figure 2 , we investigate two simple neural models built upon encoder-decoder architecture BIBREF34 , BIBREF35 to generate unanswerable questions. A sequence-to-sequence model takes the concatenated paragraph and question as input, and encodes the input in a sequential manner. A pair-to-sequence model is further introduced to capture the interactions between inputs. The decoder of two models generates unanswerable questions sequentially. We factorize the probability of generating the unanswerable question $P(\tilde{q}|q,p,a)$ as: $P(\tilde{q}|q,p,a)=\prod _{t=1}^{|\tilde{q}|}P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)$ , where $\tilde{q}_{<t}=\tilde{q}_1 \dots \tilde{q}_{t-1}$ .
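The factorization simply means that the log-probability of a generated question is the sum of per-step log-probabilities conditioned on the prefix and the inputs. A toy sketch, with made-up step distributions standing in for the decoder outputs:

```python
import math

def sequence_log_prob(step_distributions, target_tokens):
    """log P(q~ | q, p, a) = sum_t log P(q~_t | q~_<t, q, p, a).
    step_distributions[t] maps candidate tokens to probabilities at step t,
    already conditioned on the generated prefix and the inputs."""
    assert len(step_distributions) == len(target_tokens)
    return sum(math.log(dist[tok])
               for dist, tok in zip(step_distributions, target_tokens))

steps = [{"what": 0.6, "which": 0.4},
         {"year": 0.7, "date": 0.3},
         {"?": 0.9, ".": 0.1}]
print(sequence_log_prob(steps, ["what", "year", "?"]))
```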
Sequence-to-Sequence Model
In the sequence-to-sequence model, paragraph and question pairs are packed into an ordered sequence $x$ with a special separator in between. To indicate answers in paragraphs, we introduce token type embeddings which can also be used to distinguish questions from paragraphs in sequence-to-sequence model. As we can see in Figure 2 , the token type can be answer (A), paragraph (P), or question (Q). For a given token, we construct the input representation $\mathbf {e}_i$ by summing the corresponding word embeddings, character embeddings and token type embeddings. Here characters are embedded by an embedding matrix followed by a max pooling layer.
We apply a single-layer bi-directional recurrent neural network with long short-term memory units (LSTM; BIBREF36 ) to produce encoder hidden states $\mathbf {h}_i=\mathrm{BiLSTM}(\mathbf {h}_{i-1}, \mathbf {e}_i)$ . On each decoding step $t$ , the hidden states of the decoder (a single-layer unidirectional LSTM network) are computed by $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1}, [\mathbf {y}_{t-1}; \mathbf {c}_{t-1}])$ , where $\mathbf {y}_{t-1}$ is the word embedding of the previously predicted token and $\mathbf {c}_{t-1}$ is the encoder context vector of the previous step. Besides, we use an attention mechanism to summarize the encoder-side information into $\mathbf {c}_{t}$ for the current step. The attention distribution $\gamma _t$ over source words is computed as in BIBREF37 :
$score(\mathbf {h}_i , \mathbf {s}_t)=\mathbf {h}_i^{T}\mathbf {W}_\gamma \mathbf {s}_t$
$\gamma _{i,t}=\exp (score(\mathbf {h}_i,\mathbf {s}_t)) / Z_t$
$\mathbf {c}_t=\sum _{i}^{|x|}\gamma _{i,t} \mathbf {h}_i$
where $Z_t = {\sum _{k}^{|x|}\exp (score(\mathbf {h}_k,\mathbf {s}_t))}$ , and $\mathbf {W}_\gamma $ in the score function is a learnable parameter.
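A small NumPy sketch of this bilinear attention (score, softmax normalization, context vector) with toy dimensions is shown below; it only mirrors the equations above and is not the actual implementation.

```python
import numpy as np

def attention_context(H, s, W):
    """Bilinear attention: score_i = h_i^T W s_t, softmax-normalized into
    gamma, then context c_t = sum_i gamma_i * h_i.
    H: (n, d) encoder states, s: (d,) decoder state, W: (d, d)."""
    scores = H @ W @ s                      # (n,)
    scores -= scores.max()                  # numerical stability
    gamma = np.exp(scores) / np.exp(scores).sum()
    context = gamma @ H                     # (d,)
    return gamma, context

rng = np.random.default_rng(0)
H, s, W = rng.normal(size=(5, 8)), rng.normal(size=8), rng.normal(size=(8, 8))
gamma, c = attention_context(H, s, W)
print(gamma.round(3), c.shape)
```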
Next, $\mathbf {s}_t$ is concatenated with $\mathbf {c}_t$ to produce the vocabulary distribution $P_{v}$ :
$$P_{v}=\mathrm{softmax}(\mathbf {W}_v[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{v})$$ (Eq. 4)
where $\mathbf {W}_v$ and $\mathbf {b}_{v}$ are learnable parameters. Copy mechanism BIBREF10 is incorporated to directly copy words from inputs, because words in paragraphs or source questions are of great value for unanswerable question generation. Specifically, we use $\mathbf {s}_t$ and $\mathbf {c}_t$ to produce a gating probability $g_t$ :
$$g_t=\mathrm{sigmoid}(\mathbf {W}_g[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{g})$$ (Eq. 5)
where $\mathbf {W}_g$ and $\mathbf {b}_{g}$ are learnable parameters. The gate $g_t$ determines whether generating a word from the vocabulary or copying a word from inputs. Finally, we obtain the probability of generating $\tilde{q}_t$ by:
$$P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)=g_t P_{v}(\tilde{q}_t) + (1-g_t)\sum _{i \in \zeta _{\tilde{q}_t}}\hat{\gamma }_{i,t}$$ (Eq. 6)
where $\zeta _{\tilde{q}_t}$ denotes all the occurrences of $\tilde{q}_t$ in the inputs, and the copying score $\hat{\gamma }_t$ is computed in the same way as the attention scores $\gamma _t$ (see the attention computation in the Sequence-to-Sequence Model section) while using different parameters.
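The mixture in Eq. 6 can be sketched as follows with toy numbers; the vocabulary, source tokens and gate value are illustrative only.

```python
import numpy as np

def mix_copy_and_generate(p_vocab, copy_scores, gate, vocab, source_tokens):
    """Final distribution: g * P_vocab(w) + (1 - g) * sum of copy attention
    over the positions where w occurs in the source (Eq. 6 above)."""
    mixed = {w: gate * p for w, p in zip(vocab, p_vocab)}
    for pos, tok in enumerate(source_tokens):
        mixed[tok] = mixed.get(tok, 0.0) + (1.0 - gate) * copy_scores[pos]
    return mixed

vocab = ["what", "never", "opened", "texas"]
p_vocab = np.array([0.4, 0.3, 0.2, 0.1])
source = ["what", "opened", "victoria"]
copy_scores = np.array([0.5, 0.3, 0.2])      # attention over source positions
dist = mix_copy_and_generate(p_vocab, copy_scores, gate=0.7,
                             vocab=vocab, source_tokens=source)
print({w: round(p, 3) for w, p in dist.items()})
```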
Pair-to-Sequence Model
Paragraph and question interactions play a vitally important role in machine reading comprehension. The interactions make the paragraph and question aware of each other and help to predict the answer more precisely. Therefore we propose a pair-to-sequence model, conducting attention based interactions in encoder and subsequently decoding with two series of representations.
In the pair-to-sequence model, the paragraph and question are embedded as in the sequence-to-sequence model, but encoded separately by weight-shared bi-directional LSTM networks, yielding $\mathbf {h}_i^p=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^p, \mathbf {e}_{i}^p)$ as paragraph encodings and $\mathbf {h}_i^q=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^q, \mathbf {e}_{i}^q)$ as question encodings. The same attention mechanism as in the sequence-to-sequence model is used in the following interaction layer to produce question-aware paragraph representations $\hat{\mathbf {h}}_i^p$ :
$\alpha _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_i$
$\tilde{\mathbf {h}}_i^p=\sum _{j=1}^{|q|}\alpha _{i,j}\mathbf {h}_j^q$
$\hat{\mathbf {h}}_i^p=\tanh (\mathbf {W}_p[\mathbf {h}_i^p;\tilde{\mathbf {h}}_i^p] + \mathbf {b}_p)$
where $Z_i=\sum _{k=1}^{|q|}\exp (score(\mathbf {h}_i^p,\mathbf {h}_k^q))$ , and $\mathbf {W}_p$ and $\mathbf {b}_p$ are learnable parameters. Similarly, the paragraph-aware question representations $\hat{\mathbf {h}}_j^q$ are produced by:
$\beta _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_j$
$\tilde{\mathbf {h}}_j^q=\sum _{i=1}^{|p|}\beta _{i,j}\mathbf {h}_i^p$
$\hat{\mathbf {h}}_j^q=\tanh (\mathbf {W}_q[\mathbf {h}_j^q;\tilde{\mathbf {h}}_j^q] + \mathbf {b}_q)$
where $Z_j=\sum _{k=1}^{|p|}\exp (score(\mathbf {h}_k^p,\mathbf {h}_j^q))$ , and $\mathbf {W}_q$ and $\mathbf {b}_q$ are learnable parameters.
Accordingly, the decoder now takes the paragraph context $\mathbf {c}^p_{t-1}$ and question context $\mathbf {c}^q_{t-1}$ as encoder context, computed in the same way as $\mathbf {c}_t$ in the sequence-to-sequence model, to update decoder hidden states $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1},[\mathbf {y}_{t-1};\mathbf {c}^p_{t-1};\mathbf {c}^q_{t-1}])$ and predict tokens. The copy mechanism is also adopted as described before, and copying words from both the paragraph and question is viable.
Training and Inference
The training objective is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ : $\mathcal {L}=-\sum _{(\tilde{q},q,p,a)\in \mathcal {D}}\log P(\tilde{q}|q,p,a;\theta )$ , where $\mathcal {D}$ is the training corpus and $\theta $ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective.
During inference, the unanswerable question for question answering pair $(q,p,a)$ is obtained via $\textrm {argmax}_{q^{\prime }}P(q^{\prime }|q,p,a)$ , where $q^{\prime }$ represents candidate outputs. Beam search is used to avoid iterating over all possible outputs.
Experiments
We conduct experiments on the SQuAD 2.0 dataset BIBREF5 . The extractive machine reading benchmark contains about $100,000$ answerable questions and over $50,000$ crowdsourced unanswerable questions towards Wikipedia paragraphs. Crowdworkers are requested to craft unanswerable questions that are relevant to the given paragraph. Moreover, for each unanswerable question, a plausible answer span is annotated, which indicates the incorrect answer obtained by only relying on type-matching heuristics. Both answers and plausible answers are text spans in the paragraphs.
Unanswerable Question Generation
We use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. An aligned pair is shown in Figure 1 . As to the spans that correspond to multiple answerable and unanswerable questions, we sort the pairs by Levenshtein distance BIBREF38 and keep the pair with the minimum distance, and make sure that each question is only paired once.
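A rough sketch of this alignment step is given below. It uses a token-level Levenshtein distance and a greedy one-to-one matching per pivot span; the exact distance granularity and tie-breaking used by the authors are not specified, so this should be read as an approximation.

```python
from itertools import product

def edit_distance(a, b):
    """Token-level Levenshtein distance between two questions."""
    a, b = a.split(), b.split()
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

def align_pairs(answerable_by_span, unanswerable_by_span):
    """For each (plausible) answer span, pair the closest answerable and
    unanswerable questions; each question is used at most once."""
    pairs = []
    for span in set(answerable_by_span) & set(unanswerable_by_span):
        candidates = sorted(
            product(answerable_by_span[span], unanswerable_by_span[span]),
            key=lambda qs: edit_distance(*qs))
        used_a, used_u = set(), set()
        for q, uq in candidates:
            if q not in used_a and uq not in used_u:
                pairs.append((q, uq))
                used_a.add(q)
                used_u.add(uq)
    return pairs

ans = {"victoria department of education":
       ["who operates public schools in victoria ?"]}
una = {"victoria department of education":
       ["who no longer operates public schools in victoria ?"]}
print(align_pairs(ans, una))
```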
We obtain $20,240$ aligned pairs from the SQuAD 2.0 dataset in total. The Levenshtein distance between the answerable and unanswerable questions in each pair is $3.5$ on average. Specifically, the $17,475$ pairs extracted from the SQuAD 2.0 training set are used to train generation models. Since the SQuAD 2.0 test set is hidden, we randomly sample 46 articles from the SQuAD 2.0 training set with $1,805$ ( $\sim $ 10%) pairs as a holdout set and evaluate generation models with $2,765$ pairs extracted from the SQuAD 2.0 development set.
We implement generation models upon OpenNMT BIBREF39 . We preprocess the corpus with the spaCy toolkit for tokenization and sentence segmentation. We lowercase tokens and build the vocabulary on SQuAD 2.0 training set with word frequency threshold of 9 to remove most noisy tokens introduced in data collection and tokenization. We set word, character and token type embeddings dimension to 300. We use the glove.840B.300d pre-trained embeddings BIBREF40 to initialize word embeddings, and do further updates during training. Both encoder and decoder share the same vocabulary and word embeddings. The hidden state size of LSTM network is 150. Dropout probability is set to $0.2$ . The data are shuffled and split into mini-batches of size 32 for training. The model is optimized with Adagrad BIBREF41 with an initial learning rate of $0.15$ . During inference, the beam size is 5. We prohibit producing unknown words by setting the score of <unk> token to -inf. We filter the beam outputs that make no differences to the input question.
The generation quality is evaluated using three automatic evaluation metrics: BLEU BIBREF42 , ROUGE BIBREF43 and GLEU BIBREF44 . BLEU is a commonly used metric in machine translation that computes n-gram precisions over references. The recall-oriented ROUGE metric is widely adopted in summarization, and ROUGE-L measures the longest common subsequence between system outputs and references. GLEU is a variant of BLEU with the modification that it penalizes system output n-grams that are present in the input but absent from the reference. This makes GLEU a preferable metric for tasks with subtle but critical differences in a monolingual setting, as in our unanswerable question generation task.
We also conduct human evaluation on 100 samples in three criteria: (1) unanswerability, which indicates whether the question is unanswerable or not; (2) relatedness, which measures semantic relatedness between the generated question and input question answering pair; (3) readability, which indicates the grammaticality and fluency. We ask three raters to score the generated questions in terms of relatedness and readability on a 1-3 scale (3 for the best) and determine the answerability in binary (1 for unanswerable). The raters are not aware of the question generation methods in advance.
Results of the automatic evaluation are shown in Table 1 . We find that the proposed pair-to-sequence model that captures interactions between paragraph and question performs consistently better than the sequence-to-sequence model. Moreover, replacing the input paragraph with the answer sentence hurts model performance, which indicates that using the whole paragraph as context provides more helpful information to unanswerable question generation. We also try to generate unanswerable questions by only relying on answerable questions (see “-Paragraph”), or the paragraph (see “-Question”). Unsurprisingly, both ablation models obtain worse performance compared with the full model. These two ablation results also demonstrate that the input answerable question contributes more to performance than the input paragraph does. We argue that the original answerable question provides more direct information due to the fact that the average edit distance between the example pairs is $3.5$ . Finally, we remove the copy mechanism, so that predicted tokens are restricted to the vocabulary. The results indicate the necessity of copying tokens from answerable questions and paragraphs to outputs, which relieves the out-of-vocabulary problem.
Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TfIdf, which uses the input answerable question to retrieve similar questions towards other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair. Notice that being relevant is demonstrated to be important for data augmentation in further experiments on machine reading comprehension. Here pair-to-sequence model still outperforms sequence-to-sequence model in terms of all three metrics. But the differences in human evaluation are not as notable as in the automatic metrics.
As shown in Table 4 , we further randomly sample 100 system outputs to analyze the types of generated unanswerable questions. We borrow the types defined in BIBREF5 for SQuAD 2.0. We categorize the outputs with grammatical errors that make them hard to understand into Other. Samples that fall into Impossible Condition are mainly produced by non-entity substitution. We can see that models tend to generate unanswerable questions by inserting negation and swapping entities. These two types are also most commonly used when crowdworkers pose unanswerable questions according to answerable ones. We also find that the current models still have difficulties in utilizing antonyms and exclusion conditions, which could be improved by incorporating external resources.
In Figure 3 , we present a sample paragraph and its corresponding answerable questions and generated unanswerable questions. In the first example, the two models generate unanswerable questions by swapping the location entity “Victoria” with “texas” and inserting the negation word “never”, respectively. In the second example, the sequence-to-sequence model omits the condition “in Victoria” and yields an answerable question. The pair-to-sequence model inserts the negation “no longer” properly, which is not mentioned in the paragraph. In the third example, grammatical errors are found in one of the model outputs. The last example shows that inserting negation words in different positions (“n't public” versus “not in victoria”) can express different meanings. Such cases are critical for generated questions' answerability, which is hard to handle in a rule-based system.
Data Augmentation for Machine Reading Comprehension
We apply our automatically generated unanswerable questions as augmentation data to the following reading comprehension models:
BiDAF BIBREF2 is a benchmark model on extractive machine reading comprehension. Based on BiDAF, BIBREF45 propose the BiDAF-No-Answer model to predict the distribution of answer candidates and the probability of a question being unanswerable at the same time.
BIBREF29 propose the DocQA model to address document-level reading comprehension. The no-answer probability is also predicted jointly.
BERT BIBREF31 is the state-of-the-art model on unanswerable machine reading comprehension. We adopt its uncased version for fine-tuning. The batch sizes of BERT-base and BERT-large are set to 12 and 24 respectively. The remaining hyperparameters are kept unchanged, following the official instructions for fine-tuning BERT-Large on SQuAD 2.0.
We first generate unanswerable questions using the trained generation model. Specifically, we use the answerable questions in the SQuAD 2.0 training set, besides the ones aligned before, to generate unanswerable questions. Then we use the paragraph and answers of answerable questions along with the generated questions to construct training examples. In the end, we obtain augmentation data containing $69,090$ unanswerable examples.
We train question answering models with augmentation data in two separate phases. In the first phase, we train the models by combining the augmentation data and all $86,821$ SQuAD 2.0 answerable examples. Subsequently, we use the original SQuAD 2.0 training data alone to further fine-tune model parameters.
Exact Match (EM) and F1 are two metrics used to evaluate model performance. EM measures the percentage of predictions that match ground truth answers exactly. F1 measures the word overlap between the prediction and ground truth answers. We use pair-to-sequence model with answerable questions and paragraphs for data augmentation by default.
Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding $1.9$ absolute F1 improvement with BERT-base model and $1.7$ absolute F1 improvement with BERT-large model. Our submitted model obtains an EM score of $80.75$ and an F1 score of $83.85$ on the hidden test set.
As shown in Table 5 , the pair-to-sequence model proves to be a better option for generating augmentation data than the other three methods. Besides the sequence-to-sequence model, we use answerable questions to retrieve questions from other articles with TfIdf. The retrieved questions are of little help to improve the model, because they are less relevant to the paragraph, as shown in Table 3 . We refer to the rule-based method BIBREF28 that swaps entities and replaces words with antonyms as Rule. In comparison to the above methods, the pair-to-sequence model yields the largest improvement.
Results in Table 6 show that enlarging the size of augmentation data can further improve model performance, especially with the BERT-base model. We conduct experiments using two and three times the size of the base augmentation data (i.e., $69,090$ unanswerable questions). We generate multiple unanswerable questions for each answerable question by using beam search. Because we only generate unanswerable questions, the resulting data imbalance could limit the improvement gained from incorporating more augmentation data.
Conclusions
In this paper, we propose to generate unanswerable questions as a means of data augmentation for machine reading comprehension. We produce relevant unanswerable questions by editing answerable questions and conditioning on the corresponding paragraph. A pair-to-sequence model is introduced in order to capture the interactions between question and paragraph. We also present a way to construct training data for unanswerable question generation models. Both automatic and human evaluations show that the proposed model consistently outperforms the sequence-to-sequence baseline. The results on the SQuAD 2.0 dataset show that our generated unanswerable questions can help to improve multiple reading comprehension models. As for future work, we would like to enhance the ability to utilize antonyms for unanswerable question generation by leveraging external resources.
Acknowledgments
We thank anonymous reviewers for their helpful comments. Qin and Liu were supported by National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772156. | learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc |
199bdb3a6b1f7c89d95ea6c6ddbbb5eff484fa1f | 199bdb3a6b1f7c89d95ea6c6ddbbb5eff484fa1f_0 | Q: Does their approach require a dataset of unanswerable questions mapped to similar answerable questions?
Text: Introduction
Extractive reading comprehension BIBREF0 , BIBREF1 has attracted great attention from both research and industry in recent years. End-to-end neural models BIBREF2 , BIBREF3 , BIBREF4 have achieved remarkable performance on the task if answers are assumed to be in the given paragraph. Nonetheless, the current systems are still not good at deciding whether no answer is present in the context BIBREF5 . For unanswerable questions, the systems are supposed to abstain from answering rather than making unreliable guesses, which is an embodiment of language understanding ability.
We attack the problem by automatically generating unanswerable questions for data augmentation to improve question answering models. The generated unanswerable questions should not be too easy for the question answering model so that data augmentation can better help the model. For example, a simple baseline method is randomly choosing a question asked for another paragraph, and using it as an unanswerable question. However, it would be trivial to determine whether the retrieved question is answerable by using word-overlap heuristics, because the question is irrelevant to the context BIBREF6 . In this work, we propose to generate unanswerable questions by editing an answerable question and conditioning on the corresponding paragraph that contains the answer. So the generated unanswerable questions are more lexically similar and relevant to the context. Moreover, by using the answerable question as a prototype and its answer span as a plausible answer, the generated examples can provide more discriminative training signal to the question answering model.
To create training data for unanswerable question generation, we use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. As shown in Figure 1 , the answerable and unanswerable questions of a paragraph are aligned through the text span “Victoria Department of Education” for being both the answer and plausible answer. These two questions are lexically similar and both asked with the same answer type in mind. In this way, we obtain the data with which the models can learn to ask unanswerable questions by editing answerable ones with word exchanges, negations, etc. Consequently, we can generate a mass of unanswerable questions with existing large-scale machine reading comprehension datasets.
Inspired by the neural reading comprehension models BIBREF7 , BIBREF8 , we introduce a pair-to-sequence model to better capture the interactions between questions and paragraphs. The proposed model first encodes input question and paragraph separately, and then conducts attention-based matching to make them aware of each other. Finally, the context-aware representations are used to generate outputs. To facilitate the use of context words during the generation process, we also incorporate the copy mechanism BIBREF9 , BIBREF10 .
Experimental results on the unanswerable question generation task show that the pair-to-sequence model generates consistently better results over the sequence-to-sequence baseline and performs better with long paragraphs than with short answer sentences. Further experimental results show that the generated unanswerable questions can improve multiple machine reading comprehension models. Even using BERT fine-tuning as a strong reading comprehension model, we can still obtain a $1.9$ % absolute improvement of F1 score with the BERT-base model and $1.7$ % absolute F1 improvement with the BERT-large model.
Related Work
Machine Reading Comprehension (MRC) Various large-scale datasets BIBREF0 , BIBREF1 , BIBREF11 , BIBREF12 , BIBREF5 , BIBREF13 have spurred rapid progress on machine reading comprehension in recent years. SQuAD BIBREF1 is an extractive benchmark whose questions and answers spans are annotated by humans. Neural reading comprehension systems BIBREF14 , BIBREF2 , BIBREF3 , BIBREF15 , BIBREF8 , BIBREF16 , BIBREF4 , BIBREF17 have outperformed humans on this task in terms of automatic metrics. The SQuAD 2.0 dataset BIBREF5 extends SQuAD with more than $50,000$ crowdsourced unanswerable questions. So far, neural reading comprehension models still fall behind humans on SQuAD 2.0. Abstaining from answering when no answer can be inferred from the given document does require more understanding than barely extracting an answer.
Question Generation for MRC In recent years, there has been an increasing interest in generating questions for reading comprehension. BIBREF18 show that neural models based on the encoder-decoder framework can generate significantly better questions than rule-based systems BIBREF19 . To generate answer-focused questions, one can simply indicate the answer positions in the context with extra features BIBREF20 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . BIBREF25 and BIBREF26 separate answer representations for further matching. BIBREF27 introduce a latent variable for capturing variability and an observed variable for controlling question types. In summary, the above mentioned systems aim to generate answerable questions with certain context. On the contrary, our goal is to generate unanswerable questions.
Adversarial Examples for MRC To evaluate the language understanding ability of pre-trained systems, BIBREF28 construct adversarial examples by adding distractor sentences that do not contradict question answering for humans to the paragraph. BIBREF29 and BIBREF30 use questions to retrieve paragraphs that do not contain the answer as adversarial examples. BIBREF5 create unanswerable questions through rigid rules, which swap entities, numbers and antonyms of answerable questions. It has been shown that adversarial examples generated by rule-based systems are much easier to detect than ones in the SQuAD 2.0 dataset.
Data Augmentation for MRC Several attempts have been made to augment training data for machine reading comprehension. We categorize these works according to the type of the augmentation data: external data source, paragraphs or questions. BIBREF31 fine-tune BERT on the SQuAD dataset jointly with another dataset, TriviaQA BIBREF12 . BIBREF4 paraphrase paragraphs with backtranslation. Another line of work focuses on generating answerable questions. BIBREF32 propose to generate questions based on unlabeled text for semi-supervised question answering. BIBREF33 propose a rule-based system to generate multiple-choice questions with candidate options upon the paragraphs. We aim at generating unanswerable questions as a means of data augmentation.
Problem Formulation
Given an answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ , we aim to generate unanswerable questions $\tilde{q}$ that fulfill certain requirements. First, it cannot be answered by paragraph $p$ . Second, it must be relevant to both the answerable question $q$ and paragraph $p$ , which prevents producing irrelevant questions. Third, it should ask for something of the same type as answer $a$ .
As shown in Figure 2 , we investigate two simple neural models built upon encoder-decoder architecture BIBREF34 , BIBREF35 to generate unanswerable questions. A sequence-to-sequence model takes the concatenated paragraph and question as input, and encodes the input in a sequential manner. A pair-to-sequence model is further introduced to capture the interactions between inputs. The decoder of two models generates unanswerable questions sequentially. We factorize the probability of generating the unanswerable question $P(\tilde{q}|q,p,a)$ as: $P(\tilde{q}|q,p,a)=\prod _{t=1}^{|\tilde{q}|}P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)$ , where $\tilde{q}_{<t}=\tilde{q}_1 \dots \tilde{q}_{t-1}$ .
Sequence-to-Sequence Model
In the sequence-to-sequence model, paragraph and question pairs are packed into an ordered sequence $x$ with a special separator in between. To indicate answers in paragraphs, we introduce token type embeddings which can also be used to distinguish questions from paragraphs in sequence-to-sequence model. As we can see in Figure 2 , the token type can be answer (A), paragraph (P), or question (Q). For a given token, we construct the input representation $\mathbf {e}_i$ by summing the corresponding word embeddings, character embeddings and token type embeddings. Here characters are embedded by an embedding matrix followed by a max pooling layer.
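A minimal PyTorch sketch of this input layer is shown below; giving the character embeddings the same dimension as the word embeddings (so the three parts can be summed) is our simplification of the max-pooling description, and all sizes are toy values.

```python
import torch
import torch.nn as nn

class TokenRepresentation(nn.Module):
    """e_i = word embedding + max-pooled character embedding + token type
    embedding (A/P/Q), all of the same dimension so they can be summed."""
    def __init__(self, vocab_size, char_vocab_size, dim=300, n_types=3):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.char_emb = nn.Embedding(char_vocab_size, dim)
        self.type_emb = nn.Embedding(n_types, dim)

    def forward(self, word_ids, char_ids, type_ids):
        # word_ids, type_ids: (batch, seq); char_ids: (batch, seq, max_chars)
        chars = self.char_emb(char_ids).max(dim=2).values   # (batch, seq, dim)
        return self.word_emb(word_ids) + chars + self.type_emb(type_ids)

enc = TokenRepresentation(vocab_size=100, char_vocab_size=50, dim=8)
w = torch.randint(0, 100, (2, 5))
c = torch.randint(0, 50, (2, 5, 7))
t = torch.randint(0, 3, (2, 5))
print(enc(w, c, t).shape)   # torch.Size([2, 5, 8])
```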
We apply a single-layer bi-directional recurrent neural network with long short-term memory units (LSTM; BIBREF36 ) to produce encoder hidden states $\mathbf {h}_i=\mathrm{BiLSTM}(\mathbf {h}_{i-1}, \mathbf {e}_i)$ . On each decoding step $t$ , the hidden states of the decoder (a single-layer unidirectional LSTM network) are computed by $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1}, [\mathbf {y}_{t-1}; \mathbf {c}_{t-1}])$ , where $\mathbf {y}_{t-1}$ is the word embedding of the previously predicted token and $\mathbf {c}_{t-1}$ is the encoder context vector of the previous step. Besides, we use an attention mechanism to summarize the encoder-side information into $\mathbf {c}_{t}$ for the current step. The attention distribution $\gamma _t$ over source words is computed as in BIBREF37 :
$score(\mathbf {h}_i , \mathbf {s}_t)=\mathbf {h}_i^{T}\mathbf {W}_\gamma \mathbf {s}_t$
$\gamma _{i,t}=\exp (score(\mathbf {h}_i,\mathbf {s}_t)) / Z_t$
$\mathbf {c}_t=\sum _{i}^{|x|}\gamma _{i,t} \mathbf {h}_i$
where $Z_t = {\sum _{k}^{|x|}\exp (score(\mathbf {h}_k,\mathbf {s}_t))}$ , and $\mathbf {W}_\gamma $ in the score function is a learnable parameter.
Next, $\mathbf {s}_t$ is concatenated with $\mathbf {c}_t$ to produce the vocabulary distribution $P_{v}$ :
$$P_{v}=\mathrm{softmax}(\mathbf {W}_v[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{v})$$ (Eq. 4)
where $\mathbf {W}_v$ and $\mathbf {b}_{v}$ are learnable parameters. Copy mechanism BIBREF10 is incorporated to directly copy words from inputs, because words in paragraphs or source questions are of great value for unanswerable question generation. Specifically, we use $\mathbf {s}_t$ and $\mathbf {c}_t$ to produce a gating probability $g_t$ :
$$g_t=\mathrm{sigmoid}(\mathbf {W}_g[\mathbf {s}_t;\mathbf {c}_t] + \mathbf {b}_{g})$$ (Eq. 5)
where $\mathbf {W}_g$ and $\mathbf {b}_{g}$ are learnable parameters. The gate $g_t$ determines whether generating a word from the vocabulary or copying a word from inputs. Finally, we obtain the probability of generating $\tilde{q}_t$ by:
$$P(\tilde{q}_t|\tilde{q}_{<t},q,p,a)=g_t P_{v}(\tilde{q}_t) + (1-g_t)\sum _{i \in \zeta _{\tilde{q}_t}}\hat{\gamma }_{i,t}$$ (Eq. 6)
where $\zeta _{\tilde{q}_t}$ denotes all the occurrences of $\tilde{q}_t$ in the inputs, and the copying score $\hat{\gamma }_t$ is computed in the same way as the attention scores $\gamma _t$ (see the attention computation in the Sequence-to-Sequence Model section) while using different parameters.
Pair-to-Sequence Model
Paragraph and question interactions play a vitally important role in machine reading comprehension. The interactions make the paragraph and question aware of each other and help to predict the answer more precisely. Therefore we propose a pair-to-sequence model, conducting attention based interactions in encoder and subsequently decoding with two series of representations.
In the pair-to-sequence model, the paragraph and question are embedded as in the sequence-to-sequence model, but encoded separately by weight-shared bi-directional LSTM networks, yielding $\mathbf {h}_i^p=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^p, \mathbf {e}_{i}^p)$ as paragraph encodings and $\mathbf {h}_i^q=\mathrm{BiLSTM}(\mathbf {h}_{i-1}^q, \mathbf {e}_{i}^q)$ as question encodings. The same attention mechanism as in the sequence-to-sequence model is used in the following interaction layer to produce question-aware paragraph representations $\hat{\mathbf {h}}_i^p$ :
$\alpha _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_i$
$\tilde{\mathbf {h}}_i^p=\sum _{j=1}^{|q|}\alpha _{i,j}\mathbf {h}_j^q$
$\hat{\mathbf {h}}_i^p=\tanh (\mathbf {W}_p[\mathbf {h}_i^p;\tilde{\mathbf {h}}_i^p] + \mathbf {b}_p)$
where $Z_i=\sum _{k=1}^{|q|}\exp (score(\mathbf {h}_i^p,\mathbf {h}_k^q))$ , and $\mathbf {W}_p$ and $\mathbf {b}_p$ are learnable parameters. Similarly, the paragraph-aware question representations $\hat{\mathbf {h}}_j^q$ are produced by:
$\beta _{i,j}=\exp (score(\mathbf {h}_i^p,\mathbf {h}_j^q))/Z_j$
$\tilde{\mathbf {h}}_j^q=\sum _{i=1}^{|p|}\beta _{i,j}\mathbf {h}_i^p$
$\hat{\mathbf {h}}_j^q=\tanh (\mathbf {W}_q[\mathbf {h}_j^q;\tilde{\mathbf {h}}_j^q] + \mathbf {b}_q)$
where $Z_j=\sum _{k=1}^{|p|}\exp (score(\mathbf {h}_k^p,\mathbf {h}_j^q))$ , and $\mathbf {W}_q$ and $\mathbf {b}_q$ are learnable parameters.
Accordingly, the decoder now takes the paragraph context $\mathbf {c}^p_{t-1}$ and question context $\mathbf {c}^q_{t-1}$ as encoder context, computed in the same way as $\mathbf {c}_t$ in the sequence-to-sequence model, to update decoder hidden states $\mathbf {s}_t=\mathrm{LSTM}(\mathbf {s}_{t-1},[\mathbf {y}_{t-1};\mathbf {c}^p_{t-1};\mathbf {c}^q_{t-1}])$ and predict tokens. The copy mechanism is also adopted as described before, and copying words from both the paragraph and question is viable.
Training and Inference
The training objective is to minimize the negative likelihood of the aligned unanswerable question $\tilde{q}$ given the answerable question $q$ and its corresponding paragraph $p$ that contains the answer $a$ : $\mathcal {L}=-\sum _{(\tilde{q},q,p,a)\in \mathcal {D}}\log P(\tilde{q}|q,p,a;\theta )$ , where $\mathcal {D}$ is the training corpus and $\theta $ denotes all the parameters. Sequence-to-sequence and pair-to-sequence models are trained with the same objective.
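Assuming the decoder produces pre-softmax logits over the shared vocabulary, this objective can be sketched in PyTorch as a padded negative log-likelihood; the padding id and reduction choice are our assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

PAD_ID = 0

def generation_loss(logits, target_ids):
    """Negative log-likelihood of the aligned unanswerable question,
    summed over decoding steps and averaged over the batch.
    logits: (batch, steps, vocab); target_ids: (batch, steps), PAD_ID ignored."""
    batch, steps, vocab = logits.shape
    loss = F.cross_entropy(logits.reshape(batch * steps, vocab),
                           target_ids.reshape(batch * steps),
                           ignore_index=PAD_ID, reduction="sum")
    return loss / batch

logits = torch.randn(2, 4, 10)
targets = torch.tensor([[5, 2, 7, PAD_ID], [1, 3, PAD_ID, PAD_ID]])
print(generation_loss(logits, targets).item())
```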
During inference, the unanswerable question for question answering pair $(q,p,a)$ is obtained via $\textrm {argmax}_{q^{\prime }}P(q^{\prime }|q,p,a)$ , where $q^{\prime }$ represents candidate outputs. Beam search is used to avoid iterating over all possible outputs.
Experiments
We conduct experiments on the SQuAD 2.0 dataset BIBREF5 . The extractive machine reading benchmark contains about $100,000$ answerable questions and over $50,000$ crowdsourced unanswerable questions towards Wikipedia paragraphs. Crowdworkers are requested to craft unanswerable questions that are relevant to the given paragraph. Moreover, for each unanswerable question, a plausible answer span is annotated, which indicates the incorrect answer obtained by only relying on type-matching heuristics. Both answers and plausible answers are text spans in the paragraphs.
Unanswerable Question Generation
We use (plausible) answer spans in paragraphs as pivots to align pairs of answerable questions and unanswerable questions. An aligned pair is shown in Figure 1 . As to the spans that correspond to multiple answerable and unanswerable questions, we sort the pairs by Levenshtein distance BIBREF38 and keep the pair with the minimum distance, and make sure that each question is only paired once.
We obtain $20,240$ aligned pairs from the SQuAD 2.0 dataset in total. The Levenshtein distance between the answerable and unanswerable questions in each pair is $3.5$ on average. Specifically, the $17,475$ pairs extracted from the SQuAD 2.0 training set are used to train generation models. Since the SQuAD 2.0 test set is hidden, we randomly sample 46 articles from the SQuAD 2.0 training set with $1,805$ ( $\sim $ 10%) pairs as a holdout set and evaluate generation models with $2,765$ pairs extracted from the SQuAD 2.0 development set.
We implement generation models upon OpenNMT BIBREF39 . We preprocess the corpus with the spaCy toolkit for tokenization and sentence segmentation. We lowercase tokens and build the vocabulary on SQuAD 2.0 training set with word frequency threshold of 9 to remove most noisy tokens introduced in data collection and tokenization. We set word, character and token type embeddings dimension to 300. We use the glove.840B.300d pre-trained embeddings BIBREF40 to initialize word embeddings, and do further updates during training. Both encoder and decoder share the same vocabulary and word embeddings. The hidden state size of LSTM network is 150. Dropout probability is set to $0.2$ . The data are shuffled and split into mini-batches of size 32 for training. The model is optimized with Adagrad BIBREF41 with an initial learning rate of $0.15$ . During inference, the beam size is 5. We prohibit producing unknown words by setting the score of <unk> token to -inf. We filter the beam outputs that make no differences to the input question.
The generation quality is evaluated using three automatic evaluation metrics: BLEU BIBREF42 , ROUGE BIBREF43 and GLEU BIBREF44 . BLEU is a commonly used metric in machine translation that computes n-gram precisions over references. The recall-oriented ROUGE metric is widely adopted in summarization, and ROUGE-L measures the longest common subsequence between system outputs and references. GLEU is a variant of BLEU with the modification that it penalizes system output n-grams that are present in the input but absent from the reference. This makes GLEU a preferable metric for tasks with subtle but critical differences in a monolingual setting, as in our unanswerable question generation task.
We also conduct human evaluation on 100 samples in three criteria: (1) unanswerability, which indicates whether the question is unanswerable or not; (2) relatedness, which measures semantic relatedness between the generated question and input question answering pair; (3) readability, which indicates the grammaticality and fluency. We ask three raters to score the generated questions in terms of relatedness and readability on a 1-3 scale (3 for the best) and determine the answerability in binary (1 for unanswerable). The raters are not aware of the question generation methods in advance.
Results of the automatic evaluation are shown in Table 1 . We find that the proposed pair-to-sequence model that captures interactions between paragraph and question performs consistently better than the sequence-to-sequence model. Moreover, replacing the input paragraph with the answer sentence hurts model performance, which indicates that using the whole paragraph as context provides more helpful information to unanswerable question generation. We also try to generate unanswerable questions by only relying on answerable questions (see “-Paragraph”), or the paragraph (see “-Question”). Unsurprisingly, both ablation models obtain worse performance compared with the full model. These two ablation results also demonstrate that the input answerable question contributes more to performance than the input paragraph does. We argue that the original answerable question provides more direct information due to the fact that the average edit distance between the example pairs is $3.5$ . Finally, we remove the copy mechanism, so that predicted tokens are restricted to the vocabulary. The results indicate the necessity of copying tokens from answerable questions and paragraphs to outputs, which relieves the out-of-vocabulary problem.
Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TfIdf, which uses the input answerable question to retrieve similar questions towards other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair. Notice that being relevant is demonstrated to be important for data augmentation in further experiments on machine reading comprehension. Here pair-to-sequence model still outperforms sequence-to-sequence model in terms of all three metrics. But the differences in human evaluation are not as notable as in the automatic metrics.
As shown in Table 4 , we further randomly sample 100 system outputs to analyze the types of generated unanswerable questions. We borrow the types defined in BIBREF5 for SQuAD 2.0. We categorize the outputs with grammatical errors that make them hard to understand into Other. Samples that fall into Impossible Condition are mainly produced by non-entity substitution. We can see that models tend to generate unanswerable questions by inserting negation and swapping entities. These two types are also most commonly used when crowdworkers pose unanswerable questions according to answerable ones. We also find that the current models still have difficulties in utilizing antonyms and exclusion conditions, which could be improved by incorporating external resources.
In Figure 3 , we present a sample paragraph and its corresponding answerable questions and generated unanswerable questions. In the first example, the two models generate unanswerable questions by swapping the location entity “Victoria” with “texas” and inserting the negation word “never”, respectively. In the second example, the sequence-to-sequence model omits the condition “in Victoria” and yields an answerable question. The pair-to-sequence model inserts the negation “no longer” properly, which is not mentioned in the paragraph. In the third example, grammatical errors are found in one of the model outputs. The last example shows that inserting negation words in different positions (“n't public” versus “not in victoria”) can express different meanings. Such cases are critical for generated questions' answerability, which is hard to handle in a rule-based system.
Data Augmentation for Machine Reading Comprehension
We apply our automatically generated unanswerable questions as augmentation data to the following reading comprehension models:
BiDAF BIBREF2 is a benchmark model on extractive machine reading comprehension. Based on BiDAF, BIBREF45 propose the BiDAF-No-Answer model to predict the distribution of answer candidates and the probability of a question being unanswerable at the same time.
BIBREF29 propose the DocQA model to address document-level reading comprehension. The no-answer probability is also predicted jointly.
BERT BIBREF31 is the state-of-the-art model on unanswerable machine reading comprehension. We adopt its uncased version for fine-tuning. The batch sizes of BERT-base and BERT-large are set to 12 and 24 respectively. The remaining hyperparameters are kept unchanged, following the official instructions for fine-tuning BERT-Large on SQuAD 2.0.
We first generate unanswerable questions using the trained generation model. Specifically, we use the answerable questions in the SQuAD 2.0 training set, besides the ones aligned before, to generate unanswerable questions. Then we use the paragraph and answers of answerable questions along with the generated questions to construct training examples. In the end, we obtain augmentation data containing $69,090$ unanswerable examples.
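One plausible way to package such an example in the SQuAD 2.0 JSON layout is sketched below; the paragraph, generated question and field choices are illustrative toy values, not taken from the released augmentation data.

```python
def build_augmentation_example(paragraph, generated_question, source_answer):
    """Wrap a generated unanswerable question as a SQuAD 2.0-style training
    example: no gold answer, and the source question's answer span kept as
    a plausible answer. Field names follow the SQuAD 2.0 JSON layout."""
    return {
        "context": paragraph,
        "question": generated_question,
        "is_impossible": True,
        "answers": [],
        "plausible_answers": [source_answer],
    }

paragraph = "Public schools in Victoria are run by the Victoria Department of Education."
answer_text = "Victoria Department of Education"
example = build_augmentation_example(
    paragraph=paragraph,
    generated_question="Private schools in Victoria are run by whom?",
    source_answer={"text": answer_text, "answer_start": paragraph.find(answer_text)},
)
print(example["is_impossible"], example["plausible_answers"][0]["text"])
```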
We train question answering models with augmentation data in two separate phases. In the first phase, we train the models by combining the augmentation data and all $86,821$ SQuAD 2.0 answerable examples. Subsequently, we use the original SQuAD 2.0 training data alone to further fine-tune model parameters.
Exact Match (EM) and F1 are two metrics used to evaluate model performance. EM measures the percentage of predictions that match ground truth answers exactly. F1 measures the word overlap between the prediction and ground truth answers. We use pair-to-sequence model with answerable questions and paragraphs for data augmentation by default.
Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding $1.9$ absolute F1 improvement with BERT-base model and $1.7$ absolute F1 improvement with BERT-large model. Our submitted model obtains an EM score of $80.75$ and an F1 score of $83.85$ on the hidden test set.
As shown in Table 5 , the pair-to-sequence model proves to be a better option for generating augmentation data than the other three methods. Besides the sequence-to-sequence model, we use answerable questions to retrieve questions from other articles with TfIdf. The retrieved questions are of little help to improve the model, because they are less relevant to the paragraph, as shown in Table 3 . We refer to the rule-based method BIBREF28 that swaps entities and replaces words with antonyms as Rule. In comparison to the above methods, the pair-to-sequence model yields the largest improvement.
Results in Table 6 show that enlarging the size of augmentation data can further improve model performance, especially with the BERT-base model. We conduct experiments using two and three times the size of the base augmentation data (i.e., $69,090$ unanswerable questions). We generate multiple unanswerable questions for each answerable question by using beam search. Because we only generate unanswerable questions, the resulting data imbalance could limit the improvement gained from incorporating more augmentation data.
Conclusions
In this paper, we propose to generate unanswerable questions as a means of data augmentation for machine reading comprehension. We produce relevant unanswerable questions by editing answerable questions and conditioning on the corresponding paragraph. A pair-to-sequence model is introduced in order to capture the interactions between question and paragraph. We also present a way to construct training data for unanswerable question generation models. Both automatic and human evaluations show that the proposed model consistently outperforms the sequence-to-sequence baseline. The results on the SQuAD 2.0 dataset show that our generated unanswerable questions can help to improve multiple reading comprehension models. As for future work, we would like to enhance the ability to utilize antonyms for unanswerable question generation by leveraging external resources.
Acknowledgments
We thank anonymous reviewers for their helpful comments. Qin and Liu were supported by National Natural Science Foundation of China (NSFC) via grants 61632011 and 61772156. | Yes |
5ea87432b9166d6a4ab8806599cd2b1f9178622f | 5ea87432b9166d6a4ab8806599cd2b1f9178622f_0 | Q: What conclusions are drawn from these experiments?
Text: Introduction
Recent studies in the information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entities recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is predicting the word given its context, and in SG the task is predicting the context given the word.
Due to a significant increase of quality using deep learning methods together with word embeddings as the input layer for neural networks, many word vector sets have been created, using different corpora. The widest range of available word embeddings is available for English BIBREF7 and there were not so many options for less popular languages, e.g. Polish. There was a definite need within CLARIN-PL project and Sentimenti to increase the quality of NLP methods for Polish which were utilising available Polish word vectors BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 but only FastText modification of Skip-gram BIBREF9 was able to produce vectors for unknown words, based on character n-grams. The observation was that even using a sophisticated deep neural structure, the result strongly depends on the initial distributional representation. There was a need to build a massive corpus of Polish and create high-quality word vectors from that corpus. This work describes how we extended KGR7 1G corpus to become KGR10 with 4 billion words. Next, we present the different variants of word embeddings produced using this corpus. In the article about the recognition of named entities for Polish from the previous year, these embeddings were used in one of the three voting models to obtain the best results and the final system PolDeepNer BIBREF12 took the second place in PolEval2018 Task 2 BIBREF13 . In this article, we evaluated KGR10 FastText word embeddings in recognition of timexes.
Available word embeddings
At the time we were testing word embeddings for different applications, there were 2 most popular sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts, the process of creating word embeddings is described in article BIBREF10 and corpora used were National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl as the linguistic data source. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.
Table TABREF7 presents the most commonly used word embeddings in CLARIN-PL before the creation of our embeddings.
Building a larger corpus
KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by G4.19 Group. Due to the licences of documents in this corpus, this resource is not publicly available. Table TABREF8 contains KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains like: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.
plWordNet Corpus 10.0 (KGR10)
KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
In order to significantly expand the set of documents in KGR7, we utilised DMOZ (short for directory.mozilla.org) – a multilingual open content directory of World Wide Web links, also known as Open Directory Project (ODP). The website with directory was closed in 2017, but the database still can be found on the web. Polish part of this directory contains more than 30,000 links to Polish websites. We used these links as root URLs for CorpoGrabber, and we downloaded more than 7TB of HTML web pages. After the extraction of text from HTML pages, deduplication of documents (including texts from KGR7) and removing bad quality documents (containing more than 30% of words outside the Morfeusz BIBREF23 dictionary) the result is KGR10 corpus, which contains 4,015,569,051 tokens and 18,084,712 unique words. Due to component licenses, KGR10 corpus is not publicly available.
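The bad-quality filter can be pictured as a simple threshold check; in the sketch below the Morfeusz dictionary is approximated by a plain set of known word forms, which is an assumption rather than how MACA/Morfeusz are actually queried.

```python
# Sketch: drop documents in which more than 30% of tokens are unknown to the dictionary.
# `known_forms` approximates the Morfeusz lexicon as a set of word forms (assumption).

def oov_ratio(tokens, known_forms):
    tokens = [t for t in tokens if t.isalpha()]
    if not tokens:
        return 1.0
    unknown = sum(1 for t in tokens if t.lower() not in known_forms)
    return unknown / len(tokens)

def filter_documents(documents, known_forms, threshold=0.30):
    # documents: iterable of token lists; keeps only the "good quality" ones
    return [doc for doc in documents if oov_ratio(doc, known_forms) <= threshold]
```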
KGR10 word embeddings
We created new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations also for words which were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that using word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with embedding ID prefix EC in Table TABREF13 .
The word embeddings model used in PolDeepNer for recognition of timexes and named entities was EE1. It was built on plain KGR10. The dimension of the word embeddings is 300, the method of constructing vectors was Skip-gram BIBREF9 , and the number of negative samples for each positive example was 10.
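A comparable model can be trained with the fasttext Python bindings; the call below mirrors the reported settings (Skip-gram, dimension 300, 10 negative samples), while the corpus path is a placeholder and the character n-gram range is the library default rather than a value stated here.

```python
# Sketch: train a Skip-gram FastText model with subword information on a tokenised corpus.
# "kgr10_plain.txt" is a placeholder path; minn/maxn are library defaults, not paper values.
import fasttext

model = fasttext.train_unsupervised(
    "kgr10_plain.txt",   # one tokenised sentence per line
    model="skipgram",    # SG architecture, as for the EE1/EC-style models
    dim=300,             # embedding dimension reported above
    neg=10,              # negative samples per positive example
    minn=3, maxn=6,      # character n-gram range used for subword vectors
)

model.save_model("kgr10_sg_300_neg10.bin")
```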
Temporal expressions
Temporal expressions (henceforth timexes) tell us when something happens, how long something lasts, or how often something occurs. The correct interpretation of a timex often involves knowing the context. Usually, a person is aware of their location in time, i.e., they know what day, month and year it is, and whether it is the beginning or the end of week or month. Therefore, they refer to specific dates, using incomplete expressions such as 12 November, Thursday, the following week, after three days. The temporal context is often necessary to determine to which specific date and time timexes refer. These examples do not exhaust the complexity of the problem of recognising timexes.
TimeML BIBREF24 is a markup language for describing timexes that has been adapted to many languages. One of the best-known methods of recognition of timexes called HeidelTime BIBREF25 , which uses the TIMEX3 annotation standard, currently supports 13 languages (with the use of hand-crafted resources). PLIMEX is a specification for the description of Polish timexes. It is based on TIMEX3 used in TimeML. Classes proposed in TimeML are adapted, namely: date, time, duration, set.
Recognition of timexes
There are many methods for recognising timexes that are widely used in natural language engineering. For English (but not exclusively), in approaches based on supervised learning, sequence labelling methods are often used, especially Conditional Random Fields BIBREF26 . A review of the methods in the article BIBREF27 about the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions. As with the normalisation of timexes, the best results are still achieved with rule-based methods, but many new solutions have been introduced in the area of recognition. The best systems listed in BIBREF27 , called TIPSem BIBREF28 and ClearTK BIBREF29 , use CRFs for recognition, so initially we decided to apply the CRF-based approach for this task. The results were described in BIBREF30 , BIBREF31 .
In recent years, solutions based on deep neural networks, using word representation in the form of word embeddings, created with the use of large linguistic corpus, have begun to dominate in the field of recognition of word expressions. The most popular solutions include bidirectional long short-term memory neural networks (henceforth Bi-LSTM), often in combination with conditional random fields, as presented in the paper BIBREF32 dedicated to the recognition of proper names. For the Polish language, deep networks have also recently been used to recognise word expressions. In the issue of recognition of timexes, a bidirectional gated recurrent unit network (GRU) has been used BIBREF33 , BIBREF34 . GRU network is described in detail in the article BIBREF35 . In case of recognition of event descriptions using Bi-LSTM and Bi-GRU, where most of the Liner2 features were included in the input feature vector, better results were obtained BIBREF36 than for the Liner2 method (but without taking into account domain dictionaries). In last year's publication on the issue of named entities recognition using BiLSTM+CRF (together with G4.19 Group members), we received a statistically significant improvement in the quality of recognition compared to a solution using CRF only. The solution has been called PolDeepNer BIBREF12 .
Experiments and Results
Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.
We trained the final models using the train set and evaluated them using the test set, reproducing the analysis performed in articles BIBREF37 , BIBREF38 . The division is presented in Table TABREF16 . We used the BiLSTM+CRF classifier as in previous work BIBREF12 . We used precision, recall and F1 metrics from the classic NER task BIBREF12 , where a true positive system answer has the same boundaries and type as the annotation in the gold data set. We evaluated all 17 word embeddings models using these metrics. The results are presented in Tables TABREF17 , TABREF18 and TABREF19 .
We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 .
Conclusions
The analysis of the results from Tables TABREF17 , TABREF18 and TABREF19 shows that 12 of the 15 best results were obtained using the new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) prove that the best group of word embeddings is EC. The highest type F1-score was obtained for the EC1 model, built using the binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide vector representations for unknown words seems to be the most important factor. Also, the previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used the WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with the previous results obtained using only CRF BIBREF38 shows a significant improvement across all the tested metrics: a 3.6pp increase in strict F1-score, a 1.36pp increase in relaxed precision, a 5.61pp increase in relaxed recall and a 3.51pp increase in relaxed F1-score.
Acknowledgements
Work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and in part by the National Centre for Research and Development, Poland, under grant no POIR.01.01.01-00-0472/16. | best results were obtained using new word embeddings, best group of word embeddings is EC, The highest type F1-score was obtained for EC1 model, built using binary FastText Skip-gram method utilising subword information, ability of the model to provide vector representation for the unknown words seems to be the most important |
3af9156b95a4c2d67cc54b80b92cc7b918fea2a9 | 3af9156b95a4c2d67cc54b80b92cc7b918fea2a9_0 | Q: What experiments are presented?
Text: Introduction
Recent studies in the information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entities recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is predicting the word given its context, and in SG the task is predicting the context given the word.
Due to a significant increase of quality using deep learning methods together with word embeddings as the input layer for neural networks, many word vector sets have been created, using different corpora. The widest range of available word embeddings is available for English BIBREF7 and there were not so many options for less popular languages, e.g. Polish. There was a definite need within CLARIN-PL project and Sentimenti to increase the quality of NLP methods for Polish which were utilising available Polish word vectors BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 but only FastText modification of Skip-gram BIBREF9 was able to produce vectors for unknown words, based on character n-grams. The observation was that even using a sophisticated deep neural structure, the result strongly depends on the initial distributional representation. There was a need to build a massive corpus of Polish and create high-quality word vectors from that corpus. This work describes how we extended KGR7 1G corpus to become KGR10 with 4 billion words. Next, we present the different variants of word embeddings produced using this corpus. In the article about the recognition of named entities for Polish from the previous year, these embeddings were used in one of the three voting models to obtain the best results and the final system PolDeepNer BIBREF12 took the second place in PolEval2018 Task 2 BIBREF13 . In this article, we evaluated KGR10 FastText word embeddings in recognition of timexes.
Available word embeddings
At the time we were testing word embeddings for different applications, there were 2 most popular sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts, the process of creating word embeddings is described in article BIBREF10 and corpora used were National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl as the linguistic data source. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.
Table TABREF7 presents the most commonly used word embeddings in CLARIN-PL before the creation of our embeddings.
Building a larger corpus
KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by G4.19 Group. Due to the licences of documents in this corpus, this resource is not publicly available. Table TABREF8 contains KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains like: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.
plWordNet Corpus 10.0 (KGR10)
KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
In order to significantly expand the set of documents in KGR7, we utilised DMOZ (short for directory.mozilla.org) – a multilingual open content directory of World Wide Web links, also known as Open Directory Project (ODP). The website with directory was closed in 2017, but the database still can be found on the web. Polish part of this directory contains more than 30,000 links to Polish websites. We used these links as root URLs for CorpoGrabber, and we downloaded more than 7TB of HTML web pages. After the extraction of text from HTML pages, deduplication of documents (including texts from KGR7) and removing bad quality documents (containing more than 30% of words outside the Morfeusz BIBREF23 dictionary) the result is KGR10 corpus, which contains 4,015,569,051 tokens and 18,084,712 unique words. Due to component licenses, KGR10 corpus is not publicly available.
KGR10 word embeddings
We created new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations also for words which were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that using word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with embedding ID prefix EC in Table TABREF13 .
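The subword scheme itself is easy to illustrate: a word is wrapped in boundary markers and split into character n-grams, and the word vector is the sum of the n-gram vectors. The sketch below shows only the decomposition step, with the n-gram range set to the common FastText default rather than a value reported here.

```python
# Sketch: FastText-style character n-gram decomposition of a single word.
# The word vector is the sum of the vectors of these n-grams (plus the full-word entry, if present).

def char_ngrams(word, minn=3, maxn=6):
    wrapped = f"<{word}>"                 # boundary markers, as in FastText
    grams = []
    for n in range(minn, maxn + 1):
        for i in range(len(wrapped) - n + 1):
            grams.append(wrapped[i:i + n])
    return grams

print(char_ngrams("kotem", minn=3, maxn=4))
# ['<ko', 'kot', 'ote', 'tem', 'em>', '<kot', 'kote', 'otem', 'tem>']
```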
The word embeddings model used in PolDeepNer for recognition of timexes and named entities was EE1. It was built on plain KGR10. The dimension of the word embeddings is 300, the method of constructing vectors was Skip-gram BIBREF9 , and the number of negative samples for each positive example was 10.
Temporal expressions
Temporal expressions (henceforth timexes) tell us when something happens, how long something lasts, or how often something occurs. The correct interpretation of a timex often involves knowing the context. Usually, a person is aware of their location in time, i.e., they know what day, month and year it is, and whether it is the beginning or the end of week or month. Therefore, they refer to specific dates, using incomplete expressions such as 12 November, Thursday, the following week, after three days. The temporal context is often necessary to determine to which specific date and time timexes refer. These examples do not exhaust the complexity of the problem of recognising timexes.
TimeML BIBREF24 is a markup language for describing timexes that has been adapted to many languages. One of the best-known methods of recognition of timexes called HeidelTime BIBREF25 , which uses the TIMEX3 annotation standard, currently supports 13 languages (with the use of hand-crafted resources). PLIMEX is a specification for the description of Polish timexes. It is based on TIMEX3 used in TimeML. Classes proposed in TimeML are adapted, namely: date, time, duration, set.
Recognition of timexes
There are many methods for recognising timexes that are widely used in natural language engineering. For English (but not exclusively), in approaches based on supervised learning, sequence labelling methods are often used, especially Conditional Random Fields BIBREF26 . A review of the methods in the article BIBREF27 about the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions. As with the normalisation of timexes, the best results are still achieved with rule-based methods, but many new solutions have been introduced in the area of recognition. The best systems listed in BIBREF27 , called TIPSem BIBREF28 and ClearTK BIBREF29 , use CRFs for recognition, so initially we decided to apply the CRF-based approach for this task. The results were described in BIBREF30 , BIBREF31 .
In recent years, solutions based on deep neural networks, using word representation in the form of word embeddings, created with the use of large linguistic corpus, have begun to dominate in the field of recognition of word expressions. The most popular solutions include bidirectional long short-term memory neural networks (henceforth Bi-LSTM), often in combination with conditional random fields, as presented in the paper BIBREF32 dedicated to the recognition of proper names. For the Polish language, deep networks have also recently been used to recognise word expressions. In the issue of recognition of timexes, a bidirectional gated recurrent unit network (GRU) has been used BIBREF33 , BIBREF34 . GRU network is described in detail in the article BIBREF35 . In case of recognition of event descriptions using Bi-LSTM and Bi-GRU, where most of the Liner2 features were included in the input feature vector, better results were obtained BIBREF36 than for the Liner2 method (but without taking into account domain dictionaries). In last year's publication on the issue of named entities recognition using BiLSTM+CRF (together with G4.19 Group members), we received a statistically significant improvement in the quality of recognition compared to a solution using CRF only. The solution has been called PolDeepNer BIBREF12 .
Experiments and Results
Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.
We trained the final models using the train set and evaluated them using the test set, reproducing the analysis performed in articles BIBREF37 , BIBREF38 . The division is presented in Table TABREF16 . We used the BiLSTM+CRF classifier as in previous work BIBREF12 . We used precision, recall and F1 metrics from the classic NER task BIBREF12 , where a true positive system answer has the same boundaries and type as the annotation in the gold data set. We evaluated all 17 word embeddings models using these metrics. The results are presented in Tables TABREF17 , TABREF18 and TABREF19 .
We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 .
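The strict and relaxed measures can be summarised in code; the sketch below performs simplified span matching over (start, end, class) triples and is not the official evaluation tool from BIBREF27 .

```python
# Sketch: strict vs. relaxed matching of timex annotations.
# Each annotation is (start, end, cls) with character offsets; this simplifies the official scorer.

def strict_hits(system, gold):
    return sum(1 for ann in system if ann in set(gold))

def relaxed_hits(system, gold):
    hits = 0
    for s_start, s_end, _ in system:
        if any(s_start < g_end and g_start < s_end for g_start, g_end, _ in gold):
            hits += 1                      # any overlap counts, e.g. [Sunday] vs [Sunday morning]
    return hits

def prf(hits, n_system, n_gold):
    p = hits / n_system if n_system else 0.0
    r = hits / n_gold if n_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

system = [(10, 16, "date")]
gold = [(10, 24, "date")]
print(prf(strict_hits(system, gold), len(system), len(gold)))    # strict: boundaries must match exactly
print(prf(relaxed_hits(system, gold), len(system), len(gold)))   # relaxed: any overlap counts
```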
Conclusions
The analysis of the results from Tables TABREF17 , TABREF18 and TABREF19 shows that 12 of the 15 best results were obtained using the new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) prove that the best group of word embeddings is EC. The highest type F1-score was obtained for the EC1 model, built using the binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide vector representations for unknown words seems to be the most important factor. Also, the previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used the WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with the previous results obtained using only CRF BIBREF38 shows a significant improvement across all the tested metrics: a 3.6pp increase in strict F1-score, a 1.36pp increase in relaxed precision, a 5.61pp increase in relaxed recall and a 3.51pp increase in relaxed F1-score.
Acknowledgements
Work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and in part by the National Centre for Research and Development, Poland, under grant no POIR.01.01.01-00-0472/16. | identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set, Then we evaluated these results using more detailed measures for timexes |
7e328cc3cffa521e73f111d6796aaa9661c8eb07 | 7e328cc3cffa521e73f111d6796aaa9661c8eb07_0 | Q: What is specific about the specific embeddings?
Text: Introduction
Recent studies in the information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entities recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is predicting the word given its context, and in SG the task is predicting the context given the word.
Due to a significant increase of quality using deep learning methods together with word embeddings as the input layer for neural networks, many word vector sets have been created, using different corpora. The widest range of available word embeddings is available for English BIBREF7 and there were not so many options for less popular languages, e.g. Polish. There was a definite need within CLARIN-PL project and Sentimenti to increase the quality of NLP methods for Polish which were utilising available Polish word vectors BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 but only FastText modification of Skip-gram BIBREF9 was able to produce vectors for unknown words, based on character n-grams. The observation was that even using a sophisticated deep neural structure, the result strongly depends on the initial distributional representation. There was a need to build a massive corpus of Polish and create high-quality word vectors from that corpus. This work describes how we extended KGR7 1G corpus to become KGR10 with 4 billion words. Next, we present the different variants of word embeddings produced using this corpus. In the article about the recognition of named entities for Polish from the previous year, these embeddings were used in one of the three voting models to obtain the best results and the final system PolDeepNer BIBREF12 took the second place in PolEval2018 Task 2 BIBREF13 . In this article, we evaluated KGR10 FastText word embeddings in recognition of timexes.
Available word embeddings
At the time we were testing word embeddings for different applications, there were 2 most popular sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts, the process of creating word embeddings is described in article BIBREF10 and corpora used were National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl as the linguistic data source. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.
Table TABREF7 presents the most commonly used word embeddings in CLARIN-PL before the creation of our embeddings.
Building a larger corpus
KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by G4.19 Group. Due to the licences of documents in this corpus, this resource is not publicly available. Table TABREF8 contains KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains like: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.
plWordNet Corpus 10.0 (KGR10)
KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
In order to significantly expand the set of documents in KGR7, we utilised DMOZ (short for directory.mozilla.org) – a multilingual open content directory of World Wide Web links, also known as Open Directory Project (ODP). The website with directory was closed in 2017, but the database still can be found on the web. Polish part of this directory contains more than 30,000 links to Polish websites. We used these links as root URLs for CorpoGrabber, and we downloaded more than 7TB of HTML web pages. After the extraction of text from HTML pages, deduplication of documents (including texts from KGR7) and removing bad quality documents (containing more than 30% of words outside the Morfeusz BIBREF23 dictionary) the result is KGR10 corpus, which contains 4,015,569,051 tokens and 18,084,712 unique words. Due to component licenses, KGR10 corpus is not publicly available.
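Deduplication at this scale is typically based on document fingerprints; the sketch below shows only the simplest exact-duplicate variant built on hashing normalised text, as a stand-in for the dedicated tool cited as BIBREF20 rather than its actual algorithm.

```python
# Sketch: drop documents whose normalised text has already been seen (exact-duplicate removal).
# The cited deduplication tool is more sophisticated (e.g. shingling); this only illustrates the idea.
import hashlib

def fingerprint(text):
    normalised = " ".join(text.lower().split())          # collapse whitespace, ignore case
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

def deduplicate(documents):
    seen, unique = set(), []
    for doc in documents:
        fp = fingerprint(doc)
        if fp not in seen:
            seen.add(fp)
            unique.append(doc)
    return unique
```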
KGR10 word embeddings
We created new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations also for words which were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that using word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with embedding ID prefix EC in Table TABREF13 .
The word embeddings model used in PolDeepNer for recognition of timexes and named entities was EE1. It was built on plain KGR10. The dimension of the word embeddings is 300, the method of constructing vectors was Skip-gram BIBREF9 , and the number of negative samples for each positive example was 10.
Temporal expressions
Temporal expressions (henceforth timexes) tell us when something happens, how long something lasts, or how often something occurs. The correct interpretation of a timex often involves knowing the context. Usually, a person is aware of their location in time, i.e., they know what day, month and year it is, and whether it is the beginning or the end of week or month. Therefore, they refer to specific dates, using incomplete expressions such as 12 November, Thursday, the following week, after three days. The temporal context is often necessary to determine to which specific date and time timexes refer. These examples do not exhaust the complexity of the problem of recognising timexes.
TimeML BIBREF24 is a markup language for describing timexes that has been adapted to many languages. One of the best-known methods of recognition of timexes called HeidelTime BIBREF25 , which uses the TIMEX3 annotation standard, currently supports 13 languages (with the use of hand-crafted resources). PLIMEX is a specification for the description of Polish timexes. It is based on TIMEX3 used in TimeML. Classes proposed in TimeML are adapted, namely: date, time, duration, set.
Recognition of timexes
There are many methods for recognising timexes that are widely used in natural language engineering. For English (but not exclusively), in approaches based on supervised learning, sequence labelling methods are often used, especially Conditional Random Fields BIBREF26 . A review of the methods in the article BIBREF27 about the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions. As with the normalisation of timexes, the best results are still achieved with rule-based methods, but many new solutions have been introduced in the area of recognition. The best systems listed in BIBREF27 , called TIPSem BIBREF28 and ClearTK BIBREF29 , use CRFs for recognition, so initially we decided to apply the CRF-based approach for this task. The results were described in BIBREF30 , BIBREF31 .
In recent years, solutions based on deep neural networks, using word representation in the form of word embeddings, created with the use of large linguistic corpus, have begun to dominate in the field of recognition of word expressions. The most popular solutions include bidirectional long short-term memory neural networks (henceforth Bi-LSTM), often in combination with conditional random fields, as presented in the paper BIBREF32 dedicated to the recognition of proper names. For the Polish language, deep networks have also recently been used to recognise word expressions. In the issue of recognition of timexes, a bidirectional gated recurrent unit network (GRU) has been used BIBREF33 , BIBREF34 . GRU network is described in detail in the article BIBREF35 . In case of recognition of event descriptions using Bi-LSTM and Bi-GRU, where most of the Liner2 features were included in the input feature vector, better results were obtained BIBREF36 than for the Liner2 method (but without taking into account domain dictionaries). In last year's publication on the issue of named entities recognition using BiLSTM+CRF (together with G4.19 Group members), we received a statistically significant improvement in the quality of recognition compared to a solution using CRF only. The solution has been called PolDeepNer BIBREF12 .
Experiments and Results
Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.
We trained the final models using the train set and evaluated them using the test set, reproducing the analysis performed in articles BIBREF37 , BIBREF38 . The division is presented in Table TABREF16 . We used the BiLSTM+CRF classifier as in previous work BIBREF12 . We used precision, recall and F1 metrics from the classic NER task BIBREF12 , where a true positive system answer has the same boundaries and type as the annotation in the gold data set. We evaluated all 17 word embeddings models using these metrics. The results are presented in Tables TABREF17 , TABREF18 and TABREF19 .
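The tagging model can be sketched in a few lines of PyTorch; the version below stops at per-token tag scores, whereas the actual system places a CRF layer on top of the BiLSTM, and the layer sizes are illustrative rather than the reported configuration.

```python
# Sketch: BiLSTM tagger over pretrained word embeddings. A CRF decoding layer, as in
# BiLSTM+CRF, would replace the plain per-token argmax; sizes below are illustrative.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, emb_dim=300, hidden=256, num_tags=9):
        # num_tags=9: e.g. B/I tags for the four timex classes plus O
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)       # 2*hidden: forward + backward states

    def forward(self, embeddings):                         # (batch, seq_len, emb_dim)
        states, _ = self.lstm(embeddings)
        return self.proj(states)                           # (batch, seq_len, num_tags)

# toy run with random "FastText" vectors for a 12-token sentence
tagger = BiLSTMTagger()
scores = tagger(torch.randn(1, 12, 300))
print(scores.argmax(dim=-1))                               # predicted IOB-style tag ids
```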
We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 .
Conclusions
The analysis of the results from Tables TABREF17 , TABREF18 and TABREF19 shows that 12 of the 15 best results were obtained using the new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) prove that the best group of word embeddings is EC. The highest type F1-score was obtained for the EC1 model, built using the binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide vector representations for unknown words seems to be the most important factor. Also, the previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used the WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with the previous results obtained using only CRF BIBREF38 shows a significant improvement across all the tested metrics: a 3.6pp increase in strict F1-score, a 1.36pp increase in relaxed precision, a 5.61pp increase in relaxed recall and a 3.51pp increase in relaxed F1-score.
Acknowledgements
Work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and in part by the National Centre for Research and Development, Poland, under grant no POIR.01.01.01-00-0472/16. | predicting the word given its context |
80f19be1cbe1f0ec89bbafb9c5f7a8ded37881fb | 80f19be1cbe1f0ec89bbafb9c5f7a8ded37881fb_0 | Q: What embedding algorithm is used to build the embeddings?
Text: Introduction
Recent studies in the information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations provided the best results in various NLP tasks, including named entities recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simple ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is predicting the word given its context, and in SG the task is predicting the context given the word.
Due to a significant increase of quality using deep learning methods together with word embeddings as the input layer for neural networks, many word vector sets have been created, using different corpora. The widest range of available word embeddings is available for English BIBREF7 and there were not so many options for less popular languages, e.g. Polish. There was a definite need within CLARIN-PL project and Sentimenti to increase the quality of NLP methods for Polish which were utilising available Polish word vectors BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 but only FastText modification of Skip-gram BIBREF9 was able to produce vectors for unknown words, based on character n-grams. The observation was that even using a sophisticated deep neural structure, the result strongly depends on the initial distributional representation. There was a need to build a massive corpus of Polish and create high-quality word vectors from that corpus. This work describes how we extended KGR7 1G corpus to become KGR10 with 4 billion words. Next, we present the different variants of word embeddings produced using this corpus. In the article about the recognition of named entities for Polish from the previous year, these embeddings were used in one of the three voting models to obtain the best results and the final system PolDeepNer BIBREF12 took the second place in PolEval2018 Task 2 BIBREF13 . In this article, we evaluated KGR10 FastText word embeddings in recognition of timexes.
Available word embeddings
At the time we were testing word embeddings for different applications, there were 2 most popular sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts, the process of creating word embeddings is described in article BIBREF10 and corpora used were National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl as the linguistic data source. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.
Table TABREF7 presents the most commonly used word embeddings in CLARIN-PL before the creation of our embeddings.
Building a larger corpus
KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by G4.19 Group. Due to the licences of documents in this corpus, this resource is not publicly available. Table TABREF8 contains KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains like: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.
plWordNet Corpus 10.0 (KGR10)
KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
In order to significantly expand the set of documents in KGR7, we utilised DMOZ (short for directory.mozilla.org) – a multilingual open content directory of World Wide Web links, also known as Open Directory Project (ODP). The website with directory was closed in 2017, but the database still can be found on the web. Polish part of this directory contains more than 30,000 links to Polish websites. We used these links as root URLs for CorpoGrabber, and we downloaded more than 7TB of HTML web pages. After the extraction of text from HTML pages, deduplication of documents (including texts from KGR7) and removing bad quality documents (containing more than 30% of words outside the Morfeusz BIBREF23 dictionary) the result is KGR10 corpus, which contains 4,015,569,051 tokens and 18,084,712 unique words. Due to component licenses, KGR10 corpus is not publicly available.
KGR10 word embeddings
We created new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution based on embeddings of n-grams composing each word makes it possible to obtain FastText vector representations also for words which were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that using word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should also provide such improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – with embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with embedding ID prefix EC in Table TABREF13 .
The word embeddings model used in PolDeepNer for recognition of timexes and named entities was EE1. It was built on plain KGR10. The dimension of the word embeddings is 300, the method of constructing vectors was Skip-gram BIBREF9 , and the number of negative samples for each positive example was 10.
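Because of the subword encoding, such a model returns vectors even for inflected or unseen forms; the sketch below assumes a binary model file saved from the fasttext bindings, with a placeholder path and made-up query words.

```python
# Sketch: querying a subword-aware model for words that never occurred in the training corpus.
# "kgr10_sg_300.bin" is a placeholder path to a binary Skip-gram FastText model.
import fasttext

model = fasttext.load_model("kgr10_sg_300.bin")

for form in ["czasownik", "czasownikami", "czasownikowaty"]:   # base form, inflection, rare derivation
    vec = model.get_word_vector(form)          # built from character n-grams, even for OOV forms
    print(form, vec.shape)

# nearest neighbours can also be retrieved for unseen forms
print(model.get_nearest_neighbors("czasownikami", k=5))
```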
Temporal expressions
Temporal expressions (henceforth timexes) tell us when something happens, how long something lasts, or how often something occurs. The correct interpretation of a timex often involves knowing the context. Usually, a person is aware of their location in time, i.e., they know what day, month and year it is, and whether it is the beginning or the end of week or month. Therefore, they refer to specific dates, using incomplete expressions such as 12 November, Thursday, the following week, after three days. The temporal context is often necessary to determine to which specific date and time timexes refer. These examples do not exhaust the complexity of the problem of recognising timexes.
TimeML BIBREF24 is a markup language for describing timexes that has been adapted to many languages. One of the best-known methods of recognition of timexes called HeidelTime BIBREF25 , which uses the TIMEX3 annotation standard, currently supports 13 languages (with the use of hand-crafted resources). PLIMEX is a specification for the description of Polish timexes. It is based on TIMEX3 used in TimeML. Classes proposed in TimeML are adapted, namely: date, time, duration, set.
Recognition of timexes
There are many methods for recognising timexes that are widely used in natural language engineering. For English (but not exclusively), approaches based on supervised learning often use sequence labelling methods, especially Conditional Random Fields BIBREF26 . A review of the methods in the article BIBREF27 on the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions. While the best results for the normalisation of timexes are still achieved with rule-based methods, many new solutions have been introduced in the area of recognition. The best systems listed in BIBREF27 , called TIPSem BIBREF28 and ClearTK BIBREF29 , use CRFs for recognition, so initially we decided to apply the CRF-based approach to this task. The results were described in BIBREF30 , BIBREF31 .
In recent years, solutions based on deep neural networks, using word representation in the form of word embeddings, created with the use of large linguistic corpus, have begun to dominate in the field of recognition of word expressions. The most popular solutions include bidirectional long short-term memory neural networks (henceforth Bi-LSTM), often in combination with conditional random fields, as presented in the paper BIBREF32 dedicated to the recognition of proper names. For the Polish language, deep networks have also recently been used to recognise word expressions. In the issue of recognition of timexes, a bidirectional gated recurrent unit network (GRU) has been used BIBREF33 , BIBREF34 . GRU network is described in detail in the article BIBREF35 . In case of recognition of event descriptions using Bi-LSTM and Bi-GRU, where most of the Liner2 features were included in the input feature vector, better results were obtained BIBREF36 than for the Liner2 method (but without taking into account domain dictionaries). In last year's publication on the issue of named entities recognition using BiLSTM+CRF (together with G4.19 Group members), we received a statistically significant improvement in the quality of recognition compared to a solution using CRF only. The solution has been called PolDeepNer BIBREF12 .
Experiments and Results
Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.
We trained the final models using the train set and evaluated them using the test set, reproducing the analysis performed in the articles BIBREF37 , BIBREF38 . The division is presented in Table TABREF16 . We used a BiLSTM+CRF classifier as in previous work BIBREF12 . We used precision, recall and F1 metrics from the classic NER task BIBREF12 , where a true positive system answer has the same boundaries and type as the annotation in the gold data set. We evaluated all 17 word embeddings models using these metrics. The results are presented in Tables TABREF17 , TABREF18 and TABREF19 .
We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 .
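To make the distinction between the strict and relaxed measures concrete, here is a small illustrative check, assuming each entity is a (start, end, class) tuple over token offsets; it is not the scorer used to produce the reported numbers.

```python
# Illustrative strict vs. relaxed matching for timexes (not the official scorer).

def strict_match(sys_ent, gold_ent):
    # Identical boundaries and identical class.
    return sys_ent == gold_ent

def relaxed_match(sys_ent, gold_ent):
    # Any overlap between the two spans counts, e.g. [Sunday] vs [Sunday morning].
    (s_start, s_end, _), (g_start, g_end, _) = sys_ent, gold_ent
    return s_start < g_end and g_start < s_end

gold = (10, 12, "date")     # e.g. "Sunday morning"
system = (10, 11, "date")   # e.g. "Sunday"
print(strict_match(system, gold))   # False
print(relaxed_match(system, gold))  # True
```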
Conclusions
The analysis of the results from Tables TABREF17 , TABREF18 and TABREF19 shows that 12 of the 15 best results were obtained using the new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) show that the best group of word embeddings is EC. The highest type F1-score was obtained for the EC1 model, built using the binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide a vector representation for unknown words seems to be the most important factor. Also, the previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used the WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with the previous results obtained using only CRF BIBREF38 shows a significant improvement across all the tested metrics: a 3.6pp increase in strict F1-score, a 1.36pp increase in relaxed precision, a 5.61pp increase in relaxed recall and a 3.51pp increase in relaxed F1-score.
Acknowledgements
Work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and in part by the National Centre for Research and Development, Poland, under grant no POIR.01.01.01-00-0472/16. | CBOW and Skip-gram methods in the FastText tool BIBREF9 |
b3238158392684a5a6b62a7eabaa2a10fbecf3e6 | b3238158392684a5a6b62a7eabaa2a10fbecf3e6_0 | Q: How was the KGR10 corpus created?
Text: Introduction
Recent studies in the information extraction domain (but also in other natural language processing fields) show that deep learning models produce state-of-the-art results BIBREF0 . Deep architectures employ multiple layers to learn hierarchical representations of the input data. In the last few years, neural networks based on dense vector representations have provided the best results in various NLP tasks, including named entity recognition BIBREF1 , semantic role labelling BIBREF2 , question answering BIBREF3 and multitask learning BIBREF4 . The core element of most deep learning solutions is the dense distributed semantic representation of words, often called word embeddings. Distributional vectors follow the distributional hypothesis that words with a similar meaning tend to appear in similar contexts. Word embeddings capture the similarity between words and are often used as the first layer in deep learning models. Two of the most common and very efficient methods to produce word embeddings are Continuous Bag-of-Words (CBOW) and Skip-gram (SG), which produce distributed representations of words in a vector space, grouping them by similarity BIBREF5 , BIBREF6 . With the progress of machine learning techniques, it is possible to train such models on much larger data sets, and these often outperform the simpler ones. It is possible to use a set of text documents containing even billions of words as training data. Both architectures (CBOW and SG) describe how the neural network learns the vector word representations for each word. In the CBOW architecture the task is to predict the word given its context, and in SG the task is to predict the context given the word.
Due to a significant increase of quality using deep learning methods together with word embeddings as the input layer for neural networks, many word vector sets have been created, using different corpora. The widest range of available word embeddings is available for English BIBREF7 and there were not so many options for less popular languages, e.g. Polish. There was a definite need within CLARIN-PL project and Sentimenti to increase the quality of NLP methods for Polish which were utilising available Polish word vectors BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 but only FastText modification of Skip-gram BIBREF9 was able to produce vectors for unknown words, based on character n-grams. The observation was that even using a sophisticated deep neural structure, the result strongly depends on the initial distributional representation. There was a need to build a massive corpus of Polish and create high-quality word vectors from that corpus. This work describes how we extended KGR7 1G corpus to become KGR10 with 4 billion words. Next, we present the different variants of word embeddings produced using this corpus. In the article about the recognition of named entities for Polish from the previous year, these embeddings were used in one of the three voting models to obtain the best results and the final system PolDeepNer BIBREF12 took the second place in PolEval2018 Task 2 BIBREF13 . In this article, we evaluated KGR10 FastText word embeddings in recognition of timexes.
Available word embeddings
At the time we were testing word embeddings for different applications, there were 2 most popular sources of word vectors. The first one, called IPIPAN, is the result of the project Compositional distributional semantic models for identification, discrimination and disambiguation of senses in Polish texts, the process of creating word embeddings is described in article BIBREF10 and corpora used were National Corpus of Polish (NKJP) BIBREF14 and Wikipedia (Wiki). The second one, called FASTTEXT, is original FastText word embeddings set, created for 157 languages (including Polish). Authors used Wikipedia and Common Crawl as the linguistic data source. Table TABREF6 shows the number of tokens in each corpus and the name of the institution which prepared it. There is also information about the public availability of the resource.
Table TABREF7 presents the most commonly used word embeddings in CLARIN-PL before the creation of our embeddings.
Building a larger corpus
KGR7 corpus (also called plWordNet Corpus 7.0, PLWNC 7.0) BIBREF15 , BIBREF16 was created at the Wroclaw University of Science and Technology by G4.19 Group. Due to the licences of documents in this corpus, this resource is not publicly available. Table TABREF8 contains KGR7 subcorpora and statistics BIBREF17 . One of the subcorpora in KGR7 is KIPI (the IPI PAN Corpus) BIBREF18 . KGR7 covers texts from a wide range of domains like: blogs, science, stenographic recordings, news, journalism, books and parliamentary transcripts. All texts come from the second half of the 20th century and represent the modern Polish language.
plWordNet Corpus 10.0 (KGR10)
KGR10, also known as plWordNet Corpus 10.0 (PLWNC 10.0), is the result of the work on the toolchain to automatic acquisition and extraction of the website content, called CorpoGrabber BIBREF19 . It is a pipeline of tools to get the most relevant content of the website, including all subsites (up to the user-defined depth). The proposed toolchain can be used to build a big Web corpus of text documents. It requires the list of the root websites as the input. Tools composing CorpoGrabber are adapted to Polish, but most subtasks are language independent. The whole process can be run in parallel on a single machine and includes the following tasks: download of the HTML subpages of each input page URL with HTTrack, extraction of plain text from each subpage by removing boilerplate content (such as navigation links, headers, footers, advertisements from HTML pages) BIBREF20 , deduplication of plain text BIBREF20 , bad quality documents removal utilising Morphological Analysis Converter and Aggregator (MACA) BIBREF21 , documents tagging using Wrocław CRF Tagger (WCRFT) BIBREF22 . Last two steps are available only for Polish.
In order to significantly expand the set of documents in KGR7, we utilised DMOZ (short for directory.mozilla.org) – a multilingual open content directory of World Wide Web links, also known as Open Directory Project (ODP). The website with directory was closed in 2017, but the database still can be found on the web. Polish part of this directory contains more than 30,000 links to Polish websites. We used these links as root URLs for CorpoGrabber, and we downloaded more than 7TB of HTML web pages. After the extraction of text from HTML pages, deduplication of documents (including texts from KGR7) and removing bad quality documents (containing more than 30% of words outside the Morfeusz BIBREF23 dictionary) the result is KGR10 corpus, which contains 4,015,569,051 tokens and 18,084,712 unique words. Due to component licenses, KGR10 corpus is not publicly available.
KGR10 word embeddings
We created new Polish word embeddings models using the KGR10 corpus. We built 16 models of word embeddings using the implementation of CBOW and Skip-gram methods in the FastText tool BIBREF9 . These models are available under an open license in the CLARIN-PL project DSpace repository. The internal encoding solution, based on embeddings of the n-grams composing each word, makes it possible to obtain FastText vector representations also for words which were not processed during the creation of the model. A vector representation is associated with each character n-gram, and each word is represented as the sum of its n-gram vector representations. Previous solutions ignored the morphology of words and assigned a distinct vector to each word. This is a limitation for languages with large vocabularies and many rare words, like Turkish, Finnish or Polish BIBREF9 . The authors observed that word representations trained with subword information outperformed the plain Skip-gram model, and the improvement was most significant for morphologically rich Slavic languages such as Czech (8% reduction of perplexity over SG) and Russian (13% reduction) BIBREF9 . We expected that word embeddings created that way for Polish should provide similar improvements. There were also previous attempts to build KGR10 word vectors with other methods (including FastText), and the results are presented in the article BIBREF8 . We selected the best models from that article – marked with the embedding ID prefix EP (embeddings, previous) in Table TABREF13 – to compare with the new models, marked with the embedding ID prefix EC in Table TABREF13 .
The word embeddings model used in PolDeepNer for recognition of timexes and named entities was EE1. It was built on plain KGR10. The dimension of the word embeddings is 300, the method of constructing vectors was Skip-gram BIBREF9 , and the number of negative samples for each positive example was 10.
Temporal expressions
Temporal expressions (henceforth timexes) tell us when something happens, how long something lasts, or how often something occurs. The correct interpretation of a timex often involves knowing the context. Usually, a person is aware of their location in time, i.e., they know what day, month and year it is, and whether it is the beginning or the end of week or month. Therefore, they refer to specific dates, using incomplete expressions such as 12 November, Thursday, the following week, after three days. The temporal context is often necessary to determine to which specific date and time timexes refer. These examples do not exhaust the complexity of the problem of recognising timexes.
TimeML BIBREF24 is a markup language for describing timexes that has been adapted to many languages. One of the best-known methods of recognition of timexes called HeidelTime BIBREF25 , which uses the TIMEX3 annotation standard, currently supports 13 languages (with the use of hand-crafted resources). PLIMEX is a specification for the description of Polish timexes. It is based on TIMEX3 used in TimeML. Classes proposed in TimeML are adapted, namely: date, time, duration, set.
Recognition of timexes
There are many methods for recognising timexes that are widely used in natural language engineering. For English (but not exclusively), in approaches based on supervised learning, sequence labelling methods are often used, especially Conditional Random Fields BIBREF26 . A review of the methods in the article BIBREF27 about the recognition of timexes for English and Spanish has shown a certain shift within the most popular solutions. As with the normalisation of timexes, the best results are still achieved with rule-based methods, many new solutions have been introduced in the area of recognition. The best systems listed in BIBREF27 , called TIPSem BIBREF28 and ClearTK BIBREF29 , use CRFs for recognition, so initially, we decided to apply the CRF-based approach for this task. The results were described in BIBREF30 , BIBREF31 .
In recent years, solutions based on deep neural networks, using word representation in the form of word embeddings, created with the use of large linguistic corpus, have begun to dominate in the field of recognition of word expressions. The most popular solutions include bidirectional long short-term memory neural networks (henceforth Bi-LSTM), often in combination with conditional random fields, as presented in the paper BIBREF32 dedicated to the recognition of proper names. For the Polish language, deep networks have also recently been used to recognise word expressions. In the issue of recognition of timexes, a bidirectional gated recurrent unit network (GRU) has been used BIBREF33 , BIBREF34 . GRU network is described in detail in the article BIBREF35 . In case of recognition of event descriptions using Bi-LSTM and Bi-GRU, where most of the Liner2 features were included in the input feature vector, better results were obtained BIBREF36 than for the Liner2 method (but without taking into account domain dictionaries). In last year's publication on the issue of named entities recognition using BiLSTM+CRF (together with G4.19 Group members), we received a statistically significant improvement in the quality of recognition compared to a solution using CRF only. The solution has been called PolDeepNer BIBREF12 .
Experiments and Results
Experiments were carried out by the method proposed in BIBREF27 . The first part is described as Task A, the purpose of which is to identify the boundaries of timexes and assign them to one of the following classes: date, time, duration, set.
We trained the final models using the train set and we evaluated it using the test set, which was the reproduction of analysis performed in articles BIBREF37 , BIBREF38 . The division is presented in Table TABREF16 . We used BiLSTM+CRF classifier as in previous work BIBREF12 . We used precision, recall and F1 metrics from the classic NER task BIBREF12 , where true positive system answer has the same boundaries and type as annotation in gold data set. We evaluated all 17 word embeddings models using these metrics. The results are presented in Tables TABREF17 , TABREF18 and TABREF19 .
We chose the best 3 results from each word embeddings group (EE, EP, EC) from Table TABREF19 presenting F1-scores for all models. Then we evaluated these results using more detailed measures for timexes, presented in BIBREF27 . The following measures were used to evaluate the quality of boundaries and class recognition, so-called strict match: strict precision (Str.P), strict recall (Str.R) and strict F1-score (Str.F1). A relaxed match (Rel.P, Rel.R, Rel.F1) evaluation has also been carried out to determine whether there is an overlap between the system entity and gold entity, e.g. [Sunday] and [Sunday morning] BIBREF27 . If there was an overlap, a relaxed type F1-score (Type.F1) was calculated BIBREF27 . The results are presented in Table TABREF20 .
Conclusions
The analysis of the results from Tables TABREF17 , TABREF18 and TABREF19 shows that 12 of the 15 best results were obtained using the new word embeddings. The evaluation results presented in Table TABREF20 (the chosen best embeddings models from Table TABREF19 ) show that the best group of word embeddings is EC. The highest type F1-score was obtained for the EC1 model, built using the binary FastText Skip-gram method utilising subword information, with vector dimension equal to 300 and negative sampling equal to 10. The ability of the model to provide a vector representation for unknown words seems to be the most important factor. Also, the previous models built using KGR10 (EP) are probably less accurate due to an incorrect tokenisation of the corpus. We used the WCRFT tagger BIBREF22 , which utilises Toki BIBREF21 to tokenise the input text before the creation of the embeddings model. The comparison of EC1 with the previous results obtained using only CRF BIBREF38 shows a significant improvement across all the tested metrics: a 3.6pp increase in strict F1-score, a 1.36pp increase in relaxed precision, a 5.61pp increase in relaxed recall and a 3.51pp increase in relaxed F1-score.
Acknowledgements
Work co-financed as part of the investment in the CLARIN-PL research infrastructure funded by the Polish Ministry of Science and Higher Education and in part by the National Centre for Research and Development, Poland, under grant no POIR.01.01.01-00-0472/16. | most relevant content of the website, including all subsites |
526ae24fa861d52536b66bcc2d2ddfce483511d6 | 526ae24fa861d52536b66bcc2d2ddfce483511d6_0 | Q: How big are improvements with multilingual ASR training vs single language training?
Text: Introduction
Automatic speech recognition (ASR) technology has made dramatic progress and has currently been brought to practical levels of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.
The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR, and this article is the first report of this project.
We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages.
Overview of the Ainu Language
This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language.
Overview of the Ainu Language ::: Background
The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7 and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not transcribed and fully studied yet.
Overview of the Ainu Language ::: The Ainu Language and its Writing System
The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, a word atuykorkamuy (means “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”).
Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in a reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters corresponds to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below.
Overview of the Ainu Language ::: Types of Ainu Recordings
The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm while uwepeker is not. In this study we focus on the prose tales as the first step.
Overview of the Ainu Language ::: Previous Work
There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency tree bank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was tried by ainutrans with 2.5 hours of Ainu folklore data even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not yet an accuracy level for practical use.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum.
Ainu Speech Corpus
In this section we explain the content of the data sets and how we modified it for our ASR corpus.
Ainu Speech Corpus ::: Numbers of Speakers and Episodes
The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.
Ainu Speech Corpus ::: Data Annotation
For efficient training of ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed as seen in the example below.
Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information.
To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10 , which are stretches of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1.
End-to-end Speech Recognition
In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units we investigate, i.e., phone, syllable, word piece, and word. We also discuss the multilingual training that we adopt for tackling the low-resource problem.
End-to-end Speech Recognition ::: End-to-end Modeling
End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model.
CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeating symbols and then removing all blank symbols as in the following example:
The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as follows.

$P(\mathbf {L}|\mathbf {X}) = \sum _{\pi \in \mathcal {B}^{-1}(\mathbf {L})} P(\pi |\mathbf {X}) \qquad (1)$
$\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences which are reduced to $\mathbf {L}$. The model is trained to maximize (1).
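A small illustration of the contraction function $\mathcal {B}$ may help: repeated symbols are merged first and the blank is removed afterwards. The blank token name below is arbitrary.

```python
# Toy illustration of the CTC contraction function B (merge repeats, drop blanks).
from itertools import groupby

BLANK = "<blank>"

def ctc_collapse(frame_outputs):
    collapsed = [sym for sym, _ in groupby(frame_outputs)]  # merge repeated symbols
    return [sym for sym in collapsed if sym != BLANK]       # then remove blanks

frames = ["a", "a", BLANK, "a", "p", "p", BLANK]
print(ctc_collapse(frames))   # ['a', 'a', 'p']
```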
The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”. In a naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ as shown in (2).

$\mathbf {c}_l = \sum _{t=1}^{\mathrm {T}} \alpha _{t,l} h_t \qquad (2)$

Here, $\mathrm {T}$ is the length of the encoder output.
The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters to generate these weights are determined through end-to-end training.
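The context-vector computation in (2) amounts to a softmax-weighted sum of encoder states, as the short NumPy sketch below illustrates with toy dimensions (the scoring function producing the unnormalised weights is omitted here).

```python
# NumPy sketch of the context vector in (2) for one decoding step l.
import numpy as np

T, hidden = 6, 4                      # toy encoder length and hidden size
h = np.random.randn(T, hidden)        # encoder outputs h_1 ... h_T
scores = np.random.randn(T)           # unnormalised attention scores for step l
alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights, sum to 1
c_l = (alpha[:, None] * h).sum(axis=0)          # weighted sum of encoder outputs

print(round(alpha.sum(), 6))   # 1.0
print(c_l.shape)               # (4,)
```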
In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously as shown in Figure 1 BIBREF11 . Long Short-Term Memory (LSTM) BIBREF12 is used for RNNs in the encoder and the decoder.
End-to-end Speech Recognition ::: Modeling Units
In the conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature, and these are converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it has a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture such long context. The size of the available corpora determines which unit to adopt. In addition to both of these units, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely.
In this paper, we investigate the modeling unit for the end-to-end Ainu speech recognition since the optimal unit for this size of corpus is not obvious. BIBREF17 It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3 and the details of each unit are described below.
End-to-end Speech Recognition ::: Modeling Units ::: Phone
As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3.
End-to-end Speech Recognition ::: Modeling Units ::: Syllable
A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure.
A word with a single letter is unchanged.
Two consecutive Cs and Vs are given a syllable boundary between them.
R$^*${CC, VV}R$^*$$\rightarrow $ R$^*${C-C, V-V}R$^*$
(R $\in $ {C, V})
Put a syllable boundary after the segment-initial V if it is followed by at least two phones.
VCR$^+$$\rightarrow $ V-CR$^+$
Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left.
(CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}
In addition, `=' and `$\langle $wb$\rangle $' are added throughout the model training process, as explained in Section 4.2.1.
This procedure does not always generate a morphologically relevant syllable segmentation. For example, a word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us.
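The procedure above can be sketched in code as follows. This is an illustrative reading of the four rules, not the authors' implementation; the handling of `=' and `$\langle $wb$\rangle $' is omitted, and the function reproduces the i-ser-ma-kus segmentation discussed in the text.

```python
# Illustrative sketch of the syllabification rules (V, CV, VC, CVC syllables).
VOWELS = set("aeiou")

def _split_cc_vv(seg):
    """Rule 2: insert a boundary between two consecutive consonants or vowels."""
    out = [seg[0]]
    for prev, cur in zip(seg, seg[1:]):
        if (prev in VOWELS) == (cur in VOWELS):
            out.append("-")
        out.append(cur)
    return "".join(out).split("-")

def _split_initial_v(seg):
    """Rule 3: V-CR+ when a segment starts with a vowel followed by two or more phones."""
    if len(seg) >= 3 and seg[0] in VOWELS:
        return [seg[0], seg[1:]]
    return [seg]

def _split_cv(seg):
    """Rule 4: peel off CV from the left until only CV or CVC remains."""
    parts = []
    while len(seg) > 3:
        parts.append(seg[:2])   # leading CV
        seg = seg[2:]
    parts.append(seg)
    return parts

def syllabify(word):
    if len(word) == 1:                                                  # Rule 1
        return [word]
    segments = _split_cc_vv(word)                                       # Rule 2
    segments = [s for seg in segments for s in _split_initial_v(seg)]   # Rule 3
    segments = [s for seg in segments for s in _split_cv(seg)]          # Rule 4
    return segments

print(syllabify("isermakus"))   # ['i', 'ser', 'ma', 'kus']
print(syllabify("atuy"))        # ['a', 'tuy']
```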
End-to-end Speech Recognition ::: Modeling Units ::: Word Piece
The byte pair encoding (BPE) BIBREF18 and the unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary becomes the intended size. The latter decides the segmentation to maximize the likelihood of occurrence of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece as seen in Table 3.
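A hedged sketch of how such a 500-piece unigram model could be built with the SentencePiece library is shown below; the input file name is a placeholder for the transcribed training text.

```python
# Sketch of training a unigram word-piece model with SentencePiece.
# "ainu_train.txt" is a hypothetical file with one transcribed IPU per line.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="ainu_train.txt",
    model_prefix="ainu_wp",
    vocab_size=500,            # word-piece vocabulary size used in this work
    model_type="unigram",      # unigram LM segmentation rather than BPE
)

sp = spm.SentencePieceProcessor(model_file="ainu_wp.model")
print(sp.encode("a=saha", out_type=str))   # pieces depend on the trained model
```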
End-to-end Speech Recognition ::: Modeling Units ::: Word
The original text can be segmented into words separated by spaces. To make the vocabulary smaller for the ease of training, `=' is treated as a word and infrequent words are replaced with a special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'.
End-to-end Speech Recognition ::: Multilingual Training
When an enough amount of data is not available for the target languages, the ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between Ainu and Japanese language BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' of `strike' in English). Hence, the multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers.
In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for other languages, and they are trained using data for all languages. Figure 2 shows the architecture for the multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained.
Experimental Evaluation
In this section the setting and results of ASR experiments are described and the results are discussed.
Experimental Evaluation ::: Data Setup
The ASR experiments were performed in speaker-open condition as well as speaker-closed condition.
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. Thereafter, the total sizes of the development and test sets turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours and 48 minutes, respectively. The ASR model is trained with the rest of the data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
Experimental Evaluation ::: Experimental Setting
The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter banks features at contiguous time frames. The window length and the frame shift were set to be 25ms and 10ms. The encoder was composed of five BiLSTM layers and the attention-based decoder had a single layer of LSTM. Each LSTM had 320 cells and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015 with biases of zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. The weight decay BIBREF27 whose rate was $10^{-5}$ and the dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of 31st and 36th epoch BIBREF30. The mini-batch size was 30 and the utterances (IPUs) were sorted in an ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds.
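The input feature preparation can be illustrated with a short NumPy sketch: three contiguous 40-dimensional filter bank frames are concatenated into one 120-dimensional vector. Any frame-rate reduction that might accompany stacking is omitted here, as it is not specified above.

```python
# NumPy sketch of stacking three contiguous 40-dim log-mel frames into 120-dim vectors.
import numpy as np

def stack_frames(feats, stack=3):
    """feats: (num_frames, 40) -> (num_frames - stack + 1, 40 * stack)."""
    windows = [feats[i:len(feats) - stack + 1 + i] for i in range(stack)]
    return np.concatenate(windows, axis=1)

fbank = np.random.randn(100, 40)   # toy utterance: 100 frames at a 10 ms shift
print(stack_frames(fbank).shape)   # (98, 120)
```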
The loss function of the model was a linear sum of the losses from CTC and the attention-based decoder,

$\mathcal {L} = \lambda \, \mathcal {L}_{\mathrm {CTC}} + (1 - \lambda ) \, \mathcal {L}_{\mathrm {Attention}},$
where $\lambda $ was set to be 0.5. Through all experiments, the phone labels are used to train the auxiliary CTC task because it is reported that the hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31.
Strictly speaking, the number of each modeling units depends on the training set, but there are roughly 25-phone, 500-syllable, and 5,000-word units including special symbols that represent the start and end of a sentence. The words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to be 500. These settings were based on the results of preliminary experiments with the development set.
For the multilingual training, we made three training scripts by concatenating the script of Ainu and those of other languages (JNAS, WSJ, and JNAS+WSJ). The model was trained with these scripts until the 30th epoch. From the 31$^{\rm {st}}$ to the 40th epoch, the model was fine-tuned with the Ainu script only. Phone units are used for JNAS and WSJ throughout the experiments.
Experimental Evaluation ::: Results
Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the numbers of tokens in the ground truth transcriptions for speaker-wise evaluation sets.
The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the settings, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size.
The PERs of the word unit model were larger than those of other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbols while other unit models are able to output symbols similar in sound as below.
In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
WERs are generally much larger than PERs, and this is further aggravated for the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors as below. The WER of this example is 57% though the PER is zero.
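The effect can be reproduced with a toy calculation: when a compound word is split by the recogniser, every extra word boundary counts against WER even though the phone sequence is identical. The example below uses the compound atuykorkamuy mentioned in Section 2.1 and a generic edit-distance routine; the hypothesised segmentation is invented for illustration.

```python
# Toy WER/PER comparison: a correctly pronounced but wrongly segmented compound.

def edit_distance(ref, hyp):
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                               # deletion
                          d[i][j - 1] + 1,                               # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))  # substitution
    return d[-1][-1]

ref_words, hyp_words = ["atuykorkamuy"], ["atuy", "kor", "kamuy"]
print(edit_distance(ref_words, hyp_words) / len(ref_words))     # 3.0 -> WER blows up

ref_phones, hyp_phones = list("atuykorkamuy"), list("atuykorkamuy")
print(edit_distance(ref_phones, hyp_phones) / len(ref_phones))  # 0.0 -> PER is zero
```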
The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language.
Summary
In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which WERs in the speaker-closed and speaker-open settings were respectively about 20% and 40% while PERs were about 6% and 14%. Multilingual training using the JNAS improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques.
Acknowledgement
The data sets used in this study are provided by the Ainu Museum and Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language. | relative WER improvement of 10%.
8a5254ca726a2914214a4c0b6b42811a007ecfc6 | 8a5254ca726a2914214a4c0b6b42811a007ecfc6_0 | Q: How much transcribed data is available for the Ainu language?
Text: Introduction
Automatic speech recognition (ASR) technology has made dramatic progress and has currently been brought to practical levels of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.
The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not so many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project of Ainu ASR, and this article is the first report of this project.
We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages.
Overview of the Ainu Language
This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language.
Overview of the Ainu Language ::: Background
The Ainu people had total population of about 20,000 in the mid-19th century BIBREF7 and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after late 19th century. At present, there are only less than 10 native speakers, and UNESCO listed their language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders which resulted in the collection of speech data with the total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it is not transcribed and fully studied yet.
Overview of the Ainu Language ::: The Ainu Language and its Writing System
The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features such as closed syllables and personal verbal affixes, one important feature is that there are many compound words. For example, a word atuykorkamuy (means “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”).
Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in a reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters correspond to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below.
Overview of the Ainu Language ::: Types of Ainu Recordings
The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm while uwepeker is not. In this study we focus on the prose tales as the first step.
Overview of the Ainu Language ::: Previous Work
There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency tree bank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was tried by ainutrans with 2.5 hours of Ainu folklore data even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not yet an accuracy level for practical use.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum.
Ainu Speech Corpus
In this section we explain the content of the data sets and how we modified it for our ASR corpus.
Ainu Speech Corpus ::: Numbers of Speakers and Episodes
The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.
Ainu Speech Corpus ::: Data Annotation
For efficient training of ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed as seen in the example below.
Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information.
To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10 , which are stretches of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1.
End-to-end Speech Recognition
In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce four modeling units we explained, i.e., phone, syllable, word piece, and word. We also discuss multilingual training that we adopt for tackling the low resource problem.
End-to-end Speech Recognition ::: End-to-end Modeling
End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model.
CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeating symbols and then removing all blank symbols as in the following example:
The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as follows.
$\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences which is reduced to $\mathbf {L}$. The model is trained to maximize (1).
The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”. In naive encoder-decoder model, the encoder converts the input sequence into a single context vector which is the last hidden state of the encoder RNN from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at $l$-th decoding step is the sum of the product of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weight $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ as shown in (2). Here, $\mathrm {T}$ is the length of the encoder output.
The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters to generate these weights are determined through end-to-end training.
In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously as shown in Figure 1 BIBREF11 . Long Short-Term Memory (LSTM) BIBREF12 is used for RNNs in the encoder and the decoder.
End-to-end Speech Recognition ::: Modeling Units
In the conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities triphone states from each acoustic feature which is converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as a unit BIBREF13, BIBREF14. A word unit based end-to-end model can take long context into consideration at the inference time, but it has the data sparsity problem due to its large vocabulary size. Though phone unit based model does not have such a problem, it cannot grasp so long context. It depends on the size of available corpora to decide which to adopt. In addition to these both models, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely.
In this paper, we investigate the modeling unit for the end-to-end Ainu speech recognition since the optimal unit for this size of corpus is not obvious. BIBREF17 It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3 and the details of each unit are described below.
End-to-end Speech Recognition ::: Modeling Units ::: Phone
As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3.
End-to-end Speech Recognition ::: Modeling Units ::: Syllable
A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure.
A word with a single letter is unchanged.
Two consecutive Cs and Vs are given a syllable boundary between them.
R$^*${CC, VV}R$^*$$\rightarrow $ R$^*${C-C, V-V}R$^*$
(R $\in $ {C, V})
Put a syllable boundary after the segment-initial V if it is followed by at least two phones.
VCR$^+$$\rightarrow $ V-CR$^+$
Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left.
(CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}
In addition, `=' and `$\langle $wb$\rangle $' are added throughout the model training process, as explained in Section 4.2.1.
This procedure does not always generate a morphologically relevant syllable segmentation. For example, a word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us.
End-to-end Speech Recognition ::: Modeling Units ::: Word Piece
The byte pair encoding (BPE) BIBREF18 and the unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary becomes the intended size. The latter decides the segmentation to maximize the likelihood of occurrence of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece as seen in Table 3.
End-to-end Speech Recognition ::: Modeling Units ::: Word
The original text can be segmented into words separated by spaces. To make the vocabulary smaller for the ease of training, `=' is treated as a word and infrequent words are replaced with a special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'.
End-to-end Speech Recognition ::: Multilingual Training
When an enough amount of data is not available for the target languages, the ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between Ainu and Japanese language BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' of `strike' in English). Hence, the multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers.
In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for other languages, and they are trained using data for all languages. Figure 2 shows the architecture for the multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained.
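This parameter sharing can be sketched as follows in PyTorch (a simplified illustration with made-up class names; the shared attention module and the actual decoding loop are omitted for brevity):

import torch.nn as nn

class SharedEncoderMultilingualASR(nn.Module):
    def __init__(self, vocab_sizes, feat_dim=120, hidden=320):
        super().__init__()
        # The encoder (and, in the paper, the attention module) is shared across languages.
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=5,
                               bidirectional=True, batch_first=True)
        # One decoder and output layer per language; only the branch matching the
        # input corpus receives gradients for a given batch.
        self.decoders = nn.ModuleDict(
            {lang: nn.LSTM(2 * hidden, hidden, batch_first=True) for lang in vocab_sizes})
        self.outputs = nn.ModuleDict(
            {lang: nn.Linear(hidden, size) for lang, size in vocab_sizes.items()})

    def forward(self, feats, lang):
        enc_out, _ = self.encoder(feats)            # shared representation
        dec_out, _ = self.decoders[lang](enc_out)   # simplified: no attention here
        return self.outputs[lang](dec_out)

model = SharedEncoderMultilingualASR({"ainu": 500, "jnas": 50})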
Experimental Evaluation
In this section the setting and results of ASR experiments are described and the results are discussed.
Experimental Evaluation ::: Data Setup
The ASR experiments were performed in speaker-open condition as well as speaker-closed condition.
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. The total sizes of the development and test sets thus turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours 48 minutes, respectively. The ASR model is trained with the remaining data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
Experimental Evaluation ::: Experimental Setting
The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter bank features at contiguous time frames. The window length and the frame shift were set to 25ms and 10ms. The encoder was composed of five BiLSTM layers and the attention-based decoder had a single layer of LSTM. Each LSTM had 320 cells and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015 with biases of zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. Weight decay BIBREF27 with a rate of $10^{-5}$ and dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of the 31st and 36th epochs BIBREF30. The mini-batch size was 30 and the utterances (IPUs) were sorted in ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds.
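Frame stacking of the filter bank features can be illustrated as follows (a simplified NumPy sketch; the exact framing and padding used in the paper may differ):

import numpy as np

def stack_frames(fbank, context=3):
    """Stack `context` contiguous frames of a (T, 40) log-mel array into (T - context + 1, 40 * context)."""
    T = fbank.shape[0]
    return np.hstack([fbank[i:T - context + 1 + i] for i in range(context)])

features = stack_frames(np.random.randn(1000, 40))   # -> shape (998, 120)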
The loss function of the model was a linear sum of the losses from CTC and the attention-based decoder,
$\mathcal {L} = \lambda \mathcal {L}_{\rm CTC} + (1 - \lambda ) \mathcal {L}_{\rm att},$
where $\lambda $ was set to be 0.5. Throughout all experiments, phone labels were used to train the auxiliary CTC task because it is reported that a hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31.
Strictly speaking, the number of each type of modeling unit depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units, including special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set.
For the multilingual training, we made three training scripts by concatenating the script of Ainu with those of other languages (JNAS, WSJ, or both JNAS and WSJ). The model was trained on these scripts until the 30th epoch. From the 31st to the 40th epoch, the model was fine-tuned on the Ainu script. Phone units are used for JNAS and WSJ throughout the experiments.
Experimental Evaluation ::: Results
Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the numbers of tokens in the ground truth transcriptions for speaker-wise evaluation sets.
The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the settings, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size.
The PERs of the word unit model were larger than those of other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbols while other unit models are able to output symbols similar in sound as below.
In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
WERs are generally much larger than PERs and it is further aggravated with the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors as below. The WER of this example is 57% though the PER is zero.
The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language.
Summary
In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which WERs in the speaker-closed and speaker-open settings were respectively about 20% and 40% while PERs were about 6% and 14%. Multilingual training using the JNAS improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques.
Acknowledgement
The data sets used in this study are provided by the Ainu Museum and Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language. | Transcribed data is available for a duration of 38h 54m 38s for 8 speakers. |
3c0d66f9e55a89d13187da7b7128666df9a742ce | 3c0d66f9e55a89d13187da7b7128666df9a742ce_0 | Q: What is the difference between speaker-open and speaker-closed setting?
Text: Introduction
Automatic speech recognition (ASR) technology has made dramatic progress and has currently been brought to a practical level of performance, assisted by large speech corpora and the introduction of deep learning techniques. However, this is not the case for low-resource languages which do not have large corpora like English and Japanese have. There are about 5,000 languages in the world, over half of which are faced with the danger of extinction. Therefore, constructing ASR systems for these endangered languages is an important issue.
The Ainu are an indigenous people of northern Japan and Sakhalin in Russia, but their language has been fading away ever since the Meiji Restoration and Modernization. On the other hand, active efforts to preserve their culture have been initiated by the Government of Japan, and exceptionally large oral recordings have been made. Nevertheless, a majority of the recordings have not been transcribed and utilized effectively. Since transcribing them requires expertise in the Ainu language, not many people are able to work on this task. Hence, there is a strong demand for an ASR system for the Ainu language. We started a project on Ainu ASR and this article is the first report of this project.
We have built an Ainu speech corpus based on data provided by the Ainu Museum and the Nibutani Ainu Culture Museum. The oral recordings in this data consist of folklore and folk songs, and we chose the former to construct the ASR model. The end-to-end method of speech recognition has been proposed recently and has achieved performance comparable to that of the conventional DNN-HMM hybrid modeling BIBREF0, BIBREF1, BIBREF2. End-to-end systems do not have a complex hierarchical structure and do not require expertise in target languages such as their phonology and morphology. In this study we adopt the attention mechanism BIBREF3, BIBREF4 and combine it with Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6. In this work, we investigate the modeling unit and utilization of corpora of other languages.
Overview of the Ainu Language
This section briefly overviews the background of the data collection, the Ainu language, and its writing system. After that, we describe how Ainu recordings are classified and review previous works dealing with the Ainu language.
Overview of the Ainu Language ::: Background
The Ainu people had a total population of about 20,000 in the mid-19th century BIBREF7 and they used to live widely distributed in the area that includes Hokkaido, Sakhalin, and the Kuril Islands. The number of native speakers, however, rapidly decreased through the assimilation policy after the late 19th century. At present, there are fewer than 10 native speakers, and UNESCO listed the language as critically endangered in 2009 BIBREF8. In response to this situation, Ainu folklore and songs have been actively recorded since the late 20th century in efforts initiated by the Government of Japan. For example, the Ainu Museum started audio recording of Ainu folklore in 1976 with the cooperation of a few Ainu elders, which resulted in the collection of speech data with a total duration of roughly 700 hours. This kind of data should be a key to the understanding of Ainu culture, but most of it has not been transcribed and fully studied yet.
Overview of the Ainu Language ::: The Ainu Language and its Writing System
The Ainu language is an agglutinative language and has some similarities to Japanese. However, its genealogical relationship with other languages has not been clearly understood yet. Among its features, such as closed syllables and personal verbal affixes, one important characteristic is that there are many compound words. For example, the word atuykorkamuy (meaning “a sea turtle”) can be disassembled into atuy (“the sea”), kor (“to have”), and kamuy (“god”).
Although the Ainu people did not traditionally have a writing system, the Ainu language is currently written following the examples in a reference book “Akor itak” BIBREF9. With this writing system, it is transcribed with sixteen Roman letters {a, c, e, h, i, k, m, n, o, p, r, s, t, u, w, y}. Since each of these letters corresponds to a unique pronunciation, we call them “phones” for convenience. In addition, the symbol {=} is used for connecting a verb and a personal affix and { ' } is used to represent the pharyngeal stop. For the purpose of transcribing recordings, the consonant symbols {b, d, g, z} are additionally used to transcribe Japanese sounds the speakers utter. The symbols { _ , __ } are used to transcribe drops and liaisons of phones. An example is shown below.
Overview of the Ainu Language ::: Types of Ainu Recordings
The Ainu oral traditions are classified into three types: “yukar” (heroic epics), “kamuy yukar” (mythic epics), and “uwepeker” (prose tales). Yukar and kamuy yukar are recited in rhythm while uwepeker is not. In this study we focus on the prose tales as the first step.
Overview of the Ainu Language ::: Previous Work
There have so far been a few studies dealing with the Ainu language. ainulrec built a dependency tree bank in the scheme of Universal Dependencies. postag developed tools for part-of-speech (POS) tagging and word segmentation. Ainu speech recognition was attempted by ainutrans with 2.5 hours of Ainu folklore data even though the Ainu language was not their main target. Their phone error rate was about 40%, which is not yet an accuracy level suitable for practical use.
It appears that there has not been a substantial Ainu speech recognition study yet that utilizes corpora of a reasonable size. Therefore, our first step was to build a speech corpus for ASR based on the data sets provided by the Ainu Museum and the Nibutani Ainu Culture Museum.
Ainu Speech Corpus
In this section we explain the content of the data sets and how we modified it for our ASR corpus.
Ainu Speech Corpus ::: Numbers of Speakers and Episodes
The corpus we have prepared for ASR in this study is composed of text and speech. Table 1 shows the number of episodes and the total speech duration for each speaker. Among the total of eight speakers, the data of the speakers KM and UT is from the Ainu Museum, and the rest is from Nibutani Ainu Culture Museum. All speakers are female. The length of the recording for a speaker varies depending on the circumstances at the recording times. A sample text and its English translation are shown in Table 2.
Ainu Speech Corpus ::: Data Annotation
For efficient training of the ASR model, we have made some modifications to the provided data. First, from the transcripts explained in Section 2.1, the symbols {_ , __ , '} have been removed as seen in the example below.
Though the equal symbol (`=') does not represent a sound, we keep it because it is used in almost all of the Ainu documents and provides grammatical information.
To train an ASR system, the speech data needs to be segmented into a set of manageable chunks. For the ease of automatic processing, we chose to segment speech into inter-pausal units (IPUs) BIBREF10, each of which is a stretch of speech bounded by pauses. The number of IPUs for each speaker is shown in Table 1.
End-to-end Speech Recognition
In this section, the two approaches to end-to-end speech recognition that we adopt in this work are summarized. Then, we introduce the four modeling units we examine, i.e., phone, syllable, word piece, and word. We also discuss multilingual training that we adopt for tackling the low-resource problem.
End-to-end Speech Recognition ::: End-to-end Modeling
End-to-end models have an architecture much simpler than that of conventional DNN-HMM hybrid models. Since they predict character or word symbols directly from acoustic features, pronunciation dictionaries and language modeling are not required explicitly. In this paper, we utilize two kinds of end-to-end models, namely, Connectionist Temporal Classification (CTC) and the attention-based encoder-decoder model.
CTC augments the output symbol set with the “blank” symbol `$\phi $'. It outputs symbols by contracting frame-wise outputs from recurrent neural networks (RNNs). This is done by first collapsing repeating symbols and then removing all blank symbols; for example, a frame-wise output “a a $\phi $ b $\phi $ b b" is reduced to “a b b".
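The contraction can be written as a small function (illustrative only):

def ctc_collapse(frame_outputs, blank="phi"):
    """The CTC contraction B: merge repeated symbols, then remove all blanks."""
    merged = [s for i, s in enumerate(frame_outputs)
              if i == 0 or s != frame_outputs[i - 1]]
    return [s for s in merged if s != blank]

print(ctc_collapse(["a", "a", "phi", "b", "phi", "b", "b"]))  # ['a', 'b', 'b']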
The probability of an output sequence $\mathbf {L}$ for an input acoustic feature sequence $\mathbf {X}$, where $|\mathbf {L}| < |\mathbf {X}|$, is defined as follows.
$P(\mathbf {L}|\mathbf {X}) = \sum _{\mathbf {\pi } \in \mathcal {B}^{-1}(\mathbf {L})} P(\mathbf {\pi }|\mathbf {X}) \;\;\; (1)$
$\mathcal {B}$ is a function to contract the outputs of RNNs, so $\mathcal {B}^{-1}(\mathbf {L})$ means the set of symbol sequences which are reduced to $\mathbf {L}$. The model is trained to maximize (1).
The attention-based encoder-decoder model is another method for mapping between two sequences with different lengths. It has two RNNs called the “encoder” and the “decoder”. In a naive encoder-decoder model, the encoder converts the input sequence into a single context vector, which is the last hidden state of the encoder RNN, from which the decoder infers output symbols. In an attention-based model, the context vector $\mathbf {c}_l$ at the $l$-th decoding step is the sum of the products of all encoder outputs $h_1, ... , h_\mathrm {T}$ and the $l$-th attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ as shown in (2). Here, $\mathrm {T}$ is the length of the encoder output.
$\mathbf {c}_l = \sum _{t=1}^{\mathrm {T}} \alpha _{t,l} h_t \;\;\; (2)$
The attention weights $\alpha _{1,l}, ... , \alpha _{\mathrm {T},l}$ indicate the relative importance of the encoder output frames for the $l$-th decoding step, and the model parameters used to generate these weights are determined in end-to-end training.
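Equation (2) amounts to a weighted sum, e.g. (illustrative NumPy, with `encoder_outputs` of shape (T, d) and `attention_weights` of shape (T,)):

import numpy as np

def context_vector(encoder_outputs, attention_weights):
    """c_l = sum_t alpha_{t,l} * h_t for one decoding step l."""
    return (attention_weights[:, None] * encoder_outputs).sum(axis=0)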
In our model, the attention-based model and the CTC share the encoder and are optimized simultaneously as shown in Figure 1 BIBREF11. Long Short-Term Memory (LSTM) BIBREF12 is used for the RNNs in the encoder and the decoder.
End-to-end Speech Recognition ::: Modeling Units
In the conventional DNN-HMM hybrid modeling, the acoustic model outputs probabilities of triphone states for each acoustic feature, which are then converted into the most likely word sequence. An end-to-end model, on the other hand, has some degree of freedom in the modeling unit other than phones, and there are some studies that use characters or words as the unit BIBREF13, BIBREF14. A word-unit-based end-to-end model can take long context into consideration at inference time, but it has a data sparsity problem due to its large vocabulary size. Though a phone-unit-based model does not have such a problem, it cannot capture such long context. Which to adopt depends on the size of the available corpora. In addition to these two, a word piece unit, which is defined by automatically dividing a word into frequent parts, has been proposed BIBREF15, BIBREF16, and its vocabulary size can be determined almost freely.
In this paper, we investigate the modeling unit for end-to-end Ainu speech recognition BIBREF17, since the optimal unit for a corpus of this size is not obvious. It is presupposed that all units can be converted into word units automatically. The candidates are phone, syllable, word piece (WP), and word. Examples of them are shown in Table 3 and the details of each unit are described below.
End-to-end Speech Recognition ::: Modeling Units ::: Phone
As mentioned in Section 2.1, we regard the Roman letters as phones. `=' and the special symbol `$\langle $wb$\rangle $', which means a word boundary, are added to make it possible to convert the output into a sequence of words like the `original' in Table 3.
End-to-end Speech Recognition ::: Modeling Units ::: Syllable
A syllable of the Ainu language takes the form of either V, CV, VC, or CVC, where `C' and `V' mean consonant and vowel, respectively. The phones {a, e, i, o, u} are vowels and the rest of the Roman letters in Section 2.2 are consonants. In this work, every word is divided into syllables by the following procedure.
A word with a single letter is unchanged.
Two consecutive Cs or Vs are given a syllable boundary between them.
R$^*${CC, VV}R$^*$ $\rightarrow $ R$^*${C-C, V-V}R$^*$   (R $\in $ {C, V})
Put a syllable boundary after the segment-initial V if it is followed by at least two phones.
VCR$^+$ $\rightarrow $ V-CR$^+$
Put a syllable boundary after CV repeatedly from left to right until only CV or CVC is left.
(CV)$^*${CV, CVC} $\rightarrow $ (CV-)$^*${CV, CVC}
In addition, `=' and `$\langle $wb$\rangle $' are added as explained in Section 4.2.1 and kept through the model training process.
This procedure does not always generate a morphologically relevant syllable segmentation. For example, a word isermakus (meaning “(for a god) to protect from behind”) is divided as i-ser-ma-kus, but the right syllabification is i-ser-mak-us.
End-to-end Speech Recognition ::: Modeling Units ::: Word Piece
The byte pair encoding (BPE) BIBREF18 and the unigram language modeling BIBREF19 are alternative methods for dividing a word into word pieces. The former repeatedly replaces the most common character pair with a new single symbol until the vocabulary becomes the intended size. The latter decides the segmentation to maximize the likelihood of occurrence of the sequence. We adopt the latter and use the open-source software SentencePiece BIBREF20. With this tool, `$\langle $wb$\rangle $' and other units are often merged to constitute a single piece as seen in Table 3.
End-to-end Speech Recognition ::: Modeling Units ::: Word
The original text can be segmented into words separated by spaces. To make the vocabulary smaller for the ease of training, `=' is treated as a word and infrequent words are replaced with a special label `$\langle $unk$\rangle $'. As seen in Table 3, `a=saha' is dealt with as three words (`a', `=', `saha') and the word `kokopan' is replaced with `$\langle $unk$\rangle $'.
End-to-end Speech Recognition ::: Multilingual Training
When a sufficient amount of data is not available for the target language, ASR model training can be enhanced by taking advantage of data from other languages BIBREF21, BIBREF22. There are some similarities between the Ainu and Japanese languages BIBREF23. For instance, both have almost the same set of vowels and do not have consonant clusters (like `str' of `strike' in English). Hence, multilingual training with a Japanese corpus is expected to be effective. In addition, an English corpus is used for the purpose of comparison. The corpora used are the JNAS corpus BIBREF24 (in Japanese) and the WSJ corpus BIBREF25 (in English). JNAS comprises roughly 80 hours of speech from 320 speakers, and WSJ has about 70 hours of speech from 280 speakers.
In the multilingual training, the encoder and the attention module are shared among the Ainu ASR model and the models for other languages, and they are trained using data for all languages. Figure 2 shows the architecture for the multilingual learning with two corpora. When the input acoustic features are from the Ainu ASR corpus, they go through the shared encoder and attention module and are delivered into the decoder on the left side in Figure 2 as a context vector. In this case, the right-side decoder is not trained.
Experimental Evaluation
In this section the setting and results of ASR experiments are described and the results are discussed.
Experimental Evaluation ::: Data Setup
The ASR experiments were performed in speaker-open condition as well as speaker-closed condition.
In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets. The total sizes of the development and test sets thus turn out to be 1585 IPUs spanning 2 hours 23 minutes and 1841 IPUs spanning 2 hours 48 minutes, respectively. The ASR model is trained with the remaining data. In the speaker-open condition, all the data except for the test speaker's were used for training. As it would be difficult to train the model if all of the data of speaker KM or UT were removed, experiments using their speaker-open conditions were not conducted.
Experimental Evaluation ::: Experimental Setting
The input acoustic features were 120-dimensional vectors made by frame stacking BIBREF26 three 40-dimensional log-mel filter bank features at contiguous time frames. The window length and the frame shift were set to 25ms and 10ms. The encoder was composed of five BiLSTM layers and the attention-based decoder had a single layer of LSTM. Each LSTM had 320 cells and their weights were randomly initialized using a uniform distribution DBLP:journals/corr/HeZR015 with biases of zero. The fully connected layers were initialized following $\mathcal {U}{(-0.1, 0.1)}$. Weight decay BIBREF27 with a rate of $10^{-5}$ and dropout BIBREF28 following $\mathcal {B}e(0.2)$ were used to alleviate overfitting. The parameters were optimized with Adam BIBREF29. The learning rate was $10^{-3}$ at first and was multiplied by $10^{-1}$ at the beginning of the 31st and 36th epochs BIBREF30. The mini-batch size was 30 and the utterances (IPUs) were sorted in ascending order of length. To stabilize the training, we removed utterances longer than 12 seconds.
The loss function of the model was a linear sum of the losses from CTC and the attention-based decoder,
$\mathcal {L} = \lambda \mathcal {L}_{\rm CTC} + (1 - \lambda ) \mathcal {L}_{\rm att},$
where $\lambda $ was set to be 0.5. Throughout all experiments, phone labels were used to train the auxiliary CTC task because it is reported that a hierarchical architecture, using few and general labels in the auxiliary task, improves the performance BIBREF31.
Strictly speaking, the number of each type of modeling unit depends on the training set, but there are roughly 25 phone, 500 syllable, and 5,000 word units, including special symbols that represent the start and end of a sentence. Words occurring less than twice were replaced with `$\langle $unk$\rangle $'. The vocabulary size for word piece modeling was set to 500. These settings were based on the results of preliminary experiments with the development set.
For the multilingual training, we made three training scripts by concatenating the script of Ainu with those of other languages (JNAS, WSJ, or both JNAS and WSJ). The model was trained on these scripts until the 30th epoch. From the 31st to the 40th epoch, the model was fine-tuned on the Ainu script. Phone units are used for JNAS and WSJ throughout the experiments.
Experimental Evaluation ::: Results
Table 4 shows the phone error rates (PERs) and word error rates (WERs) for the speaker-closed and speaker-open settings. The `average' is weighted by the numbers of tokens in the ground truth transcriptions for speaker-wise evaluation sets.
The word recognition accuracy reached about 80% in the speaker-closed setting. In the speaker-open setting it was 60% on average and varied greatly from speaker to speaker (from 50% to 70%). The best phone accuracies in the speaker-closed and speaker-open settings were about 94% and 86%. Regardless of the settings, the syllable-based modeling yielded the best WER and PER. This suggests that syllables provide reasonable coverage and constraints for the Ainu language in a corpus of this size.
The PERs of the word unit model were larger than those of other units. This is because the word model often outputs the `$\langle $unk$\rangle $' symbols while other unit models are able to output symbols similar in sound as below.
In this example, the PER of the syllable model is 5% and that of the word model is 30% even though the WERs are the same. (The output of the syllable model is rewritten into words using the `$\langle $wb$\rangle $' symbol.)
WERs are generally much larger than PERs and it is further aggravated with the Ainu language. This is because, as mentioned in Section 2.1, the Ainu language has a lot of compound words and the model may be confused about whether the output is multiple words or a single compound word. The actual outputs frequently contain errors as below. The WER of this example is 57% though the PER is zero.
The results of multilingual training in which the modeling unit is syllables are presented in Table 5. All error rates are the weighted averages of all evaluated speakers. Here, `+ both' represents the result of training with both JNAS and WSJ corpora. The multilingual training is effective in the speaker-open setting, providing a relative WER improvement of 10%. The JNAS corpus was more helpful than the WSJ corpus because of the similarities between Ainu and Japanese language.
Summary
In this study, we first developed a speech corpus for Ainu ASR and then, using the end-to-end model with CTC and the attention mechanism, compared four modeling units: phones, syllables, word pieces, and words. The best performance was obtained with the syllable unit, with which WERs in the speaker-closed and speaker-open settings were respectively about 20% and 40% while PERs were about 6% and 14%. Multilingual training using the JNAS improved the performance in the speaker-open setting. Future tasks include reducing the between-speaker performance differences by using speaker adaptation techniques.
Acknowledgement
The data sets used in this study are provided by the Ainu Museum and Nibutani Ainu Culture Museum. The authors would like to thank Prof. Osami Okuda of Sapporo Gakuin University for his useful advice on the Ainu language. | In the speaker-closed condition, two episodes were set aside from each speaker as development and test sets., In the speaker-open condition, all the data except for the test speaker's were used for training |
13d92cbc2c77134626e26166c64ca5c00aec0bf5 | 13d92cbc2c77134626e26166c64ca5c00aec0bf5_0 | Q: What baseline approaches do they compare against?
Text: Introduction
Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.
Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidence are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems).
We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
Related Work
Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with an increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model development BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.
Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.
Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer to the retrieval in MRS as Semantic Retrieval since it emphasizes semantic understanding.
Method
In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.
To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia.
The system procedure is listed below:
(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing.
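A rough stand-in for this term-based stage (not the authors' exact implementation, which also combines rule-based keyword matching over Wikipedia) could rank paragraphs with scikit-learn's TF-IDF:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_topk(query, paragraphs, k=5):
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    para_vectors = vectorizer.fit_transform(paragraphs)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, para_vectors).ravel()
    return [paragraphs[i] for i in scores.argsort()[::-1][:k]]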
(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.
(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence-level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) and obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$.
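The filtering in steps (2) and (3) boils down to keeping the top-scored candidates above a threshold, e.g. (a schematic helper with our own names, standing in for the $(k_p, h_p)$ and $(k_s, h_s)$ selection):

def select_candidates(scored_items, k, h):
    """Keep at most k items whose neural relatedness score exceeds threshold h."""
    kept = [(item, score) for item, score in scored_items if score > h]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [item for item, _ in kept[:k]]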
(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$.
In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6.
Method ::: Modeling and Training
Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:
$[\mathit {CLS}]\; Query\; [\mathit {SEP}]\; Context\; [\mathit {SEP}]$
We applied an affine layer and sigmoid activation on the last-layer output of the [$\mathit {CLS}$] token to obtain a scalar value. The parameters were updated with the objective function:
$\mathcal {J}_{retri} = -\sum _{i \in \mathbf {T}^{p/s}_{pos}} \log (\hat{p}_i) - \sum _{i \in \mathbf {T}^{p/s}_{neg}} \log (1 - \hat{p}_i)$
where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at the sentence level, ground-truth sentences served as positive examples while other sentences from the upstream retrieved set served as negative examples. Similarly, at the paragraph level, paragraphs containing any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval process were used as negative examples.
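A hedged sketch of such a retrieval scorer with PyTorch and HuggingFace Transformers is shown below (class and variable names are ours, and the original training details may differ):

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class RetrievalScorer(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.affine = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]            # [CLS] representation
        return torch.sigmoid(self.affine(cls)).squeeze(-1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = RetrievalScorer()
enc = tokenizer("example query", "example candidate paragraph",
                return_tensors="pt", truncation=True)
score = model(**enc)                                 # relatedness score in (0, 1)
loss = nn.functional.binary_cross_entropy(score, torch.tensor([1.0]))  # positive example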
QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$" and “$\mathit {no}$" tokens between [$\mathit {CLS}$] and the $Query$ as:
$[\mathit {CLS}]\; \mathit {yes}\; \mathit {no}\; Query\; [\mathit {SEP}]\; Context\; [\mathit {SEP}]$
where the supervision was given to the second or the third token when the answer is “yes" or “no", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as:
$\mathcal {J}_{qa} = -\sum _{i} \left[ \log (\hat{y}^s_i) + \log (\hat{y}^e_i) \right]$
where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference.
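The yes/no handling can be illustrated as follows (schematic only; the token positions assume that “yes" and “no" are single tokens in the BERT vocabulary, and `input_ids` is the flat sequence of token ids for one example):

def build_qa_input(tokenizer, query, context):
    """Place 'yes' and 'no' right after [CLS] so they can compete with ordinary spans."""
    return tokenizer("yes no " + query, context, return_tensors="pt", truncation=True)

def decode_answer(tokenizer, input_ids, start_idx, end_idx):
    if start_idx == 1:      # position of the 'yes' token
        return "yes"
    if start_idx == 2:      # position of the 'no' token
        return "no"
    return tokenizer.decode(input_ids[start_idx:end_idx + 1])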
Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).
It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS.
Experimental Setup
MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification.
Experimental Setup ::: Tasks and Datasets
HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval.
FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification.
As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks.
Experimental Setup ::: Metrics
Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts.
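In code, the per-example joint scores follow directly from these definitions (illustrative helper):

def joint_scores(p_ans, r_ans, em_ans, p_sup, r_sup, em_sup):
    """Combine answer-span and supporting-fact scores into the joint metrics."""
    p_j, r_j = p_ans * p_sup, r_ans * r_sup
    f_j = 0.0 if p_j + r_j == 0 else 2 * p_j * r_j / (p_j + r_j)
    return p_j, r_j, f_j, em_ans * em_sup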
For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the FEVER Score for joint performance. The FEVER Score awards one point for each example with the correct predicted label only if all ground truth facts are contained in the predicted fact set of at most 5 elements. We also used the Oracle Score for the two retrieval modules. These scores were proposed in nie2019combining and indicate the upper bound of the final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set.
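A simplified reading of the FEVER Score for one example is sketched below (our paraphrase of the official scorer; for non-verifiable claims only the label is checked, and `gold_evidence_sets` holds the alternative complete evidence groups):

def fever_point(pred_label, gold_label, pred_evidence, gold_evidence_sets):
    """Return 1 if the example earns a FEVER point, else 0."""
    if pred_label != gold_label:
        return 0
    if gold_label == "NOT ENOUGH INFO":
        return 1
    predicted = set(pred_evidence[:5])               # at most 5 predicted sentences count
    return int(any(set(group) <= predicted for group in gold_evidence_sets))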
Results on Benchmarks
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .
As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over previous best results. The scores for answer prediction are also higher than all previous best results with a $\sim $8 absolute point increase on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluations.
Similarly for FEVER, we show F1 for evidence, the Label Accuracy, and the FEVER Score (same as the benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with $\sim $4 and $\sim $3 point absolute improvements on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second-best system, demonstrating its ability on semantic retrieval.
Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval towards the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter.
Analysis and Ablations
Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the immediate retrieval distribution, i.e. the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference data and training data for downstream modules.
Analysis and Ablations ::: Ablation Studies ::: Setups:
To reveal the importance of the neural retrieval modules at both the paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input to the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground truth set and the set retrieved directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA.
Analysis and Ablations ::: Ablation Studies ::: Results:
Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream module induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper-bound of the final score risks jeopardizing the performance of the overall system.
Next, the removal of the sentence-level retrieval module induces a $\sim $2 point drop on EM and F1 score in the QA task, and a $\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11 point drop on answer EM compared to a $\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and we observe a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
Analysis and Ablations ::: Sub-Module Change Analysis
To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Paragraph-level Retrieval
We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means a potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to answer each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to give too much information to downstream modules in units of paragraphs.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Sentence-level Retrieval
Similarly, to study the effects of neural sentence-level retrieval module towards downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with different $h_s$ value and experimented on both HotpotQA and FEVER.
Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to tolerate a certain amount of noise at the sentence level and benefit from a higher recall.
Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observe that the general trend is similar to that of the QA task, where both the Label Accuracy and FEVER Score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.
Analysis and Ablations ::: Answer Breakdown
We further sample 200 examples from HotpotQA and manually tag them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs the best in Yes/No questions as shown in Table TABREF23, reaching an accuracy of 70.6%.
Analysis and Ablations ::: Examples
Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without paragraph-level retrieval module. We can see that it is very difficult to filter the distracting sentence after sentence-level either by the sentence retrieval module or the QA module.
The above findings in both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) cascade effects on the downstream task might be caused by modifications at the paragraph-level retrieval stage.
Conclusion
We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting.
Acknowledgments
We thank the reviewers for their helpful comments and Yicheng Wang for useful comments. This work was supported by awards from Verisk, Google, Facebook, Salesforce, and Adobe (plus Amazon and Google GPU cloud credits). The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
Training Details
The hyper-parameters were chosen based on the performance of the system on the dev set. The hyper-parameter search space is shown in Table TABREF27 and the learning rate was set to $10^{-5}$ in all experiments.
Term-Based Retrieval Details ::: FEVER
We used the same key-word matching method in nie2019combining to get a candidate set for each query. We also used TF-IDF BIBREF20 method to get top-5 related documents for each query. Then, the two sets were combined to get final term-based retrieval set for FEVER. The mean and standard deviation of the number of the retrieved paragraph in the merged set were 8.06 and 4.88.
Term-Based Retrieval Details ::: HotpotQA
We first used the same procedure on FEVER to get an initial candidate set for each query in HotpotQA. Because HotpotQA requires at least 2-hop reasoning for each query, we then extract all the hyperlinked documents from the retrieved documents in the initial candidate set, rank them with TF-IDF BIBREF20 score and then select top-5 most related documents and add them to the candidate set. This gives the final term-based retrieval set for HotpotQA. The mean and standard deviation of the number of the retrieved paragraph for each query in HotpotQA were 39.43 and 16.05.
Detailed Results
The results of sentence-level retrieval and downstream QA with different values of $h_s$ on HotpotQA are in Table TABREF28.
The results of sentence-level retrieval and downstream verification with different values of $h_s$ on FEVER are in Table TABREF34.
The results of sentence-level retrieval and downstream QA with different values of $k_p$ on HotpotQA are in Table TABREF35.
Examples and Case Study
We further provide examples, a case study, and error analysis for the full pipeline system. The examples are shown in Tables TABREF37, TABREF38, TABREF39, TABREF40, TABREF41. The examples show high diversity on the semantic level, and errors often occur due to the system's failure to extract precise information (either wrong, surplus, or insufficient) from the KB. | HotspotQA: Yang, Ding, Muppet
Fever: Hanselowski, Yoneda, Nie |
9df4a7bd0abb99ae81f0ebb29c488f1caa0f268f | 9df4a7bd0abb99ae81f0ebb29c488f1caa0f268f_0 | Q: How do they train the retrieval modules?
Text: Introduction
Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.
Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems).
We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
Related Work
Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.
Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.
Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer to the retrieval in MRS as Semantic Retrieval since it emphasizes semantic understanding.
Method
In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.
To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is the set of selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denote a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia.
The system procedure is listed below:
(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from the whole of Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set manageable for downstream processing.
(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.
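As a small illustration of this selection rule (not the authors' code), the thresholding and top-$k_p$ cut could look like the sketch below, where `scored_paragraphs` holds the neural relatedness scores and the default values are placeholders.

```python
# Small illustration of the selection rule above: threshold the neural
# relatedness scores at h_p, then keep the top k_p paragraphs. The default
# values are placeholders, not the tuned settings from the paper.
def select_paragraphs(scored_paragraphs, k_p=2, h_p=0.5):
    kept = [(p, s) for p, s in scored_paragraphs if s > h_p]   # threshold h_p
    kept.sort(key=lambda pair: pair[1], reverse=True)          # sort by score
    return [p for p, _ in kept[:k_p]]                          # top k_p
```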
(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4), and we obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing the top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$.
(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$.
In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6.
Method ::: Modeling and Training
Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as: “[$\mathit {CLS}$] $\mathit {Query}$ [$\mathit {SEP}$] $\mathit {Context}$ [$\mathit {SEP}$]”.
We applied an affine layer and sigmoid activation on the last-layer output of the [$\mathit {CLS}$] token to obtain a scalar value. The parameters were updated with the binary cross entropy objective: $-\sum _{i \in \mathbf {T}^{p/s}_{pos}} \log (\hat{p}_i) - \sum _{i \in \mathbf {T}^{p/s}_{neg}} \log (1 - \hat{p}_i)$,
where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples.
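A minimal training sketch of such a retrieval scorer, assuming the HuggingFace `transformers` BERT implementation; batching, negative sampling, and the optimizer schedule are omitted, and the query–sentence pair below is invented for illustration.

```python
# Minimal sketch of the retrieval scorer described above: BERT encodes a
# "[CLS] query [SEP] context [SEP]" pair, an affine layer maps the [CLS]
# vector to a scalar, and training minimizes binary cross entropy.
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class RetrievalScorer(nn.Module):
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        self.affine = nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, **encoded):
        cls_vec = self.bert(**encoded).last_hidden_state[:, 0]  # [CLS] vector
        return self.affine(cls_vec).squeeze(-1)                 # scalar logit

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = RetrievalScorer()
bce = nn.BCEWithLogitsLoss()  # sigmoid + binary cross entropy in one step

encoded = tokenizer(["Who wrote Hamlet?"],
                    ["Hamlet is a tragedy written by William Shakespeare."],
                    return_tensors="pt", truncation=True, padding=True)
label = torch.tensor([1.0])   # 1.0 for a ground-truth (positive) sentence
loss = bce(model(**encoded), label)
loss.backward()
```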
QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$” and “$\mathit {no}$” tokens between [$\mathit {CLS}$] and the $Query$ as: “[$\mathit {CLS}$] $\mathit {yes}$ $\mathit {no}$ $Query$ [$\mathit {SEP}$] $Context$ [$\mathit {SEP}$]”,
where the supervision was given to the second or the third token when the answer is “yes” or “no”, such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as: $\sum _{i} \left[ \log (\hat{y}^s_i) + \log (\hat{y}^e_i) \right]$,
where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference.
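A small sketch of this span objective with the extra tokens; the sequence layout noted in the comments follows the description above, and the random logits stand in for a real BERT span-prediction head.

```python
# Sketch of the span objective with the extra tokens, assuming the input is
# laid out as: [CLS] yes no <query> [SEP] <context> [SEP], so that positions
# 1 and 2 can win the span prediction for yes/no questions.
import torch
import torch.nn.functional as F

def span_loss(start_logits, end_logits, start_idx, end_idx):
    # Cross entropy over positions = negative log probability of the true
    # start and end indexes, averaged over the two predictions.
    return (F.cross_entropy(start_logits, start_idx) +
            F.cross_entropy(end_logits, end_idx)) / 2

seq_len = 64
start_logits = torch.randn(1, seq_len)  # one example, one logit per position
end_logits = torch.randn(1, seq_len)
# A "yes" answer supervises both start and end on the position of "yes" (1).
print(span_loss(start_logits, end_logits, torch.tensor([1]), torch.tensor([1])))
```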
Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).
It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS.
Experimental Setup
MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification.
Experimental Setup ::: Tasks and Datasets
HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval.
FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification.
As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks.
Experimental Setup ::: Metrics
Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts.
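The joint metrics translate directly into a few lines of arithmetic, sketched below with made-up example values.

```python
# Direct transcription of the joint metrics defined above: answer-span and
# supporting-fact scores are multiplied per example.
def joint_scores(p_a, r_a, em_a, p_s, r_s, em_s):
    p_j, r_j = p_a * p_s, r_a * r_s
    f_j = 2 * p_j * r_j / (p_j + r_j) if (p_j + r_j) > 0 else 0.0
    em_j = em_a * em_s
    return p_j, r_j, f_j, em_j

print(joint_scores(p_a=0.8, r_a=0.7, em_a=1, p_s=0.9, r_s=0.6, em_s=1))
```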
For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the Fever Score for joint performance. Fever score will award one point for each example with the correct predicted label only if all ground truth facts were contained in the predicted facts set with at most 5 elements. We also used Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upperbound of final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set.
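A sketch of the FEVER Score rule as described above; the exemption for non-verifiable claims is an assumption carried over from the official scorer, since such claims have no gold evidence.

```python
# Sketch of the FEVER Score rule: one point per example when the predicted
# label is correct and all gold evidence is covered by the predicted
# evidence, which is capped at 5 sentences. The NOT ENOUGH INFO exemption
# is an assumption based on the official scorer.
def fever_score(examples):
    points = 0
    for ex in examples:
        evidence_ok = set(ex["gold_evidence"]) <= set(ex["pred_evidence"][:5])
        label_ok = ex["pred_label"] == ex["gold_label"]
        if label_ok and (ex["gold_label"] == "NOT ENOUGH INFO" or evidence_ok):
            points += 1
    return points / len(examples)
```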
Results on Benchmarks
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .
As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over the previous best results. The scores for answer predictions are also higher than all previous best results, with an increase of $\sim $8 absolute points on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation.
Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results, with absolute improvements of $\sim $4 and $\sim $3 points on Label Accuracy and FEVER Score, respectively. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second system, demonstrating its ability at semantic retrieval.
Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter.
Analysis and Ablations
Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both of the neural retrieval sub-modules also play an implicit but important role in controlling the intermediate retrieval distribution, i.e., the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference data and training data for downstream modules.
Analysis and Ablations ::: Ablation Studies ::: Setups:
To reveal the importance of neural retrieval modules at both paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examine the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-train the downstream QA or verification module by sampling data from both ground truth set and retrieved set directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA.
Analysis and Ablations ::: Ablation Studies ::: Results:
Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream module induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper-bound of the final score will risk jeopardizing the performance of the overall system.
Next, the removal of the sentence-level retrieval module induces a $\sim $2 point drop on EM and F1 score in the QA task, and a $\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11 point drop on answer EM compared to a $\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and observe a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
Analysis and Ablations ::: Sub-Module Change Analysis
To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Paragraph-level Retrieval
We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to ask each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to give the downstream modules too much information at the unit of a paragraph.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Sentence-level Retrieval
Similarly, to study the effects of neural sentence-level retrieval module towards downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with different $h_s$ value and experimented on both HotpotQA and FEVER.
Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to withstand a certain amount of noise at the sentence level and benefit from a higher recall.
Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 when modifying the upstream sentence-level threshold $h_s$. We observed that the general trend is similar to that of the QA task: both the label accuracy and the FEVER score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.
Analysis and Ablations ::: Answer Breakdown
We further sample 200 examples from HotpotQA and manually tag them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs the best in Yes/No questions as shown in Table TABREF23, reaching an accuracy of 70.6%.
Analysis and Ablations ::: Examples
Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module.
The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) a paragraph-level retrieval module is imperative; (2) the downstream task module can tolerate a certain amount of noise from sentence-level retrieval; (3) modifications at the paragraph-level retrieval stage can cause cascade effects on the downstream task.
Conclusion
We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting.
Acknowledgments
We thank the reviewers for their helpful comments and Yicheng Wang for useful comments. This work was supported by awards from Verisk, Google, Facebook, Salesforce, and Adobe (plus Amazon and Google GPU cloud credits). The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
Training Details
The hyper-parameters were chosen based on the performance of the system on the dev set. The hyper-parameter search space is shown in Table TABREF27, and the learning rate was set to $10^{-5}$ in all experiments.
Term-Based Retrieval Details ::: FEVER
We used the same keyword matching method as in nie2019combining to get a candidate set for each query. We also used the TF-IDF BIBREF20 method to get the top-5 related documents for each query. Then, the two sets were combined to get the final term-based retrieval set for FEVER. The mean and standard deviation of the number of retrieved paragraphs in the merged set were 8.06 and 4.88.
Term-Based Retrieval Details ::: HotpotQA
We first used the same procedure as for FEVER to get an initial candidate set for each query in HotpotQA. Because HotpotQA requires at least 2-hop reasoning for each query, we then extracted all the hyperlinked documents from the retrieved documents in the initial candidate set, ranked them with the TF-IDF BIBREF20 score, selected the top-5 most related documents, and added them to the candidate set. This gives the final term-based retrieval set for HotpotQA. The mean and standard deviation of the number of retrieved paragraphs per query in HotpotQA were 39.43 and 16.05.
Detailed Results
The results of sentence-level retrieval and downstream QA with different values of $h_s$ on HotpotQA are in Table TABREF28.
The results of sentence-level retrieval and downstream verification with different values of $h_s$ on FEVER are in Table TABREF34.
The results of sentence-level retrieval and downstream QA with different values of $k_p$ on HotpotQA are in Table TABREF35.
Examples and Case Study
We further provide examples, a case study, and error analysis for the full pipeline system. The examples are shown in Tables TABREF37, TABREF38, TABREF39, TABREF40, TABREF41. The examples show high diversity on the semantic level, and errors often occur due to the system's failure to extract precise information (either wrong, surplus, or insufficient) from the KB. | We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. |
b7291845ccf08313e09195befd3c8030f28f6a9e | b7291845ccf08313e09195befd3c8030f28f6a9e_0 | Q: How do they model the neural retrieval modules?
Text: Introduction
Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.
Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements on representative learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration on the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper-bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS where equal effort should be put in emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidences are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems).
We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping high upper bound for downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
Related Work
Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model developments BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.
Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.
Information Retrieval Success in deep neural networks inspires their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS where systems are asked to select facts needed to answer a question or verify a statement. We refer to the retrieval in MRS as Semantic Retrieval since it emphasizes semantic understanding.
Method
In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.
To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is the set of selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denote a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia.
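To make this formulation concrete, a typed sketch of the $(q, \mathbf {K}) \mapsto (\hat{y}, \mathbf {S})$ interface could look as follows; the four stage callables are placeholders for the modules of Fig. FIGREF2 rather than real implementations.

```python
# Typed sketch of the (q, K) -> (y_hat, S) interface described above. None of
# the names below come from the authors' code; they only mirror the pipeline.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class MRSPipeline:
    term_retrieval: Callable[[str], List[str]]                   # q -> P_I
    paragraph_retrieval: Callable[[str, List[str]], List[str]]   # (q, P_I) -> P_N
    sentence_retrieval: Callable[[str, List[str]], List[str]]    # (q, P_N) -> S
    downstream: Callable[[str, List[str]], str]                  # (q, S) -> y_hat

    def __call__(self, q: str) -> Tuple[str, List[str]]:
        p_i = self.term_retrieval(q)
        p_n = self.paragraph_retrieval(q, p_i)
        s = self.sentence_retrieval(q, p_n)
        return self.downstream(q, s), s
```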
The system procedure is listed below:
(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from the whole of Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set manageable for downstream processing.
(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.
(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4), and we obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing the top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground truth sentence set denoted as $\mathbf {E}$.
(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$.
In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6.
Method ::: Modeling and Training
Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as: “[$\mathit {CLS}$] $\mathit {Query}$ [$\mathit {SEP}$] $\mathit {Context}$ [$\mathit {SEP}$]”.
We applied an affine layer and sigmoid activation on the last-layer output of the [$\mathit {CLS}$] token to obtain a scalar value. The parameters were updated with the binary cross entropy objective: $-\sum _{i \in \mathbf {T}^{p/s}_{pos}} \log (\hat{p}_i) - \sum _{i \in \mathbf {T}^{p/s}_{neg}} \log (1 - \hat{p}_i)$,
where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at sentence level, ground-truth sentences were served as positive examples while other sentences from upstream retrieved set were served as negative examples. Similarly at the paragraph-level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval processes were used as negative examples.
QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$” and “$\mathit {no}$” tokens between [$\mathit {CLS}$] and the $Query$ as: “[$\mathit {CLS}$] $\mathit {yes}$ $\mathit {no}$ $Query$ [$\mathit {SEP}$] $Context$ [$\mathit {SEP}$]”,
where the supervision was given to the second or the third token when the answer is “yes” or “no”, such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as: $\sum _{i} \left[ \log (\hat{y}^s_i) + \log (\hat{y}^e_i) \right]$,
where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference.
Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).
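A compact sketch of this 3-way objective and the context-construction rule; the label encoding, sampling size, and helper names are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of the verification objective (3-way cross entropy) and the context
# rule just described: verifiable claims mix gold evidence with retrieved
# sentences, non-verifiable claims use retrieved sentences only.
import random
import torch
import torch.nn.functional as F

LABELS = {"SUPPORTS": 0, "REFUTES": 1, "NOT ENOUGH INFO": 2}

def build_context(gold_evidence, retrieved, label, n_extra=3):
    sampled = random.sample(retrieved, min(n_extra, len(retrieved)))
    if label == "NOT ENOUGH INFO":
        return sampled                      # no gold evidence exists
    return gold_evidence + sampled          # gold facts plus retrieval noise

def verification_loss(logits, label):
    target = torch.tensor([LABELS[label]])
    return F.cross_entropy(logits, target)  # the 3-way cross entropy above

logits = torch.randn(1, 3)                  # stand-in for the model output
print(verification_loss(logits, "REFUTES"))
```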
It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we trained the sub-module in isolation without considering the properties of its precedent upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules being isolated. We release our code and the organized data both for reproducibility and providing an off-the-shelf testbed to facilitate future research on MRS.
Experimental Setup
MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification.
Experimental Setup ::: Tasks and Datasets
HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We experimented our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation on models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis on the explainable predictions and its relations with the upstream retrieval.
FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification.
As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks.
Experimental Setup ::: Metrics
Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts.
For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the Fever Score for joint performance. Fever score will award one point for each example with the correct predicted label only if all ground truth facts were contained in the predicted facts set with at most 5 elements. We also used Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upperbound of final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set.
Results on Benchmarks
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA .
As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to a doubling of the joint EM over the previous best results. The scores for answer predictions are also higher than all previous best results, with an increase of $\sim $8 absolute points on EM and $\sim $9 absolute points on F1. All the improvements are consistent between test and dev set evaluation.
Similarly for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results, with absolute improvements of $\sim $4 and $\sim $3 points on Label Accuracy and FEVER Score, respectively. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second system, demonstrating its ability at semantic retrieval.
Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter.
Analysis and Ablations
Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both of the neural retrieval sub-modules also play an implicit but important role in controlling the intermediate retrieval distribution, i.e., the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and in providing better inference data and training data for downstream modules.
Analysis and Ablations ::: Ablation Studies ::: Setups:
To reveal the importance of neural retrieval modules at both paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examine the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-train the downstream QA or verification module by sampling data from both ground truth set and retrieved set directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA.
Analysis and Ablations ::: Ablation Studies ::: Results:
Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream module induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper-bound of the final score will risk jeopardizing the performance of the overall system.
Next, the removal of the sentence-level retrieval module induces a $\sim $2 point drop on EM and F1 score in the QA task, and a $\sim $15 point drop on FEVER Score in the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the paragraph-level neural retrieval module induces an 11 point drop on answer EM compared to a $\sim $9 point drop on Label Accuracy in the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and observe a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
Analysis and Ablations ::: Sub-Module Change Analysis
To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Paragraph-level Retrieval
We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to ask each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain level of ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to give the downstream modules too much information at the unit of a paragraph.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Sentence-level Retrieval
Similarly, to study the effects of neural sentence-level retrieval module towards downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with different $h_s$ value and experimented on both HotpotQA and FEVER.
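A sketch of how such a threshold sweep could be scripted on the retrieval side alone; the per-threshold re-training of the downstream modules is omitted, and each example is assumed to carry model-scored sentences sorted by score in descending order.

```python
# Sketch of the sentence-threshold sweep: for each h_s, keep at most k_s=5
# sentences scoring above the threshold and measure retrieval P/R/F1 against
# the gold supporting facts.
def prf(pred, gold):
    tp = len(set(pred) & set(gold))
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def sweep_sentence_threshold(examples, k_s=5):
    for h_s in [round(0.1 * i, 1) for i in range(1, 10)]:
        scores = []
        for ex in examples:
            kept = [s for s, score in ex["scored_sentences"] if score > h_s]
            scores.append(prf(kept[:k_s], ex["gold_sentences"]))
        p, r, f1 = (sum(col) / len(col) for col in zip(*scores))
        print(f"h_s={h_s:.1f}  P={p:.3f}  R={r:.3f}  F1={f1:.3f}")
```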
Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to withstand a certain amount of noise at the sentence level and benefit from a higher recall.
Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 when modifying the upstream sentence-level threshold $h_s$. We observed that the general trend is similar to that of the QA task: both the label accuracy and the FEVER score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification could take advantage of a higher recall, the module is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.
Analysis and Ablations ::: Answer Breakdown
We further sample 200 examples from HotpotQA and manually tag them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs the best in Yes/No questions as shown in Table TABREF23, reaching an accuracy of 70.6%.
Analysis and Ablations ::: Examples
Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module.
The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) A paragraph-level retrieval module is imperative; (2) The downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) Modifications at paragraph-level retrieval might cause cascading effects on the downstream task.
Conclusion
We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting.
Acknowledgments
We thank the reviewers for their helpful comments and Yicheng Wang for useful comments. This work was supported by awards from Verisk, Google, Facebook, Salesforce, and Adobe (plus Amazon and Google GPU cloud credits). The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
Training Details
The hyper-parameters were chosen based on the performance of the system on the dev set. The hyper-parameter search space is shown in Table TABREF27 and the learning rate was set to $10^{-5}$ in all experiments.
Term-Based Retrieval Details ::: FEVER
We used the same keyword matching method as in nie2019combining to get a candidate set for each query. We also used the TF-IDF BIBREF20 method to get the top-5 related documents for each query. Then, the two sets were combined to get the final term-based retrieval set for FEVER. The mean and standard deviation of the number of retrieved paragraphs in the merged set were 8.06 and 4.88.
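As a rough illustration of this term-based step, the following sketch shows how the TF-IDF top-5 selection and the merge with the keyword-matched candidate set could be implemented in Python; the scikit-learn usage and the keyword_match helper are our own assumptions rather than the authors' released code, and in practice the TF-IDF index would be built once offline instead of per query.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

def tfidf_top_k(query, doc_ids, doc_texts, k=5):
    # Score every candidate document against the query with TF-IDF features.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    doc_matrix = vectorizer.fit_transform(doc_texts)        # one row per document
    query_vec = vectorizer.transform([query])
    scores = linear_kernel(query_vec, doc_matrix).ravel()   # similarity per document
    top = scores.argsort()[::-1][:k]
    return [doc_ids[i] for i in top]

def term_based_retrieval_fever(query, doc_ids, doc_texts, keyword_match):
    keyword_set = set(keyword_match(query))                        # rule-based candidates
    tfidf_set = set(tfidf_top_k(query, doc_ids, doc_texts, k=5))   # TF-IDF top-5
    return keyword_set | tfidf_set                                 # merged candidate set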
Term-Based Retrieval Details ::: HotpotQA
We first used the same procedure as for FEVER to get an initial candidate set for each query in HotpotQA. Because HotpotQA requires at least 2-hop reasoning for each query, we then extracted all the hyperlinked documents from the retrieved documents in the initial candidate set, ranked them by TF-IDF BIBREF20 score, and selected the top-5 most related documents to add to the candidate set. This gives the final term-based retrieval set for HotpotQA. The mean and standard deviation of the number of retrieved paragraphs for each query in HotpotQA were 39.43 and 16.05.
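A corresponding sketch of the 2-hop expansion is given below; initial_set stands for the FEVER-style candidate set above, hyperlinks maps each document to the documents it links to, and tfidf_top_k is the helper sketched in the previous subsection (all of these names are hypothetical).

def expand_candidates_hotpotqa(query, initial_set, hyperlinks, doc_texts, k=5):
    # Collect documents reachable by one hyperlink hop from the initial candidates.
    linked = set()
    for doc in initial_set:
        linked.update(hyperlinks.get(doc, []))
    linked -= set(initial_set)                      # keep only newly reachable documents
    if not linked:
        return set(initial_set)
    linked = sorted(linked)
    top_linked = tfidf_top_k(query, linked, [doc_texts[d] for d in linked], k=k)
    return set(initial_set) | set(top_linked)       # final term-based retrieval set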
Detailed Results
The results of sentence-level retrieval and downstream QA with different values of $h_s$ on HotpotQA are in Table TABREF28.
The results of sentence-level retrieval and downstream verification with different values of $h_s$ on FEVER are in Table TABREF34.
The results of sentence-level retrieval and downstream QA with different values of $k_p$ on HotpotQA are in Table TABREF35.
Examples and Case Study
We further provide examples, a case study, and error analysis for the full pipeline system. The examples are shown in Tables TABREF37, TABREF38, TABREF39, TABREF40, TABREF41. The examples show high diversity at the semantic level, and errors often occur due to the system's failure to extract precise (either wrong, surplus, or insufficient) information from the KB. | BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling |
ac54a9c30c968e5225978a37032158a6ffd4ddb8 | ac54a9c30c968e5225978a37032158a6ffd4ddb8_0 | Q: Retrieval at what level performs better, sentence level or paragraph level?
Text: Introduction
Extracting external textual knowledge for machine comprehension systems has long been an important yet challenging problem. Success requires not only precise retrieval of the relevant information sparsely stored in a large knowledge source but also a deep understanding of both the selected knowledge and the input query to give the corresponding output. Initiated by chen2017drqa, the task was termed Machine Reading at Scale (MRS), seeking to provide a challenging situation where machines are required to do both semantic retrieval and comprehension at different levels of granularity for the final downstream task.
Progress on MRS has been made by improving individual IR or comprehension sub-modules with recent advancements in representation learning BIBREF0, BIBREF1, BIBREF2. However, partially due to the lack of annotated data for intermediate retrieval in an MRS setting, the evaluations were done mainly on the final downstream task and with much less consideration of the intermediate retrieval performance. This led to the convention that upstream retrieval modules mostly focus on getting better coverage of the downstream information such that the upper bound of the downstream score can be improved, rather than finding more exact information. This convention is misaligned with the nature of MRS, where equal effort should be put into emphasizing the models' joint performance and optimizing the relationship between the semantic retrieval and the downstream comprehension sub-tasks.
Hence, to shed light on the importance of semantic retrieval for downstream comprehension tasks, we start by establishing a simple yet effective hierarchical pipeline system for MRS using Wikipedia as the external knowledge source. The system is composed of a term-based retrieval module, two neural modules for both paragraph-level retrieval and sentence-level retrieval, and a neural downstream task module. We evaluated the system on two recent large-scale open domain benchmarks for fact verification and multi-hop QA, namely FEVER BIBREF3 and HotpotQA BIBREF4, in which retrieval performance can also be evaluated accurately since intermediate annotations on evidence are provided. Our system achieves state-of-the-art results with 45.32% for answer EM and 25.14% joint EM on HotpotQA (8% absolute improvement on answer EM and doubling the joint EM over the previous best results) and with 67.26% on FEVER score (3% absolute improvement over previously published systems).
We then provide empirical studies to validate design decisions. Specifically, we prove the necessity of both paragraph-level retrieval and sentence-level retrieval for maintaining good performance, and further illustrate that a better semantic retrieval module not only is beneficial to achieving high recall and keeping a high upper bound for the downstream task, but also plays an important role in shaping the downstream data distribution and providing more relevant and high-quality data for downstream sub-module training and inference. These mechanisms are vital for a good MRS system on both QA and fact verification.
Related Work
Machine Reading at Scale First proposed and formalized in chen2017drqa, MRS has gained popularity with an increasing amount of work on both dataset collection BIBREF5, BIBREF6 and MRS model development BIBREF7, BIBREF8, BIBREF9. In some previous work BIBREF10, paragraph-level retrieval modules were mainly used for improving the recall of required information, while in some other works BIBREF4, sentence-level retrieval modules were merely used for solving the auxiliary sentence selection task. In our work, we focus on revealing the relationship between semantic retrieval at different granularity levels and the downstream comprehension task. To the best of our knowledge, we are the first to apply and optimize neural semantic retrieval at both paragraph and sentence levels for MRS.
Automatic Fact Checking: Recent work BIBREF11 formalized the task of automatic fact checking from the viewpoint of machine learning and NLP. The release of FEVER BIBREF3 stimulates many recent developments BIBREF12, BIBREF13, BIBREF14 on data-driven neural networks for automatic fact checking. We consider the task also as MRS because they share almost the same setup except that the downstream task is verification or natural language inference (NLI) rather than QA.
Information Retrieval The success of deep neural networks has inspired their application to information retrieval (IR) tasks BIBREF15, BIBREF16, BIBREF17, BIBREF18. In typical IR settings, systems are required to retrieve and rank BIBREF19 elements from a collection of documents based on their relevance to the query. This setting might be very different from the retrieval in MRS, where systems are asked to select facts needed to answer a question or verify a statement. We refer to the retrieval in MRS as Semantic Retrieval since it emphasizes semantic understanding.
Method
In previous works, an MRS system can be complicated with different sub-components processing different retrieval and comprehension sub-tasks at different levels of granularity, and with some sub-components intertwined. For interpretability considerations, we used a unified pipeline setup. The overview of the system is in Fig. FIGREF2.
To be specific, we formulate the MRS system as a function that maps an input tuple $(q, \mathbf {K})$ to an output tuple $(\hat{y}, \mathbf {S})$ where $q$ indicates the input query, $\mathbf {K}$ is the textual KB, $\hat{y}$ is the output prediction, and $\mathbf {S}$ is selected supporting sentences from Wikipedia. Let $\mathbf {E}$ denotes a set of necessary evidences or facts selected from $\mathbf {K}$ for the prediction. For a QA task, $q$ is the input question and $\hat{y}$ is the predicted answer. For a verification task, $q$ is the input claim and $\hat{y}$ is the predicted truthfulness of the input claim. For all tasks, $\mathbf {K}$ is Wikipedia.
The system procedure is listed below:
(1) Term-Based Retrieval: To begin with, we used a combination of the TF-IDF method and a rule-based keyword matching method to narrow the scope from whole Wikipedia down to a set of related paragraphs; this is a standard procedure in MRS BIBREF20, BIBREF10, BIBREF12. The focus of this step is to efficiently select a candidate set $\mathbf {P_I}$ that can cover the information as much as possible ($\mathbf {P_I} \subset \mathbf {K}$) while keeping the size of the set acceptable enough for downstream processing.
(2) Paragraph-Level Neural Retrieval: After obtaining the initial set, we compare each paragraph in $\mathbf {P_I}$ with the input query $q$ using a neural model (which will be explained later in Sec SECREF4). The outputs of the neural model are treated as the relatedness score between the input query and the paragraphs. The scores will be used to sort all the upstream paragraphs. Then, $\mathbf {P_I}$ will be narrowed to a new set $\mathbf {P_N}$ ($\mathbf {P_N} \subset \mathbf {P_I}$) by selecting top $k_p$ paragraphs having relatedness score higher than some threshold value $h_p$ (going out from the P-Level grey box in Fig. FIGREF2). $k_p$ and $h_p$ would be chosen by keeping a good balance between the recall and precision of the paragraph retrieval.
(3) Sentence-Level Neural Retrieval: Next, we select the evidence at the sentence level by decomposing all the paragraphs in $\mathbf {P_N}$ into sentences. Similarly, each sentence is compared with the query using a neural model (see details in Sec SECREF4) to obtain a set of sentences $\mathbf {S} \subset \mathbf {P_N}$ for the downstream task by choosing the top $k_s$ sentences with output scores higher than some threshold $h_s$ (S-Level grey box in Fig. FIGREF2). During evaluation, $\mathbf {S}$ is often evaluated against some ground-truth sentence set denoted as $\mathbf {E}$; a minimal sketch of this thresholded top-$k$ selection is given after step (4) below.
(4) Downstream Modeling: At the final step, we simply applied task-specific neural models (e.g., QA and NLI) on the concatenation of all the sentences in $\mathbf {S}$ and the query, obtaining the final output $\hat{y}$.
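A minimal sketch of the thresholded top-k selection shared by steps (2) and (3), assuming a score_fn that returns the neural retrieval model's relatedness score for a single candidate paragraph or sentence:

def select_top_k(query, candidates, score_fn, k, h):
    # Keep only candidates scoring above the threshold h, then take the k best.
    scored = [(score_fn(query, c), c) for c in candidates]
    scored = [(s, c) for s, c in scored if s > h]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [c for _, c in scored[:k]]

# P_N = select_top_k(q, P_I, paragraph_scorer, k_p, h_p)                 # step (2)
# S   = select_top_k(q, split_sentences(P_N), sentence_scorer, k_s, h_s) # step (3)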
In some experiments, we modified the setup for certain analysis or ablation purposes which will be explained individually in Sec SECREF6.
Method ::: Modeling and Training
Throughout all our experiments, we used BERT-Base BIBREF2 to provide the state-of-the-art contextualized modeling of the input text.
Semantic Retrieval: We treated the neural semantic retrieval at both the paragraph and sentence level as binary classification problems with models' parameters updated by minimizing binary cross entropy loss. To be specific, we fed the query and context into BERT as:
We applied an affine layer and sigmoid activation on the last layer output of the [$\mathit {CLS}$] token which is a scalar value. The parameters were updated with the objective function:
where $\hat{p}_i$ is the output of the model, $\mathbf {T}^{p/s}_{pos}$ is the positive set and $\mathbf {T}^{p/s}_{neg}$ is the negative set. As shown in Fig. FIGREF2, at the sentence level, ground-truth sentences served as positive examples while other sentences from the upstream retrieved set served as negative examples. Similarly, at the paragraph level, paragraphs having any ground-truth sentence were used as positive examples and other paragraphs from the upstream term-based retrieval process were used as negative examples.
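A minimal PyTorch-style sketch of this retrieval objective is shown below; the encoder interface returning the final-layer [CLS] vector and the batching of positive and negative pairs are schematic assumptions rather than the exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RetrievalScorer(nn.Module):
    # Assumption: cls_vectors are the [CLS] outputs of BERT-Base (hidden size 768)
    # for the concatenated query/context inputs described above.
    def __init__(self, hidden_size=768):
        super().__init__()
        self.affine = nn.Linear(hidden_size, 1)

    def forward(self, cls_vectors):
        return torch.sigmoid(self.affine(cls_vectors)).squeeze(-1)   # scalar score per pair

def retrieval_loss(scorer, pos_cls, neg_cls):
    # Positives: query paired with ground-truth sentences/paragraphs.
    # Negatives: query paired with other items from the upstream retrieved set.
    scores = torch.cat([scorer(pos_cls), scorer(neg_cls)], dim=0)
    labels = torch.cat([torch.ones(pos_cls.size(0)), torch.zeros(neg_cls.size(0))], dim=0)
    return F.binary_cross_entropy(scores, labels)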
QA: We followed devlin2018bert for QA span prediction modeling. To correctly handle yes-or-no questions in HotpotQA, we fed the two additional “$\mathit {yes}$" and “$\mathit {no}$" tokens between [$\mathit {CLS}$] and the $Query$ as:
where the supervision was given to the second or the third token when the answer is “yes" or “no", such that they can compete with all other predicted spans. The parameters of the neural QA model were trained to maximize the log probabilities of the true start and end indexes as:
where $\hat{y}^s_i$ and $\hat{y}^e_i$ are the predicted probability on the ground-truth start and end position for the $i$th example, respectively. It is worth noting that we used ground truth supporting sentences plus some other sentences sampled from upstream retrieved set as the context for training the QA module such that it will adapt to the upstream data distribution during inference.
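The sketch below illustrates the input construction with the extra "yes" and "no" tokens and the span objective; tokenization details and tensor shapes are illustrative assumptions only.

import torch.nn.functional as F

def build_qa_input(question_tokens, context_tokens):
    # [CLS] yes no <question> [SEP] <context> [SEP]; "yes" is the second token
    # and "no" the third, so they can compete with ordinary answer spans.
    return ["[CLS]", "yes", "no"] + question_tokens + ["[SEP]"] + context_tokens + ["[SEP]"]

def span_loss(logits_start, logits_end, gold_start, gold_end):
    # logits_*: (batch, seq_len) scores from an affine layer on top of BERT;
    # gold_* point at the "yes"/"no" positions for yes-or-no questions.
    return F.cross_entropy(logits_start, gold_start) + F.cross_entropy(logits_end, gold_end)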
Fact Verification: Following Thorne18Fever, we formulate downstream fact verification as the 3-way natural language inference (NLI) classification problem BIBREF21, BIBREF22 and train the model with 3-way cross entropy loss. The input format is the same as that of semantic retrieval and the objective is $\mathcal {J}_{ver} = -\sum _{i} \mathbf {y}_i \cdot \log (\hat{\mathbf {y}}_i)$, where $\hat{\mathbf {y}}_i \in \mathbf {R^3}$ denotes the model's output for the three verification labels, and $\mathbf {y}_i$ is a one-hot embedding for the ground-truth label. For verifiable queries, we used ground truth evidential sentences plus some other sentences sampled from upstream retrieved set as new evidential context for NLI. For non-verifiable queries, we only used sentences sampled from upstream retrieved set as context because those queries are not associated with ground truth evidential sentences. This detail is important for the model to identify non-verifiable queries and will be explained more in Sec SECREF6. Additional training details and hyper-parameter selections are in the Appendix (Sec. SECREF8; Table TABREF27).
It is worth noting that each sub-module in the system relies on its preceding sub-module to provide data both for training and inference. This means that there will be upstream data distribution misalignment if we train a sub-module in isolation without considering the properties of its preceding upstream module. The problem is similar to the concept of internal covariate shift BIBREF23, where the distribution of each layer's inputs changes inside a neural network. Therefore, it makes sense to study this issue in a joint MRS setting rather than a typical supervised learning setting where training and test data tend to be fixed and modules are isolated. We release our code and the organized data both for reproducibility and for providing an off-the-shelf testbed to facilitate future research on MRS.
Experimental Setup
MRS requires a system not only to retrieve relevant content from textual KBs but also to possess enough understanding ability to solve the downstream task. To understand the impact or importance of semantic retrieval on the downstream comprehension, we established a unified experimental setup that involves two different downstream tasks, i.e., multi-hop QA and fact verification.
Experimental Setup ::: Tasks and Datasets
HotpotQA: This dataset is a recent large-scale QA dataset that brings in new features: (1) the questions require finding and reasoning over multiple documents; (2) the questions are diverse and not limited to pre-existing KBs; (3) it offers a new comparison question type BIBREF4. We evaluated our system on HotpotQA in the fullwiki setting, where a system must find the answer to a question in the scope of the entire Wikipedia, an ideal MRS setup. The sizes of the train, dev and test split are 90,564, 7,405, and 7,405. More importantly, HotpotQA also provides human-annotated sentence-level supporting facts that are needed to answer each question. Those intermediate annotations enable evaluation of models' joint ability on both fact retrieval and answer span prediction, facilitating our direct analysis of the explainable predictions and their relation to the upstream retrieval.
FEVER: The Fact Extraction and VERification dataset BIBREF3 is a recent dataset collected to facilitate the automatic fact checking. The work also proposes a benchmark task in which given an arbitrary input claim, candidate systems are asked to select evidential sentences from Wikipedia and label the claim as either Support, Refute, or Not Enough Info, if the claim can be verified to be true, false, or non-verifiable, respectively, based on the evidence. The sizes of the train, dev and test split are 145,449, 19,998, and 9,998. Similar to HotpotQA, the dataset provides annotated sentence-level facts needed for the verification. These intermediate annotations could provide an accurate evaluation on the results of semantic retrieval and thus suits well for the analysis on the effects of retrieval module on downstream verification.
As in chen2017drqa, we use Wikipedia as our unique knowledge base because it is a comprehensive and self-evolving information source often used to facilitate intelligent systems. Moreover, as Wikipedia is the source for both HotpotQA and FEVER, it helps standardize any further analysis of the effects of semantic retrieval on the two different downstream tasks.
Experimental Setup ::: Metrics
Following Thorne18Fever, yang2018hotpotqa, we used annotated sentence-level facts to calculate the F1, Precision and Recall scores for evaluating sentence-level retrieval. Similarly, we labeled all the paragraphs that contain any ground truth fact as ground truth paragraphs and used the same three metrics for paragraph-level retrieval evaluation. For HotpotQA, following yang2018hotpotqa, we used exact match (EM) and F1 metrics for QA span prediction evaluation, and used the joint EM and F1 to evaluate models' joint performance on both retrieval and QA. The joint EM and F1 are calculated as: $P_j = P_a \cdot P_s; R_j = R_a \cdot R_s; F_j = \frac{2P_j \cdot R_j}{P_j + R_j}; \text{EM}_j = \text{EM}_a \cdot \text{EM}_s$, where $P$, $R$, and $\text{EM}$ denote precision, recall and EM; the subscript $a$ and $s$ indicate that the scores are for answer span and supporting facts.
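A direct transcription of the joint metric combination into code, assuming the per-example answer and supporting-fact precision, recall, and EM have already been computed by the standard evaluation script:

def joint_scores(p_ans, r_ans, em_ans, p_sup, r_sup, em_sup):
    p_joint = p_ans * p_sup
    r_joint = r_ans * r_sup
    f1_joint = 0.0 if (p_joint + r_joint) == 0 else 2 * p_joint * r_joint / (p_joint + r_joint)
    em_joint = em_ans * em_sup
    return em_joint, f1_joint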
For the FEVER task, following Thorne18Fever, we used the Label Accuracy for evaluating downstream verification and the FEVER Score for joint performance. The FEVER Score awards one point for each example with the correct predicted label only if all ground-truth facts are contained in the predicted fact set of at most 5 elements. We also used the Oracle Score for the two retrieval modules. The scores were proposed in nie2019combining and indicate the upper bound of the final FEVER Score at one intermediate layer assuming all downstream modules are perfect. All scores are averaged over examples in the whole evaluation set.
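A sketch of the strict per-example FEVER Score following this description; treating the annotation as alternative evidence groups, any one of which must be fully covered, follows the official scorer's convention and is an assumption on our part.

def fever_point(pred_label, gold_label, pred_evidence, gold_evidence_sets):
    if pred_label != gold_label:
        return 0
    if gold_label == "Not Enough Info":
        return 1                                     # no evidence required
    pred = set(pred_evidence[:5])                    # at most 5 predicted facts count
    return int(any(set(group) <= pred for group in gold_evidence_sets))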
Results on Benchmarks
We chose the best system based on the dev set, and used that for submitting private test predictions on both FEVER and HotpotQA.
As can be seen in Table TABREF8, with the proposed hierarchical system design, the whole pipeline system achieves a new state-of-the-art on HotpotQA with large-margin improvements on all the metrics. More specifically, the biggest improvement comes from the EM for the supporting facts, which in turn leads to doubling of the joint EM over previous best results. The scores for answer predictions are also higher than all previous best results with a $\sim $8 point absolute increase on EM and $\sim $9 points on F1. All the improvements are consistent between test and dev set evaluation.
Similarly, for FEVER, we showed F1 for evidence, the Label Accuracy, and the FEVER Score (same as benchmark evaluation) for models in Table TABREF9. Our system obtained substantially higher scores than all previously published results with $\sim $4 and $\sim $3 point absolute improvements on Label Accuracy and FEVER Score. In particular, the system gains 74.62 on the evidence F1, 22 points greater than that of the second system, demonstrating its ability on semantic retrieval.
Previous systems BIBREF24, BIBREF4 on HotpotQA treat supporting fact retrieval (sentence-level retrieval) just as an auxiliary task for providing extra model explainability. In nie2019combining, although they used a similar three-stage system for FEVER, they only applied one neural retrieval module at the sentence level, which potentially weakens its retrieval ability. Both of these previous best systems are different from our fully hierarchical pipeline approach. These observations lead to the assumption that the performance gain comes mainly from the hierarchical retrieval and its positive effects on the downstream task. Therefore, to validate the system design decisions in Sec SECREF3 and reveal the importance of semantic retrieval for the downstream task, we conducted a series of ablation and analysis experiments on all the modules. We started by examining the necessity of both paragraph and sentence retrieval and give insights on why both of them matter.
Analysis and Ablations
Intuitively, both the paragraph-level and sentence-level retrieval sub-modules help speed up the downstream processing. More importantly, since downstream modules were trained on data sampled from upstream modules, both neural retrieval sub-modules also play an implicit but important role in controlling the intermediate retrieval distribution, i.e., the distribution of set $\mathbf {P_N}$ and set $\mathbf {S}$ (as shown in Fig. FIGREF2), and providing better inference data and training data for downstream modules.
Analysis and Ablations ::: Ablation Studies ::: Setups:
To reveal the importance of the neural retrieval modules at both the paragraph and sentence level for maintaining the performance of the overall system, we removed either of them and examined the consequences. Because the removal of a module in the pipeline might change the distribution of the input of the downstream modules, we re-trained all the downstream modules accordingly. To be specific, in the system without the paragraph-level neural retrieval module, we re-trained the sentence-level retrieval module with negative sentences directly sampled from the term-based retrieval set and then also re-trained the downstream QA or verification module. In the system without the sentence-level neural retrieval module, we re-trained the downstream QA or verification module by sampling data from both the ground truth set and the retrieved set directly from the paragraph-level module. We tested the simplified systems on both FEVER and HotpotQA.
Analysis and Ablations ::: Ablation Studies ::: Results:
Tables TABREF13 and TABREF14 show the ablation results for the two neural retrieval modules at both the paragraph and sentence level on HotpotQA and FEVER. To begin with, we can see that removing the paragraph-level retrieval module significantly reduces the precision for sentence-level retrieval and the corresponding F1 on both tasks. More importantly, this loss of retrieval precision also led to substantial decreases in all the downstream scores on both the QA and verification tasks in spite of their higher upper-bound and recall scores. This indicates that the negative effects on the downstream module induced by the omission of paragraph-level retrieval cannot be remedied by the sentence-level retrieval module, and that focusing semantic retrieval merely on improving the recall or the upper bound of the final score risks jeopardizing the performance of the overall system.
Next, the removal of the sentence-level retrieval module induces a $\sim $2 point drop in EM and F1 score on the QA task, and a $\sim $15 point drop in FEVER Score on the verification task. This suggests that rather than just enhancing explainability for QA, the sentence-level retrieval module can also help pinpoint relevant information and reduce the noise in the evidence that might otherwise distract the downstream comprehension module. Another interesting finding is that without the sentence-level retrieval module, the QA module suffered much less than the verification module; conversely, the removal of the neural paragraph-level retrieval module induces an 11 point drop in answer EM compared to a $\sim $9 point drop in Label Accuracy on the verification task. This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. Finally, we also evaluate the F1 score on FEVER for each classification label and observe a significant drop of F1 on the Not Enough Info category without the retrieval module, meaning that semantic retrieval is vital for the downstream verification module's discriminative ability on the Not Enough Info label.
Analysis and Ablations ::: Sub-Module Change Analysis
To further study the effects of upstream semantic retrieval towards downstream tasks, we change training or inference data between intermediate layers and then examine how this modification will affect the downstream performance.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Paragraph-level Retrieval
We fixed $h_p=0$ (the value achieving the best performance), re-trained all the downstream parameters, and tracked their performance as $k_p$ (the number of selected paragraphs) was changed from 1 to 12. Increasing $k_p$ means a potentially higher coverage of the answer but more noise in the retrieved facts. Fig. FIGREF17 shows the results. As can be seen, the EM scores for supporting fact retrieval, answer prediction, and joint performance increase sharply when $k_p$ is changed from 1 to 2. This is consistent with the fact that at least two paragraphs are required to ask each question in HotpotQA. Then, after the peak, every score decreases as $k_p$ becomes larger, except the recall of supporting facts, which peaks when $k_p=4$. This indicates that even though the neural sentence-level retrieval module possesses a certain ability to select correct facts from noisier upstream information, the final QA module is more sensitive to upstream data and fails to maintain the overall system performance. Moreover, the reduction in answer EM and joint EM suggests that it might be risky to feed downstream modules too much information at the granularity of whole paragraphs.
Analysis and Ablations ::: Sub-Module Change Analysis ::: Effects of Sentence-level Retrieval
Similarly, to study the effects of the neural sentence-level retrieval module on the downstream QA and verification modules, we fixed $k_s$ to be 5 and set $h_s$ ranging from 0.1 to 0.9 with a 0.1 interval. Then, we re-trained the downstream QA and verification modules with different $h_s$ values and experimented on both HotpotQA and FEVER.
Question Answering: Fig. FIGREF18 shows the trend of performance. Intuitively, the precision increases while the recall decreases as the system becomes more strict about the retrieved sentences. The EM scores for supporting fact retrieval and joint performance reach their highest values when $h_s=0.5$, a natural balancing point between precision and recall. More interestingly, the EM score for answer prediction peaks when $h_s=0.2$, where the recall is higher than the precision. This misalignment between answer prediction performance and retrieval performance indicates that, unlike the observation at the paragraph level, the downstream QA module is able to tolerate a certain amount of noise at the sentence level and benefit from a higher recall.
Fact Verification: Fig. FIGREF19 shows the trends for Label Accuracy, FEVER Score, and Evidence F1 as the upstream sentence-level threshold $h_s$ is modified. We observed that the general trend is similar to that of the QA task, where both the label accuracy and FEVER score peak at $h_s=0.2$ whereas the retrieval F1 peaks at $h_s=0.5$. Note that, although the downstream verification module could take advantage of a higher recall, it is more sensitive to sentence-level retrieval compared to the QA module in HotpotQA. More detailed results are in the Appendix.
Analysis and Ablations ::: Answer Breakdown
We further sample 200 examples from HotpotQA and manually tag them according to several common answer types BIBREF4. The proportion of different answer types is shown in Figure FIGREF24. The performance of the system on each answer type is shown in Table TABREF23. The most frequent answer type is 'Person' (24%) and the least frequent answer type is 'Event' (2%). It is also interesting to note that the model performs the best in Yes/No questions as shown in Table TABREF23, reaching an accuracy of 70.6%.
Analysis and Ablations ::: Examples
Fig. FIGREF26 shows an example that is correctly handled by the full pipeline system but not by the system without the paragraph-level retrieval module. We can see that it is very difficult to filter out the distracting sentence at the sentence level, either by the sentence retrieval module or by the QA module.
The above findings on both FEVER and HotpotQA give us some important guidelines for MRS: (1) A paragraph-level retrieval module is imperative; (2) The downstream task module is able to tolerate a certain amount of noise from sentence-level retrieval; (3) Modifications at paragraph-level retrieval might cause cascading effects on the downstream task.
Conclusion
We proposed a simple yet effective hierarchical pipeline system that achieves state-of-the-art results on two MRS tasks. Ablation studies demonstrate the importance of semantic retrieval at both paragraph and sentence levels in the MRS system. The work can give general guidelines on MRS modeling and inspire future research on the relationship between semantic retrieval and downstream comprehension in a joint setting.
Acknowledgments
We thank the reviewers for their helpful comments and Yicheng Wang for useful comments. This work was supported by awards from Verisk, Google, Facebook, Salesforce, and Adobe (plus Amazon and Google GPU cloud credits). The views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency.
Training Details
The hyper-parameters were chosen based on the performance of the system on the dev set. The hyper-parameter search space is shown in Table TABREF27 and the learning rate was set to $10^{-5}$ in all experiments.
Term-Based Retrieval Details ::: FEVER
We used the same keyword matching method as in nie2019combining to get a candidate set for each query. We also used the TF-IDF BIBREF20 method to get the top-5 related documents for each query. Then, the two sets were combined to get the final term-based retrieval set for FEVER. The mean and standard deviation of the number of retrieved paragraphs in the merged set were 8.06 and 4.88.
Term-Based Retrieval Details ::: HotpotQA
We first used the same procedure as for FEVER to get an initial candidate set for each query in HotpotQA. Because HotpotQA requires at least 2-hop reasoning for each query, we then extracted all the hyperlinked documents from the retrieved documents in the initial candidate set, ranked them by TF-IDF BIBREF20 score, and selected the top-5 most related documents to add to the candidate set. This gives the final term-based retrieval set for HotpotQA. The mean and standard deviation of the number of retrieved paragraphs for each query in HotpotQA were 39.43 and 16.05.
Detailed Results
The results of sentence-level retrieval and downstream QA with different values of $h_s$ on HotpotQA are in Table TABREF28.
The results of sentence-level retrieval and downstream verification with different values of $h_s$ on FEVER are in Table TABREF34.
The results of sentence-level retrieval and downstream QA with different values of $k_p$ on HotpotQA are in Table TABREF35.
Examples and Case Study
We further provide examples, a case study, and error analysis for the full pipeline system. The examples are shown in Tables TABREF37, TABREF38, TABREF39, TABREF40, TABREF41. The examples show high diversity at the semantic level, and errors often occur due to the system's failure to extract precise (either wrong, surplus or insufficient) information from the KB. | This seems to indicate that the downstream QA module relies more on the upstream paragraph-level retrieval whereas the verification module relies more on the upstream sentence-level retrieval. |
b236b9827253037b2fd7884d7bfec74619d96293 | b236b9827253037b2fd7884d7bfec74619d96293_0 | Q: How much better performance of proposed model compared to answer-selection models?
Text: Introduction
Understanding texts and being able to answer a question posed by a human is a long-standing goal in the artificial intelligence field. Given the rapid advancement of neural network-based models and the availability of large-scale datasets, such as SQuAD BIBREF0 and TriviaQA BIBREF1, researchers have begun to concentrate on building automatic question-answering (QA) systems. One example of such a system is called the machine-reading question-answering (MRQA) model, which provides answers to questions from given passages BIBREF2, BIBREF3, BIBREF4.
Recently, research has revealed that most of the questions in the existing MRQA datasets do not require reasoning across sentences in the given context (passage); instead, they can be answered by looking at only a single sentence BIBREF5. Using this characteristic, a simple model can achieve performance competitive with that of a sophisticated model. However, in most real scenarios of QA applications, more than one sentence must be utilized to extract a correct answer.
To alleviate this limitation in the previous datasets, another type of dataset was developed in which answering the question requires reasoning over multiple sentences in the given passages BIBREF6, BIBREF7. Figure shows an example of a recently released dataset, the HotpotQA. This dataset consists of not only question-answer pairs with context passages but also supporting sentence information for answering the question annotated by a human.
In this study, we are interested in building a model that exploits the relational information among sentences in passages and in classifying the supporting sentences that contain the essential information for answering the question. To this end, we propose a novel graph neural network model, named Propagate-selector (PS), that can be directly employed as a subsystem in the QA pipeline. First, we design a graph structure to hold information in the HotpotQA dataset by assigning each sentence to an independent graph node. Then, we connect nodes with undirected edges using a proposed graph topology (see the discussion in SECREF1). Next, we allow PS to propagate information between the nodes through iterative hops to perform reasoning across the given sentences. Through the propagation process, the model learns to understand information that cannot be inferred when considering sentences in isolation.
To the best of our knowledge, this is the first work to employ a graph neural network structure to find supporting sentences for a QA system. Through experiments, we demonstrate that the proposed method achieves better performances when classifying supporting sentences than those of the widely used answer-selection models BIBREF8, BIBREF9, BIBREF10, BIBREF11.
Related Work
Previous researchers have also investigated neural network-based models for MRQA. One line of inquiry employs an attention mechanism between tokens in the question and passage to compute the answer span from the given text BIBREF12, BIBREF3. As the task scope was extended from specific- to open-domain QA, several models have been proposed to select a relevant paragraph from the text to predict the answer span BIBREF13, BIBREF14. However, none of these methods have addressed reasoning over multiple sentences.
To understand the relational patterns in the dataset, graph neural network algorithms have also been previously proposed. BIBREF15 proposed a graph convolutional network to classify graph-structured data. This model was further investigated for applications involving large-scale graphs BIBREF16, for the effectiveness of aggregating and combining graph nodes by employing an attention mechanism BIBREF17, and for adopting recurrent node updates BIBREF18. In addition, one trial involved applying graph neural networks to QA tasks; however, this usage was limited to the entity level rather than sentence level understanding BIBREF19.
Task and Dataset
The specific problem we aim to tackle in this study is to classify supporting sentences in the MRQA task. We consider the target dataset HotpotQA, by BIBREF6, which is comprised of tuples ($<$Q, $P_n$, $Y_i$, A$>$) where Q is the question, $P_n$ is the set of passages as the given context, and each passage $P\,{\in }\,P_n$ is further comprised of a set of sentences $S_i$ ($S_i\,{\in }\,P_n)$. Here, $Y_i$ is a binary label indicating whether $S_i$ contains the information required to answer the question, and A is the answer. In particular, we call a sentence, $S_s\,{\in }\,S_i$, a supporting sentence when $Y_s$ is true. Figure shows an example of the HotpotQA dataset.
In this study, we do not use the answer information from the dataset; we use only the subsequent tuples $<$Q, $P_n$, $Y_i$$>$ when classifying supporting sentences. We believe that this subproblem plays an important role in building a full QA pipeline because the proposed models for this task will be combined with other MRQA models in an end-to-end training process.
Methodology ::: Propagate-Selector
In this paper, we are interested in identifying supporting sentences, among sentences in the given text that contain information essential to answering the question. To build a model that can perform reasoning across multiple sentences, we propose a graph neural network model called Propagate-selector (PS). PS consists of the following parts:
Topology: To build a model that understands the relationship between sentences for answering a question, we propose a graph neural network where each node represents a sentence from passages and the question. Figure depicts the topology of the proposed model. In an offline step, we organize the content of each instance in a graph where each node represents a sentence from the passages and the question. Then, we add edges between nodes using the following topology:
we fully connect nodes that represent sentences from the same passage (dotted-black);
we fully connect nodes that represent the first sentence of each passage (dotted-red);
we add an edge between the question and every node for each passage (dotted-blue).
In this way, we enable a path by which sentence nodes can propagate information both within and across passages.
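A minimal sketch of the edge construction implied by the three rules above, where passages is a list of passages given as lists of sentence node ids and q is the question node id (undirected edges stored as index pairs):

from itertools import combinations

def build_edges(q, passages):
    edges = set()
    for sent_ids in passages:
        edges.update(combinations(sent_ids, 2))              # rule 1: sentences within a passage
    first_sents = [sent_ids[0] for sent_ids in passages if sent_ids]
    edges.update(combinations(first_sents, 2))               # rule 2: first sentences across passages
    for sent_ids in passages:
        edges.update((q, s) for s in sent_ids)               # rule 3: question to every sentence
    return edges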
Node representation: Question $\textbf {Q}\,{\in }\,\mathbb {R}^{d\times Q}$ and sentence ${\textbf {S}}_i\,{\in }\,\mathbb {R}^{d\times S_i}$, (where $d$ is the dimensionality of the word embedding and $Q$ and ${S}_i$ represent the lengths of the sequences in Q and ${\textbf {S}}_i$, respectively), are processed to acquire the sentence-level information. Recent studies have shown that a pretrained language model helps the model capture the contextual meaning of words in the sentence BIBREF20, BIBREF21. Following this study, we select an ELMo BIBREF20 language model for the word-embedding layer of our model as follows: $\textbf {L}^{Q}\,{=}\,\text{ELMo}(\textbf {Q}),~\textbf {L}^{S}\,{=}\,\text{ELMo}(\textbf {S})$. Using these new representations, we compute the sentence representation as follows:
where $f_\theta $ is the RNN function with the weight parameters $\theta $, and $\textbf {N}^Q\,{\in }\,\mathbb {R}^{d^{\prime }}$ and $\textbf {N}^S\,{\in }\,\mathbb {R}^{d^{\prime }}$ are node representations for the question and sentence, respectively (where $d^{\prime }$ is the dimensionality of the RNN hidden units).
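A sketch of this node encoder is given below, assuming an elmo callable that returns token embeddings of shape (sequence length, embedding dimension) and taking the last GRU hidden state as the node vector; both choices are our reading of the equation rather than the released code.

import torch.nn as nn

class NodeEncoder(nn.Module):
    def __init__(self, elmo, elmo_dim=256, hidden_dim=200):
        super().__init__()
        self.elmo = elmo
        self.gru = nn.GRU(elmo_dim, hidden_dim, batch_first=True)

    def forward(self, tokens):
        emb = self.elmo(tokens).unsqueeze(0)    # (1, seq_len, elmo_dim)
        _, h_n = self.gru(emb)                  # h_n: (1, 1, hidden_dim)
        return h_n.squeeze(0).squeeze(0)        # node representation N^Q or N^S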
Aggregation: An iterative attentive aggregation function to the neighbor nodes is utilized to compute the amount of information to be propagated to each node in the graph as follows:
where $\textbf {A}_v\,{\in }\,\mathbb {R}^{d^{\prime }}$ is the aggregated information for the v-th node computed by attentive weighted summation of its neighbor nodes, $a_{vu}$ is attention weight between node v and its neighbor nodes $u~(u{\in }N(v))$, $\textbf {N}_u\,{\in }\,\mathbb {R}^{d^{\prime }}$ is the u-th node representation, $\sigma $ is a nonlinear activation function, and $\textbf {W}\,{\in }\,\mathbb {R}^{d^{\prime }\times d^{\prime }}$ is the learned model parameter. Because all the nodes belong to a graph structure in which the iterative aggregation is performed among nodes, the k in the equation indicates that the computation occurs in the k-th hop (iteration).
Update: The aggregated information for the v-th node, $\textbf {A}_v$ in equation (DISPLAY_FORM6), is combined with its previous node representation to update the node. We apply a skip connection to allow the model to learn the amount of information to be updated in each hop as follows:
where $\sigma $ is a nonlinear activation function, {;} indicates vector concatenation, and $\textbf {W}\,{\in }\,\mathbb {R}^{d^{\prime }\times 2d^{\prime }}$ is the learned model parameter.
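A single propagation hop combining the attentive aggregation and the skip-connection update could look as follows; the exact attention parameterization of equation (DISPLAY_FORM6) is not reproduced in the text, so the dot-product form below is our own assumption.

import torch
import torch.nn as nn

class PropagationHop(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w_agg = nn.Linear(dim, dim, bias=False)      # W in the aggregation
        self.w_upd = nn.Linear(2 * dim, dim, bias=False)  # W in the update

    def forward(self, nodes, adj):
        # nodes: (num_nodes, dim); adj: boolean adjacency, adj[v][u] iff u is a neighbor of v.
        logits = torch.tanh(self.w_agg(nodes)) @ nodes.t()
        logits = logits.masked_fill(~adj, float("-inf"))   # restrict attention to neighbors
        attn = torch.softmax(logits, dim=-1)               # a_{vu} over neighbors u
        aggregated = attn @ nodes                          # A_v, weighted neighbor sum
        updated = torch.tanh(self.w_upd(torch.cat([nodes, aggregated], dim=-1)))
        return updated, attn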
Methodology ::: Optimization
Because our objective is to classify supporting sentences ($S_i\,{\in }\,{P_n}$) from the given tuples $<$Q, $P_n$, $Y_i$$>$, we define two types of loss to be minimized. One is a rank loss that computes the cross-entropy loss between a question and each sentence using the ground-truth $Y_i$ as follows:
where $g_{\theta }$ is a feedforward network that computes a similarity score between the final representation of the question and each sentence. The other is attention loss, which is defined in each hop as follows:
where $a_{qi}^{(k)}$ indicates the relevance between the question node q and the i-th sentence node in the k-th hop as computed by equation (DISPLAY_FORM6).
Finally, these two losses are combined to construct the final objective function:
where $\alpha $ is a hyperparameter.
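A schematic sketch of the combined objective is given below; the exact forms of the rank and attention losses follow the referenced equations only loosely, so the binary cross-entropy choices and the placement of the alpha weight are assumptions.

import torch.nn.functional as F

def combined_loss(sent_scores, labels, attn_q_per_hop, alpha):
    # sent_scores: g_theta similarity logits between the question and each sentence;
    # labels: binary Y_i; attn_q_per_hop: question-to-sentence attention a_{qi} from every hop.
    rank_loss = F.binary_cross_entropy_with_logits(sent_scores, labels.float())
    attn_loss = sum(F.binary_cross_entropy(attn_q, labels.float()) for attn_q in attn_q_per_hop)
    return rank_loss + alpha * attn_loss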
Experiments
We regard the task as the problem of selecting the supporting sentences from the passages to answer the questions. Similar to the answer-selection task in the QA literature, we report the model performance using the mean average precision (MAP) and mean reciprocal rank (MRR) metrics. To evaluate the model performance, we use the HotpotQA dataset, which is described in section “Task and Dataset". Table shows properties of the dataset. We conduct a series of experiments to compare baseline methods with the newly proposed models. All code developed for this research will be made available via a public web repository along with the dataset.
Experiments ::: Implementation Details
To implement the Propagate-selector (PS) model, we first use a small version of ELMo (13.6 M parameters) that provides 256-dimensional context embeddings. This choice was based on the available batch size (50 for our experiments) when training the complete model on a single GPU (GTX 1080 Ti). When we tried using the original version of ELMo (93.6 M parameters, 1024-dimensional context embeddings), we were able to increase the batch size only up to 20, which resulted in excessive training time (approximately 90 hours). For the sentence encoding, we used a GRU BIBREF22 with a hidden unit dimension of 200. The hidden unit weight matrix of the GRU is initialized using orthogonal weights BIBREF23. Dropout is applied for regularization purposes at a ratio of 0.7 for the RNN (in equation DISPLAY_FORM5) and 0.7 for the attention weight matrix (in equation DISPLAY_FORM6). For the nonlinear activation function (in equations DISPLAY_FORM6 and DISPLAY_FORM7), we use the $tanh$ function.
Regarding the vocabulary, we replaced words appearing fewer than 12 times (in terms of term frequency) with “UNK" tokens. The final vocabulary size was 138,156. We also applied the Adam optimizer BIBREF24, including gradient clipping by norm at a threshold of 5.
Experiments ::: Comparisons with Other Methods
Table shows the model performances on the HotpotQA dataset. Because the dataset only provides training (trainset) and validation (devset) subsets, we report the model performances on these datasets. While training the model, we implement early termination based on the devset performance and measure the best performance. To compare the model performances, we choose widely used answer-selection models such as CompAggr BIBREF8, IWAN BIBREF10, CompClip BIBREF9, sCARNN BIBREF11, and CompClip-LM BIBREF25 which were primarily developed to rank candidate answers for a given question. The CompClip-LM is based on CompClip and adopts ELMo in its word-embedding layer.
In addition to the main proposed model, PS-rnn-elmo, we also investigate three model variants: PS-rnn-elmo-s uses a small version of ELMo, PS-rnn uses GloVe BIBREF26 instead of ELMo as a word-embedding layer, and PS-avg employs average pooling ($\textbf {N}^Q{=}\,\text{average}(\textbf {Q})$ and $\textbf {N}^S{=}\,\text{average}(\textbf {S})$) instead of RNN encoding in equation (DISPLAY_FORM5).
As shown in Table , the proposed PS-rnn-elmo shows a significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute).
Experiments ::: Hop Analysis
Table shows the model performance (PS-elmo) as the number of hops increases. We find that the model achieves the best performance in the 4-hop case but starts to degrade when the number of hops exceeds 4. We assume that the model experiences the vanishing gradient problem under a larger number of iterative propagations (hops). Table shows model performance with small version of ELMo.
Figure depicts the attention weight between the question node and each sentence node (hop-4 model case). As the hop number increases, we observe that the model properly identifies supporting sentences (in this example, sentence #4 and #17). This behavior demonstrates that our proposed model correctly learns how to propagate the necessary information among the sentence nodes via the iterative process.
Conclusion
In this paper, we propose a graph neural network that finds the sentences crucial for answering a question. The experiments demonstrate that the model correctly classifies supporting sentences by iteratively propagating the necessary information through its novel architecture. We believe that our approach will play an important role in building a QA pipeline in combination with other MRQA models trained in an end-to-end manner. | significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute) |
b53efdbb9e53a65cd3828a3eb485c70f782a06e5 | b53efdbb9e53a65cd3828a3eb485c70f782a06e5_0 | Q: How are some nodes initially connected based on text structure?
Text: Introduction
Understanding texts and being able to answer a question posed by a human is a long-standing goal in the artificial intelligence field. Given the rapid advancement of neural network-based models and the availability of large-scale datasets, such as SQuAD BIBREF0 and TriviaQA BIBREF1, researchers have begun to concentrate on building automatic question-answering (QA) systems. One example of such a system is called the machine-reading question-answering (MRQA) model, which provides answers to questions from given passages BIBREF2, BIBREF3, BIBREF4.
Recently, research has revealed that most of the questions in the existing MRQA datasets do not require reasoning across sentences in the given context (passage); instead, they can be answered by looking at only a single sentence BIBREF5. Using this characteristic, a simple model can achieve performance competitive with that of a sophisticated model. However, in most real scenarios of QA applications, more than one sentence must be utilized to extract a correct answer.
To alleviate this limitation in the previous datasets, another type of dataset was developed in which answering the question requires reasoning over multiple sentences in the given passages BIBREF6, BIBREF7. Figure shows an example of a recently released dataset, the HotpotQA. This dataset consists of not only question-answer pairs with context passages but also supporting sentence information for answering the question annotated by a human.
In this study, we are interested in building a model that exploits the relational information among sentences in passages and in classifying the supporting sentences that contain the essential information for answering the question. To this end, we propose a novel graph neural network model, named Propagate-selector (PS), that can be directly employed as a subsystem in the QA pipeline. First, we design a graph structure to hold information in the HotpotQA dataset by assigning each sentence to an independent graph node. Then, we connect nodes with undirected edges using a proposed graph topology (see the discussion in SECREF1). Next, we allow PS to propagate information between the nodes through iterative hops to perform reasoning across the given sentences. Through the propagation process, the model learns to understand information that cannot be inferred when considering sentences in isolation.
To the best of our knowledge, this is the first work to employ a graph neural network structure to find supporting sentences for a QA system. Through experiments, we demonstrate that the proposed method achieves better performances when classifying supporting sentences than those of the widely used answer-selection models BIBREF8, BIBREF9, BIBREF10, BIBREF11.
Related Work
Previous researchers have also investigated neural network-based models for MRQA. One line of inquiry employs an attention mechanism between tokens in the question and passage to compute the answer span from the given text BIBREF12, BIBREF3. As the task scope was extended from specific- to open-domain QA, several models have been proposed to select a relevant paragraph from the text to predict the answer span BIBREF13, BIBREF14. However, none of these methods have addressed reasoning over multiple sentences.
To understand the relational patterns in the dataset, graph neural network algorithms have also been previously proposed. BIBREF15 proposed a graph convolutional network to classify graph-structured data. This model was further investigated for applications involving large-scale graphs BIBREF16, for the effectiveness of aggregating and combining graph nodes by employing an attention mechanism BIBREF17, and for adopting recurrent node updates BIBREF18. In addition, one trial involved applying graph neural networks to QA tasks; however, this usage was limited to the entity level rather than sentence level understanding BIBREF19.
Task and Dataset
The specific problem we aim to tackle in this study is to classify supporting sentences in the MRQA task. We consider the target dataset HotpotQA, by BIBREF6, which is comprised of tuples ($<$Q, $P_n$, $Y_i$, A$>$) where Q is the question, $P_n$ is the set of passages as the given context, and each passage $P\,{\in }\,P_n$ is further comprised of a set of sentences $S_i$ ($S_i\,{\in }\,P_n)$. Here, $Y_i$ is a binary label indicating whether $S_i$ contains the information required to answer the question, and A is the answer. In particular, we call a sentence, $S_s\,{\in }\,S_i$, a supporting sentence when $Y_s$ is true. Figure shows an example of the HotpotQA dataset.
In this study, we do not use the answer information from the dataset; we use only the subsequent tuples $<$Q, $P_n$, $Y_i$$>$ when classifying supporting sentences. We believe that this subproblem plays an important role in building a full QA pipeline because the proposed models for this task will be combined with other MRQA models in an end-to-end training process.
Methodology ::: Propagate-Selector
In this paper, we are interested in identifying supporting sentences, among sentences in the given text that contain information essential to answering the question. To build a model that can perform reasoning across multiple sentences, we propose a graph neural network model called Propagate-selector (PS). PS consists of the following parts:
Topology: To build a model that understands the relationship between sentences for answering a question, we propose a graph neural network where each node represents a sentence from passages and the question. Figure depicts the topology of the proposed model. In an offline step, we organize the content of each instance in a graph where each node represents a sentence from the passages and the question. Then, we add edges between nodes using the following topology:
we fully connect nodes that represent sentences from the same passage (dotted-black);
we fully connect nodes that represent the first sentence of each passage (dotted-red);
we add an edge between the question and every node for each passage (dotted-blue).
In this way, we enable a path by which sentence nodes can propagate information both within and across passages.
Node representation: Question $\textbf {Q}\,{\in }\,\mathbb {R}^{d\times Q}$ and sentence ${\textbf {S}}_i\,{\in }\,\mathbb {R}^{d\times S_i}$, (where $d$ is the dimensionality of the word embedding and $Q$ and ${S}_i$ represent the lengths of the sequences in Q and ${\textbf {S}}_i$, respectively), are processed to acquire the sentence-level information. Recent studies have shown that a pretrained language model helps the model capture the contextual meaning of words in the sentence BIBREF20, BIBREF21. Following this study, we select an ELMo BIBREF20 language model for the word-embedding layer of our model as follows: $\textbf {L}^{Q}\,{=}\,\text{ELMo}(\textbf {Q}),~\textbf {L}^{S}\,{=}\,\text{ELMo}(\textbf {S})$. Using these new representations, we compute the sentence representation as follows:
where $f_\theta $ is the RNN function with the weight parameters $\theta $, and $\textbf {N}^Q\,{\in }\,\mathbb {R}^{d^{\prime }}$ and $\textbf {N}^S\,{\in }\,\mathbb {R}^{d^{\prime }}$ are node representations for the question and sentence, respectively (where $d^{\prime }$ is the dimensionality of the RNN hidden units).
Aggregation: An iterative attentive aggregation function to the neighbor nodes is utilized to compute the amount of information to be propagated to each node in the graph as follows:
where $\textbf {A}_v\,{\in }\,\mathbb {R}^{d^{\prime }}$ is the aggregated information for the v-th node computed by attentive weighted summation of its neighbor nodes, $a_{vu}$ is attention weight between node v and its neighbor nodes $u~(u{\in }N(v))$, $\textbf {N}_u\,{\in }\,\mathbb {R}^{d^{\prime }}$ is the u-th node representation, $\sigma $ is a nonlinear activation function, and $\textbf {W}\,{\in }\,\mathbb {R}^{d^{\prime }\times d^{\prime }}$ is the learned model parameter. Because all the nodes belong to a graph structure in which the iterative aggregation is performed among nodes, the k in the equation indicates that the computation occurs in the k-th hop (iteration).
Update: The aggregated information for the v-th node, $\textbf {A}_v$ in equation (DISPLAY_FORM6), is combined with its previous node representation to update the node. We apply a skip connection to allow the model to learn the amount of information to be updated in each hop as follows:
where $\sigma $ is a nonlinear activation function, {;} indicates vector concatenation, and $\textbf {W}\,{\in }\,\mathbb {R}^{d^{\prime }\times 2d^{\prime }}$ is the learned model parameter.
Methodology ::: Optimization
Because our objective is to classify supporting sentences ($S_i\,{\in }\,{P_n}$) from the given tuples $<$Q, $P_n$, $Y_i$$>$, we define two types of loss to be minimized. One is a rank loss that computes the cross-entropy loss between a question and each sentence using the ground-truth $Y_i$ as follows:
where $g_{\theta }$ is a feedforward network that computes a similarity score between the final representation of the question and each sentence. The other is attention loss, which is defined in each hop as follows:
where $a_{qi}^{(k)}$ indicates the relevance between the question node q and the i-th sentence node in the k-th hop as computed by equation (DISPLAY_FORM6).
Finally, these two losses are combined to construct the final objective function:
where $\alpha $ is a hyperparameter.
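A combined objective consistent with this description, assuming the per-hop attention losses are summed, is:
$\mathcal {L} = \mathcal {L}_{rank} + \alpha \sum _{k} \mathcal {L}_{attn}^{(k)},$
where $\mathcal {L}_{rank}$ and $\mathcal {L}_{attn}^{(k)}$ denote the rank loss and the k-th hop attention loss defined above.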
Experiments
We regard the task as the problem of selecting the supporting sentences from the passages to answer the questions. Similar to the answer-selection task in the QA literature, we report the model performance using the mean average precision (MAP) and mean reciprocal rank (MRR) metrics. To evaluate the model performance, we use the HotpotQA dataset, which is described in section “Task and Dataset". Table shows the properties of the dataset. We conduct a series of experiments to compare baseline methods with the newly proposed models. All code developed for this research will be made available via a public web repository along with the dataset.
Experiments ::: Implementation Details
To implement the Propagate-selector (PS) model, we first use a small version of ELMo (13.6 M parameters) that provides 256-dimensional context embedding. This choice was based on the available batch size (50 for our experiments) when training the complete model on a single GPU (GTX 1080 Ti). When we tried using the original version of ELMo (93.6 M parameters, 1024-dimensional context embedding), we were able to increase the batch size only up to 20, which resulted in excessive training time (approximately 90 hours). For the sentence encoding, we used a GRU BIBREF22 with a hidden unit dimension of 200. The hidden unit weight matrix of the GRU is initialized using orthogonal weights BIBREF23. Dropout is applied for regularization purposes at a ratio of 0.7 for both the RNN (in equation DISPLAY_FORM5) and the attention weight matrix (in equation DISPLAY_FORM6). For the nonlinear activation function (in equations DISPLAY_FORM6 and DISPLAY_FORM7), we use the $tanh$ function.
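As an illustration of this encoder setup, a minimal PyTorch sketch (ours, with the dimensions and dropout ratio taken from the text; the interpretation of the ratio as a drop probability and the placement of dropout on the encoder input are assumptions) could be:

import torch
import torch.nn as nn

# 256-dimensional ELMo context vectors in, 200-dimensional GRU hidden units out
encoder = nn.GRU(input_size=256, hidden_size=200, batch_first=True)

# orthogonal initialization of the recurrent (hidden-to-hidden) weight matrices
for name, param in encoder.named_parameters():
    if "weight_hh" in name:
        nn.init.orthogonal_(param)

dropout = nn.Dropout(p=0.7)

def encode_sentence(elmo_embeddings):
    # elmo_embeddings: tensor of shape (batch, seq_len, 256)
    _, h_n = encoder(dropout(elmo_embeddings))
    return h_n.squeeze(0)  # node representation of shape (batch, 200)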
Regarding the vocabulary, we replaced vocabulary with fewer than 12 instances in terms of term-frequency with “UNK" tokens. The final vocabulary size was 138,156. We also applied the Adam optimizer BIBREF24, including gradient clipping by norm at a threshold of 5.
Experiments ::: Comparisons with Other Methods
Table shows the model performances on the HotpotQA dataset. Because the dataset only provides training (trainset) and validation (devset) subsets, we report the model performances on these datasets. While training the model, we implement early termination based on the devset performance and measure the best performance. To compare the model performances, we choose widely used answer-selection models such as CompAggr BIBREF8, IWAN BIBREF10, CompClip BIBREF9, sCARNN BIBREF11, and CompClip-LM BIBREF25 which were primarily developed to rank candidate answers for a given question. The CompClip-LM is based on CompClip and adopts ELMo in its word-embedding layer.
In addition to the main proposed model, PS-rnn-elmo, we also investigate three model variants: PS-rnn-elmo-s uses a small version of ELMo, PS-rnn uses GloVe BIBREF26 instead of ELMo as a word-embedding layer, and PS-avg employs average pooling ($\textbf {N}^Q{=}\,\text{average}(\textbf {Q})$ and $\textbf {N}^S{=}\,\text{average}(\textbf {S})$) instead of RNN encoding in equation (DISPLAY_FORM5).
As shown in Table , the proposed PS-rnn-elmo shows a significant MAP performance improvement compared to the previous best model, CompClip-LM (0.696 to 0.734 absolute).
Experiments ::: Hop Analysis
Table shows the model performance (PS-elmo) as the number of hops increases. We find that the model achieves the best performance in the 4-hop case but starts to degrade when the number of hops exceeds 4. We assume that the model experiences the vanishing gradient problem under a larger number of iterative propagations (hops). Table shows the model performance with the small version of ELMo.
Figure depicts the attention weights between the question node and each sentence node (hop-4 model case). As the hop number increases, we observe that the model properly identifies the supporting sentences (in this example, sentences #4 and #17). This behavior demonstrates that our proposed model correctly learns how to propagate the necessary information among the sentence nodes via the iterative process.
Conclusion
In this paper, we propose a graph neural network that finds the sentences crucial for answering a question. The experiments demonstrate that the model correctly classifies supporting sentences by iteratively propagating the necessary information through its novel architecture. We believe that our approach will play an important role in building a QA pipeline in combination with other MRQA models trained in an end-to-end manner. | we fully connect nodes that represent sentences from the same passage, we fully connect nodes that represent the first sentence of each passage, we add an edge between the question and every node for each passage |
4d5e2a83b517e9c082421f11a68a604269642f29 | 4d5e2a83b517e9c082421f11a68a604269642f29_0 | Q: how many domains did they experiment with?
Text: Introduction
When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering.
Developers could support question answering using publicly available chatbot platforms, such as Watson Assistant or DialogFlow. To do this, a user would need to program an intent for each anticipated question with various examples of the question and one or more curated responses. This approach has the advantage of generating high quality answers, but it is limited to those questions anticipated by developers. Moreover, the management burden of such a system might be prohibitive as the number of questions that needs to be supported is likely to increase over time.
To overcome the burden of programming intents, developers might look towards more advanced question answering systems that are built using open domain question and answer data (e.g., from Stack Exchange or Wikipedia), reading comprehension models, and knowledge base searches. In particular, BIBREF1 previously demonstrated a two step system, called DrQA, that matches an input question to a relevant article from a knowledge base and then uses a recurrent neural network (RNN) based comprehension model to detect an answer within the matched article. This more flexible method was shown to produce promising results for questions related to Wikipedia articles and it performed competitively on the SQuAD benchmark BIBREF2 .
However, if developers wanted to integrate this sort of reading comprehension based methodology into their applications, how would they currently go about this? They would need to wrap pre-trained models in their own custom code and compile similar knowledge base articles at the very least. At the most, they may need to re-train reading comprehension models on open domain question and answer data (e.g., SQuAD) and/or implement their own knowledge base search algorithms.
In this paper we present Katecheo, a portable and modular system for reading comprehension based question answering that attempts to ease this development burden. The system provides a quickly deployable and easily extendable way for developers to integrate question answering functionality into their applications. Katecheo includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. The modules are tied together in a single inference graph that can be invoked via a REST API call. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify the model serving code. All components of the system are open source and publicly available under a permissive Apache 2 License.
The rest of the paper is organized as follows. In the next section, we provide an overview of the system logic and its modules. In Section 3, we outline the architecture and configuration of Katecheo, including extending the system to an arbitrary number of topics. In Section 4, we report some results using example pre-trained models and public knowledge base articles. Then in conclusion, we summarize the system, its applicability, and future development work.
System Overview
Katecheo is partially inspired by the work of BIBREF1 on DrQA. That previously developed method has two primary phases of question answering: document retrieval and reading comprehension. Together these functionalities enable open domain question answering. However, many dialog systems are not completely open domain. For example, developers might want to create a chatbot that has targeted conversations about restaurant reservations and movie times. It would be advantageous for such a chatbot to answer questions about food and entertainment, but the developers might not want to allow the conversation to stray into other topics.
With Katecheo, one of our goals was to create a question answering system that is more flexible than those relying on curated responses while remaining more targeted than a completely open domain question answering system. The system includes document retrieval (or what we refer to as “knowledge base search”) and reading comprehension, but only within sets of curated knowledge base articles each corresponding to a particular topic (e.g., food or entertainment).
When a question text is input into the Katecheo system, it is processed through four modules: (1) question identification, (2) topic classification, (3) knowledge base search, and (4) reading comprehension. This overall logic is depicted in Figure FIGREF6 .
Question Identification
The first module in Katecheo, question identification, determines if the input text (labeled Q in Figure FIGREF6 ) is actually a question. In our experience, users of dialog systems provide a huge number of unexpected inputs. Some of these unexpected inputs are questions and some are just statements. Before going to the trouble of matching a knowledge base article and generating an answer, Katecheo completes this initial step to ensure that the input is a question. If the input is a question, the question identification module (henceforth the “question identifier") passes a positive indication/flag to the next module indicating that it should continue processing the question. Otherwise, it passes a negative flag to end the processing.
The question identifier uses a rule-based approach to question identification. As suggested in BIBREF3 , we utilize the presence of question marks and 5W1H words to determine if the input is a question. Based on our testing, this provides quite high performance (90%+ accuracy) and is not a blocker to overall performance.
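For illustration, a minimal Python sketch of such a rule (ours, not the system's released code; the exact word list and checks used by Katecheo may differ) is:

# Heuristic question detection based on question marks and 5W1H words.
WH_WORDS = {"who", "what", "when", "where", "why", "how"}

def is_question(text):
    tokens = text.strip().lower().split()
    if not tokens:
        return False
    if "?" in text:
        return True
    return tokens[0].strip("'\"") in WH_WORDS

print(is_question("What are the side effects of aspirin"))  # True
print(is_question("I took aspirin yesterday."))              # False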
Topic Classification
To reach our goal of a question answering system that would be more targeted than previous open domain question answering, we decided to allow the user of the system to define one or more topics. The topic classification module of the system (henceforth the “topic classifier") will attempt to classify the input question into one of the topics and then select a knowledge base article from a set of knowledge base articles corresponding to that topic.
One way we could enable this topic classification is by training a text classifier that would classify the input text into one of the user supplied topics. However, this approach would require (i) the user to provide both the topic and many example questions within that topic, and (ii) the system to retrain its classification model any time a new topic was added. We wanted to prioritize the ease of deployment, modularity and extensibility of the system, and, thus, we decided to take a slightly more naive approach.
Along with each topic, the user supplies the system with a pre-trained Named Entity Recognition (NER) model that identifies entities within that topic. The topic classifier then utilizes these pre-trained models to determine if the input question includes entities from one of the user supplied topics. If so, the topic classifier classifies the question into that topic. When two of the topics conflict, the system currently suspends processing and returns a null answer.
The system accepts NER models that are compatible with spaCy BIBREF4 . As discussed further below, the user can supply a link to a zip file that contains each topic NER model.
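A minimal sketch of this NER-based topic check using spaCy (the model names, the path, and the conflict handling below are illustrative assumptions) might look like:

import spacy

# Illustrative topic-to-NER-model mapping; model names/paths are placeholders.
topic_models = {
    "medical_sciences": spacy.load("en_ner_bc5cdr_md"),
    "christianity": spacy.load("/path/to/christianity_ner_model"),
}

def classify_topic(question):
    matched = [topic for topic, nlp in topic_models.items() if nlp(question).ents]
    if len(matched) == 1:
        return matched[0]
    return None  # no topic, or conflicting topics: processing is suspended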
Note, it might be possible to remove the dependence on NER models in the future. We are currently exploring the use of other topic modeling techniques including non-negative matrix factorization and/or Latent Dirichlet Allocation (LDA). These techniques could enable the system to automatically match the input question to most appropriate topical knowledge base, and thus only rely on the user to supply knowledge base articles.
Knowledge Base Search
Once the topic has been identified, a search is made to match the question with an appropriate knowledge base article from a set of user supplied knowledge base articles corresponding to the user supplied topic. This matched article will be utilized in the next stage of processing to generate an answer.
The user supplied sets of knowledge base articles for each topic are in a JSON format and include a title and body text for each article. The system assumes that the knowledge base articles are in the form of a question and answer knowledge base (e.g., like a Stack Exchange site), rather than any arbitrarily structured articles. In this way, we are able to utilize the titles of the articles (i.e., the questions) in matching to user input questions.
In the knowledge base search module of Katecheo (henceforth the “KB Search" module), we use the Python package FuzzyWuzzy to perform string matching between the input question and the knowledge base article titles. FuzzyWuzzy uses Levenshtein Distance BIBREF5 to match the input string to one or more candidate strings.
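A sketch of this matching step with FuzzyWuzzy (ours; the score threshold is an arbitrary illustrative value) is:

from fuzzywuzzy import process

def match_article(question, kb_articles, min_score=60):
    # kb_articles: list of dicts with "title" and "body" keys, as in the KB JSON files
    titles = [article["title"] for article in kb_articles]
    best_title, score = process.extractOne(question, titles)
    if score < min_score:
        return None
    return kb_articles[titles.index(best_title)]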
We eventually plan to update this knowledge base search to an approach similar to that of BIBREF1 using bigram hashing and TF-IDF. However, the fuzzy string matching approach works reasonably well as long as the supplied knowledge bases are of a type where many of the article titles are in the form of topical questions.
Reading Comprehension
The final module of the Katecheo system is the reading comprehension (or just “comprehension") module. This module takes as input the original input question plus the matched knowledge base article body text and uses a reading comprehension model to select an appropriate answer from within the article.
The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 . This BiDAF model includes a Convolutional Neural Network (CNN) based character-level embedding layer, a word embedding layer that uses pre-trained GloVe embeddings, a Long Short-Term Memory Network (LSTM) based contextual embedding layer, an “attention flow layer", and a modeling layer that includes bi-directional LSTMs. We are using a pre-trained version of BiDAF available in the AllenNLP BIBREF7 library.
Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models.
Architecture and Configuration
All four of the Katecheo modules are containerized with Docker BIBREF10 and are deployed as pods on top of Kubernetes BIBREF11 (see Figure FIGREF12 ). In this way, Katecheo is completely portable to any standard Kubernetes cluster, including hosted versions in AWS, GCP, Digital Ocean, Azure, etc. and on-premises versions that use vanilla Kubernetes, OpenShift, CaaS, etc.
To provide developers with a familiar interface to the question answering system, we provide a REST API interface. Developers can call Katecheo via a single endpoint with ingress to the system provided by Ambassador, a Kubernetes-native API Gateway.
Seldon-core is used to simplify the routing between the four modules, create the REST API, and manage deployments. To create the Seldon deployment of the four modules, as depicted in Figure FIGREF12 , we: (1) create a Python class for each module that contains standardized Seldon-specified methods and that loads the various models for making predictions; (2) wrap that Python class in a standard, containerized Seldon model server using a public Seldon Docker image and s2i ; (3) push the wrapped Python code to DockerHub ; (4) create a Seldon inference graph that links the modules in a Directed Acyclic Graph (DAG); and (5) deploy the inference graph to Kubernetes. After all of these steps are complete, a single REST API endpoint is exposed. When a user calls this single API endpoint the Seldon inference graph is invoked and the modules are executed using the specified routing logic.
To specify the topic names, topic NER models, and topic knowledge base JSON files (as mentioned in reference to Figure FIGREF6 ), the user need only fill out a JSON configuration file template in the following format:
[
{
"name": "topic 1 name",
"ner_model": "<link>",
"kb_file": "<link>"
},
{
"name": "topic 2 name",
"ner_model": "<link>",
"kb_file": "<link>"
},
etc...
]
where each <link> placeholder would be replaced with a respective URL containing the NER model or knowledge base JSON file. The linked NER models need to be spaCy compatible and compressed into a single zip file, and the linked knowledge base JSON files need to include both titles and bodies as specified in the Katecheo GitHub repository README file. Once this configuration file is created, a deploy script can be executed to automatically deploy all of the Katecheo modules.
Example Usage
We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topics are diverse enough that they would warrant different curated sets of knowledge base articles, and we can easily retrieve knowledge base articles for each of these subjects from the Medical Sciences and Christianity Stack Exchange sites, respectively.
We also have access to NER models for both of these topics. For the Medical Sciences NER model, we utilized the en_ner_bc5cdr_md model from scispaCy BIBREF12 , which is trained on the BC5CDR corpus BIBREF13 . For the Christianity topic, we utilize a custom spaCy NER model trained on annotated data from the GotQuestions website.
Example inputs and outputs of the system are included in Table TABREF17 . As can be seen, the system is able to match many questions with an appropriate topic and subsequently generate an answer using the BiDAF comprehension model. Not all of the answers would fit into conversational question answering in terms of naturalness, but others show promise.
There were cases in which the system was not able to classify an input question into an appropriate topic, even when there would have been a closely matching knowledge base article. In particular when testing the system on the Medical Sciences topic, we noticed a higher number of these cases (see the fourth and fifth rows of Table TABREF17 ). This is due to the fact that the pre-trained Medical Sciences NER model from scispaCy is primarily intended to recognize chemical and disease entities within text, not general medical sciences terminology. On the other hand, the NER model utilized for the Christianity topic is more generally applicable within that topic.
Conclusions
In conclusion, Katecheo is a portable and modular system for reading comprehension based question answering. It is portable because it is built on cloud native technologies (i.e., Docker and Kubernetes) and can be deployed to any cloud or on-premise environment. It is modular because it is composed of four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension.
Initial usage of the system indicates that it provides a flexible and developer friendly way to enable question answering functionality for multiple topics or domains via REST API. That being said, the current configurations of Katecheo are limited to answering from knowledge bases constructed in a question and answer format, and the current topic classification relies on topical NER models that are compatible with spaCy. In the future, we plan to overcome these limitations by extending our knowledge base search methodology, enabling usage of a wider variety of pre-trained models, and exploring other topic matching/modeling techniques to remove our NER model dependency.
The complete source code, configuration information, deployment scripts, and examples for Katecheo are available at https://github.com/cvdigitalai/katecheo. A screencast demonstration of Katecheo is available at https://youtu.be/g51t6eRX2Y8. | 2 |
2c3b2c3bab6d18cb0895462e3cfd91cd0dee7f7d | 2c3b2c3bab6d18cb0895462e3cfd91cd0dee7f7d_0 | Q: what pretrained models were used?
Text: Introduction
When people interact with chatbots, smart speakers or digital assistants (e.g., Siri), one of their primary modes of interaction is information retrieval BIBREF0 . Thus, those that build dialog systems often have to tackle the problem of question answering.
Developers could support question answering using publicly available chatbot platforms, such as Watson Assistant or DialogFlow. To do this, a user would need to program an intent for each anticipated question with various examples of the question and one or more curated responses. This approach has the advantage of generating high quality answers, but it is limited to those questions anticipated by developers. Moreover, the management burden of such a system might be prohibitive as the number of questions that needs to be supported is likely to increase over time.
To overcome the burden of programming intents, developers might look towards more advanced question answering systems that are built using open domain question and answer data (e.g., from Stack Exchange or Wikipedia), reading comprehension models, and knowledge base searches. In particular, BIBREF1 previously demonstrated a two step system, called DrQA, that matches an input question to a relevant article from a knowledge base and then uses a recurrent neural network (RNN) based comprehension model to detect an answer within the matched article. This more flexible method was shown to produce promising results for questions related to Wikipedia articles and it performed competitively on the SQuAD benchmark BIBREF2 .
However, if developers wanted to integrate this sort of reading comprehension based methodology into their applications, how would they currently go about this? They would need to wrap pre-trained models in their own custom code and compile similar knowledge base articles at the very least. At the most, they may need to re-train reading comprehension models on open domain question and answer data (e.g., SQuAD) and/or implement their own knowledge base search algorithms.
In this paper we present Katecheo, a portable and modular system for reading comprehension based question answering that attempts to ease this development burden. The system provides a quickly deployable and easily extendable way for developers to integrate question answering functionality into their applications. Katecheo includes four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension. The modules are tied together in a single inference graph that can be invoked via a REST API call. We demonstrate the system using publicly available, pre-trained models and knowledge base articles extracted from Stack Exchange sites. However, users can extend the system to any number of topics, or domains, without the need to modify the model serving code. All components of the system are open source and publicly available under a permissive Apache 2 License.
The rest of the paper is organized as follows. In the next section, we provide an overview of the system logic and its modules. In Section 3, we outline the architecture and configuration of Katecheo, including extending the system to an arbitrary number of topics. In Section 4, we report some results using example pre-trained models and public knowledge base articles. Then in conclusion, we summarize the system, its applicability, and future development work.
System Overview
Katecheo is partially inspired by the work of BIBREF1 on DrQA. That previously developed method has two primary phases of question answering: document retrieval and reading comprehension. Together these functionalities enable open domain question answering. However, many dialog systems are not completely open domain. For example, developers might want to create a chatbot that has targeted conversations about restaurant reservations and movie times. It would be advantageous for such a chatbot to answer questions about food and entertainment, but the developers might not want to allow the conversation to stray into other topics.
With Katecheo, one of our goals was to create a question answering system that is more flexible than those relying on curated responses while remaining more targeted than a completely open domain question answering system. The system includes document retrieval (or what we refer to as “knowledge base search”) and reading comprehension, but only within sets of curated knowledge base articles each corresponding to a particular topic (e.g., food or entertainment).
When a question text is input into the Katecheo system, it is processed through four modules: (1) question identification, (2) topic classification, (3) knowledge base search, and (4) reading comprehension. This overall logic is depicted in Figure FIGREF6 .
Question Identification
The first module in Katecheo, question identification, determines if the input text (labeled Q in Figure FIGREF6 ) is actually a question. In our experience, users of dialog systems provide a huge number of unexpected inputs. Some of these unexpected inputs are questions and some are just statements. Before going to the trouble of matching a knowledge base article and generating an answer, Katecheo completes this initial step to ensure that the input is a question. If the input is a question, the question identification module (henceforth the “question identifier") passes a positive indication/flag to the next module indicating that it should continue processing the question. Otherwise, it passes a negative flag to end the processing.
The question identifier uses a rule-based approach to question identification. As suggested in BIBREF3 , we utilize the presence of question marks and 5W1H words to determine if the input is a question. Based on our testing, this provides quite high performance (90%+ accuracy) and is not a blocker to overall performance.
Topic Classification
To reach our goal of a question answering system that would be more targeted than previous open domain question answering, we decided to allow the user of the system to define one or more topics. The topic classification module of the system (henceforth the “topic classifier") will attempt to classify the input question into one of the topics and then select a knowledge base article from a set of knowledge base articles corresponding to that topic.
One way we could enable this topic classification is by training a text classifier that would classify the input text into one of the user supplied topics. However, this approach would require (i) the user to provide both the topic and many example questions within that topic, and (ii) the system to retrain its classification model any time a new topic was added. We wanted to prioritize the ease of deployment, modularity and extensibility of the system, and, thus, we decided to take a slightly more naive approach.
Along with each topic, the user supplies the system with a pre-trained Named Entity Recognition (NER) model that identifies entities within that topic. The topic classifier then utilizes these pre-trained models to determine if the input question includes entities from one of the user supplied topics. If so, the topic classifier classifies the question into that topic. When two of the topics conflict, the system currently suspends processing and returns a null answer.
The system accepts NER models that are compatible with spaCy BIBREF4 . As discussed further below, the user can supply a link to a zip file that contains each topic NER model.
Note, it might be possible to remove the dependence on NER models in the future. We are currently exploring the use of other topic modeling techniques including non-negative matrix factorization and/or Latent Dirichlet Allocation (LDA). These techniques could enable the system to automatically match the input question to most appropriate topical knowledge base, and thus only rely on the user to supply knowledge base articles.
Knowledge Base Search
Once the topic has been identified, a search is made to match the question with an appropriate knowledge base article from a set of user supplied knowledge base articles corresponding to the user supplied topic. This matched article will be utilized in the next stage of processing to generate an answer.
The user supplied sets of knowledge base articles for each topic are in a JSON format and include a title and body text for each article. The system assumes that the knowledge base articles are in the form of a question and answer knowledge base (e.g., like a Stack Exchange site), rather than any arbitrarily structured articles. In this way, we are able to utilize the titles of the articles (i.e., the questions) in matching to user input questions.
In the knowledge base search module of Katecheo (henceforth the “KB Search" module), we use the Python package FuzzyWuzzy to perform string matching between the input question and the knowledge base article titles. FuzzyWuzzy uses Levenshtein Distance BIBREF5 to match the input string to one or more candidate strings.
We eventually plan to update this knowledge base search to an approach similar to that of BIBREF1 using bigram hashing and TF-IDF. However, the fuzzy string matching approach works reasonably well as long as the supplied knowledge bases are of a type where many of the article titles are in the form of topical questions.
Reading Comprehension
The final module of the Katecheo system is the reading comprehension (or just “comprehension") module. This module takes as input the original input question plus the matched knowledge base article body text and uses a reading comprehension model to select an appropriate answer from within the article.
The current release of Katecheo uses a Bi-Directional Attention Flow, or BiDAF, model for reading comprehension BIBREF6 . This BiDAF model includes a Convolutional Neural Network (CNN) based character-level embedding layer, a word embedding layer that uses pre-trained GloVe embeddings, a Long Short-Term Memory Network (LSTM) based contextual embedding layer, an “attention flow layer", and a modeling layer that includes bi-directional LSTMs. We are using a pre-trained version of BiDAF available in the AllenNLP BIBREF7 library.
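For reference, loading and querying a pre-trained BiDAF model through AllenNLP generally follows the pattern below; the archive URL is indicative and may not be the exact version used by the system:

from allennlp.predictors.predictor import Predictor

# Indicative path to a pre-trained BiDAF archive; the exact model file may differ.
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/bidaf-model-2020.03.19.tar.gz"
)

result = predictor.predict(
    passage="Aspirin is used to reduce fever and to relieve mild to moderate pain.",
    question="What is aspirin used for?",
)
print(result["best_span_str"])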
Future releases of Katecheo will include the ability to swap out the reading comprehension model for newer architectures based on, e.g., BERT BIBREF8 or XLNet BIBREF9 or custom trained models.
Architecture and Configuration
All four of the Katecheo modules are containerized with Docker BIBREF10 and are deployed as pods on top of Kubernetes BIBREF11 (see Figure FIGREF12 ). In this way, Katecheo is completely portable to any standard Kubernetes cluster, including hosted versions in AWS, GCP, Digital Ocean, Azure, etc. and on-premises versions that use vanilla Kubernetes, OpenShift, CaaS, etc.
To provide developers with a familiar interface to the question answering system, we provide a REST API interface. Developers can call Katecheo via a single endpoint with ingress to the system provided by Ambassador, a Kubernetes-native API Gateway.
Seldon-core is used to simplify the routing between the four modules, create the REST API, and manage deployments. To create the Seldon deployment of the four modules, as depicted in Figure FIGREF12 , we: (1) create a Python class for each module that contains standardized Seldon-specified methods and that loads the various models for making predictions; (2) wrap that Python class in a standard, containerized Seldon model server using a public Seldon Docker image and s2i ; (3) push the wrapped Python code to DockerHub ; (4) create a Seldon inference graph that links the modules in a Directed Acyclic Graph (DAG); and (5) deploy the inference graph to Kubernetes. After all of these steps are complete, a single REST API endpoint is exposed. When a user calls this single API endpoint the Seldon inference graph is invoked and the modules are executed using the specified routing logic.
To specify the topic names, topic NER models, and topic knowledge base JSON files (as mentioned in reference to Figure FIGREF6 ), the user need only fill out a JSON configuration file template in the following format:
[
{
"name": "topic 1 name",
"ner_model": "<link>",
"kb_file": "<link>"
},
{
"name": "topic 2 name",
"ner_model": "<link>",
"kb_file": "<link>"
},
etc...
]
where each <link> placeholder would be replaced with a respective URL containing the NER model or knowledge base JSON file. The linked NER models need to be spaCy compatible and compressed into a single zip file, and the linked knowledge base JSON files need to include both titles and bodies as specified in the Katecheo GitHub repository README file. Once this configuration file is created, a deploy script can be executed to automatically deploy all of the Katecheo modules.
Example Usage
We demonstrated the utility of Katecheo by deploying the system for question answering in two topics, Medical Sciences and Christianity. These topics are diverse enough that they would warrant different curated sets of knowledge base articles, and we can easily retrieve knowledge base articles for each of these subjects from the Medical Sciences and Christianity Stack Exchange sites, respectively.
We also have access to NER models for both of these topics. For the Medical Sciences NER model, we utilized the en_ner_bc5cdr_md model from scispaCy BIBREF12 , which is trained on the BC5CDR corpus BIBREF13 . For the Christianity topic, we utilize a custom spaCy NER model trained on annotated data from the GotQuestions website.
Example inputs and outputs of the system are included in Table TABREF17 . As can be seen, the system is able to match many questions with an appropriate topic and subsequently generate an answer using the BiDAF comprehension model. Not all of the answers would fit into conversational question answering in terms of naturalness, but others show promise.
There were cases in which the system was not able to classify an input question into an appropriate topic, even when there would have been a closely matching knowledge base article. In particular when testing the system on the Medical Sciences topic, we noticed a higher number of these cases (see the fourth and fifth rows of Table TABREF17 ). This is due to the fact that the pre-trained Medical Sciences NER model from scispaCy is primarily intended to recognize chemical and disease entities within text, not general medical sciences terminology. On the other hand, the NER model utilized for the Christianity topic is more generally applicable within that topic.
Conclusions
In conclusion, Katecheo is a portable and modular system for reading comprehension based question answering. It is portable because it is built on cloud native technologies (i.e., Docker and Kubernetes) and can be deployed to any cloud or on-premise environment. It is modular because it is composed of four configurable modules that collectively enable identification of questions, classification of those questions into topics, a search of knowledge base articles, and reading comprehension.
Initial usage of the system indicates that it provides a flexible and developer friendly way to enable question answering functionality for multiple topics or domains via REST API. That being said, the current configurations of Katecheo are limited to answering from knowledge bases constructed in a question and answer format, and the current topic classification relies on topical NER models that are compatible with spaCy. In the future, we plan to overcome these limitations by extending our knowledge base search methodology, enabling usage of a wider variety of pre-trained models, and exploring other topic matching/modeling techniques to remove our NER model dependency.
The complete source code, configuration information, deployment scripts, and examples for Katecheo are available at https://github.com/cvdigitalai/katecheo. A screencast demonstration of Katecheo is available at https://youtu.be/g51t6eRX2Y8. | BiDAF, BERT |
ea51aecd64bd95d42d28ab3f1b60eecadf6d3760 | ea51aecd64bd95d42d28ab3f1b60eecadf6d3760_0 | Q: What domains are contained in the polarity classification dataset?
Text: Introduction
Domain shift is a fundamental problem in machine learning, that has attracted a lot of attention in the natural language processing and vision communities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . To understand and address this problem, generated by the lack of labeled data in a target domain, researchers have studied the behavior of machine learning methods in cross-domain settings BIBREF2 , BIBREF11 , BIBREF10 and came up with various domain adaptation techniques BIBREF12 , BIBREF5 , BIBREF6 , BIBREF9 . In cross-domain classification, a classifier is trained on data from a source domain and tested on data from a (different) target domain. The accuracy of machine learning methods is usually lower in the cross-domain setting, due to the distribution gap between different domains. However, researchers proposed several domain adaptation techniques by using the unlabeled test data to obtain better performance BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF7 . Interestingly, some recent works BIBREF10 , BIBREF17 indicate that string kernels can yield robust results in the cross-domain setting without any domain adaptation. In fact, methods based on string kernels have demonstrated impressive results in various text classification tasks ranging from native language identification BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and authorship identification BIBREF22 to dialect identification BIBREF23 , BIBREF17 , BIBREF24 , sentiment analysis BIBREF10 , BIBREF25 and automatic essay scoring BIBREF26 . As long as a labeled training set is available, string kernels can reach state-of-the-art results in various languages including English BIBREF19 , BIBREF10 , BIBREF26 , Arabic BIBREF27 , BIBREF20 , BIBREF17 , BIBREF24 , Chinese BIBREF25 and Norwegian BIBREF20 . Different from all these recent approaches, we use unlabeled data from the test set in a transductive setting in order to significantly increase the performance of string kernels. In our recent work BIBREF28 , we proposed two transductive learning approaches combined into a unified framework that improves the results of string kernels in two different tasks. In this paper, we provide a formal and detailed description of our transductive algorithm and present results in cross-domain English polarity classification.
The paper is organized as follows. Related work on cross-domain text classification and string kernels is presented in Section SECREF2 . Section SECREF3 presents our approach to obtain domain adapted string kernels. The transductive transfer learning method is described in Section SECREF4 . The polarity classification experiments are presented in Section SECREF5 . Finally, we draw conclusions and discuss future work in Section SECREF6 .
Related Work
Cross-Domain Classification
Transfer learning (or domain adaptation) aims at building effective classifiers for a target domain when the only available labeled training data belongs to a different (source) domain. Domain adaptation techniques can be roughly divided into graph-based methods BIBREF1 , BIBREF29 , BIBREF9 , BIBREF30 , probabilistic models BIBREF3 , BIBREF4 , knowledge-based models BIBREF14 , BIBREF31 , BIBREF11 and joint optimization frameworks BIBREF12 . The transfer learning methods from the literature show promising results in a variety of real-world applications, such as image classification BIBREF12 , text classification BIBREF13 , BIBREF16 , BIBREF3 , polarity classification BIBREF1 , BIBREF29 , BIBREF4 , BIBREF6 , BIBREF30 and others BIBREF32 .
General transfer learning approaches. Long et al. BIBREF12 proposed a novel transfer learning framework to model distribution adaptation and label propagation in a unified way, based on the structural risk minimization principle and the regularization theory. Shu et al. BIBREF5 proposed a method that bridges the distribution gap between the source domain and the target domain through affinity learning, by exploiting the existence of a subset of data points in the target domain that are distributed similarly to the data points in the source domain. In BIBREF7 , deep learning is employed to jointly optimize the representation, the cross-domain transformation and the target label inference in an end-to-end fashion. More recently, Sun et al. BIBREF8 proposed an unsupervised domain adaptation method that minimizes the domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Chang et al. BIBREF9 proposed a framework based on using a parallel corpus to calibrate domain-specific kernels into a unified kernel for leveraging graph-based label propagation between domains.
Cross-domain text classification. Joachims BIBREF13 introduced the Transductive Support Vector Machines (TSVM) framework for text classification, which takes into account a particular test set and tries to minimize the error rate for those particular test samples. Ifrim et al. BIBREF14 presented a transductive learning approach for text classification based on combining latent variable models for decomposing the topic-word space into topic-concept and concept-word spaces, and explicit knowledge models with named concepts for populating latent variables. Guo et al. BIBREF16 proposed a transductive subspace representation learning method to address domain adaptation for cross-lingual text classification. Zhuang et al. BIBREF3 presented a probabilistic model, by which both the shared and distinct concepts in different domains can be learned by the Expectation-Maximization process which optimizes the data likelihood. In BIBREF33 , an algorithm to adapt a classification model by iteratively learning domain-specific features from the unlabeled test data is described.
Cross-domain polarity classification. In recent years, cross-domain sentiment (polarity) classification has gained popularity due to the advances in domain adaptation on one side, and to the abundance of documents from various domains available on the Web, expressing positive or negative opinion, on the other side. Some of the general domain adaptation frameworks have been applied to polarity classification BIBREF3 , BIBREF33 , BIBREF9 , but there are some approaches that have been specifically designed for the cross-domain sentiment classification task BIBREF0 , BIBREF34 , BIBREF1 , BIBREF29 , BIBREF11 , BIBREF4 , BIBREF6 , BIBREF10 , BIBREF30 . To the best of our knowledge, Blitzer et al. BIBREF0 were the first to report results on cross-domain classification proposing the structural correspondence learning (SCL) method, and its variant based on mutual information (SCL-MI). Pan et al. BIBREF1 proposed a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, using domain-independent words as a bridge. Bollegala et al. BIBREF31 used a cross-domain lexicon creation to generate a sentiment-sensitive thesaurus (SST) that groups different words expressing the same sentiment, using unigram and bigram features as BIBREF0 , BIBREF1 . Luo et al. BIBREF4 proposed a cross-domain sentiment classification framework based on a probabilistic model of the author's emotion state when writing. An Expectation-Maximization algorithm is then employed to solve the maximum likelihood problem and to obtain a latent emotion distribution of the author. Franco-Salvador et al. BIBREF11 combined various recent and knowledge-based approaches using a meta-learning scheme (KE-Meta). They performed cross-domain polarity classification without employing any domain adaptation technique. More recently, Fernández et al. BIBREF6 introduced the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. The approach builds term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a highly predictive term that behaves similarly across domains. A graph-based approach for sentiment classification that models the relatedness of different domains based on shared users and keywords is proposed in BIBREF30 .
String Kernels
In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.
Transductive String Kernels
String kernels. Kernel functions BIBREF38 capture the intuitive notion of similarity between objects in a specific domain. For example, in text mining, string kernels can be used to measure the pairwise similarity between text samples, simply based on character n-grams. Various string kernel functions have been proposed to date BIBREF35 , BIBREF38 , BIBREF19 . Perhaps one of the most recently introduced string kernels is the histogram intersection string kernel BIBREF19 . For two strings over an alphabet INLINEFORM0 , INLINEFORM1 , the intersection string kernel is formally defined as follows: DISPLAYFORM0
where INLINEFORM0 is the number of occurrences of n-gram INLINEFORM1 as a substring in INLINEFORM2 , and INLINEFORM3 is the length of INLINEFORM4 . The spectrum string kernel or the presence bits string kernel can be defined in a similar fashion BIBREF19 .
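The display formula referenced above is commonly written, using $\mbox{num}_v(s)$ for the number of occurrences of n-gram $v$ in $s$ and $\Sigma ^n$ for the set of all n-grams over the alphabet $\Sigma $, as:
$k^{\cap }(s, t) = \sum _{v \in \Sigma ^n} \min \left( \mbox{num}_v(s), \mbox{num}_v(t) \right),$
which is consistent with the quantities just defined.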
Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are samples from the set INLINEFORM2 , for all INLINEFORM3 . We then normalize the kernel matrix by dividing each component by the square root of the product of the two corresponding diagonal components: DISPLAYFORM0
We transform the normalized kernel matrix into a radial basis function (RBF) kernel matrix as follows: DISPLAYFORM0
Each row in the RBF kernel matrix INLINEFORM0 is now interpreted as a feature vector. In other words, each sample INLINEFORM1 is represented by a feature vector that contains the similarity between the respective sample INLINEFORM2 and all the samples in INLINEFORM3 . Since INLINEFORM4 includes the test samples as well, the feature vector is inherently adapted to the test set. Indeed, it is easy to see that the features will be different if we choose to apply the string kernel approach on a set of test samples INLINEFORM5 , such that INLINEFORM6 . It is important to note that through the features, the subsequent classifier will have some information about the test samples at training time. More specifically, the feature vector conveys information about how similar is every test sample to every training sample. We next consider the linear kernel, which is given by the scalar product between the new feature vectors. To obtain the final linear kernel matrix, we simply need to compute the product between the RBF kernel matrix and its transpose: DISPLAYFORM0
In this way, the samples from the test set, which are included in INLINEFORM0 , are used to obtain new (transductive) string kernels that are adapted to the test set at hand.
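These steps can be summarized in a short NumPy sketch (ours; in particular, the exact form of the RBF transformation, written here as $\exp(\hat{K}_{ij} - 1)$, is an assumption about the omitted equation):

import numpy as np

def transductive_kernel(K_full):
    # K_full: (n + m) x (n + m) string kernel matrix over training and test samples
    d = np.sqrt(np.diag(K_full))
    K_norm = K_full / np.outer(d, d)   # normalization by the diagonal entries
    K_rbf = np.exp(K_norm - 1.0)       # RBF-style transformation (assumed form)
    return K_rbf @ K_rbf.T             # linear kernel over the new feature vectors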
Transductive Kernel Algorithm
Input:
INLINEFORM0 – the training set of INLINEFORM1 training samples and associated class labels;
INLINEFORM0 – the set of INLINEFORM1 test samples;
INLINEFORM0 – a kernel function;
INLINEFORM0 – the number of test samples to be added in the second round of training;
INLINEFORM0 – a binary kernel classifier.
Domain-Adapted Kernel Matrix Computation Steps:
Compute the full kernel matrix over the training and test samples, normalize it, apply the RBF transformation, and multiply the result by its transpose to obtain the domain-adapted kernel matrix (as described in the previous section).
Transductive Kernel Classifier Steps:
For each class, train a one-versus-all kernel classifier on the source training samples to obtain its dual weights and bias; score the test samples with the resulting classifiers and record, for each test sample, the score of its predicted class; sort the test samples in descending order of these scores; add the specified number of most confident test samples, together with their predicted labels, to the training set; then retrain the classifiers and predict the labels of all test samples (see the algorithm description below for the details of each step).
Output:
INLINEFORM0 – the set of predicted labels for the test samples in INLINEFORM1 .
Transductive Kernel Classifier
We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 .
Notations. We use the following notations in the algorithm. Sets, arrays and matrices are written in capital letters. All collection types are considered to be indexed starting from position 1. The elements of a set INLINEFORM0 are denoted by INLINEFORM1 , the elements of an array INLINEFORM2 are alternatively denoted by INLINEFORM3 or INLINEFORM4 , and the elements of a matrix INLINEFORM5 are denoted by INLINEFORM6 or INLINEFORM7 when convenient. The sequence INLINEFORM8 is denoted by INLINEFORM9 . We use sequences to index arrays or matrices as well. For example, for an array INLINEFORM10 and two integers INLINEFORM11 and INLINEFORM12 , INLINEFORM13 denotes the sub-array INLINEFORM14 . In a similar manner, INLINEFORM15 denotes a sub-matrix of the matrix INLINEFORM16 , while INLINEFORM17 returns the INLINEFORM18 -th row of M and INLINEFORM19 returns the INLINEFORM20 -th column of M. The zero matrix of INLINEFORM21 components is denoted by INLINEFORM22 , and the square zero matrix is denoted by INLINEFORM23 . The identity matrix is denoted by INLINEFORM24 .
Algorithm description. In steps 8-17, we compute the domain-adapted string kernel matrix, as described in the previous section. In the first learning iteration (when INLINEFORM0 ), we train several classifiers to distinguish each individual class from the rest, according to the one-versus-all (OVA) scheme. In step 27, the kernel classifier INLINEFORM1 is trained to distinguish a class from the others, assigning a dual weight to each training sample from the source domain. The returned column vector of dual weights is denoted by INLINEFORM2 and the bias value is denoted by INLINEFORM3 . The vector of weights INLINEFORM4 contains INLINEFORM5 values, such that the weight INLINEFORM6 corresponds to the training sample INLINEFORM7 . When the test kernel matrix INLINEFORM8 of INLINEFORM9 components is multiplied with the vector INLINEFORM10 in step 28, the result is a column vector of INLINEFORM11 positive or negative scores. Afterwards (step 34), the test samples are sorted in order to maximize the probability of correctly predicted labels. For each test sample INLINEFORM12 , we consider the score INLINEFORM13 (step 32) produced by the classifier for the chosen class INLINEFORM14 (step 31), which is selected according to the OVA scheme. The sorting is based on the hypothesis that if the classifier associates a higher score to a test sample, it means that the classifier is more confident about the predicted label for the respective test sample. Before the second learning iteration, a number of INLINEFORM15 test samples from the top of the sorted list are added to the training set (steps 35-39) for another round of training. As the classifier is more confident about the predicted labels INLINEFORM16 of the added test samples, the chance of including noisy examples (with wrong labels) is minimized. On the other hand, the classifier has the opportunity to learn some useful domain-specific patterns of the test domain. We believe that, at least in the cross-domain setting, the added test samples bring more useful information than noise. We would like to stress out that the ground-truth test labels are never used in our transductive algorithm. Although the test samples are required beforehand, their labels are not necessary. Hence, our approach is suitable in situations where unlabeled data from the target domain can be collected cheaply, and such situations appear very often in practice, considering the great amount of data available on the Web.
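A compact sketch of the two learning iterations for the binary case, using Kernel Ridge Regression on a precomputed kernel (a simplification of the algorithm above; the variable names, the sign-based labeling, and the use of scikit-learn are ours), is:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def transductive_classify(K, y_train, n_train, r, reg=1.0):
    # K: domain-adapted kernel matrix over train + test samples; y_train in {-1, +1}
    train_idx = np.arange(n_train)
    test_idx = np.arange(n_train, K.shape[0])

    def fit_and_score(idx, labels):
        clf = KernelRidge(alpha=reg, kernel="precomputed")
        clf.fit(K[np.ix_(idx, idx)], labels)
        return clf.predict(K[np.ix_(test_idx, idx)])

    # first round: train on the source domain only and score the test samples
    scores = fit_and_score(train_idx, y_train)
    order = np.argsort(-np.abs(scores))[:r]   # most confident test samples first

    # second round: add the top r test samples with their predicted labels and retrain
    idx2 = np.concatenate([train_idx, test_idx[order]])
    y2 = np.concatenate([y_train, np.sign(scores[order])])
    return np.sign(fit_and_score(idx2, y2))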
Polarity Classification
Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews.
Baselines. We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.
Evaluation procedure and parameters. We follow the same evaluation methodology as Giménez-Pérez et al. BIBREF10 , to ensure a fair comparison. Furthermore, we use the same kernels, namely the presence bits string kernel ( INLINEFORM0 ) and the intersection string kernel ( INLINEFORM1 ), and the same range of character n-grams (5-8). To compute the string kernels, we used the open-source code provided by Ionescu et al. BIBREF19 , BIBREF40 . For the transductive kernel classifier, we select INLINEFORM2 unlabeled test samples to be included in the training set for the second round of training. We choose Kernel Ridge Regression BIBREF38 as our classifier and set its regularization parameter to INLINEFORM3 in all our experiments. Although Giménez-Pérez et al. BIBREF10 used a different classifier, namely Kernel Discriminant Analysis, we observed that Kernel Ridge Regression produces similar results ( INLINEFORM4 ) when we employ the same string kernels. As in Giménez-Pérez et al. BIBREF10 , we evaluate our approach in two cross-domain settings. In the multi-source setting, we train the models on all domains except the one used for testing. In the single-source setting, we train the models on one of the four domains and we independently test the models on the remaining three domains.
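For reference, Kernel Ridge Regression admits a closed-form dual solution that works directly on a precomputed (string) kernel matrix, as sketched below. The concrete regularization value and the absence of a bias term are simplifications on our part; the text above only states that a fixed regularization parameter is used across all experiments.

import numpy as np

def krr_fit(K_train, y, lam):
    # Dual weights alpha solve (K + lam * I) alpha = y.
    n = K_train.shape[0]
    return np.linalg.solve(K_train + lam * np.eye(n), y)

def krr_predict(K_test_train, alpha):
    # K_test_train holds kernel values between test samples (rows) and training samples (columns).
    return K_test_train.dot(alpha)

# Usage on a precomputed kernel K with n_tr training rows and binary labels y in {-1, +1}:
# alpha  = krr_fit(K[:n_tr, :n_tr], y, lam=1.0)
# scores = krr_predict(K[n_tr:, :n_tr], alpha)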
Results in multi-source setting. The results for the multi-source cross-domain polarity classification setting are presented in Table TABREF8 . Both the transductive presence bits string kernel ( INLINEFORM0 ) and the transductive intersection kernel ( INLINEFORM1 ) obtain better results than their original counterparts. Moreover, according to the McNemar's test BIBREF41 , the results on the DVDs, the Electronics and the Kitchen target domains are significantly better than the best baseline string kernel, with a confidence level of INLINEFORM2 . When we employ the transductive kernel classifier (TKC), we obtain even better results. On all domains, the accuracy rates yielded by the transductive classifier are more than INLINEFORM3 better than the best baseline. For example, on the Books domain the accuracy of the transductive classifier based on the presence bits kernel ( INLINEFORM4 ) is INLINEFORM5 above the best baseline ( INLINEFORM6 ) represented by the intersection string kernel. Remarkably, the improvements brought by our transductive string kernel approach are statistically significant in all domains.
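The significance claims above rely on McNemar's test, which only needs to know, for each test sample, whether each of the two compared classifiers predicted it correctly. A minimal sketch is given below; the continuity-corrected chi-square form is one standard variant and is chosen here as an assumption, since the exact variant is not specified in the text.

import numpy as np
from scipy.stats import chi2

def mcnemar_test(correct_a, correct_b):
    # correct_a, correct_b: boolean arrays marking which test samples each classifier got right.
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    b = int(np.sum(correct_a & ~correct_b))    # samples only classifier A gets right
    c = int(np.sum(~correct_a & correct_b))    # samples only classifier B gets right
    if b + c == 0:
        return 0.0, 1.0
    stat = (abs(b - c) - 1) ** 2 / (b + c)     # continuity-corrected statistic
    p_value = chi2.sf(stat, df=1)
    return stat, p_value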
Results in single-source setting. The results for the single-source cross-domain polarity classification setting are presented in Table TABREF9 . We considered all possible combinations of source and target domains in this experiment, and we improve the results in each and every case. Without exception, the accuracy rates reached by the transductive string kernels are significantly better than the best baseline string kernel BIBREF10 , according to the McNemar's test performed at a confidence level of INLINEFORM0 . The highest improvements (above INLINEFORM1 ) are obtained when the source domain contains Books reviews and the target domain contains Kitchen reviews. As in the multi-source setting, we obtain much better results when the transductive classifier is employed for the learning task. In all cases, the accuracy rates of the transductive classifier are more than INLINEFORM2 better than the best baseline string kernel. Remarkably, in four cases (E INLINEFORM3 B, E INLINEFORM4 D, B INLINEFORM5 K and D INLINEFORM6 K) our improvements are greater than INLINEFORM7 . The improvements brought by our transductive classifier based on string kernels are statistically significant in each and every case. In comparison with SFA BIBREF1 , we obtain better results in all but one case (K INLINEFORM8 D). Remarkably, we surpass the other state-of-the-art approaches BIBREF8 , BIBREF39 in all cases.
Conclusion
In this paper, we presented two domain adaptation approaches that can be used together to improve the results of string kernels in cross-domain settings. We provided empirical evidence indicating that our framework can be successfully applied in cross-domain text classification, particularly in cross-domain English polarity classification. Indeed, the polarity classification experiments demonstrate that our framework achieves better accuracy rates than other state-of-the-art methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 . By using the same parameters across all the experiments, we showed that our transductive transfer learning framework can bring significant improvements without having to fine-tune the parameters for each individual setting. Although the framework described in this paper can be generally applied to any kernel method, we focused our work only on string kernel approaches used in text classification. In future work, we aim to combine the proposed transductive transfer learning framework with different kinds of kernels and classifiers, and employ it for other cross-domain tasks. | Books, DVDs, Electronics, Kitchen appliances |
e4cc2e73c90e568791737c97d77acef83588185f | e4cc2e73c90e568791737c97d77acef83588185f_0 | Q: How long is the dataset?
Text: Introduction
Domain shift is a fundamental problem in machine learning that has attracted a lot of attention in the natural language processing and vision communities BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , BIBREF9 , BIBREF10 . To understand and address this problem, generated by the lack of labeled data in a target domain, researchers have studied the behavior of machine learning methods in cross-domain settings BIBREF2 , BIBREF11 , BIBREF10 and have come up with various domain adaptation techniques BIBREF12 , BIBREF5 , BIBREF6 , BIBREF9 . In cross-domain classification, a classifier is trained on data from a source domain and tested on data from a (different) target domain. The accuracy of machine learning methods is usually lower in the cross-domain setting, due to the distribution gap between different domains. However, researchers proposed several domain adaptation techniques by using the unlabeled test data to obtain better performance BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF7 . Interestingly, some recent works BIBREF10 , BIBREF17 indicate that string kernels can yield robust results in the cross-domain setting without any domain adaptation. In fact, methods based on string kernels have demonstrated impressive results in various text classification tasks ranging from native language identification BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 and authorship identification BIBREF22 to dialect identification BIBREF23 , BIBREF17 , BIBREF24 , sentiment analysis BIBREF10 , BIBREF25 and automatic essay scoring BIBREF26 . As long as a labeled training set is available, string kernels can reach state-of-the-art results in various languages including English BIBREF19 , BIBREF10 , BIBREF26 , Arabic BIBREF27 , BIBREF20 , BIBREF17 , BIBREF24 , Chinese BIBREF25 and Norwegian BIBREF20 . Different from all these recent approaches, we use unlabeled data from the test set in a transductive setting in order to significantly increase the performance of string kernels. In our recent work BIBREF28 , we proposed two transductive learning approaches combined into a unified framework that improves the results of string kernels in two different tasks. In this paper, we provide a formal and detailed description of our transductive algorithm and present results in cross-domain English polarity classification.
The paper is organized as follows. Related work on cross-domain text classification and string kernels is presented in Section SECREF2 . Section SECREF3 presents our approach to obtain domain adapted string kernels. The transductive transfer learning method is described in Section SECREF4 . The polarity classification experiments are presented in Section SECREF5 . Finally, we draw conclusions and discuss future work in Section SECREF6 .
Related Work
Cross-Domain Classification
Transfer learning (or domain adaptation) aims at building effective classifiers for a target domain when the only available labeled training data belongs to a different (source) domain. Domain adaptation techniques can be roughly divided into graph-based methods BIBREF1 , BIBREF29 , BIBREF9 , BIBREF30 , probabilistic models BIBREF3 , BIBREF4 , knowledge-based models BIBREF14 , BIBREF31 , BIBREF11 and joint optimization frameworks BIBREF12 . The transfer learning methods from the literature show promising results in a variety of real-world applications, such as image classification BIBREF12 , text classification BIBREF13 , BIBREF16 , BIBREF3 , polarity classification BIBREF1 , BIBREF29 , BIBREF4 , BIBREF6 , BIBREF30 and others BIBREF32 .
General transfer learning approaches. Long et al. BIBREF12 proposed a novel transfer learning framework to model distribution adaptation and label propagation in a unified way, based on the structural risk minimization principle and the regularization theory. Shu et al. BIBREF5 proposed a method that bridges the distribution gap between the source domain and the target domain through affinity learning, by exploiting the existence of a subset of data points in the target domain that are distributed similarly to the data points in the source domain. In BIBREF7 , deep learning is employed to jointly optimize the representation, the cross-domain transformation and the target label inference in an end-to-end fashion. More recently, Sun et al. BIBREF8 proposed an unsupervised domain adaptation method that minimizes the domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. Chang et al. BIBREF9 proposed a framework based on using a parallel corpus to calibrate domain-specific kernels into a unified kernel for leveraging graph-based label propagation between domains.
Cross-domain text classification. Joachims BIBREF13 introduced the Transductive Support Vector Machines (TSVM) framework for text classification, which takes into account a particular test set and tries to minimize the error rate for those particular test samples. Ifrim et al. BIBREF14 presented a transductive learning approach for text classification based on combining latent variable models for decomposing the topic-word space into topic-concept and concept-word spaces, and explicit knowledge models with named concepts for populating latent variables. Guo et al. BIBREF16 proposed a transductive subspace representation learning method to address domain adaptation for cross-lingual text classification. Zhuang et al. BIBREF3 presented a probabilistic model, by which both the shared and distinct concepts in different domains can be learned by the Expectation-Maximization process which optimizes the data likelihood. In BIBREF33 , an algorithm to adapt a classification model by iteratively learning domain-specific features from the unlabeled test data is described.
Cross-domain polarity classification. In recent years, cross-domain sentiment (polarity) classification has gained popularity due to the advances in domain adaptation on one side, and to the abundance of documents from various domains available on the Web, expressing positive or negative opinion, on the other side. Some of the general domain adaptation frameworks have been applied to polarity classification BIBREF3 , BIBREF33 , BIBREF9 , but there are some approaches that have been specifically designed for the cross-domain sentiment classification task BIBREF0 , BIBREF34 , BIBREF1 , BIBREF29 , BIBREF11 , BIBREF4 , BIBREF6 , BIBREF10 , BIBREF30 . To the best of our knowledge, Blitzer et al. BIBREF0 were the first to report results on cross-domain classification proposing the structural correspondence learning (SCL) method, and its variant based on mutual information (SCL-MI). Pan et al. BIBREF1 proposed a spectral feature alignment (SFA) algorithm to align domain-specific words from different domains into unified clusters, using domain-independent words as a bridge. Bollegala et al. BIBREF31 used a cross-domain lexicon creation to generate a sentiment-sensitive thesaurus (SST) that groups different words expressing the same sentiment, using unigram and bigram features as BIBREF0 , BIBREF1 . Luo et al. BIBREF4 proposed a cross-domain sentiment classification framework based on a probabilistic model of the author's emotion state when writing. An Expectation-Maximization algorithm is then employed to solve the maximum likelihood problem and to obtain a latent emotion distribution of the author. Franco-Salvador et al. BIBREF11 combined various recent and knowledge-based approaches using a meta-learning scheme (KE-Meta). They performed cross-domain polarity classification without employing any domain adaptation technique. More recently, Fernández et al. BIBREF6 introduced the Distributional Correspondence Indexing (DCI) method for domain adaptation in sentiment classification. The approach builds term representations in a vector space common to both domains where each dimension reflects its distributional correspondence to a highly predictive term that behaves similarly across domains. A graph-based approach for sentiment classification that models the relatedness of different domains based on shared users and keywords is proposed in BIBREF30 .
String Kernels
In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.
Transductive String Kernels
String kernels. Kernel functions BIBREF38 capture the intuitive notion of similarity between objects in a specific domain. For example, in text mining, string kernels can be used to measure the pairwise similarity between text samples, simply based on character n-grams. Various string kernel functions have been proposed to date BIBREF35 , BIBREF38 , BIBREF19 . Perhaps one of the most recently introduced string kernels is the histogram intersection string kernel BIBREF19 . For two strings over an alphabet INLINEFORM0 , INLINEFORM1 , the intersection string kernel is formally defined as follows: DISPLAYFORM0
where INLINEFORM0 is the number of occurrences of n-gram INLINEFORM1 as a substring in INLINEFORM2 , and INLINEFORM3 is the length of INLINEFORM4 . The spectrum string kernel or the presence bits string kernel can be defined in a similar fashion BIBREF19 .
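Both kernels can be computed directly from character n-gram counts. The sketch below is a naive per-pair implementation for a single n-gram length, plus a blended variant over a range of lengths such as 5-8; the minimum-of-counts form of the intersection kernel reflects the standard histogram intersection and is our reading of the displayed formula, and the code ignores the efficiency tricks of the released string kernel implementations.

from collections import Counter

def char_ngram_counts(s, n):
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def intersection_kernel(s, t, n):
    # Sum, over the character n-grams shared by s and t, of the minimum number of occurrences.
    cs, ct = char_ngram_counts(s, n), char_ngram_counts(t, n)
    return sum(min(cs[v], ct[v]) for v in cs if v in ct)

def presence_bits_kernel(s, t, n):
    # Count the character n-grams that appear in both strings, ignoring multiplicity.
    return len(set(char_ngram_counts(s, n)) & set(char_ngram_counts(t, n)))

def blended_intersection_kernel(s, t, n_low=5, n_high=8):
    return sum(intersection_kernel(s, t, n) for n in range(n_low, n_high + 1))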
Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are samples from the set INLINEFORM2 , for all INLINEFORM3 . We then normalize the kernel matrix by dividing each component by the square root of the product of the two corresponding diagonal components: DISPLAYFORM0
We transform the normalized kernel matrix into a radial basis function (RBF) kernel matrix as follows: DISPLAYFORM0
Each row in the RBF kernel matrix INLINEFORM0 is now interpreted as a feature vector. In other words, each sample INLINEFORM1 is represented by a feature vector that contains the similarity between the respective sample INLINEFORM2 and all the samples in INLINEFORM3 . Since INLINEFORM4 includes the test samples as well, the feature vector is inherently adapted to the test set. Indeed, it is easy to see that the features will be different if we choose to apply the string kernel approach on a set of test samples INLINEFORM5 , such that INLINEFORM6 . It is important to note that, through the features, the subsequent classifier will have some information about the test samples at training time. More specifically, the feature vector conveys information about how similar every test sample is to every training sample. We next consider the linear kernel, which is given by the scalar product between the new feature vectors. To obtain the final linear kernel matrix, we simply need to compute the product between the RBF kernel matrix and its transpose: DISPLAYFORM0
In this way, the samples from the test set, which are included in INLINEFORM0 , are used to obtain new (transductive) string kernels that are adapted to the test set at hand.
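Putting these steps together, a domain-adapted kernel matrix can be assembled as sketched below. The normalization and the final product with the transpose follow the text directly; the exact form of the RBF transformation is not shown above, so the exp(-gamma * (1 - normalized similarity)) form, with a bandwidth parameter gamma, is an assumption.

import numpy as np

def transductive_kernel_matrix(samples, base_kernel, gamma=1.0):
    # samples     : all training samples followed by all test samples
    # base_kernel : a pairwise kernel function such as a string kernel
    n = len(samples)
    K = np.array([[base_kernel(x, y) for y in samples] for x in samples], dtype=float)

    # Normalize each component by the square root of the product
    # of the two corresponding diagonal components.
    d = np.sqrt(np.diag(K))
    K_norm = K / np.outer(d, d)

    # Turn the normalized similarities into an RBF-style kernel matrix (assumed form).
    K_rbf = np.exp(-gamma * (1.0 - K_norm))

    # Each row of K_rbf acts as a feature vector; the final linear kernel matrix
    # is the product of the RBF kernel matrix with its transpose.
    return K_rbf.dot(K_rbf.T)

Because the feature vectors include similarities to the test samples, recomputing this matrix for a different test set changes the representation, which is exactly the transductive effect described in the text.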
Transductive Kernel Algorithm
Input:
INLINEFORM0 – the training set of INLINEFORM1 training samples and associated class labels;
INLINEFORM0 – the set of INLINEFORM1 test samples;
INLINEFORM0 – a kernel function;
INLINEFORM0 – the number of test samples to be added in the second round of training;
INLINEFORM0 – a binary kernel classifier.
Domain-Adapted Kernel Matrix Computation Steps:
Compute the full kernel matrix over the union of the training and test samples, normalize it, apply the RBF transformation, and multiply the resulting matrix with its transpose to obtain the domain-adapted kernel matrix (see Section SECREF3 for the corresponding formulas).
Transductive Kernel Classifier Steps:
For each of the two learning iterations: train a binary kernel classifier for every class in a one-versus-all scheme on the current training set, obtaining a vector of dual weights and a bias value; multiply the test kernel matrix by the dual weights to score the test samples; retain, for every test sample, the class with the highest score together with that score; sort the test samples by their scores in descending order and return the sorted indexes; after the first iteration, add the INLINEFORM0 top-ranked test samples, together with their predicted labels, to the training set for the second round of training.
Output:
INLINEFORM0 – the set of predicted labels for the test samples in INLINEFORM1 .
Transductive Kernel Classifier
We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 .
| 8000
cc28919313f897358ef864948c65318dc61cb03c | cc28919313f897358ef864948c65318dc61cb03c_0 | Q: What machine learning algorithms are used?
| string kernels, SST, KE-Meta, SFA, CORAL, TR-TrAdaBoost, Transductive string kernels, transductive kernel classifier
b3857a590fd667ecc282f66d771e5b2773ce9632 | b3857a590fd667ecc282f66d771e5b2773ce9632_0 | Q: What is a string kernel?
String Kernels
In recent years, methods based on string kernels have demonstrated remarkable performance in various text classification tasks BIBREF35 , BIBREF36 , BIBREF22 , BIBREF19 , BIBREF10 , BIBREF17 , BIBREF26 . String kernels represent a way of using information at the character level by measuring the similarity of strings through character n-grams. Lodhi et al. BIBREF35 used string kernels for document categorization, obtaining very good results. String kernels were also successfully used in authorship identification BIBREF22 . More recently, various combinations of string kernels reached state-of-the-art accuracy rates in native language identification BIBREF19 and Arabic dialect identification BIBREF17 . Interestingly, string kernels have been used in cross-domain settings without any domain adaptation, obtaining impressive results. For instance, Ionescu et al. BIBREF19 have employed string kernels in a cross-corpus (and implicitly cross-domain) native language identification experiment, improving the state-of-the-art accuracy by a remarkable INLINEFORM0 . Giménez-Pérez et al. BIBREF10 have used string kernels for single-source and multi-source polarity classification. Remarkably, they obtain state-of-the-art performance without using knowledge from the target domain, which indicates that string kernels provide robust results in the cross-domain setting without any domain adaptation. Ionescu et al. BIBREF17 obtained the best performance in the Arabic Dialect Identification Shared Task of the 2017 VarDial Evaluation Campaign BIBREF37 , with an improvement of INLINEFORM1 over the second-best method. It is important to note that the training and the test speech samples prepared for the shared task were recorded in different setups BIBREF37 , or in other words, the training and the test sets are drawn from different distributions. Different from all these recent approaches BIBREF19 , BIBREF10 , BIBREF17 , we use unlabeled data from the target domain to significantly increase the performance of string kernels in cross-domain text classification, particularly in English polarity classification.
Transductive String Kernels
String kernels. Kernel functions BIBREF38 capture the intuitive notion of similarity between objects in a specific domain. For example, in text mining, string kernels can be used to measure the pairwise similarity between text samples, simply based on character n-grams. Various string kernel functions have been proposed to date BIBREF35 , BIBREF38 , BIBREF19 . Perhaps one of the most recently introduced string kernels is the histogram intersection string kernel BIBREF19 . For two strings over an alphabet INLINEFORM0 , INLINEFORM1 , the intersection string kernel is formally defined as follows: DISPLAYFORM0
where INLINEFORM0 is the number of occurrences of n-gram INLINEFORM1 as a substring in INLINEFORM2 , and INLINEFORM3 is the length of INLINEFORM4 . The spectrum string kernel or the presence bits string kernel can be defined in a similar fashion BIBREF19 .
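To make the definition concrete, a minimal Python sketch (our own illustration, not the authors' implementation) is given below; summing the kernel over the n-gram range 5-8 used later in the experiments is an assumption of this sketch. The presence bits variant would replace the occurrence counts with 0/1 indicators, and the spectrum variant would multiply the counts instead of taking their minimum.
from collections import Counter

def ngram_counts(s, n):
    # occurrence counts of all character n-grams of length n in string s
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def intersection_string_kernel(s, t, n_range=range(5, 9)):
    # histogram intersection kernel: sum of min(count_in_s, count_in_t) over n-grams
    k = 0
    for n in n_range:
        counts_s, counts_t = ngram_counts(s, n), ngram_counts(t, n)
        k += sum(min(c, counts_t[v]) for v, c in counts_s.items())
    return k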
Transductive string kernels. We present a simple and straightforward approach to produce a transductive similarity measure suitable for strings. We take the following steps to derive transductive string kernels. For a given kernel (similarity) function INLINEFORM0 , we first build the full kernel matrix INLINEFORM1 , by including the pairwise similarities of samples from both the train and the test sets. For a training set INLINEFORM2 of INLINEFORM3 samples and a test set INLINEFORM4 of INLINEFORM5 samples, such that INLINEFORM6 , each component in the full kernel matrix is defined as follows: DISPLAYFORM0
where INLINEFORM0 and INLINEFORM1 are samples from the set INLINEFORM2 , for all INLINEFORM3 . We then normalize the kernel matrix by dividing each component by the square root of the product of the two corresponding diagonal components: DISPLAYFORM0
We transform the normalized kernel matrix into a radial basis function (RBF) kernel matrix as follows: DISPLAYFORM0
Each row in the RBF kernel matrix INLINEFORM0 is now interpreted as a feature vector. In other words, each sample INLINEFORM1 is represented by a feature vector that contains the similarity between the respective sample INLINEFORM2 and all the samples in INLINEFORM3 . Since INLINEFORM4 includes the test samples as well, the feature vector is inherently adapted to the test set. Indeed, it is easy to see that the features will be different if we choose to apply the string kernel approach on a set of test samples INLINEFORM5 , such that INLINEFORM6 . It is important to note that through the features, the subsequent classifier will have some information about the test samples at training time. More specifically, the feature vector conveys information about how similar is every test sample to every training sample. We next consider the linear kernel, which is given by the scalar product between the new feature vectors. To obtain the final linear kernel matrix, we simply need to compute the product between the RBF kernel matrix and its transpose: DISPLAYFORM0
In this way, the samples from the test set, which are included in INLINEFORM0 , are used to obtain new (transductive) string kernels that are adapted to the test set at hand.
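The construction above can be sketched with numpy as follows (our own illustration); since the exact form of the RBF transform is not displayed here, the exponential with a hypothetical bandwidth g is only an assumed placeholder for it.
import numpy as np

def transductive_kernel(K_full, g=1.0):
    # K_full: (n+m) x (n+m) string kernel matrix over all training and test samples
    d = np.sqrt(np.diag(K_full))
    K_norm = K_full / np.outer(d, d)        # divide by the square root of the product of diagonal entries
    K_rbf = np.exp(-g * (1.0 - K_norm))     # assumed RBF-style transform of the normalized kernel
    return K_rbf @ K_rbf.T                  # linear kernel: product of the RBF matrix with its transpose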
Transductive Kernel Algorithm
Input:
INLINEFORM0 – the training set of INLINEFORM1 training samples and associated class labels;
INLINEFORM0 – the set of INLINEFORM1 test samples;
INLINEFORM0 – a kernel function;
INLINEFORM0 – the number of test samples to be added in the second round of training;
INLINEFORM0 – a binary kernel classifier.
Domain-Adapted Kernel Matrix Computation Steps:
(the steps here initialize the kernel matrices and compute the domain-adapted kernel: build the full kernel matrix over the training and test samples, normalize it by the square roots of the corresponding diagonal entries, apply the RBF transform, and multiply the result with its transpose to obtain the final linear kernel, as described in the previous section)
Transductive Kernel Classifier Steps:
(the steps here train a one-versus-all kernel classifier on the source data to obtain the dual weights, score the test samples, sort the scores in descending order, add the top-ranked test samples together with their predicted labels to the training set, and retrain to produce the final predictions, as described below)
Output:
INLINEFORM0 – the set of predicted labels for the test samples in INLINEFORM1 .
Transductive Kernel Classifier
We next present a simple yet effective approach for adapting a one-versus-all kernel classifier trained on a source domain to a different target domain. Our transductive kernel classifier (TKC) approach is composed of two learning iterations. Our entire framework is formally described in Algorithm SECREF3 .
Notations. We use the following notations in the algorithm. Sets, arrays and matrices are written in capital letters. All collection types are considered to be indexed starting from position 1. The elements of a set INLINEFORM0 are denoted by INLINEFORM1 , the elements of an array INLINEFORM2 are alternatively denoted by INLINEFORM3 or INLINEFORM4 , and the elements of a matrix INLINEFORM5 are denoted by INLINEFORM6 or INLINEFORM7 when convenient. The sequence INLINEFORM8 is denoted by INLINEFORM9 . We use sequences to index arrays or matrices as well. For example, for an array INLINEFORM10 and two integers INLINEFORM11 and INLINEFORM12 , INLINEFORM13 denotes the sub-array INLINEFORM14 . In a similar manner, INLINEFORM15 denotes a sub-matrix of the matrix INLINEFORM16 , while INLINEFORM17 returns the INLINEFORM18 -th row of M and INLINEFORM19 returns the INLINEFORM20 -th column of M. The zero matrix of INLINEFORM21 components is denoted by INLINEFORM22 , and the square zero matrix is denoted by INLINEFORM23 . The identity matrix is denoted by INLINEFORM24 .
Algorithm description. In steps 8-17, we compute the domain-adapted string kernel matrix, as described in the previous section. In the first learning iteration (when INLINEFORM0 ), we train several classifiers to distinguish each individual class from the rest, according to the one-versus-all (OVA) scheme. In step 27, the kernel classifier INLINEFORM1 is trained to distinguish a class from the others, assigning a dual weight to each training sample from the source domain. The returned column vector of dual weights is denoted by INLINEFORM2 and the bias value is denoted by INLINEFORM3 . The vector of weights INLINEFORM4 contains INLINEFORM5 values, such that the weight INLINEFORM6 corresponds to the training sample INLINEFORM7 . When the test kernel matrix INLINEFORM8 of INLINEFORM9 components is multiplied with the vector INLINEFORM10 in step 28, the result is a column vector of INLINEFORM11 positive or negative scores. Afterwards (step 34), the test samples are sorted in order to maximize the probability of correctly predicted labels. For each test sample INLINEFORM12 , we consider the score INLINEFORM13 (step 32) produced by the classifier for the chosen class INLINEFORM14 (step 31), which is selected according to the OVA scheme. The sorting is based on the hypothesis that if the classifier associates a higher score to a test sample, it means that the classifier is more confident about the predicted label for the respective test sample. Before the second learning iteration, a number of INLINEFORM15 test samples from the top of the sorted list are added to the training set (steps 35-39) for another round of training. As the classifier is more confident about the predicted labels INLINEFORM16 of the added test samples, the chance of including noisy examples (with wrong labels) is minimized. On the other hand, the classifier has the opportunity to learn some useful domain-specific patterns of the test domain. We believe that, at least in the cross-domain setting, the added test samples bring more useful information than noise. We would like to stress out that the ground-truth test labels are never used in our transductive algorithm. Although the test samples are required beforehand, their labels are not necessary. Hence, our approach is suitable in situations where unlabeled data from the target domain can be collected cheaply, and such situations appear very often in practice, considering the great amount of data available on the Web.
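For illustration only (not the authors' code), the two learning iterations can be sketched for the binary case using Kernel Ridge Regression on a precomputed kernel, the classifier employed in the experiments below; the number of added test samples r and the regularization value alpha are placeholders, as the actual values are not shown in the text, and labels are assumed to be +1/-1.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def tkc_predict(K, y_train, n_train, r=1000, alpha=1e-5):
    # K: precomputed (n+m) x (n+m) domain-adapted kernel matrix, training rows first
    n_total = K.shape[0]
    train_idx = np.arange(n_train)
    test_idx = np.arange(n_train, n_total)
    # first round: train on the source data only and score the test samples
    clf = KernelRidge(alpha=alpha, kernel='precomputed')
    clf.fit(K[np.ix_(train_idx, train_idx)], y_train)
    scores = clf.predict(K[np.ix_(test_idx, train_idx)])
    # add the r most confidently labeled test samples with their predicted labels
    top = np.argsort(-np.abs(scores))[:r]
    train_idx = np.concatenate([train_idx, test_idx[top]])
    y_aug = np.concatenate([y_train, np.sign(scores[top])])
    # second round: retrain on the augmented set and predict all test labels
    clf.fit(K[np.ix_(train_idx, train_idx)], y_aug)
    return np.sign(clf.predict(K[np.ix_(test_idx, train_idx)]))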
Polarity Classification
Data set. For the cross-domain polarity classification experiments, we use the second version of Multi-Domain Sentiment Dataset BIBREF0 . The data set contains Amazon product reviews of four different domains: Books (B), DVDs (D), Electronics (E) and Kitchen appliances (K). Reviews contain star ratings (from 1 to 5) which are converted into binary labels as follows: reviews rated with more than 3 stars are labeled as positive, and those with less than 3 stars as negative. In each domain, there are 1000 positive and 1000 negative reviews.
Baselines. We compare our approach with several methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 in two cross-domain settings. Using string kernels, Giménez-Pérez et al. BIBREF10 reported better performance than SST BIBREF31 and KE-Meta BIBREF11 in the multi-source domain setting. In addition, we compare our approach with SFA BIBREF1 , CORAL BIBREF8 and TR-TrAdaBoost BIBREF39 in the single-source setting.
Evaluation procedure and parameters. We follow the same evaluation methodology of Giménez-Pérez et al. BIBREF10 , to ensure a fair comparison. Furthermore, we use the same kernels, namely the presence bits string kernel ( INLINEFORM0 ) and the intersection string kernel ( INLINEFORM1 ), and the same range of character n-grams (5-8). To compute the string kernels, we used the open-source code provided by Ionescu et al. BIBREF19 , BIBREF40 . For the transductive kernel classifier, we select INLINEFORM2 unlabeled test samples to be included in the training set for the second round of training. We choose Kernel Ridge Regression BIBREF38 as classifier and set its regularization parameter to INLINEFORM3 in all our experiments. Although Giménez-Pérez et al. BIBREF10 used a different classifier, namely Kernel Discriminant Analysis, we observed that Kernel Ridge Regression produces similar results ( INLINEFORM4 ) when we employ the same string kernels. As Giménez-Pérez et al. BIBREF10 , we evaluate our approach in two cross-domain settings. In the multi-source setting, we train the models on all domains, except the one used for testing. In the single-source setting, we train the models on one of the four domains and we independently test the models on the remaining three domains.
Results in multi-source setting. The results for the multi-source cross-domain polarity classification setting are presented in Table TABREF8 . Both the transductive presence bits string kernel ( INLINEFORM0 ) and the transductive intersection kernel ( INLINEFORM1 ) obtain better results than their original counterparts. Moreover, according to the McNemar's test BIBREF41 , the results on the DVDs, the Electronics and the Kitchen target domains are significantly better than the best baseline string kernel, with a confidence level of INLINEFORM2 . When we employ the transductive kernel classifier (TKC), we obtain even better results. On all domains, the accuracy rates yielded by the transductive classifier are more than INLINEFORM3 better than the best baseline. For example, on the Books domain the accuracy of the transductive classifier based on the presence bits kernel ( INLINEFORM4 ) is INLINEFORM5 above the best baseline ( INLINEFORM6 ) represented by the intersection string kernel. Remarkably, the improvements brought by our transductive string kernel approach are statistically significant in all domains.
Results in single-source setting. The results for the single-source cross-domain polarity classification setting are presented in Table TABREF9 . We considered all possible combinations of source and target domains in this experiment, and we improve the results in each and every case. Without exception, the accuracy rates reached by the transductive string kernels are significantly better than the best baseline string kernel BIBREF10 , according to the McNemar's test performed at a confidence level of INLINEFORM0 . The highest improvements (above INLINEFORM1 ) are obtained when the source domain contains Books reviews and the target domain contains Kitchen reviews. As in the multi-source setting, we obtain much better results when the transductive classifier is employed for the learning task. In all cases, the accuracy rates of the transductive classifier are more than INLINEFORM2 better than the best baseline string kernel. Remarkably, in four cases (E INLINEFORM3 B, E INLINEFORM4 D, B INLINEFORM5 K and D INLINEFORM6 K) our improvements are greater than INLINEFORM7 . The improvements brought by our transductive classifier based on string kernels are statistically significant in each and every case. In comparison with SFA BIBREF1 , we obtain better results in all but one case (K INLINEFORM8 D). Remarkably, we surpass the other state-of-the-art approaches BIBREF8 , BIBREF39 in all cases.
Conclusion
In this paper, we presented two domain adaptation approaches that can be used together to improve the results of string kernels in cross-domain settings. We provided empirical evidence indicating that our framework can be successfully applied in cross-domain text classification, particularly in cross-domain English polarity classification. Indeed, the polarity classification experiments demonstrate that our framework achieves better accuracy rates than other state-of-the-art methods BIBREF1 , BIBREF31 , BIBREF11 , BIBREF8 , BIBREF10 , BIBREF39 . By using the same parameters across all the experiments, we showed that our transductive transfer learning framework can bring significant improvements without having to fine-tune the parameters for each individual setting. Although the framework described in this paper can be generally applied to any kernel method, we focused our work only on string kernel approaches used in text classification. In future work, we aim to combine the proposed transductive transfer learning framework with different kinds of kernels and classifiers, and employ it for other cross-domain tasks. | String kernel is a technique that uses character n-grams to measure the similarity of strings |
b653f55d1dad5cd262a99502f63bf44c58ccc8cf | b653f55d1dad5cd262a99502f63bf44c58ccc8cf_0 | Q: Which dataset do they use to learn embeddings?
Text: Introduction
Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as coordination, synchrony, convergence etc. While there are various aspects and levels of entrainment BIBREF0 , there is also a general agreement that entrainment is a sign of positive behavior towards the other speaker BIBREF1 , BIBREF2 , BIBREF3 . High degree of vocal entrainment has been associated with various interpersonal behavioral attributes, such as high empathy BIBREF4 , more agreement and less blame towards the partner and positive outcomes in couple therapy BIBREF5 , and high emotional bond BIBREF6 . A good understanding of entrainment provides insights to various interpersonal behaviors and facilitates the recognition and estimation of these behaviors in the realm of Behavioral Signal Processing BIBREF7 , BIBREF8 . Moreover, it also contributes to the modeling and development of `human-like' spoken dialog systems or conversational agents.
Unfortunately, quantifying entrainment has always been a challenging problem. There is a scarcity of reliable labeled speech databases on entrainment, possibly due to the subjective and diverse nature of its definition. This makes it difficult to capture entrainment using supervised models, unlike many other behaviors. Early studies on entrainment relied on highly subjective and context-dependent manual observation coding for measuring entrainment. The objective methods based on extracted speech features employed classical synchrony measures such as Pearson's correlation BIBREF0 and traditional (linear) time series analysis techniques BIBREF9 . Lee et al. BIBREF10 , BIBREF4 proposed a measure based on PCA representation of prosody and MFCC features of consecutive turns. Most of these approaches assume a linear relationship between features of consecutive speaker turns, which is not necessarily true, given the complex nature of entrainment. For example, the effect of rising pitch or energy can potentially have a nonlinear influence across speakers.
Recently, various complexity measures (such as the largest Lyapunov exponent) of feature streams based on nonlinear dynamical systems modeling showed promising results in capturing entrainment BIBREF5 , BIBREF6 . A limitation of this modeling, however, is the assumption of the short-term stationary or slowly varying nature of the features. While this can be reasonable for global or session-level complexity, the measure is not very meaningful for capturing turn-level or local entrainment. Nonlinear dynamical measures also scale poorly to a multidimensional feature set, including spectral coefficients such as MFCCs. Further, all of the above metrics are knowledge-driven and do not exploit the vast amount of information that can be gained from existing interactions.
A more holistic approach is to capture entrainment in consecutive speaker turns through a more robust nonlinear function. Conceptually speaking, such a formulation of entrainment is closely related to the problem of learning a transfer function which maps vocal patterns of one speaker turn to the next. A compelling choice to nonlinearly approximate the transfer function would be to employ Deep Neural Networks (DNNs). This is supported by recent promising applications of deep learning models, both in supervised and unsupervised paradigm, in modeling and classification of emotions and behaviors from speech. For example in BIBREF11 the authors learned, in an unsupervised manner, a latent embedding towards identifying behavior in out-of-domain tasks. Similarly in BIBREF12 , BIBREF13 the authors employ Neural Predictive Coding to derive embeddings that link to speaker characteristics in an unsupervised manner.
We propose an unsupervised training framework to contextually learn the transfer function that ties the two speakers. The learned bottleneck embedding contains cross-speaker information closely related to entrainment. We define a distance measure between the consecutive speaker turns represented in the bottleneck feature embedding space. We call this metric the Neural Entrainment Distance (NED).
Towards this modeling approach we use features that have already been established as useful for entrainment. The majority of research BIBREF0 , BIBREF14 , BIBREF10 , BIBREF5 , BIBREF6 focused on prosodic features like pitch, energy, and speech rate. Others also analyzed entrainment in spectral and voice quality features BIBREF10 , BIBREF4 . Unlike classical nonlinear measures, we jointly learn from a multidimensional feature set comprising prosodic, spectral, and voice quality features.
We then experimentally investigate the validity and effectiveness of the NED measure in association with interpersonal behavior.
Datasets
We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher.
Preprocessing
A number of audio preprocessing steps are required in the entrainment framework for obtaining boundaries of relevant segments of audio from consecutive turns. First, we perform voice activity detection (VAD) to identify the speech regions. Following this, speaker diarization is performed in order to distinguish speech segments spoken by different speakers. However, our training dataset, the Fisher corpus also contains transcripts with speaker turn boundaries as well as timings for pauses within a turn. Since, these time stamps appeared to be reasonably accurate, we use them as oracle VAD and diarization. On the other hand, for the Suicide Risk Assessment corpus, we perform VAD and diarization on raw audio to obtain the turn boundaries. Subsequently, we also split a single turn into inter-pausal units (IPUs) if there is any pause of at least 50 ms present within the turn. For the purpose of capturing entrainment-related information, we only consider the initial and the final IPU of every turn. This is done based on the hypothesis that during a turn-taking, entrainment is mostly prominent between the most recent IPU of previous speaker's turn and the first IPU of the next speaker's turn BIBREF0 .
Feature Extraction
We extract 38 different acoustic features from the segments (IPUs) of interest. The extracted feature set includes 4 prosody features (pitch, energy and their first order deltas), 31 spectral features (15 MFCCs, 8 MFBs, 8 LSFs) and 3 voice quality features (shimmer and 2 variants of jitter). We found in our early analysis that derivatives of spectral and voice quality features do not seem to contribute significantly to entrainment and hence we do not include them in the NED model. The feature extraction is performed with a Hamming window of 25 ms width and 10 ms shift using the OpenSMILE toolkit BIBREF17 . For pitch, we perform additional post-processing by applying a median-filter based smoothing technique (with a window size of 5 frames), as pitch extraction is not very robust and often prone to errors, such as halving or doubling errors. We also perform z-score normalization of the features across the whole session, except for pitch and energy features, which are normalized by dividing them by their respective means.
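A small sketch of the per-session normalization described above is given below; the column indices standing for the pitch and energy features are illustrative placeholders.
import numpy as np

def normalize_session(feats, pitch_energy_cols=(0, 1)):
    # feats: (num_frames, 38) frame-level features of one session
    out = (feats - feats.mean(axis=0)) / feats.std(axis=0)          # z-score per feature
    pe = list(pitch_energy_cols)
    out[:, pe] = feats[:, pe] / feats[:, pe].mean(axis=0)           # pitch and energy: divide by their means
    return out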
Turn-level Features
We propose to calculate NED as directional entrainment-related measure from speaker 1 to speaker 2 for a change of turn as shown in Figure FIGREF6 . The segments of interest in this case are the final IPU of speaker 1's turn and the initial IPU of the subsequent turn by speaker 2, marked by the bounding boxes in the figure. As turn-level features, we compute six statistical functionals over all frames in those two IPUs, generating two sets of functionals of features for each pair of turns. The functionals we compute are as follows: mean, median, standard deviation, 1st percentile, 99th percentile and range between 99th and 1st percentile. Thus we obtain INLINEFORM0 turn-level features from each IPU representing the turn. Let us denote the turn-level feature vector of the final IPU of speaker 1 and the initial IPU of speaker 2 as INLINEFORM1 and INLINEFORM2 , respectively, for further discussion in the paper.
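The six functionals can be computed with a few lines of numpy, as sketched below (our own illustration); with 38 frame-level features this yields the 228-dimensional turn-level vector used in the rest of the paper.
import numpy as np

def turn_level_features(ipu_frames):
    # ipu_frames: (num_frames, 38) frame-level features of a single IPU
    p1, p99 = np.percentile(ipu_frames, [1, 99], axis=0)
    funcs = [ipu_frames.mean(axis=0), np.median(ipu_frames, axis=0),
             ipu_frames.std(axis=0), p1, p99, p99 - p1]
    return np.concatenate(funcs)                                    # 6 x 38 = 228 values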
Modeling with Neural Network
Most work in the entrainment literature directly computes a measure between INLINEFORM0 and INLINEFORM1 (such as correlation BIBREF0 ) or their lower-dimensional representations BIBREF10 . However, one conceptual limitation of all these approaches is that turn-level features INLINEFORM2 and INLINEFORM3 do not only contain the underlying acoustic information that can be entrained across turns, but also speaker-specific, phonetic and paralinguistic information that is specific to the corresponding turns and not influenced by the previous turn (non-entrainable). If we represent those two types of information as vector embeddings, INLINEFORM4 and INLINEFORM5 respectively, we can model turn-level feature vectors INLINEFORM6 as a nonlinear function INLINEFORM7 over them, i.e., INLINEFORM8 and INLINEFORM9 . In this formulation, the distance between INLINEFORM10 and INLINEFORM11 should be zero in the hypothetical case of `perfect' entrainment.
Our goal is to approximate the inverse mapping that maps the feature vector INLINEFORM0 to the entrainment embedding INLINEFORM1 and, ideally, to learn it from `perfect' or very highly entrained turns. Unfortunately, in the absence of such a dataset, we learn it from consecutive turns in real data where entrainment is present, at least to some extent. As shown in Figure FIGREF6 , we adopt a feed-forward deep neural network (DNN) as an encoder for this purpose.
The different components of the model are described below:
First, we use INLINEFORM0 as the input to the encoder network. We choose the output of the encoder network, INLINEFORM1 , to be an undercomplete representation of INLINEFORM2 , by restricting the dimensionality of INLINEFORM3 to be lower than that of INLINEFORM4 .
INLINEFORM0 is then passed through another feed-forward ( INLINEFORM1 ) network used as decoder to predict INLINEFORM2 . The output of the decoder is denoted as INLINEFORM3 .
Then INLINEFORM0 and its reference INLINEFORM1 are compared to obtain the loss function of the model, INLINEFORM2 .
Even though this deep neural network resembles autoencoder architectures, it does not reconstruct itself but rather tries to encode relevant information from one turn to predict the next turn, parallel to BIBREF12 , BIBREF13 , BIBREF11 . Thus the bottleneck embedding INLINEFORM0 can be considered closely related to the entrainment embedding INLINEFORM1 mentioned above.
Unsupervised Training of the Model
In this work, we use two fully connected layers as hidden layers both in the encoder and decoder network. Batch normalization layers and Rectified Linear Unit (ReLU) activation layers (in respective order) are used between fully connected layers in both of the networks. The dimension of the embedding is chosen to be 30. The number of neuron units in the hidden layers are: [ 228 INLINEFORM0 128 INLINEFORM1 30 INLINEFORM2 128 INLINEFORM3 228 ]. We use smooth L1 norm, a variant of L1 norm which is more robust to outliers BIBREF18 , so that
DISPLAYFORM0
where
DISPLAYFORM0
and INLINEFORM0 is the dimension of INLINEFORM1 which is 228 in our case.
For training the network, we choose a subset (80% of all sessions) of Fisher corpus and use all turn-level feature pairs ( INLINEFORM0 ). We employ the Adam optimizer BIBREF19 and a minibatch size of 128 for training the network. The validation error is computed on the validation subset (10% of the data) of the Fisher corpus and the best model is chosen.
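The model and loss described above can be sketched as follows; the use of PyTorch and the details of the training step are our own choices for illustration and are not specified in the text.
import torch
import torch.nn as nn

def fc_block(n_in, n_out):
    # fully connected layer followed by batch normalization and ReLU
    return nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.ReLU())

encoder = nn.Sequential(fc_block(228, 128), nn.Linear(128, 30))     # x1 -> 30-d bottleneck embedding
decoder = nn.Sequential(fc_block(30, 128), nn.Linear(128, 228))     # embedding -> predicted x2
criterion = nn.SmoothL1Loss()
optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

def train_step(x1, x2):
    # x1, x2: (batch, 228) turn-level features of consecutive turns
    optimizer.zero_grad()
    loss = criterion(decoder(encoder(x1)), x2)                      # predict the next turn's features
    loss.backward()
    optimizer.step()
    return loss.item()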
Neural Entrainment Distance (NED) Measure
After the unsupervised training phase, we use the encoder network to obtain the embedding representation ( INLINEFORM0 ) from any turn-level feature vector INLINEFORM1 . To quantify the entrainment from a turn to the subsequent turn, we extract turn-level feature vectors from the final IPU of the former and the initial IPU of the latter, denoted as INLINEFORM2 and INLINEFORM3 , respectively. Next, we encode INLINEFORM4 and INLINEFORM5 using the pretrained encoder network and obtain INLINEFORM6 and INLINEFORM7 as the outputs, respectively. Then we compute a distance measure INLINEFORM8 , which we term the Neural Entrainment Distance (NED), between the two turns by taking the smooth L1 distance between INLINEFORM9 and INLINEFORM10 .
DISPLAYFORM0
where INLINEFORM0 is defined in Equation (2) and INLINEFORM1 is the dimensionality of the embedding. Note that even though smooth L1 distance is symmetric in nature, our distance measure is still asymmetric because of the directionality in the training of the neural network model.
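At test time, the NED between two consecutive turns can be computed with the trained encoder as sketched below (again our own illustration), averaging the element-wise smooth L1 distance over the 30 embedding dimensions.
import torch
import torch.nn.functional as F

def ned(encoder, x1, x2):
    # x1, x2: (1, 228) turn-level feature vectors of the two consecutive turns
    encoder.eval()
    with torch.no_grad():
        z1, z2 = encoder(x1), encoder(x2)                           # 30-d bottleneck embeddings
    return F.smooth_l1_loss(z1, z2, reduction='mean').item()        # smooth L1 averaged over dimensions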
Experimental Results
We conduct a number of experiments to validate NED as a valid proxy metric for entrainment.
Experiment 1: Classification of real vs. fake sessions
We first create a fake session ( INLINEFORM0 ) from each real session ( INLINEFORM1 ) by randomly shuffling the speaker turns. Then we run a simple classification experiment using the NED measure to identify the real session from the pair ( INLINEFORM2 , INLINEFORM3 ). The steps of the experiment are as follows (a brief sketch of the procedure is given after the steps):
We compute NED for each (overlapping) pair of consecutive turns and their average across the session for both sessions in the pair ( INLINEFORM0 , INLINEFORM1 ).
The session with the lower NED is inferred to be the real one. The hypothesis behind this rule is that higher entrainment is seen across consecutive turns than across randomly paired turns, and is well captured through a lower value of the proposed measure.
If the inferred real session is indeed the real one, we consider it to be correctly classified.
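Reusing the ned() function sketched earlier, the procedure above can be written for one session pair as follows; the turn_pairs inputs, holding the turn-level feature vectors of consecutive turns, are hypothetical helpers.
import numpy as np

def session_ned(encoder, turn_pairs):
    # turn_pairs: iterable of (x1, x2) feature tensors for consecutive turns of a session
    return float(np.mean([ned(encoder, x1, x2) for x1, x2 in turn_pairs]))

def real_vs_fake_correct(encoder, real_turn_pairs, fake_turn_pairs):
    # correct if the real session obtains the lower average NED
    return session_ned(encoder, real_turn_pairs) < session_ned(encoder, fake_turn_pairs)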
We compute classification accuracy averaged over 30 runs (to account for the randomness in creating the fake session) and report it in Table TABREF24 . The experiment is conducted on two datasets: a subset (10%) of Fisher corpus set aside as test data and Suicide corpus. We use a number of baseline measures:
Baseline 1: smooth L1 distance directly computed between turn-level features ( INLINEFORM0 and INLINEFORM1 )
Baseline 2: PCA-based symmetric acoustic similarity measure by Lee et al. BIBREF10
Baseline 3: Nonlinear dynamical systems-based complexity measure BIBREF6 .
For the baselines, we conduct the classification experiments in a similar manner. Since Baseline 1 and 2 have multiple measures, we choose the best performing one for reporting, thus providing an upper-bound performance. Also, for baseline 2 we choose the session with higher value of the measure as real, since it measures similarity.
As we can see in Table TABREF24 , our proposed NED measure achieves higher accuracy than all baselines on the Fisher corpus. The accuracy of our measure declines in the Suicide corpus as compared to the Fisher corpus, which is probably due to data mismatch as the model was trained on Fisher (mismatch of acoustics, recording conditions, sampling frequency, interaction style etc.). However, our measure still performs better than all baselines on Suicide corpus.
Experiment 2: Correlation with Emotional Bond
According to prior work, both from domain theory BIBREF16 and from experimental validation BIBREF6 , a high emotional bond in patient-therapist interactions in the suicide therapy domain is associated with more entrainment. In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment. We also compute the correlation of emotional bond with the baselines used in Experiment 1. We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. We test against the null hypothesis INLINEFORM2 that there is no linear association between emotional bond and the candidate measure.
Results in Table TABREF26 show that the patient-to-therapist NED is negatively correlated with emotional bond with high statistical significance ( INLINEFORM0 ). This negative sign is consistent with previous studies, as a higher distance in acoustic features indicates lower entrainment. However, the therapist-to-patient NED does not have a significant correlation with emotional bond. A possible explanation for this finding is that the emotional bond is reported by the patient and influenced by the degree of their perceived therapist entrainment. Thus, equipped with an asymmetric measure, we are also able to identify the latent directionality of the emotional bond metric. The complexity measure (Baseline 3) also shows a statistically significant correlation, but the value of INLINEFORM1 is lower than that of the proposed measure.
To analyze the embeddings encoded by our model, we also compute a t-SNE BIBREF20 transformation of the difference of all patient-to-therapist turn embedding pairs, denoted as INLINEFORM0 in Equation (3). Figure FIGREF27 shows the results of a session with high emotional bond and another one with low emotional bond (with values of 7 and 1 respectively) as a 2-dimensional scatter plot. Visibly there is some separation between the sessions with low and high emotional bond.
Conclusion and Future Work
In this work, a novel deep neural network-based Neural Entrainment Distance (NED) measure is proposed for capturing entrainment in conversational speech. The neural network architecture consisting of an encoder and a decoder is trained on the Fisher corpus in an unsupervised training framework and then the measure is defined on the bottleneck embedding. We show that the proposed measure can distinguish between real and fake sessions by capturing presence of entrainment in real sessions. In this way we also validate the natural occurrence of vocal entrainment in dyadic conversations, well-known in psychology literature BIBREF21 , BIBREF22 , BIBREF23 . We further show that the measure for patient-to-therapist direction achieves statistically significant correlation with their perceived emotional bond. The proposed measure is asymmetric in nature and can be useful for analyzing different interpersonal (especially directional) behaviors in many other applications. Given the benefits shown by the unsupervised data-driven approach we will employ Recurrent Neural Networks (RNNs) to better capture temporal dynamics. We also intend to explore (weakly) supervised learning of entrainment using the bottleneck embeddings as features, in presence of session-level annotations.
Acknowledgements
The U.S. Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick MD 21702- 5014 is the awarding and administering acquisition office. This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Military Suicide Research Consortium under Award No. W81XWH-10-2-0181, and through the Psychological Health and Traumatic Brain Injury Research Program under Award No. W81XWH-15-1-0632. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the Department of Defense. | Fisher Corpus English Part 1 |
22c802872b556996dd7d09eb1e15989d003f30c0 | 22c802872b556996dd7d09eb1e15989d003f30c0_0 | Q: How do they correlate NED with emotional bond levels?
Text: Introduction
Vocal entrainment is an established social adaptation mechanism. It can be loosely defined as one speaker's spontaneous adaptation to the speaking style of the other speaker. Entrainment is a fairly complex multifaceted process and closely associated with many other mechanisms such as coordination, synchrony, convergence etc. While there are various aspects and levels of entrainment BIBREF0 , there is also a general agreement that entrainment is a sign of positive behavior towards the other speaker BIBREF1 , BIBREF2 , BIBREF3 . High degree of vocal entrainment has been associated with various interpersonal behavioral attributes, such as high empathy BIBREF4 , more agreement and less blame towards the partner and positive outcomes in couple therapy BIBREF5 , and high emotional bond BIBREF6 . A good understanding of entrainment provides insights to various interpersonal behaviors and facilitates the recognition and estimation of these behaviors in the realm of Behavioral Signal Processing BIBREF7 , BIBREF8 . Moreover, it also contributes to the modeling and development of `human-like' spoken dialog systems or conversational agents.
Unfortunately, quantifying entrainment has always been a challenging problem. There is a scarcity of reliable labeled speech databases on entrainment, possibly due to the subjective and diverse nature of its definition. This makes it difficult to capture entrainment using supervised models, unlike many other behaviors. Early studies on entrainment relied on highly subjective and context-dependent manual observation coding for measuring entrainment. The objective methods based on extracted speech features employed classical synchrony measures such as Pearson's correlation BIBREF0 and traditional (linear) time series analysis techniques BIBREF9 . Lee et al. BIBREF10 , BIBREF4 proposed a measure based on PCA representation of prosody and MFCC features of consecutive turns. Most of these approaches assume a linear relationship between features of consecutive speaker turns, which is not necessarily true, given the complex nature of entrainment. For example, the effect of rising pitch or energy can potentially have a nonlinear influence across speakers.
Recently, various complexity measures (such as the largest Lyapunov exponent) of feature streams based on nonlinear dynamical systems modeling showed promising results in capturing entrainment BIBREF5 , BIBREF6 . A limitation of this modeling, however, is the assumption of the short-term stationary or slowly varying nature of the features. While this can be reasonable for global or session-level complexity, the measure is not very meaningful for capturing turn-level or local entrainment. Nonlinear dynamical measures also scale poorly to a multidimensional feature set, including spectral coefficients such as MFCCs. Further, all of the above metrics are knowledge-driven and do not exploit the vast amount of information that can be gained from existing interactions.
A more holistic approach is to capture entrainment in consecutive speaker turns through a more robust nonlinear function. Conceptually speaking, such a formulation of entrainment is closely related to the problem of learning a transfer function which maps vocal patterns of one speaker turn to the next. A compelling choice to nonlinearly approximate the transfer function would be to employ Deep Neural Networks (DNNs). This is supported by recent promising applications of deep learning models, both in supervised and unsupervised paradigm, in modeling and classification of emotions and behaviors from speech. For example in BIBREF11 the authors learned, in an unsupervised manner, a latent embedding towards identifying behavior in out-of-domain tasks. Similarly in BIBREF12 , BIBREF13 the authors employ Neural Predictive Coding to derive embeddings that link to speaker characteristics in an unsupervised manner.
We propose an unsupervised training framework to contextually learn the transfer function that ties the two speakers. The learned bottleneck embedding contains cross-speaker information closely related to entrainment. We define a distance measure between the consecutive speaker turns represented in the bottleneck feature embedding space. We call this metric the Neural Entrainment Distance (NED).
Towards this modeling approach we use features that have already been established as useful for entrainment. The majority of research BIBREF0 , BIBREF14 , BIBREF10 , BIBREF5 , BIBREF6 focused on prosodic features like pitch, energy, and speech rate. Others also analyzed entrainment in spectral and voice quality features BIBREF10 , BIBREF4 . Unlike classical nonlinear measures, we jointly learn from a multidimensional feature set comprising prosodic, spectral, and voice quality features.
We then experimentally investigate the validity and effectiveness of the NED measure in association with interpersonal behavior.
Datasets
We use two datasets in this work: the training is done on the Fisher Corpus English Part 1 (LDC2004S13) BIBREF15 and testing on the Suicide Risk Assessment corpus BIBREF16 , along with Fisher.
Preprocessing
A number of audio preprocessing steps are required in the entrainment framework for obtaining boundaries of relevant segments of audio from consecutive turns. First, we perform voice activity detection (VAD) to identify the speech regions. Following this, speaker diarization is performed in order to distinguish speech segments spoken by different speakers. However, our training dataset, the Fisher corpus also contains transcripts with speaker turn boundaries as well as timings for pauses within a turn. Since, these time stamps appeared to be reasonably accurate, we use them as oracle VAD and diarization. On the other hand, for the Suicide Risk Assessment corpus, we perform VAD and diarization on raw audio to obtain the turn boundaries. Subsequently, we also split a single turn into inter-pausal units (IPUs) if there is any pause of at least 50 ms present within the turn. For the purpose of capturing entrainment-related information, we only consider the initial and the final IPU of every turn. This is done based on the hypothesis that during a turn-taking, entrainment is mostly prominent between the most recent IPU of previous speaker's turn and the first IPU of the next speaker's turn BIBREF0 .
Feature Extraction
We extract 38 different acoustic features from the segments (IPUs) of interest. The extracted feature set includes 4 prosody features (pitch, energy and their first order deltas), 31 spectral features (15 MFCCs, 8 MFBs, 8 LSFs) and 3 voice quality features (shimmer and 2 variants of jitter). We found in our early analysis that derivatives of spectral and voice quality features do not seem to contribute significantly to entrainment and hence we do not include them in the NED model. The feature extraction is performed with a Hamming window of 25 ms width and 10 ms shift using the OpenSMILE toolkit BIBREF17 . For pitch, we perform additional post-processing by applying a median-filter based smoothing technique (with a window size of 5 frames), as pitch extraction is not very robust and often prone to errors, such as halving or doubling errors. We also perform z-score normalization of the features across the whole session, except for pitch and energy features, which are normalized by dividing them by their respective means.
Turn-level Features
We propose to calculate NED as directional entrainment-related measure from speaker 1 to speaker 2 for a change of turn as shown in Figure FIGREF6 . The segments of interest in this case are the final IPU of speaker 1's turn and the initial IPU of the subsequent turn by speaker 2, marked by the bounding boxes in the figure. As turn-level features, we compute six statistical functionals over all frames in those two IPUs, generating two sets of functionals of features for each pair of turns. The functionals we compute are as follows: mean, median, standard deviation, 1st percentile, 99th percentile and range between 99th and 1st percentile. Thus we obtain INLINEFORM0 turn-level features from each IPU representing the turn. Let us denote the turn-level feature vector of the final IPU of speaker 1 and the initial IPU of speaker 2 as INLINEFORM1 and INLINEFORM2 , respectively, for further discussion in the paper.
Modeling with Neural Network
Most work in the entrainment literature directly computes a measure between INLINEFORM0 and INLINEFORM1 (such as correlation BIBREF0 ) or their lower-dimensional representations BIBREF10 . However, one conceptual limitation of all these approaches is that turn-level features INLINEFORM2 and INLINEFORM3 do not only contain the underlying acoustic information that can be entrained across turns, but also speaker-specific, phonetic and paralinguistic information that is specific to the corresponding turns and not influenced by the previous turn (non-entrainable). If we represent those two types of information as vector embeddings, INLINEFORM4 and INLINEFORM5 respectively, we can model turn-level feature vectors INLINEFORM6 as a nonlinear function INLINEFORM7 over them, i.e., INLINEFORM8 and INLINEFORM9 . In this formulation, the distance between INLINEFORM10 and INLINEFORM11 should be zero in the hypothetical case of `perfect' entrainment.
Our goal is to approximate the inverse mapping that maps the feature vector INLINEFORM0 to the entrainment embedding INLINEFORM1 and, ideally, to learn it from `perfect' or very highly entrained turns. Unfortunately, in the absence of such a dataset, we learn it from consecutive turns in real data where entrainment is present, at least to some extent. As shown in Figure FIGREF6 , we adopt a feed-forward deep neural network (DNN) as an encoder for this purpose.
The different components of the model are described below:
First, we use INLINEFORM0 as the input to the encoder network. We choose the output of the encoder network, INLINEFORM1 , to be an undercomplete representation of INLINEFORM2 , by restricting the dimensionality of INLINEFORM3 to be lower than that of INLINEFORM4 .
INLINEFORM0 is then passed through another feed-forward ( INLINEFORM1 ) network used as decoder to predict INLINEFORM2 . The output of the decoder is denoted as INLINEFORM3 .
Then INLINEFORM0 and its reference INLINEFORM1 are compared to obtain the loss function of the model, INLINEFORM2 .
Even though this deep neural network resembles autoencoder architectures, it does not reconstruct itself but rather tries to encode relevant information from one turn to predict the next turn, parallel to BIBREF12 , BIBREF13 , BIBREF11 . Thus the bottleneck embedding INLINEFORM0 can be considered closely related to the entrainment embedding INLINEFORM1 mentioned above.
Unsupervised Training of the Model
In this work, we use two fully connected layers as hidden layers both in the encoder and decoder network. Batch normalization layers and Rectified Linear Unit (ReLU) activation layers (in respective order) are used between fully connected layers in both of the networks. The dimension of the embedding is chosen to be 30. The number of neuron units in the hidden layers are: [ 228 INLINEFORM0 128 INLINEFORM1 30 INLINEFORM2 128 INLINEFORM3 228 ]. We use smooth L1 norm, a variant of L1 norm which is more robust to outliers BIBREF18 , so that
DISPLAYFORM0
where
DISPLAYFORM0
and INLINEFORM0 is the dimension of INLINEFORM1 which is 228 in our case.
For training the network, we choose a subset (80% of all sessions) of Fisher corpus and use all turn-level feature pairs ( INLINEFORM0 ). We employ the Adam optimizer BIBREF19 and a minibatch size of 128 for training the network. The validation error is computed on the validation subset (10% of the data) of the Fisher corpus and the best model is chosen.
Neural Entrainment Distance (NED) Measure
After the unsupervised training phase, we use the encoder network to obtain the embedding representation ( INLINEFORM0 ) from any turn-level feature vector INLINEFORM1 . To quantify the entrainment from a turn to the subsequent turn, we extract turn-level feature vectors from the final IPU of the former and the initial IPU of the latter, denoted as INLINEFORM2 and INLINEFORM3 , respectively. Next, we encode INLINEFORM4 and INLINEFORM5 using the pretrained encoder network and obtain INLINEFORM6 and INLINEFORM7 as the outputs, respectively. Then we compute a distance measure INLINEFORM8 , which we term the Neural Entrainment Distance (NED), between the two turns by taking the smooth L1 distance between INLINEFORM9 and INLINEFORM10 .
DISPLAYFORM0
where INLINEFORM0 is defined in Equation (2) and INLINEFORM1 is the dimensionality of the embedding. Note that even though smooth L1 distance is symmetric in nature, our distance measure is still asymmetric because of the directionality in the training of the neural network model.
Experimental Results
We conduct a number of experiments to validate NED as a valid proxy metric for entrainment.
Experiment 1: Classification of real vs. fake sessions
We first create a fake session ( INLINEFORM0 ) from each real session ( INLINEFORM1 ) by randomly shuffling the speaker turns. Then we run a simple classification experiment using the NED measure to identify the real session from the pair ( INLINEFORM2 , INLINEFORM3 ). The steps of the experiment are as follows:
We compute NED for each (overlapping) pair of consecutive turns and their average across the session for both sessions in the pair ( INLINEFORM0 , INLINEFORM1 ).
The session with the lower NED is inferred to be the real one. The hypothesis behind this rule is that higher entrainment is seen across consecutive turns than across randomly paired turns, and is well captured through a lower value of the proposed measure.
If the inferred real session is indeed the real one, we consider it to be correctly classified.
We compute classification accuracy averaged over 30 runs (to account for the randomness in creating the fake session) and report it in Table TABREF24 . The experiment is conducted on two datasets: a subset (10%) of Fisher corpus set aside as test data and Suicide corpus. We use a number of baseline measures:
Baseline 1: smooth L1 distance directly computed between turn-level features ( INLINEFORM0 and INLINEFORM1 )
Baseline 2: PCA-based symmetric acoustic similarity measure by Lee et al. BIBREF10
Baseline 3: Nonlinear dynamical systems-based complexity measure BIBREF6 .
For the baselines, we conduct the classification experiments in a similar manner. Since Baseline 1 and 2 have multiple measures, we choose the best performing one for reporting, thus providing an upper-bound performance. Also, for baseline 2 we choose the session with higher value of the measure as real, since it measures similarity.
As we can see in Table TABREF24 , our proposed NED measure achieves higher accuracy than all baselines on the Fisher corpus. The accuracy of our measure declines in the Suicide corpus as compared to the Fisher corpus, which is probably due to data mismatch as the model was trained on Fisher (mismatch of acoustics, recording conditions, sampling frequency, interaction style etc.). However, our measure still performs better than all baselines on Suicide corpus.
Experiment 2: Correlation with Emotional Bond
According to prior work, both from domain theory BIBREF16 and from experimental validation BIBREF6 , a high emotional bond in patient-therapist interactions in the suicide therapy domain is associated with more entrainment. In this experiment, we compute the correlation of the proposed NED measure with the patient-perceived emotional bond ratings. Since the proposed measure is asymmetric in nature, we compute the measures for both patient-to-therapist and therapist-to-patient entrainment. We also compute the correlation of emotional bond with the baselines used in Experiment 1. We report Pearson's correlation coefficients ( INLINEFORM0 ) for this experiment in Table TABREF26 along with their INLINEFORM1 -values. We test against the null hypothesis INLINEFORM2 that there is no linear association between emotional bond and the candidate measure.
Results in Table TABREF26 show that the patient-to-therapist NED is negatively correlated with emotional bond with high statistical significance ( INLINEFORM0 ). This negative sign is consistent with previous studies, since a higher distance in acoustic features indicates lower entrainment. However, the therapist-to-patient NED does not have a significant correlation with emotional bond. A possible explanation for this finding is that the emotional bond is reported by the patient and is influenced by the degree of therapist entrainment they perceive. Thus, equipped with an asymmetric measure, we are also able to identify the latent directionality of the emotional bond metric. The complexity measure (Baseline 3) also shows a statistically significant correlation, but the value of INLINEFORM1 is lower than that of the proposed measure.
To analyze the embeddings encoded by our model, we also compute a t-SNE BIBREF20 transformation of the difference of all patient-to-therapist turn embedding pairs, denoted as INLINEFORM0 in Equation (3). Figure FIGREF27 shows the results of a session with high emotional bond and another one with low emotional bond (with values of 7 and 1 respectively) as a 2-dimensional scatter plot. Visibly there is some separation between the sessions with low and high emotional bond.
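A sketch of this analysis with scikit-learn, assuming the patient-to-therapist turn embedding differences for a session are stacked into one matrix:

import numpy as np
from sklearn.manifold import TSNE

def tsne_of_embedding_differences(embedding_diffs, random_state=0):
    # `embedding_diffs` is a (num_turn_pairs, embedding_dim) array of
    # patient-to-therapist embedding differences for one session.
    return TSNE(n_components=2, random_state=random_state).fit_transform(
        np.asarray(embedding_diffs))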
Conclusion and Future Work
In this work, a novel deep neural network-based Neural Entrainment Distance (NED) measure is proposed for capturing entrainment in conversational speech. The neural network architecture, consisting of an encoder and a decoder, is trained on the Fisher corpus in an unsupervised framework, and the measure is then defined on the bottleneck embedding. We show that the proposed measure can distinguish between real and fake sessions by capturing the presence of entrainment in real sessions. In this way we also validate the natural occurrence of vocal entrainment in dyadic conversations, well known in the psychology literature BIBREF21, BIBREF22, BIBREF23. We further show that the measure in the patient-to-therapist direction achieves a statistically significant correlation with the patient-perceived emotional bond. The proposed measure is asymmetric in nature and can be useful for analyzing different interpersonal (especially directional) behaviors in many other applications. Given the benefits shown by the unsupervised data-driven approach, we will employ Recurrent Neural Networks (RNNs) to better capture temporal dynamics. We also intend to explore (weakly) supervised learning of entrainment using the bottleneck embeddings as features, in the presence of session-level annotations.
Acknowledgements
The U.S. Army Medical Research Acquisition Activity, 820 Chandler Street, Fort Detrick MD 21702- 5014 is the awarding and administering acquisition office. This work was supported by the Office of the Assistant Secretary of Defense for Health Affairs through the Military Suicide Research Consortium under Award No. W81XWH-10-2-0181, and through the Psychological Health and Traumatic Brain Injury Research Program under Award No. W81XWH-15-1-0632. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the Department of Defense. | They compute Pearson’s correlation between NED measure for patient-to-therapist and patient-perceived emotional bond rating and NED measure for therapist-to-patient and patient-perceived emotional bond rating |
a7510ec34eaec2c7ac2869962b69cc41031221e5 | a7510ec34eaec2c7ac2869962b69cc41031221e5_0 | Q: What was their F1 score on the Bengali NER corpus?
Text: Introduction
Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather.
We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives.
To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task.
We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.
Related Work
The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data.
FernandesBr11 introduces a transductive version of structured perceptron for partially annotated sequences. However, their definition of partial annotation is labels removed at random, so examples from all classes are still available if not contiguous.
Fidelity Weighted Learning BIBREF3 uses a teacher/student model, in which the teacher has access to a small amount of high-quality data and uses it to guide the student, which has access to a large amount of weak data.
HedderichKl18, following GoldbergerBe17, add a noise adaptation layer on top of an LSTM, which learns how to correct noisy labels, given a small amount of training data. We compare against this model in our experiments.
In the world of weak supervision, Snorkel BIBREF4, BIBREF5, is a system that combines automatic labeling functions with data integration and noise reduction methods to rapidly build large datasets. They rely on high recall and consequent redundancy of the labeling functions. We argue that in certain realistic cases, high-recall candidate identification is unavailable.
We draw inspiration from the Positive-Unlabeled (PU) learning framework BIBREF6, BIBREF7, BIBREF8, BIBREF9. Originally introduced for document classification, PU learning addresses problems where examples of a single class (for example, sports) are easy to obtain, but a full labeling of all other classes is prohibitively expensive.
Named entity classification as an instance of PU learning was introduced in Grave14, which uses constrained optimization with constraints similar to ours. However, they only address the problem of named entity classification, in which mentions are given, and the goal is to assign a type to a named-entity (like `location', `person', etc.) as opposed to our goal of identifying and typing named entities.
Although the task is slightly different, there has been work on building `silver standard' data from Wikipedia BIBREF10, BIBREF11, BIBREF12, using hyperlink annotations as the seed set and propagating throughout the document.
Partial annotation in various forms has also been studied in the contexts of POS-tagging BIBREF13, word sense disambiguation BIBREF14, temporal relation extraction BIBREF15, dependency parsing BIBREF16, and named entity recognition BIBREF17.
In particular, BIBREF17 study a similar problem with a few key differences: since they remove entity surfaces randomly, the dataset is too easy; and they do not use constraints on their output. We compare against their results in our experiments.
Our proposed method is most closely aligned with the Constraint Driven Learning (CoDL) framework BIBREF18, in which an iterative algorithm reminiscent of self-training is guided by constraints that are applied at each iteration.
Constrained Binary Learning
Our method assigns instance weights to all negative elements (tokens tagged as O), so that false negatives have low weights, and all other instances have high weights. We calculate weights according to the confidence predictions of a classifier trained iteratively over the partially annotated data. We refer to our method as Constrained Binary Learning (CBL).
We will first describe the motivation for this approach before moving on to the mechanics. We start with partially annotated data (which we call set $T$) in which some, but not all, positives are annotated (set $P$), and no negative is labeled. By default, we assume that any instance not labeled as positive is labeled as negative as opposed to unlabeled. This data (set $N$) is noisy in the sense that many true positives are labeled as negative (these are false negatives). Clearly, training on $T$ as-is will result in a noisy classifier.
Two possible approaches are: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. The former method affords more training data, but runs the risk of adding noise, which could be worse than the original partial annotations. The latter is more forgiving because of an asymmetry in the penalties: it is important to remove all false negatives in $N$, but inadvertently removing true negatives from $N$ is typically not a problem, especially in NER, where negative examples dominate. Further, a binary model (only two labels) is sufficient in this case, as we need only detect entities, not type them.
We choose the latter method, but instead of removing false negatives, we adopt an instance-weighting approach, in which each instance is assigned a weight $v_i \ge 0$ according to confidence in the labeling of that instance. A weight of 0 means that the loss this instance incurs during training will not update the model.
With this in mind, CBL takes two phases: first, it learns a binary classifier $\lambda $ using a constrained iterative process modeled after the CODL framework BIBREF18, and depicted in Figure FIGREF5. The core of the algorithm is the train-predict-infer loop. The training process (line 4) is weighted, using weights $V$. At the start, these can be all 1 (Raw), or can be initialized with prior knowledge. The learned model is then used to predict on all of $T$ (line 5). In the inference step (line 6), we take the predictions from the prior round and the constraints $C$ and produce a new labeling on $T$, and a new set of weights $V$. The details of this inference step are presented later in this section. Although our ultimate strategy is simply to assign weights (not change labels), in this inner loop, we update the labels on $N$ according to classifier predictions.
In the second phase of CBL, we use the $\lambda $ trained in the previous phase to assign weights to instances as follows:
Where $P_{\lambda }(y_i=\text{O} \mid x_i)$ is understood as the classifier's confidence that instance $x_i$ takes the negative label. In practice it is sufficient to use any confidence score from the classifier, not necessarily a probability. If the classifier has accurately learned to detect entities, then for all the false negatives in $N$, $P_{\lambda }(y_i=\text{O}|x_i)$ is small, which is the goal.
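Since the weighting equation itself is not reproduced above, the following is a rough sketch of this second phase under the assumption that a token in $N$ simply receives the classifier's confidence in the O label as its weight; the `predict_o_confidence` interface is illustrative rather than the actual API of any particular toolkit.

def assign_instance_weights(tokens, in_P, predict_o_confidence):
    # Tokens in P are trusted and keep weight 1.0; tokens in N are weighted by
    # the binary classifier's confidence that they are O, so likely false
    # negatives (confidently predicted entities) receive weights near 0.
    weights = []
    for token, is_annotated_entity in zip(tokens, in_P):
        if is_annotated_entity:
            weights.append(1.0)
        else:
            weights.append(predict_o_confidence(token))
    return weights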
Ultimately, we send the original multiclass partially annotated dataset along with final weights $V$ to a standard weighted NER classifier to learn a model. No weights are needed at test time.
Constrained Binary Learning ::: NER with CBL
So far, we have given a high-level view of the algorithm. In this section, we will give more low-level details, especially as they relate to the specific problem of NER. One contribution of this work is the inference step (line 6), which we address using a constrained Integer Linear Program (ILP) and describe in this section. However, the constraints are based on a value we call the entity ratio. First, we describe the entity ratio, then we describe the constraints and stopping condition of the algorithm.
Constrained Binary Learning ::: NER with CBL ::: Entity ratio and Balancing
We have observed that NER datasets tend to hold a relatively stable ratio of entity tokens to total tokens. We refer to this ratio as $b$, and define it with respect to some labeled dataset as:
where $N$ is the set of negative examples. Previous work has shown that in fully-annotated datasets the entity ratio tends to be about $0.09 \pm 0.05$, depending on the dataset and genre BIBREF19. Intuitively, knowledge of the gold entity ratio can help us estimate when we have found all the false negatives.
In our main experiments, we assume that the entity ratio with respect to the gold labeling is known for each training dataset. A similar assumption was made in ElkanNo08 when determining the $c$ value, and in Grave14 in the constraint determining the percentage of other examples. However, we also show in Section that knowledge of this ratio is not strictly necessary, and a flat value across all datasets produces similar performance.
With a weighted training set, it is also useful to define the weighted entity ratio.
When training an NER model on weighted data, one can change the weighted entity ratio to achieve different effects. To make balanced predictions on test, the entity ratio in the training data should roughly match that of the test data BIBREF20. To bias a model towards predicting positives or predicting negatives, the weighted entity ratio can be set higher or lower respectively. This effect is pronounced when using linear methods for NER, but not as clear in neural methods.
To change the entity ratio, we scale the weights in $N$ by a scaling constant $\gamma $. Targeting a particular $b^*$, we may write:
We can solve for $\gamma $:
To obtain weights, $v^*_i$, that attain the desired entity ratio, $b^*$, we scale all weights in $N$ by $\gamma $.
In the train-predict-infer loop, we balance the weights to a value near the gold ratio before training.
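The defining equations are not reproduced in this extraction; the sketch below assumes the weighted entity ratio is the number of entity tokens divided by the entity tokens plus the summed weights of the negative tokens, which yields a closed-form scaling constant.

def weighted_entity_ratio(num_entity_tokens, negative_weights):
    # Assumed form of the weighted entity ratio (see caveat above).
    return num_entity_tokens / (num_entity_tokens + sum(negative_weights))

def rebalance_weights(num_entity_tokens, negative_weights, b_target):
    # Solve b_target = P / (P + gamma * sum(v)) for gamma, then scale every
    # weight in N by gamma so the weighted entity ratio matches b_target.
    gamma = num_entity_tokens * (1.0 - b_target) / (b_target * sum(negative_weights))
    return [gamma * v for v in negative_weights]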
Constrained Binary Learning ::: NER with CBL ::: Constraints and Stopping Condition
We encode our constraints with an Integer Linear Program (ILP), shown in Figure FIGREF17. Intuitively, the job of the inference step is to take predictions ($\hat{T}$) and use knowledge of the task to `fix' them.
In the objective function (Eqn. DISPLAY_FORM18), token $i$ is represented by two indicator variables $y_{0i}$ and $y_{1i}$, representing negative and positive labels, respectively. Associated prediction scores $C_0$ and $C_1$ are from the classifier $\lambda $ in the last round of predictions. The first constraint (Eqn. ) encodes the fact that an instance cannot be both an entity and a non-entity.
The second constraint (Eqn. ) enforces the ratio of positive to total tokens in the corpus to match a required entity ratio. $|T|$ is the total number of tokens in the corpus. $b$ is the required entity ratio, which increases at each iteration. $\delta $ allows some flexibility, but is small.
Constraint encodes that instances in $P$ should be labeled positive since they were manually labeled and are by definition trustworthy. We set $\xi \ge 0.99$.
This framework is flexible in that more complex language- or task-specific constraints could be added. For example, in English and many other languages with Latin script, it may help to add a capitalization constraint. In languages with rich morphology, certain suffixes may indicate or contraindicate a named entity. For simplicity, and because of the number of languages in our experiments, we use only a few constraints.
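As an illustration, the inference step could be expressed with an off-the-shelf ILP toolkit such as PuLP; the sketch below follows the objective and constraints as described, with $\delta$ and $\xi$ set to placeholder values.

from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

def constrained_inference(c0, c1, in_P, b, delta=0.005, xi=0.99):
    # c0[i], c1[i]: classifier scores for the O label and the entity label on
    # token i; in_P[i] is True if token i was manually annotated as an entity;
    # b is the (gradually increasing) required entity ratio.
    n = len(c0)
    prob = LpProblem("cbl_inference", LpMaximize)
    y0 = [LpVariable(f"y0_{i}", cat=LpBinary) for i in range(n)]
    y1 = [LpVariable(f"y1_{i}", cat=LpBinary) for i in range(n)]
    prob += lpSum(c0[i] * y0[i] + c1[i] * y1[i] for i in range(n))
    for i in range(n):
        prob += y0[i] + y1[i] == 1           # exactly one label per token
        if in_P[i]:
            prob += y1[i] >= xi              # trusted manual positives stay positive
    prob += lpSum(y1) <= (b + delta) * n     # entity-ratio band
    prob += lpSum(y1) >= (b - delta) * n
    prob.solve()
    return [int(round(v.value())) for v in y1]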
After the ILP has selected predictions, we assign weights to each instance in preparation for training the next round. The decision process for an instance is:
This is similar to Equation (DISPLAY_FORM6), except that the set of tokens that the ILP labeled as positive is larger than $P$. With new labels and weights, we start the next iteration.
The stopping condition for the algorithm is related to the entity ratio. One important constraint (Eqn. ) governs how many positives are labeled at each round. This number starts at $|P|$ and is increased by a small value at each iteration, thereby improving recall. Positive instances are chosen in two ways. First, all instances in $P$ are constrained to be labeled positive (Eqn. ). Second, the objective function ensures that high-confidence positives will be chosen. The stopping condition is met when the number of required positive instances (computed using gold unweighted entity ratio) equals the number of predicted positive instances.
Experiments
We measure the performance of our method on 8 different languages using artificially perturbed labels to simulate the partial annotation setting.
Experiments ::: Data
We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.
The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25.
Experiments ::: Artificial Perturbation
We create partial annotations by perturbing gold annotated data in two ways: lowering recall (to simulate missing entities), and lowering precision (to simulate noisy annotations).
To lower recall, we replace gold named entity tags with $O$ tags (for non-name). We do this by grouping named entity surface forms, and replacing tags on all occurrences of a randomly selected surface form until the desired amount remains. For example, if the token `Bangor' is chosen to be untagged, then every occurrence of `Bangor' will be untagged. We chose this slightly complicated method because the simplest idea (remove mentions randomly) leaves an artificially large diversity of surface forms, which makes the problem of discovering noisy entities easier.
To lower precision, we tag a random span (of a random start position, and a random length between 1 and 3) with a random named entity tag. We continue this process until we reach the desired precision. When both precision and recall are to be perturbed, the recall adjustment is made first, and then the number of random spans to be added is calculated by the entities that are left.
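A sketch of the two perturbations, assuming token-level BIO tags (the tagging scheme for injected spans is not specified in the text):

import random

def lower_recall(tokens, tags, target_recall, rng=random.Random(0)):
    # Untag every occurrence of randomly chosen entity surface forms until
    # roughly `target_recall` of the originally tagged tokens remain tagged.
    positions_by_surface = {}
    for i, (token, tag) in enumerate(zip(tokens, tags)):
        if tag != "O":
            positions_by_surface.setdefault(token, []).append(i)
    total_tagged = sum(len(p) for p in positions_by_surface.values())
    new_tags, kept = list(tags), total_tagged
    surfaces = list(positions_by_surface)
    rng.shuffle(surfaces)
    for surface in surfaces:
        if kept <= target_recall * total_tagged:
            break
        for i in positions_by_surface[surface]:
            new_tags[i] = "O"
        kept -= len(positions_by_surface[surface])
    return new_tags

def lower_precision(tags, target_precision, entity_labels, rng=random.Random(0)):
    # Tag random spans of length 1-3 with random entity labels until the
    # token-level precision drops to roughly `target_precision`.
    new_tags = list(tags)
    correct = sum(t != "O" for t in new_tags)
    injected = 0
    while correct and correct / (correct + injected) > target_precision:
        start, length = rng.randrange(len(new_tags)), rng.randint(1, 3)
        label = rng.choice(entity_labels)
        for j in range(start, min(start + length, len(new_tags))):
            if new_tags[j] == "O":
                new_tags[j] = ("B-" if j == start else "I-") + label
                injected += 1
    return new_tags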
Experiments ::: NER Models
In principle, CBL can use any NER method that can be trained with instance weights. We experiment with both non-neural and neural models.
Experiments ::: NER Models ::: Non-neural Model
For our non-neural system, we use a version of Cogcomp NER BIBREF24, BIBREF25 modified to use Weighted Averaged Perceptron. This operates on a weighted training set $D_w = \lbrace (x_i, y_i, v_i) \rbrace _{i=1}^N $, where $N$ is the number of training examples, and $v_i \ge 0$ is the weight on the $i$th training example. In this non-neural system, a training example is a word with context encoded in the features. We change only the update rule, where the learning rate $\alpha $ is multiplied by the weight:
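The update equation itself is not reproduced in this extraction; the following is a minimal sketch of what a weight-scaled multiclass perceptron update could look like (feature extraction and the weight averaging of the averaged perceptron are omitted):

def weighted_perceptron_update(w, features, y_gold, y_pred, alpha, v_i):
    # Standard additive perceptron update with the learning rate `alpha`
    # multiplied by the instance weight `v_i`; a weight of 0 leaves the
    # model unchanged.
    if y_pred != y_gold:
        for feat, value in features.items():
            w[(y_gold, feat)] = w.get((y_gold, feat), 0.0) + alpha * v_i * value
            w[(y_pred, feat)] = w.get((y_pred, feat), 0.0) - alpha * v_i * value
    return w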
We use a standard set of features, as documented in BIBREF24. In order to keep the language-specific resources to a minimum, we did not use any gazetteers for any language. One of the most important features is Brown clusters, trained for 100, 500, and 1000 clusters for the CoNLL languages, and 2000 clusters for the remaining languages. We trained these clusters on Wikipedia text for the four CoNLL languages, and on the same monolingual text used to train the word vectors (described in Section SECREF26).
Experiments ::: NER Models ::: Neural Model
A common neural model for NER is the BiLSTM-CRF model BIBREF26. However, because the Conditional Random Field (CRF) layer calculates loss at the sentence level, we need a different method to incorporate token weights. We use a variant of the CRF that allows partial annotations by marginalizing over all possible sequences BIBREF27.
When using a standard BiLSTM-CRF model, the loss of a dataset ($D$) composed of sentences ($s$) is calculated as:
Where $P_\theta (\mathbf {y}^{(s)} | \textbf {x}^{(s)})$ is calculated by the CRF over outputs from the BiLSTM. In the marginal CRF framework, it is assumed that $\mathbf {y}^{(s)}$ is necessarily partial, denoted as $\mathbf {y}^{(s)}_p$. To incorporate partial annotations, the loss is calculated by marginalizing over all possible sequences consistent with the partial annotations, denoted as $C(\mathbf {y}_p^s)$.
However, this formulation assumes that all possible sequences are equally likely. To address this, BIBREF17 introduced a way to weigh sequences.
It's easy to see that this formulation is a generalization of the standard CRF if $q(.)=1$ for the gold sequence $\mathbf {y}$, and 0 for all others.
The product inside the summation depends on tag transition probabilities and tag emission probabilities, as well as token-level “weights" over the tagset. These weights can be seen as defining a soft gold labeling for each token, corresponding to confidence in each label.
For clarity, define the soft gold labeling over each token $x_i$ as $\mathbf {G}_i \in [0,1]^{L}$, where $L$ is the size of the labelset. Now, we may define $q(.)$ as:
Where $G_i^{y_i}$ is understood as the weight in $\mathbf {G}_i$ that corresponds to the label $y_i$.
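A direct sketch of this definition: the per-sequence weight is a product of the per-token soft-label weights.

import numpy as np

def sequence_weight(G, y):
    # q(y) for one sentence: the product over tokens of the soft-label weight
    # that G_i assigns to the label y_i.  G is a (sentence_length, L) array.
    return float(np.prod([G[i][y_i] for i, y_i in enumerate(y)]))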
We incorporate our instance weights in this model with the following intuitions. Recall that if an instance weight $v_i=0$, this indicates low confidence in the label on token $x_i$, and therefore the labeling should not update the model at training time. Conversely, if $v_i=1$, then this label is to be trusted entirely.
If $v_i=0$, we set the soft labeling weights over $x_i$ to be uniform, which is as good as no information. Since $v_i$ is defined as confidence in the O label, the soft labeling weight for O increases proportionally to $v_i$. Any remaining probability mass is distributed evenly among the other labels.
To be precise, for tokens in $N$, we calculate values for $\mathbf {G}_i$ as follows:
For example, consider phase 1 of Constrained Binary Learning, in which the labelset is collapsed to two labels ($L=2$). Assuming that the O label has index 0, if $v_i=0$ then $\mathbf {G}_i = [0.5, 0.5]$, and if $v_i=0.6$ then $\mathbf {G}_i = [0.6, 0.4]$.
For tokens in $P$ (which have some entity label with high confidence), we always set $\mathbf {G}_i$ with 1 in the given label index, and 0 elsewhere.
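The defining equation is not reproduced in this extraction; the sketch below uses one formula that is consistent with the description and with the worked example above (the O label receives at least the uniform share, and the remaining mass is spread evenly), but the exact form in the paper may differ.

import numpy as np

def soft_gold_label_for_N(v_i, num_labels, o_index=0):
    # Token in N: confidence-weighted soft labeling.  Reproduces the example:
    # v_i=0 -> [0.5, 0.5]; v_i=0.6, L=2 -> [0.6, 0.4].  (Assumed form.)
    g_o = max(v_i, 1.0 / num_labels)
    g = np.full(num_labels, (1.0 - g_o) / (num_labels - 1))
    g[o_index] = g_o
    return g

def soft_gold_label_for_P(label_index, num_labels):
    # Token in P: probability 1 on the annotated label, 0 elsewhere.
    g = np.zeros(num_labels)
    g[label_index] = 1.0
    return g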
We use pretrained GloVe BIBREF28 word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. The other languages are distributed with monolingual text BIBREF23, which we used to train our own skip-n-gram vectors.
Experiments ::: Baselines
We compare against several baselines, including two from prior work.
Experiments ::: Baselines ::: Raw annotations
The simplest baseline is to do nothing to the partially annotated data and train on it as is.
Experiments ::: Baselines ::: Instance Weights
Although CBL works with no initialization (that is, all tokens with weight 1), we found that a good weighting scheme can boost performance for certain models. We design weighting schemes that give instances in $N$ weights corresponding to an estimate of the label confidence. For example, non-name tokens such as respectfully should have weight 1, but possible names, such as Russell, should have a low weight, or 0. We propose two weighting schemes: frequency-based and window-based.
For the frequency-based weighting scheme, we observed that names have relatively low frequency (for example, Kennebunkport, Dushanbe) and common words are rarely names (for example the, and, so). We weigh each instance in $N$ according to its frequency.
where $freq(x_i)$ is the frequency of the $i^{th}$ token in $N$ divided by the count of the most frequent token. In our experiments, we computed frequencies over $P+N$, but these could be estimated on any sufficiently large corpus. We found that the neural model performed poorly when the weights followed a Zipfian distribution (e.g. most weights very small), so for those experiments, we took the log of the token count before normalizing.
For the window-based weighting scheme, noting that names rarely appear immediately adjacent to each other in English text, we set weights for tokens within a window of size 1 of a name (identified in $P$) to be $1.0$, and for tokens farther away to be 0.
where $d_i$ is the distance of the $i^{th}$ token to the nearest named entity in $P$.
Finally, we combine the two weighting schemes as:
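Since the weighting equations are not reproduced in this extraction, the sketch below implements the two schemes as described; the combination shown (an element-wise maximum) is only one natural reading of the combining equation and is an assumption.

import math
from collections import Counter

def frequency_weights(tokens, use_log=False):
    # Each token's corpus count divided by the count of the most frequent
    # token; the log variant was used for the neural model (+1 inside the
    # log keeps singleton counts positive, a small smoothing assumption).
    counts = Counter(tokens)
    scores = {t: math.log(1 + c) if use_log else c for t, c in counts.items()}
    top = max(scores.values())
    return [scores[t] / top for t in tokens]

def window_weights(num_tokens, entity_positions, window=1):
    # 1.0 for tokens within `window` of a name identified in P, 0.0 otherwise.
    near = set()
    for p in entity_positions:
        near.update(range(max(0, p - window), min(num_tokens, p + window + 1)))
    return [1.0 if i in near else 0.0 for i in range(num_tokens)]

def combined_weights(tokens, entity_positions):
    # Assumed combination: element-wise maximum of the two schemes.
    fw = frequency_weights(tokens)
    ww = window_weights(len(tokens), entity_positions)
    return [max(f, w) for f, w in zip(fw, ww)]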
Experiments ::: Baselines ::: Self-training with Marginal CRF
BIBREF17 propose a model based on marginal CRF BIBREF27 (described in Section SECREF26). They follow a self-training framework with cross-validation, using the trained model over all but one fold to update gold labeling distributions in the final fold. This process continues until convergence. They use a partial-CRF framework similar to ours, but taking predictions at face value, without constraints.
Experiments ::: Baselines ::: Neural Network with Noise Adaptation
Following BIBREF30, we used a neural network with a noise adaptation layer. This extra layer attempts to correct noisy examples given a probabilistic confusion matrix of label noise. Since this method needs a small amount of labeled data, we selected 500 random tokens to be the gold training set, in addition to the partial annotations.
As with our BiLSTM experiments, we use pretrained GloVe word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. We omit results from the remaining languages because the scores were substantially worse even than training on raw annotations.
Experiments ::: Experimental Setup and Results
We show results from our experiments in Table TABREF30. In all experiments, the training data is perturbed at 90% precision and 50% recall. These parameters are similar to the scores obtained by human annotators in a foreign language (see Section SECREF5). We evaluate each experiment with both non-neural and neural methods.
First, to get an idea of the difficulty of NER in each language, we report scores from models trained on gold data without perturbation (Gold). Then we report results from an Oracle Weighting scheme (Oracle Weighting) that takes partially annotated data and assigns weights with knowledge of the true labels. Specifically, mislabeled entities in set $N$ are given weight 0, and all other tokens are given weight 1.0. This scheme is free from labeling noise, but should still get lower scores than Gold because of the smaller number of entities. Since our method estimates these weights, we do not expect CBL to outperform the Oracle method. Next, we show results from all baselines. The bottom two sections are our results, first with no initialization (Raw), and CBL over that, then with Combined Weighting initialization, and CBL over that.
Experiments ::: Analysis
Regardless of initialization or model, CBL improves over the baselines. Our best model, CBL-Raw BiLSTM-CRF, improves over the Raw Annotations BiLSTM-CRF baseline by 11.2 points F1, and the Self-training prior work by 2.6 points F1, showing that it is an effective way to address the problem of partial annotation. Further, the best CBL version for each model is within 3 points of the corresponding Oracle ceiling, suggesting that this weighting framework is nearly saturated.
The Combined weighting scheme is surprisingly effective for the non-neural model, which suggests that the intuition about frequency as distinction between names and non-names holds true. It gives modest improvement in the neural model. The Self-training method is effective, but is outperformed by our best CBL method, a difference we discuss in more detail in Section SECREF43. The Noise Adaptation method outperforms the Raw annotations Cogcomp baseline in most cases, but does not reach the performance of the Self-training method, despite using some fully labeled data.
It is instructive to compare the neural and non-neural versions of each setup. The neural method is better overall, but is less able to learn from the knowledge-based initialization weights. In the non-neural method, the difference between Raw and Combined is nearly 20 points, but the difference in the neural model is less than 3 points. Combined versions of the non-neural method outperform the neural method on 3 languages: Dutch, Arabic, and Hindi. Further, in the neural method, CBL-Raw is always worse than CBL-Combined. This may be due to the way that weights are used in each model. In the non-neural model, a low enough weight completely cancels the token, whereas in the neural model it is still used in training. Since the neural model performs well in the Oracle setting, we know that it can learn from hard weights, but it may have trouble with the subtle differences encoded in frequencies. We leave it to future work to discover improved ways of incorporating instance weights in a BiLSTM-CRF.
In seeking to understand the details of the other results, we need to consider the precision/recall tradeoff. First, all scores in the Gold row had higher precision than recall. Then, training on raw partially annotated data biases a classifier strongly towards predicting few entities. All results from the Raw annotations row have precision more than double the recall (e.g. Dutch Precision, Recall, F1 were: 91.5, 32.4, 47.9). In this context, the problem this paper explores is how to improve the recall of these datasets without harming the precision.
Experiments ::: Difference from Prior Work
While our method has several superficial similarities with prior work, most notably BIBREF17, there are some crucial differences.
Our methods are similar in that they both use a model trained at each step to assign a soft gold-labeling to each token. Each algorithm iteratively trains models using weights from the previous steps.
One difference is that BIBREF17 use cross-validation to train, while we follow BIBREF18 and retrain with the entire training set at each round.
However, the main difference has to do with the focus of each algorithm. Recall the discussion in Section SECREF3 regarding the two possible approaches: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. Conceptually, the former is the approach taken by BIBREF17, while the latter is ours. Another way to look at this is as focusing on predicting correct tag labels (BIBREF17) versus focusing on predicting O tags with high confidence (ours).
Even though they use soft labeling (which they show to be consistently better than hard labeling), it is possible that the predicted tag distribution is incorrect. Our approach allows us to avoid much of the inevitable noise that comes from labelling with a weak model.
Bengali Case Study
So far our experiments have shown effectiveness on artificially perturbed labels, but one might argue that these systematic perturbations don't accurately simulate real-world noise. In this section, we show how our methods work in a real-world scenario, using Bengali data partially labeled by non-speakers.
Bengali Case Study ::: Non-speaker Annotations
In order to compare with prior work, we used the train/test split from ZPWVJKM16. We removed all gold labels from the train split, romanized it BIBREF31, and presented it to two non-Bengali speaking annotators using the TALEN interface BIBREF32. The instructions were to move quickly and annotate names only when there is high confidence (e.g. when you can also identify the English version of the name). They spent about 5 total hours annotating, without using Google Translate. This sort of non-speaker annotation is possible because the text contains many `easy' entities – foreign names – which are noticeably distinct from native Bengali words. For example, consider the following:
Romanized Bengali: ebisi'ra giliyyaana phinnddale aaja pyaalestaaina adhiinastha gaajaa theke aaja raate ekhabara jaaniyyechhena .
Translation: ABC's Gillian Fondley has reported today from Gaza under Palestine today.
The entities are Gillian Findlay, ABC, Palestine, and Gaza. While a fast-moving annotator may not catch most of these, `pyaalestaaina' could be considered an `easy' entity, because of its visual and aural similarity to `Palestine.' A clever annotator may also infer that if Palestine is mentioned, then Gaza may be present.
Annotators are moving fast and being intentionally non-thorough, so the recall will be low. Since they do not speak Bengali, there are likely to be some mistakes, so the precision may drop slightly also. This is exactly the noisy partial annotation scenario addressed in this paper. The statistics of this data can be seen in Table TABREF49, including annotation scores computed with respect to the gold training data for each annotator, as well as the combined score.
We show results in Table TABREF50, using the BiLSTM-CRF model. We compare against other low-resource approaches published on this dataset, including two based on Wikipedia BIBREF33, BIBREF12, another based on lexicon translation from a high-resource language BIBREF34. These prior methods operate under somewhat different paradigms than this work, but have the same goal: maximizing performance in the absence of gold training data.
Raw annotations is defined as before, and gives similar high-precision low-recall results. The Combined Weighting scheme improves over Raw annotations by 10 points, achieving a score comparable to the prior state of the art. Beyond that, CBL-Raw outperforms the prior best by nearly 6 points F1, although CBL-Combined again underwhelms.
To the best of our knowledge, this is the first result showing a method for non-speaker annotations to produce high-quality NER scores. The simplicity of this method and the small time investment for these results gives us confidence that this method can be effective for many low-resource languages.
Conclusions
We explore an understudied data scenario, and introduce a new constrained iterative algorithm to solve it. This algorithm performs well in experimental trials in several languages, on both artificially perturbed data, and in a truly low-resource situation.
Acknowledgements
This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. | 52.0% |
869aaf397c9b4da7ab52d6dd0961887ae08da9ae | 869aaf397c9b4da7ab52d6dd0961887ae08da9ae_0 | Q: Which languages are evaluated?
Text: Introduction
Most modern approaches to NLP tasks rely on supervised learning algorithms to learn and generalize from labeled training data. While this has proven successful in high-resource scenarios, this is not realistic in many cases, such as low-resource languages, as the required amount of training data just doesn't exist. However, partial annotations are often easy to gather.
We study the problem of using partial annotations to train a Named Entity Recognition (NER) system. In this setting, all (or most) identified entities are correct, but not all entities have been identified, and crucially, there are no reliable examples of the negative class. The sentence shown in Figure FIGREF2 shows examples of both a gold and a partially annotated sentence. Such partially annotated data is relatively easy to obtain: for example, a human annotator who does not speak the target language may recognize common entities, but not uncommon ones. With no reliable examples of the negative class, the problem becomes one of estimating which unlabeled instances are true negatives and which are false negatives.
To address the above-mentioned challenge, we present Constrained Binary Learning (CBL) – a novel self-training based algorithm that focuses on iteratively identifying true negatives for the NER task while improving its learning. Towards this end, CBL uses constraints that incorporate background knowledge required for the entity recognition task.
We evaluate the proposed methods in 8 languages, showing a significant ability to learn from partial data. We additionally experiment with initializing CBL with domain-specific instance-weighting schemes, showing mixed results. In the process, we use weighted variants of popular NER models, showing strong performance in both non-neural and neural settings. Finally, we show experiments in a real-world setting, by employing non-speakers to manually annotate romanized Bengali text. We show that a small amount of non-speaker annotation combined with our method can outperform previous methods.
Related Work
The supervision paradigm in this paper, partial supervision, falls broadly under the category of semi-supervision BIBREF0, and is closely related to weak supervision BIBREF1 and incidental supervision BIBREF2, in the sense that data is constructed through some noisy process. However, all of the most related work shares a key difference from ours: reliance on a small amount of fully annotated data in addition to the noisy data.
FernandesBr11 introduces a transductive version of structured perceptron for partially annotated sequences. However, their definition of partial annotation is labels removed at random, so examples from all classes are still available if not contiguous.
Fidelity Weighted Learning BIBREF3 uses a teacher/student model, in which the teacher has access to a small amount of high-quality data and uses it to guide the student, which has access to a large amount of weak data.
HedderichKl18, following GoldbergerBe17, add a noise adaptation layer on top of an LSTM, which learns how to correct noisy labels, given a small amount of training data. We compare against this model in our experiments.
In the world of weak supervision, Snorkel BIBREF4, BIBREF5, is a system that combines automatic labeling functions with data integration and noise reduction methods to rapidly build large datasets. They rely on high recall and consequent redundancy of the labeling functions. We argue that in certain realistic cases, high-recall candidate identification is unavailable.
We draw inspiration from the Positive-Unlabeled (PU) learning framework BIBREF6, BIBREF7, BIBREF8, BIBREF9. Originally introduced for document classification, PU learning addresses problems where examples of a single class (for example, sports) are easy to obtain, but a full labeling of all other classes is prohibitively expensive.
Named entity classification as an instance of PU learning was introduced in Grave14, which uses constrained optimization with constraints similar to ours. However, they only address the problem of named entity classification, in which mentions are given, and the goal is to assign a type to a named-entity (like `location', `person', etc.) as opposed to our goal of identifying and typing named entities.
Although the task is slightly different, there has been work on building `silver standard' data from Wikipedia BIBREF10, BIBREF11, BIBREF12, using hyperlink annotations as the seed set and propagating throughout the document.
Partial annotation in various forms has also been studied in the contexts of POS-tagging BIBREF13, word sense disambiguation BIBREF14, temporal relation extraction BIBREF15, dependency parsing BIBREF16, and named entity recognition BIBREF17.
In particular, BIBREF17 study a similar problem with a few key differences: since they remove entity surfaces randomly, the dataset is too easy; and they do not use constraints on their output. We compare against their results in our experiments.
Our proposed method is most closely aligned with the Constraint Driven Learning (CoDL) framework BIBREF18, in which an iterative algorithm reminiscent of self-training is guided by constraints that are applied at each iteration.
Constrained Binary Learning
Our method assigns instance weights to all negative elements (tokens tagged as O), so that false negatives have low weights, and all other instances have high weights. We calculate weights according to the confidence predictions of a classifier trained iteratively over the partially annotated data. We refer to our method as Constrained Binary Learning (CBL).
We will first describe the motivation for this approach before moving on to the mechanics. We start with partially annotated data (which we call set $T$) in which some, but not all, positives are annotated (set $P$), and no negative is labeled. By default, we assume that any instance not labeled as positive is labeled as negative as opposed to unlabeled. This data (set $N$) is noisy in the sense that many true positives are labeled as negative (these are false negatives). Clearly, training on $T$ as-is will result in a noisy classifier.
Two possible approaches are: 1) find the false negatives and label them correctly, or 2) find the false negatives and remove them. The former method affords more training data, but runs the risk of adding noise, which could be worse than the original partial annotations. The latter is more forgiving because of an asymmetry in the penalties: it is important to remove all false negatives in $N$, but inadvertently removing true negatives from $N$ is typically not a problem, especially in NER, where negative examples dominate. Further, a binary model (only two labels) is sufficient in this case, as we need only detect entities, not type them.
We choose the latter method, but instead of removing false negatives, we adopt an instance-weighting approach, in which each instance is assigned a weight $v_i \ge 0$ according to confidence in the labeling of that instance. A weight of 0 means that the loss this instance incurs during training will not update the model.
With this in mind, CBL takes two phases: first, it learns a binary classifier $\lambda $ using a constrained iterative process modeled after the CODL framework BIBREF18, and depicted in Figure FIGREF5. The core of the algorithm is the train-predict-infer loop. The training process (line 4) is weighted, using weights $V$. At the start, these can be all 1 (Raw), or can be initialized with prior knowledge. The learned model is then used to predict on all of $T$ (line 5). In the inference step (line 6), we take the predictions from the prior round and the constraints $C$ and produce a new labeling on $T$, and a new set of weights $V$. The details of this inference step are presented later in this section. Although our ultimate strategy is simply to assign weights (not change labels), in this inner loop, we update the labels on $N$ according to classifier predictions.
In the second phase of CBL, we use the $\lambda $ trained in the previous phase to assign weights to instances as follows:
Where $P_{\lambda }(y_i=\text{O} \mid x_i)$ is understood as the classifier's confidence that instance $x_i$ takes the negative label. In practice it is sufficient to use any confidence score from the classifier, not necessarily a probability. If the classifier has accurately learned to detect entities, then for all the false negatives in $N$, $P_{\lambda }(y_i=\text{O}|x_i)$ is small, which is the goal.
Ultimately, we send the original multiclass partially annotated dataset along with final weights $V$ to a standard weighted NER classifier to learn a model. No weights are needed at test time.
Constrained Binary Learning ::: NER with CBL
So far, we have given a high-level view of the algorithm. In this section, we will give more low-level details, especially as they relate to the specific problem of NER. One contribution of this work is the inference step (line 6), which we address using a constrained Integer Linear Program (ILP) and describe in this section. However, the constraints are based on a value we call the entity ratio. First, we describe the entity ratio, then we describe the constraints and stopping condition of the algorithm.
Constrained Binary Learning ::: NER with CBL ::: Entity ratio and Balancing
We have observed that NER datasets tend to hold a relatively stable ratio of entity tokens to total tokens. We refer to this ratio as $b$, and define it with respect to some labeled dataset as:
where $N$ is the set of negative examples. Previous work has shown that in fully-annotated datasets the entity ratio tends to be about $0.09 \pm 0.05$, depending on the dataset and genre BIBREF19. Intuitively, knowledge of the gold entity ratio can help us estimate when we have found all the false negatives.
In our main experiments, we assume that the entity ratio with respect to the gold labeling is known for each training dataset. A similar assumption was made in ElkanNo08 when determining the $c$ value, and in Grave14 in the constraint determining the percentage of other examples. However, we also show in Section that knowledge of this ratio is not strictly necessary, and a flat value across all datasets produces similar performance.
With a weighted training set, it is also useful to define the weighted entity ratio.
When training an NER model on weighted data, one can change the weighted entity ratio to achieve different effects. To make balanced predictions on test, the entity ratio in the training data should roughly match that of the test data BIBREF20. To bias a model towards predicting positives or predicting negatives, the weighted entity ratio can be set higher or lower respectively. This effect is pronounced when using linear methods for NER, but not as clear in neural methods.
To change the entity ratio, we scale the weights in $N$ by a scaling constant $\gamma $. Targeting a particular $b^*$, we may write:
We can solve for $\gamma $:
To obtain weights, $v^*_i$, that attain the desired entity ratio, $b^*$, we scale all weights in $N$ by $\gamma $.
In the train-predict-infer loop, we balance the weights to a value near the gold ratio before training.
Constrained Binary Learning ::: NER with CBL ::: Constraints and Stopping Condition
We encode our constraints with an Integer Linear Program (ILP), shown in Figure FIGREF17. Intuitively, the job of the inference step is to take predictions ($\hat{T}$) and use knowledge of the task to `fix' them.
In the objective function (Eqn. DISPLAY_FORM18), token $i$ is represented by two indicator variables $y_{0i}$ and $y_{1i}$, representing negative and positive labels, respectively. Associated prediction scores $C_0$ and $C_1$ are from the classifier $\lambda $ in the last round of predictions. The first constraint (Eqn. ) encodes the fact that an instance cannot be both an entity and a non-entity.
The second constraint (Eqn. ) enforces the ratio of positive to total tokens in the corpus to match a required entity ratio. $|T|$ is the total number of tokens in the corpus. $b$ is the required entity ratio, which increases at each iteration. $\delta $ allows some flexibility, but is small.
Constraint encodes that instances in $P$ should be labeled positive since they were manually labeled and are by definition trustworthy. We set $\xi \ge 0.99$.
This framework is flexible in that more complex language- or task-specific constraints could be added. For example, in English and many other languages with Latin script, it may help to add a capitalization constraint. In languages with rich morphology, certain suffixes may indicate or contraindicate a named entity. For simplicity, and because of the number of languages in our experiments, we use only a few constraints.
After the ILP has selected predictions, we assign weights to each instance in preparation for training the next round. The decision process for an instance is:
This is similar to Equation (DISPLAY_FORM6), except that the set of tokens that the ILP labeled as positive is larger than $P$. With new labels and weights, we start the next iteration.
The stopping condition for the algorithm is related to the entity ratio. One important constraint (Eqn. ) governs how many positives are labeled at each round. This number starts at $|P|$ and is increased by a small value at each iteration, thereby improving recall. Positive instances are chosen in two ways. First, all instances in $P$ are constrained to be labeled positive (Eqn. ). Second, the objective function ensures that high-confidence positives will be chosen. The stopping condition is met when the number of required positive instances (computed using gold unweighted entity ratio) equals the number of predicted positive instances.
Experiments
We measure the performance of our method on 8 different languages using artificially perturbed labels to simulate the partial annotation setting.
Experiments ::: Data
We experiment on 8 languages. Four languages – English, German, Spanish, Dutch – come from the CoNLL 2002/2003 shared tasks BIBREF21, BIBREF22. These are taken from newswire text, and have labelset of Person, Organization, Location, Miscellaneous.
The remaining four languages come from the LORELEI project BIBREF23. These languages are: Amharic (amh: LDC2016E87), Arabic (ara: LDC2016E89), Hindi (hin: LDC2017E62), and Somali (som: LDC2016E91). These come from a variety of sources including discussion forums, newswire, and social media. The labelset is Person, Organization, Location, Geo-political entity. We define train/development/test splits, taking care to keep a similar distribution of genres in each split. Data statistics for all languages are shown in Table TABREF25.
Experiments ::: Artificial Perturbation
We create partial annotations by perturbing gold annotated data in two ways: lowering recall (to simulate missing entities), and lowering precision (to simulate noisy annotations).
To lower recall, we replace gold named entity tags with $O$ tags (for non-name). We do this by grouping named entity surface forms, and replacing tags on all occurrences of a randomly selected surface form until the desired amount remains. For example, if the token `Bangor' is chosen to be untagged, then every occurrence of `Bangor' will be untagged. We chose this slightly complicated method because the simplest idea (remove mentions randomly) leaves an artificially large diversity of surface forms, which makes the problem of discovering noisy entities easier.
To lower precision, we tag a random span (of a random start position, and a random length between 1 and 3) with a random named entity tag. We continue this process until we reach the desired precision. When both precision and recall are to be perturbed, the recall adjustment is made first, and then the number of random spans to be added is calculated by the entities that are left.
Experiments ::: NER Models
In principle, CBL can use any NER method that can be trained with instance weights. We experiment with both non-neural and neural models.
Experiments ::: NER Models ::: Non-neural Model
For our non-neural system, we use a version of Cogcomp NER BIBREF24, BIBREF25 modified to use Weighted Averaged Perceptron. This operates on a weighted training set $D_w = \lbrace (x_i, y_i, v_i) \rbrace _{i=1}^N $, where $N$ is the number of training examples, and $v_i \ge 0$ is the weight on the $i$th training example. In this non-neural system, a training example is a word with context encoded in the features. We change only the update rule, where the learning rate $\alpha $ is multiplied by the weight:
We use a standard set of features, as documented in BIBREF24. In order to keep the language-specific resources to a minimum, we did not use any gazetteers for any language. One of the most important features is Brown clusters, trained for 100, 500, and 1000 clusters for the CoNLL languages, and 2000 clusters for the remaining languages. We trained these clusters on Wikipedia text for the four CoNLL languages, and on the same monolingual text used to train the word vectors (described in Section SECREF26).
Experiments ::: NER Models ::: Neural Model
A common neural model for NER is the BiLSTM-CRF model BIBREF26. However, because the Conditional Random Field (CRF) layer calculates loss at the sentence level, we need a different method to incorporate token weights. We use a variant of the CRF that allows partial annotations by marginalizing over all possible sequences BIBREF27.
When using a standard BiLSTM-CRF model, the loss of a dataset ($D$) composed of sentences ($s$) is calculated as:
Where $P_\theta (\mathbf {y}^{(s)} | \textbf {x}^{(s)})$ is calculated by the CRF over outputs from the BiLSTM. In the marginal CRF framework, it is assumed that $\mathbf {y}^{(s)}$ is necessarily partial, denoted as $\mathbf {y}^{(s)}_p$. To incorporate partial annotations, the loss is calculated by marginalizing over all possible sequences consistent with the partial annotations, denoted as $C(\mathbf {y}_p^s)$.
However, this formulation assumes that all possible sequences are equally likely. To address this, BIBREF17 introduced a way to weigh sequences.
It's easy to see that this formulation is a generalization of the standard CRF if $q(.)=1$ for the gold sequence $\mathbf {y}$, and 0 for all others.
The product inside the summation depends on tag transition probabilities and tag emission probabilities, as well as token-level “weights" over the tagset. These weights can be seen as defining a soft gold labeling for each token, corresponding to confidence in each label.
For clarity, define the soft gold labeling over each token $x_i$ as $\mathbf {G}_i \in [0,1]^{L}$, where $L$ is the size of the labelset. Now, we may define $q(.)$ as:
Where $G_i^{y_i}$ is understood as the weight in $\mathbf {G}_i$ that corresponds to the label $y_i$.
We incorporate our instance weights in this model with the following intuitions. Recall that if an instance weight $v_i=0$, this indicates low confidence in the label on token $x_i$, and therefore the labeling should not update the model at training time. Conversely, if $v_i=1$, then this label is to be trusted entirely.
If $v_i=0$, we set the soft labeling weights over $x_i$ to be uniform, which is as good as no information. Since $v_i$ is defined as confidence in the O label, the soft labeling weight for O increases proportionally to $v_i$. Any remaining probability mass is distributed evenly among the other labels.
To be precise, for tokens in $N$, we calculate values for $\mathbf {G}_i$ as follows:
For example, consider phase 1 of Constrained Binary Learning, in which the labelset is collapsed to two labels ($L=2$). Assuming that the O label has index 0, if $v_i=0$ then $\mathbf {G}_i = [0.5, 0.5]$, and if $v_i=0.6$ then $\mathbf {G}_i = [0.6, 0.4]$.
For tokens in $P$ (which have some entity label with high confidence), we always set $\mathbf {G}_i$ with 1 in the given label index, and 0 elsewhere.
We use pretrained GloVe BIBREF28 word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. The other languages are distributed with monolingual text BIBREF23, which we used to train our own skip-n-gram vectors.
Experiments ::: Baselines
We compare against several baselines, including two from prior work.
Experiments ::: Baselines ::: Raw annotations
The simplest baseline is to do nothing to the partially annotated data and train on it as is.
Experiments ::: Baselines ::: Instance Weights
Although CBL works with no initialization (that is, all tokens with weight 1), we found that a good weighting scheme can boost performance for certain models. We design weighting schemes that give instances in $N$ weights corresponding to an estimate of the label confidence. For example, non-name tokens such as respectfully should have weight 1, but possible names, such as Russell, should have a low weight, or 0. We propose two weighting schemes: frequency-based and window-based.
For the frequency-based weighting scheme, we observed that names have relatively low frequency (for example, Kennebunkport, Dushanbe) and common words are rarely names (for example the, and, so). We weigh each instance in $N$ according to its frequency.
Here, $freq(x_i)$ is the frequency of the $i^{th}$ token in $N$ divided by the count of the most frequent token. In our experiments, we computed frequencies over $P+N$, but these could be estimated on any sufficiently large corpus. We found that the neural model performed poorly when the weights followed a Zipfian distribution (e.g. most weights very small), so for those experiments, we took the log of the token count before normalizing.
For the window-based weighting scheme, noting that names rarely appear immediately adjacent to each other in English text, we set weights for tokens within a window of size 1 of a name (identified in $P$) to be $1.0$, and for tokens farther away to be 0.
Here, $d_i$ is the distance of the $i^{th}$ token to the nearest named entity in $P$.
Finally, we combine the two weighting schemes into a single weight for each token in $N$.
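The display equations for the individual schemes and their combination are not reproduced above; the following sketch follows the prose (frequency normalised by the most frequent token; weight 1.0 within a one-token window of a known name and 0 elsewhere) and assumes the two schemes are combined with an element-wise maximum, which is only one plausible choice:

```python
from collections import Counter

def frequency_weights(tokens, counts=None):
    """v_freq(x) = count(x) / count(most frequent token)."""
    counts = counts or Counter(tokens)
    max_count = max(counts.values())
    return [counts[t] / max_count for t in tokens]

def window_weights(tokens, name_positions, window=1):
    """1.0 for tokens within `window` of a known name in P, else 0.0."""
    weights = []
    for i, _ in enumerate(tokens):
        d = min((abs(i - p) for p in name_positions), default=float("inf"))
        weights.append(1.0 if d <= window else 0.0)
    return weights

def combined_weights(tokens, name_positions):
    """Assumption: combine the two schemes with an element-wise max."""
    f = frequency_weights(tokens)
    w = window_weights(tokens, name_positions)
    return [max(a, b) for a, b in zip(f, w)]
```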
Experiments ::: Baselines ::: Self-training with Marginal CRF
BIBREF17 propose a model based on marginal CRF BIBREF27 (described in Section SECREF26). They follow a self-training framework with cross-validation, using the trained model over all but one fold to update gold labeling distributions in the final fold. This process continues until convergence. They use a partial-CRF framework similar to ours, but taking predictions at face value, without constraints.
Experiments ::: Baselines ::: Neural Network with Noise Adaptation
Following BIBREF30, we used a neural network with a noise adaptation layer. This extra layer attempts to correct noisy examples given a probabilistic confusion matrix of label noise. Since this method needs a small amount of labeled data, we selected 500 random tokens to be the gold training set, in addition to the partial annotations.
As with our BiLSTM experiments, we use pretrained GloVe word vectors for English, and the same pretrained vectors used in BIBREF29 for Dutch, German, and Spanish. We omit results from the remaining languages because the scores were substantially worse even than training on raw annotations.
Experiments ::: Experimental Setup and Results
We show results from our experiments in Table TABREF30. In all experiments, the training data is perturbed at 90% precision and 50% recall. These parameters are similar to the scores obtained by human annotators in a foreign language (see Section SECREF5). We evaluate each experiment with both non-neural and neural methods.
First, to get an idea of the difficulty of NER in each language, we report scores from models trained on gold data without perturbation (Gold). Then we report results from an Oracle Weighting scheme (Oracle Weighting) that takes partially annotated data and assigns weights with knowledge of the true labels. Specifically, mislabeled entities in set $N$ are given weight 0, and all other tokens are given weight 1.0. This scheme is free from labeling noise, but should still get lower scores than Gold because of the smaller number of entities. Since our method estimates these weights, we do not expect CBL to outperform the Oracle method. Next, we show results from all baselines. The bottom two sections are our results, first with no initialization (Raw), and CBL over that, then with Combined Weighting initialization, and CBL over that.
Experiments ::: Analysis
Regardless of initialization or model, CBL improves over the baselines. Our best model, CBL-Raw BiLSTM-CRF, improves over the Raw Annotations BiLSTM-CRF baseline by 11.2 points F1, and the Self-training prior work by 2.6 points F1, showing that it is an effective way to address the problem of partial annotation. Further, the best CBL version for each model is within 3 points of the corresponding Oracle ceiling, suggesting that this weighting framework is nearly saturated.
The Combined weighting scheme is surprisingly effective for the non-neural model, which suggests that the intuition about frequency as distinction between names and non-names holds true. It gives modest improvement in the neural model. The Self-training method is effective, but is outperformed by our best CBL method, a difference we discuss in more detail in Section SECREF43. The Noise Adaptation method outperforms the Raw annotations Cogcomp baseline in most cases, but does not reach the performance of the Self-training method, despite using some fully labeled data.
It is instructive to compare the neural and non-neural versions of each setup. The neural method is better overall, but is less able to learn from the knowledge-based initialization weights. In the non-neural method, the difference between Raw and Combined is nearly 20 points, but the difference in the neural model is less than 3 points. Combined versions of the non-neural method outperform the neural method on 3 languages: Dutch, Arabic, and Hindi. Further, in the non-neural method, CBL-Raw is always worse than CBL-Combined. This may be due to the way that weights are used in each model. In the non-neural model, a low enough weight completely cancels the token, whereas in the neural model it is still used in training. Since the neural model performs well in the Oracle setting, we know that it can learn from hard weights, but it may have trouble with the subtle differences encoded in frequencies. We leave it to future work to discover improved ways of incorporating instance weights in a BiLSTM-CRF.
In seeking to understand the details of the other results, we need to consider the precision/recall tradeoff. First, all scores in the Gold row had higher precision than recall. Then, training on raw partially annotated data biases a classifier strongly towards predicting few entities. All results from the Raw annotations row have precision more than double the recall (e.g. Dutch Precision, Recall, F1 were: 91.5, 32.4, 47.9). In this context, the problem this paper explores is how to improve the recall of these datasets without harming the precision.
Experiments ::: Difference from Prior Work
While our method has several superficial similarities with prior work, most notably BIBREF17, there are some crucial differences.
Our methods are similar in that they both use a model trained at each step to assign a soft gold-labeling to each token. Each algorithm iteratively trains models using weights from the previous steps.
One difference is that BIBREF17 use cross-validation to train, while we follow BIBREF18 and retrain with the entire training set at each round.
However, the main difference has to do with the focus of each algorithm. Recall the discussion in Section SECREF3 regarding the two possible approaches of 1) finding the false negatives and labeling them correctly, and 2) finding the false negatives and removing them. Conceptually, the former was the approach taken by BIBREF17, while the latter was ours. Another way to look at this is as focusing on predicting correct tag labels (BIBREF17) or focusing on predicting O tags with high confidence (ours).
Even though they use soft labeling (which they show to be consistently better than hard labeling), it is possible that the predicted tag distribution is incorrect. Our approach allows us to avoid much of the inevitable noise that comes from labeling with a weak model.
Bengali Case Study
So far our experiments have shown effectiveness on artificially perturbed labels, but one might argue that these systematic perturbations don't accurately simulate real-world noise. In this section, we show how our methods work in a real-world scenario, using Bengali data partially labeled by non-speakers.
Bengali Case Study ::: Non-speaker Annotations
In order to compare with prior work, we used the train/test split from ZPWVJKM16. We removed all gold labels from the train split, romanized it BIBREF31, and presented it to two non-Bengali speaking annotators using the TALEN interface BIBREF32. The instructions were to move quickly and annotate names only when there is high confidence (e.g. when you can also identify the English version of the name). They spent about 5 total hours annotating, without using Google Translate. This sort of non-speaker annotation is possible because the text contains many `easy' entities – foreign names – which are noticeably distinct from native Bengali words. For example, consider the following:
Romanized Bengali: ebisi'ra giliyyaana phinnddale aaja pyaalestaaina adhiinastha gaajaa theke aaja raate ekhabara jaaniyyechhena .
Translation: ABC's Gillian Findlay has reported today from Gaza under Palestine today.
The entities are Gillian Findlay, ABC, Palestine, and Gaza. While a fast-moving annotator may not catch most of these, `pyaalestaaina' could be considered an `easy' entity, because of its visual and aural similarity to `Palestine.' A clever annotator may also infer that if Palestine is mentioned, then Gaza may be present.
Annotators are moving fast and being intentionally non-thorough, so the recall will be low. Since they do not speak Bengali, there are likely to be some mistakes, so the precision may drop slightly also. This is exactly the noisy partial annotation scenario addressed in this paper. The statistics of this data can be seen in Table TABREF49, including annotation scores computed with respect to the gold training data for each annotator, as well as the combined score.
We show results in Table TABREF50, using the BiLSTM-CRF model. We compare against other low-resource approaches published on this dataset, including two based on Wikipedia BIBREF33, BIBREF12, another based on lexicon translation from a high-resource language BIBREF34. These prior methods operate under somewhat different paradigms than this work, but have the same goal: maximizing performance in the absence of gold training data.
Raw annotations is defined as before, and gives similar high-precision low-recall results. The Combined Weighting scheme improves over Raw annotations by 10 points, achieving a score comparable to the prior state of the art. Beyond that, CBL-Raw outperforms the prior best by nearly 6 points F1, although CBL-Combined again underwhelms.
To the best of our knowledge, this is the first result showing a method for non-speaker annotations to produce high-quality NER scores. The simplicity of this method and the small time investment for these results gives us confidence that this method can be effective for many low-resource languages.
Conclusions
We explore an understudied data scenario, and introduce a new constrained iterative algorithm to solve it. This algorithm performs well in experimental trials in several languages, on both artificially perturbed data, and in a truly low-resource situation.
Acknowledgements
This work was supported by Contracts HR0011-15-C-0113 and HR0011-18-2-0052 with the US Defense Advanced Research Projects Agency (DARPA). Approved for Public Release, Distribution Unlimited. The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. | Bengali, English, German, Spanish, Dutch, Amharic, Arabic, Hindi, Somali |
871c34219eb623bde9ac3937aa0f28fc3ad69445 | 871c34219eb623bde9ac3937aa0f28fc3ad69445_0 | Q: Which model have the smallest Character Error Rate and which have the smallest Word Error Rate?
Text: Introduction
Automatic Speech Recognition (ASR) has traditionally used Hidden Markov Models (HMM), describing temporal variability, combined with Gaussian Mixture Models (GMM), computing emission probabilities from HMM states, to model and map acoustic features to phones. In recent years, the introduction of deep neural networks replacing GMM for acoustic modeling showed huge improvements compared to previous state-of-the-art systems BIBREF0, BIBREF1. However, building and training such systems can be complex and a lot of preprocessing steps are involved. Traditional ASR systems are also factorized in several modules, the acoustic model representing only one of them along with lexicon and language models.
Recently, more direct approaches – called end-to-end methods – in which neural architectures are trained to directly model sequences of features as characters have been proposed BIBREF2, BIBREF3, BIBREF4. Predicting context independent targets such as characters using a single neural network architecture, drained a lot of interest from the research community as well as non-experts developers. This is caused by the simplicity of the pipeline and the possibility to create a complete ASR system without the need for expert knowledge. Moreover having an orthographic-based output allows to freely construct words, making it interesting against the Out-Of-Vocabulary problem encountered in traditional ASR systems.
End-to-end systems are nowadays extensively used and studied for multiple tasks and languages such as English, Mandarin or Japanese. However, for a language such as French, ASR performance and results with the existing methods have been scarcely studied, although the large number of silents letters, homophones or argot make comparing the assumptions made by each method very attractive.
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15.
End-to-end systems for Speech Recognition ::: Connectionist Temporal Classification
The CTC BIBREF5 can be seen as a direct translation of conventional HMM-DNN ASR systems into lexicon-free systems. Thus, the CTC follows the general ASR formulation, training the model to maximize $P(Y|X)$ the probability distribution over all possible label sequences: Y = arg Y A* p(Y|X) Here, $X$ denotes the observations, $Y$ is a sequence of acoustic units of length $L$ such that $Y = \lbrace y_{l} \in \, \mathcal {A} | l = 1, ..., L\rbrace $, where $\mathcal {A}$ is an alphabet containing all distinct units. As in traditional HMM-DNN systems, the CTC model makes conditional independence assumptions between output predictions at different time steps given aligned inputs and it uses the probabilistic chain rule to factorize the posterior distribution $p(Y|X)$ into three distributions (i.e. framewise posterior distribution, transition probability and prior distribution of units). However, unlike HMM-based models, the framewise posterior distribution is defined here as a framewise acoustic unit sequence $B$ with an additional blank label ${<}blank{>}$ such as $B = \lbrace b_{t} \in \, \mathcal {A}\, \cup \, {<}blank{>} | t = 1, ..., T\rbrace $. p(Y|X) = b=1Bt=1T p(bt | bt-1, Y) p(bt|X)pctc(Y|X) p(Y)
Here, ${<}blank{>}$ introduces two contraction rules for the output labels, allowing to repeat or collapse successive acoustic units.
End-to-end systems for Speech Recognition ::: Attention-based model
As opposed to CTC, the attention-based approach BIBREF7, BIBREF8 does not assume conditional independence between predictions at different time steps and does not marginalize over all alignments. Thus the posterior distribution $p(Y|X)$ is directly computed by picking a soft alignment between each output step and every input step as follows: patt(Y|X) = l=1U p(yl | y1, ..., yl-1, X) Here $p(y_{l}|y_{1},...,y_{l-1}, X)$, – our attention-based objective function –, is obtained according to a probability distribution, typically a softmax, applied to the linear projection of the output of a recurrent neural network (or long-short term memory network), called decoder, such as: p(yl|y1,...,yl-1, X) = softmax(lin(RNN())) The decoder output is conditioned by the previous output $y_{l-1}$, a hidden vector $d_{l-1}$ and a context vector $c_{l}$. Here $d_{l-1}$ denotes the high level representation (i.e. hidden states) of the decoder at step $l-1$, encoding the target input, and $c_{l}$ designate the context – or symbol-wise vector in our case – for decoding step $l$, which is computed as the sum of the complete high representation $h$ of another recurrent neural network, encoding the source input $X$, weighted by $\alpha $ the attention weight: cl = s=1S l, s hs , l, s = (et, s)s'=1S (el, s') where $e_{t}$, also referred to as energy, measures how well the inputs around position $s$ and the output at position $l$ match, given the decoder states at decoding step $l-1$ and $h$ the encoder states for input $X$. In the following, we report the standard content-based mechanism and its location-aware variant which takes into account the alignment produced at the previous step using convolutional features: el, s = {ll content-based:
wT (W dl - 1 + Vhs + b)
location-based:
fu = F - 1
wT (W dl - 1 + Vhs + Ufl, s + b) . where $w$ and $b$ are vectors, $W$ the matrix for the decoder, $V$ the matrix for the high representation $h$ and $U$ the matrix for the convolutional filters, that takes the previous alignment for location-based attention mechanism into account.
End-to-end systems for Speech Recognition ::: RNN transducer
The RNN transducer architecture was first introduced by Graves and al. BIBREF9 to address the main limitation of the proposed CTC network: it cannot model interdependencies as it assumes conditional independence between predictions at different time steps.
To tackle this issue, the authors introduced a CTC-like network augmented with a separate RNN network predicting each label given the previous ones, analogous to a language model. With the addition of another network taking into account both encoder and decoder outputs, the system can jointly model interdependencies between both inputs and outputs and within the output label sequence.
Although the CTC and RNN-transducer are similar, it should be noted that unlike CTC which represent a loss function, RNN-transducer defines a model structure composed of the following subnetworks :
The encoder or transcription network: from an input value $x_{t}$ at timestep $t$ this network yields an output vector $h_{t}$ of dimension $|\mathcal {A}+1|$, where $+1$ denotes the ${<}blank{>}$ label which acts similarly as in CTC model.
The prediction network: given as input the previous label prediction $y_{u-1} \in \mathcal {A}$, this network compute an output vector $d_{u}$ dependent of the entire label sequence $y_{0}, ..., y_{u-1}$.
The joint network: using both encoder outputs $h_{t}^{enc}$ and prediction outputs $d_{u}^{dec}$, it computes $z_{t,u}$ for each input $t$ in the encoder sequence and label $u$ in prediction network such as: ht, ujoint = tanh(htenc + hudec)
zt,u = lin(ht,ujoint)
The output from the joint network is then passed to a softmax layer which defines a probability distribution over the set of possible target labels, including the blank symbol.
It should be noted that we made a small modification compared to the last proposed version BIBREF0: instead of feeding the hidden activations of both networks into a separate linear layer, whose outputs are then normalised, we include another linear layer and feed each hidden activations to its corresponding linear layer which yields a vector of dimension $J$, the defined joint-space.
Similarly to the CTC, the marginalized alignments are local and monotonic and the label likelihood can be computed using dynamic programming. However, unlike CTC, RNN transducer allows prediction of multiple characters at one time step, alongside their vertical probability transitions.
End-to-end systems for Speech Recognition ::: Other notable approaches
Joint CTC-attention The key idea behind the joint CTC-Attention BIBREF11 learning approach is simple. By training simultaneously the encoder using the attention mechanism with a standard CTC objective function as an auxiliary task, monotonic alignments between speech and label sequences can be enforced to reduce the irregular alignments caused by large jumps or loops on the same frame in the attention-based model. The objective function below formulates the multi-task learning of the network, where $0\le \lambda \le 1$ is a tunable parameter weighting the contribution of each loss function: LMTL = Lctc + (1 - ) Latt
= log pctc(Y|x) + (1 - ) log patt(Y|x) The approach proposed in BIBREF12 introduced a joint-decoding method to take into account the CTC predictions in the beam-search based decoding process of the attention-based model. Considering the difficulty to combine their respective scores, the attention-based decoder performs the beam search character-synchronously whereas the CTC performs it frame-synchronously, two methods were proposed.
The first one is a two-pass decoding process where the complete hypotheses from the attention model are computed and then rescored according to the following equation, where $p_{\textrm {ctc}}(Y|x)$ is computed using the standard CTC forward-backward algorithm: Y = arg C A* { log pctc(Y|x) + (1 - ) log patt(Y|x)} The second method is a one-pass decoding method where the probability of each partial hypothesis in the beam search process is computed directly using both CTC and attention model such as, given $h$ the partial hypothesis and $\alpha $ the score defined as the log probability of the hypothesized sequence:
End-to-end lattice-free MMI The end-to-end Lattice-Free MMI BIBREF15 is the end-to-end version of the method introduced by Povey et al. BIBREF14. In this version, a flat-start manner is adopted in order to remove the need of training an initial HMM-GMM for alignments and the tree-building pipeline. Although the approach seems more like a flat-start adaptation of the state-of-art method than end-to-end in terms of pipeline and it does not benefit from the open-vocabulary property to construct unseen words compared to previously presented methods, we use it in our experiments as it showed small degradation over the original lattice-free MMI with different acoustic units. We can therefore contrast the orthographic differences in productions between open systems and more constrained ones where the relationship between acoustic units and a word-level representation is restricted.
RNN-transducer with attention The RNN transducer architecture augmented with attention mechanisms was first mentioned, to the best of our knowledge, in BIBREF13. Here, the prediction network described in SECREF3 is replaced by an attention-based decoder similar to the one described in SECREF2 and used in the joint CTC-attention. This modification allows the decoder to access acoustic information alongside the sequence of previous predictions. As the decoder output computation is not affected by this change (the decoder and joint outputs computation are not dependent on a particular choice of segmentation), the architecture can be trained with the same forward-backward algorithm used for standard RNN-transducer. Finally, unlike the previous hybrid procedure, the inference procedure can be performed frame-synchronously with an unmodified greedy or beam search algorithm.
Database
We carried out our experiments using the data provided during the ESTER evaluation campaign (Evaluation of Broadcast News enriched transcription systems) BIBREF16 which is one of the most commonly used corpus for the evaluation of French ASR. Evaluations are done on test set. The details of the dataset, corresponding to 6h34 of speech, are described in BIBREF16. We use the same normalization and scoring rules as in the evaluation plan of the ESTER 2 campaign except that we do not use equivalence dictionary and partially pronounced words are scored as full words.
To train the acoustic models we use the 90h of the training set from ESTER2 augmented by 75h from ESTER1 training set and 90h from the additional subset provided in ESTER1 with their transcriptions provided in the corpus EPAC BIBREF17. We removed segments containing less than 1,5 seconds of transcribed speech and we excluded the utterances corresponding to segments with more than 3000 input frames or sentences of more than 400 characters for the end-to-end models. Because some irregulars segment-utterance pairs remained, we re-segmented the training data using the GMM-HMM model (with LDA-MLLT-SAT features) we build our phone-based chain model upon. During re-segmentation, only the audio parts matching the transcripts are selected. This brings the training data to approximately 231h. For neural networks training, we have applied 3-fold speed perturbation BIBREF18 and volume perturbation with random volume scale factor between 0.25 and 2, leading to a total of training data of 700h.
For language modeling, we use the manual transcripts from the training set. We extend this set with manually selected transcriptions from other speech sources (BREF corpus BIBREF19, oral interventions in EuroParl from '96-'06 BIBREF20 and a small portion of transcriptions from internal projects). The final corpus is composed of 2 041 916 sentences, for a total of 46 840 583 words.
Implementations
All our systems share equivalent optimization – no rescoring technique or post-processing is done – as well as equivalent resource usage. Each system is kept to its initial form (i.e. no further training on top of the reported system).
Implementations ::: Acoustic units
For our experiments, three kind of acoustic units were chosen: phones, characters and subwords. The baseline phone-based systems use the standard 36 phones used in French. The CTC, attention and hybrid systems each have two versions: one for characters with 41 classes (26 letters from the Latin alphabet, 14 letters with a diacritic and apostrophe) and another version for subwords where the number of classes is set to 500, the final set of subword units used in our training being selected by using a subword segmentation algorithm based on a unigram language model BIBREF21 and implemented in Google's toolkit SentencePiece BIBREF22. For the end-to-end variant of the chain model, characters units are used with the 41 classes set.
Implementations ::: Baseline systems
We used the Kaldi toolkit BIBREF23 to train the chain model and its end-to-end variant.
The chain model is a TDNN-HMM model trained with the LF-MMI objective function. The neural network is based on a sub-sampled time-delay neural network (TDNN) with 7 TDNN layers and 1024 units in each, time stride value being set to 1 in the first three layers, 0 in the fourth layer and 3 in the final ones. The end-to-end version of the chain model is trained in the same way as the original model but with a different architecture. The network is composed of a 1 LSTM Projected layer BIBREF24 with 512 units followed by 2 TDNN layers of 512 units - these first three layers being repeated twice - and another 1 LSTM Projected layer with 512 units when using character as unit. The time delay value in the recurrent connections of the projected LSTM layers is set to 3.
As the input for our models, we use a 40-dimensional high resolution MFCC vector (i.e. linear transform of the filterbanks) and CMVN for both the chain model trained with lattice free-MMI and its end-to-end variant. We also trained separately a phone-based chain model with the previous 40-dimensional MFCC vector concatenated with a 100-dimensional i-vector BIBREF25 as input to assess the impact of speaker-dependant features.
For the linguistic part, we also trained a word 3-gram language model using SRILM's n-gram counting method BIBREF26 with KN discounting. As lexicon we use the phonetic dictionary provided by the LIUM, thus the vocabulary of our language model is limited to the most frequent 50k words found in our training texts and also present in their dictionary.
For the end-to-end version modeling characters, we replace the phonetic lexicon by an orthographic lexicon with the same entries, where the orthographic representation is the word sequence with space inserted between each character.
Implementations ::: End-to-end systems
We use the ESPNET toolkit BIBREF27 to train the five end-to-end systems. For each method two acoustic units are used: character and subword. Ten epochs are used to train each model.
The acoustic models for all methods share the same architecture composed of VGG bottleneck BIBREF12 followed by a 3-layer bidirectional LSTM with 1024 units in each layer and each direction. For the models using attention mechanism we use a 1-layer LSTM with 1024 units and location-based mechanism with 10 centered convolution filters of width 100 for the convolutional feature extraction as decoder. When training jointly CTC and attention, $\lambda $ was set to $0.3$ based on preliminary experiments. For RNN-transducer the joint space between encoder and decoder was set to 1024 dimensions.
The input features for these models are a 80-dimensional raw filterbanks vector with their first and second derivatives with cepstral mean normalization (CMN).
For the experiments involving language models, we trained three different models using the RNNLM module available in ESPNET: one with characters, another with subwords and the last one with full words for multi-level combination when dealing with characters as units. Each model is incorporated at inference time using shallow fusion BIBREF28, except for the word-LM relying on multi-level decoding BIBREF29. The main architecture of our RNNLMs is a 1-layer RNN, the number of units in each layer depending of the target unit: 650 units for subwords and characters, and 1024 units for words. Unlike the systems described above, the vocabulary for the word-based RNNLM was limited according to the training texts only.
In order to directly compare the baseline systems to the end-to-end systems relying on different word-based LM (i.e. N-gram and RNN-based), another RNNLM was trained using available tools in Kaldi. The language model shares the same architecture as the word-RNNLM described in this subsection and was trained with equivalent training parameters. Following lattice rescoring approach proposed in BIBREF30, decoding was then performed with the RNNLM for all baseline systems. We observe a maximal WER improvement of 0.12% on the dev set and 0.16% on the test set compared to the systems relying on the original 3-gram. Adding to that a difference of less than 1.3% between words in language model vocabularies for baseline and end-to-end systems, we thus consider minimal the impact for our comparison.
Implementations ::: Decoding
To measure the best performance, we set the beam size to 30 in decoding under all conditions and for all models. When decoding with the attention-only model, we do not use sequence length control parameters such as coverage term or length normalization parameters BIBREF31. When joint-decoding, $\lambda $ is set to 0.2 based on our preliminary experiments. For CTC and attention experiments involving a RNNLM, the language model weight during decoding is set to respectively $0.3$ for character and subword LM, and $1.0$ for the word LM. For RNN-transducer, we downscale the use of external language model when performing multi-level LM decoding, setting the value to $0.3$.
Results
The results of our experiments in terms of Character Error Rate (CER) and Word Error Rate (WER) on the test set are gathered in the Table TABREF12. For CER we also report errors in the metric: correct, substituted, inserted and deleted characters.
It should be noted that the default CER computation in all frameworks does not use a special character for space during scoring. As important information relative to this character, denoting word-boundary errors, can be observed through the WER variation during comparison, we kept the initial computation for CER. Thus, for low CER variations, bigger WER differences are expected notably between traditional and end-to-end systems.
Results ::: Baseline systems
The phone-based chain model trained with lattice-free MMI criterion has a WER of 14.2 on the test set. Compared to the best reported system during the ESTER campaign (WER 12.1% BIBREF16), the performance show a relative degradation of 14.8%. Although the compared system rely on a HMM-GMM architecture, it should be noted that a triple-pass rescoring (+ post-processing) is applied, a consequent number of parameters is used, and a substantial amount of data is used for training the language model (more than 11 times our volume). Adding i-vectors features the performance of our model is further improved, leading to a WER of 13.7.
For the end-to-end phone-based system we denote a small WER degradation of 0.2% compared to the original system without i-vectors, which is a good trade-off considering the removal of the initial HMM-GMM training. Switching to characters as acoustic units we obtain a WER of 14.8, corresponding to a CER of 7.6. The detailed report show that all types of errors are quite balanced, with however a higher number of deletions. The system remains competitive even with orthographic units, despite the low correspondence between phonemes and letters in French. On the same note, a plain conversion of phonetic lexicon to a grapheme-based one does not negatively impact the performances. This was not excepted considering the use of alternative phonetic representation in French to denotes possible liaisons (the pronunciation of the final consonant of a word immediately before a following vowel sound in preceding word).
Results ::: End-to-end systems
Character-based models While, without language model, the attention-based model outperforms CTC model as expected, RNN-transducer performances exceed our initial estimations, surpassing previous models in terms of CER and WER. RNN-transducer even outperforms these models coupled with language model, regardless of the level of knowledge included (character and word-level). The CER obtained with this model is 8.5 while the WER is 19.7. This represent a relative decrease of almost 40% for the CER and 17% for the WER against the attention-based model with word LM, the second best system for classic end-to-end. Compared to the end-to-end chain model system modeling characters, we observe a small CER difference of 0.9 which corresponds to a WER difference of 4.9. While the CER is competitive, errors at word-level seem to indicate difficulties to model word boundaries compared to baseline systems.
Extending the comparison to hybrid models, only the RNN-transducer with attention mechanism could achieve similar or better results than its vanilla version. Although the joint CTC-attention procedure is beneficial to correct some limitations from individual approaches, the system can only reach a CER of 10.4 equivalent to a WER of 22.1. However, by adding word LM and using multi-level decoding, the system can achieve closer WER performance (18.6) despite the significant difference in terms of CER (9.6).
For the hybrid transducer relying on additional attention module, performances in all experiments are further improved compared to standard, reaching 8.2% CER and 19.1% WER without language model.
Concerning the best systems, it should be noted that the RNN-transducer performance is further improved with the use of language model, obtaining a CER of 8.0, close to our baseline score (7.6), with a word LM. In terms of WER it represents a relative improvement of 8.5% against previous results, which is however still far from the performance denoted with the baseline system for this metric (14.8%). For the RNN-transducer with an attention decoder, we achieve even better performance with a CER of 7.8 equivalent to a WER of 17.6. This is our best model with characters as acoustic units.
Focusing on the CER report, several observations can be made :
Insertion errors are lower for CTC models than attention-based systems, with the addition of language models included. Attention-based are expected to have higher number of deletions or insertions depending of the length difference between input and output sequences, it is however unanticipated to observe such a high number of deletion errors.
Following the last observation, we investigated the deletion errors done by the attention-only model. From what we found, the main reason is the existence of irregular segment-utterance pairs in the dataset (i.e: really low correspondence). Using coverage, penalty or length ratio terms helped on problematic pairs but degraded the global performances, regular short or long pairs being impacted.
Adding a language model decreases all errors in CTC systems while only deletion errors decrease in the attention-only system. Coupled with a word language model, substitution errors are even higher for attention model.
Similar observations can be made for RNN transducer. While we observe a small decrease of insertion errors with the addition of a language model, we also see a small increase in deletion errors. However the system is more impacted by the insertion changes as the number of substitutions decrease and the number of correct words increase.
Despite similar CER performances between CTC model with word LM and attention-only model with character or word LM for example, the first system cannot reach the word error rate of the second systems. It is beneficial to model linguistic information alongside acoustic information rather than in an external language model being at character or word level, although both can be combined to reach better performances. However, we should also consider that the training data for the acoustic model is the same as the data used to train the LM, augmented with a volume equivalent to less than a quarter of the initial training sentences.
Comparing the end-to-end chain model modeling characters to the RNN-transducer with language models, we can extract several useful information. Deletion errors made by the transducer are more influential at word level than the insertion errors made by the baseline system. From the hypotheses we observed that the insertion errors mostly happen on ambiguous verbal forms, gender forms or singular/plural forms in the baseline system. For the transducer, the same behaviour is observed however deletion errors at character level mostly happen on small words (such as article), common names and proper names which are numerous in the corpus.
Although we observe a smaller number of substitutions at character level for the RNN-transducer with or without attention compared to the baseline system, substitution errors impact more words than the baseline system. These errors are mostly due to the same problems described previously, while substitutions in baseline systems are more localized due particularly to the presence of OOV and ambiguous words.
Considering all the previous observations, further investigation should be done to compare and categorize errors at character and word level in each system and also assess the value of these errors. The character errors reported for RNN-transducer with attention should be sufficient motivation as we report, against the baseline system, a lower number of substitution and insertion errors coupled to an equivalent number of correct words despite a significant gap in WER performance.
Subword-based models Replacing characters with subword units improves the overall performance of all end-to-end methods. The gain is particularly important for CTC lowering the WER from $42.3$ to $28.4$ without language model. The gain observed when adding the language model to CTC is impressive with a relative improvement of almost 28% on WER (from $28.4$ to $21.2$). For the system relying only on attention, the WER is further improved without and with language model but, unlike when we used characters, the model is outperformed on both CER and WER by the model relying on CTC. Although we observe a similar CER for both methods we also note a significant difference in terms of correct characters and WER (almost $6\%$). The attention making mostly consecutive mistakes on the same words or groups of words (particularly at the beginning and end of utterances) while the CTC tends to recognize part of words as independent, thus incorrectly recognizing word boundaries. Adding RNN-transducer to the comparison, both previous methods are surpassed, on CER ($20.1$ for CTC, $17.5$ for attention and $15.2$ for transducer) and on WER ($21.1$ for CTC, $21.8$ for attention and $18.4$ for transducer). Decoding with an external language model, the CER and WER are further improved by about $5.5\%$ and $6.0\%$. It should be noted that the transducer model without language model exceed CTC and attention coupled to the subword LM.
Adding the hybrid systems to the comparison, we denote some differences compared to character-based systems. The RNN-transducer is not improved with attention mechanism and even slightly degraded for both CER and WER. The same observations can be done with and without LM addition. It seems the attention mechanism has more difficulty to model intra-subwords relations than intra-characters relations. However further work should be allocated to extend the comparison with different attention mechanisms, such as multi-head attention, and estimate the influence of architecture depending on output dimensions and representations.
Concerning the last hybrid system, joint CTC-attention is better suited to subword than characters, reaching comparable performances to transducer even without language model: 18.7% against $18.4$ for RNN-transducer and $18.5$. Although transducer are reported as our best system, it should be noted that joint CTC-attention reach equal or better performance on subword errors. Talking only about conventional ASR metric, we consider the two hybrid systems and vanilla transducer equivalent for subword units.
As in the previous section, we also made a focus on the detailed error report and denoted some differences compared to previous observations:
Akin to previous observations with characters, insertion errors are lower for CTC models ($1.4\%$) than attention-based models ($3.6\%$) with subwords. However, here, the number of insertions for CTC is even lower than for all other methods, transducer and hybrid systems showing an average insertion error of $2.5\%$.
Previously, we noted that a higher number of deletions or insertions should be expected with attention-only model. With subword units, we can observe a balanced number of deletions and insertions although we also denote a significant number of substitutions. Following this new observation, we also investigated the orthographic output from both models. We denoted that the limitation of attention model was mostly removed and word sequence was unrolled or stopped. However it translated to a really large number of substitutions, some subwords within the word structure being repeated or cut.
Although, we report a higher number of correct words and a lower number of errors for joint CTC-attention, the hybrid method obtains a higher WER than RNN-transducer and its hybrid version. Analyzing the hypothesis formulated and error distribution by both systems we could not extract any relevant information to explain the number of words impacted by the errors at character level.
On the same note, the following difference should still be noted: transducer-based models have a lower number of substitutions and equivalent or lower insertion whereas joint CTC-attention has a lower number of deletions and an equivalent or higher number of correct characters. Outside correct labels, only the CTC has a similar error distribution.
In case of joint CTC-Attention we can see that CTC as auxiliary function brings some benefices: the number of substitutions and insertions being further reduced compared to attention-only model. Additionally, the number of deletions is kept to the same range despite a high number of deletions for the CTC-only model. In case of additional attention module for RNN-transducer, although the attention-only has a lower number of deletion errors ($3.6$ versus $4.1$ for RNN-transducer), the inclusion of attention mechanism did not help to reduce this number. The error distribution is the same with and without attention. It also should be noted that RNN-transducer with attention has equivalent performance with characters and subword units.
Adding language models, all errors are lowered. The only exceptions being the number of insertions for CTC ($1.4\%$ raised to $2.3$), the number of deletions for RNN-transducer (from $4.1\%$ to $4.3$) and its hybrid counterpart (from $4.1$ to $4.4$). In these cases, and similarly as when we use character units, we can observe that the error rate (e.g.: insertion) decreases when the other (e.g.: deletion) increases.
Conclusion
In this paper, we experimentally showed that end-to-end approaches and different orthographic units were rather suitable to model the French language. RNN-transducer was found specially competitive with character units compared to other end-to-end approaches. Among the two orthographic units, subword was found beneficial for most methods to address the problems described in section SECREF14 and retain information on ambiguous patterns in French. Extending with language models, we could obtain promising results compared to traditional phone-based systems. The best performing systems being for character unit the RNN-transducer with additional attention module, achieving 7.8% in terms of CER and 17.6% on WER. For subword units, classic RNN-transducer, RNN-transducer with attention and joint CTC-attention show comparable performance on subword error rate and WER, with the first one being slightly better on WER ($17.4\%$) and the last one having a lower error rate on subword ($14.5\%$).
However, we also showed difference in produced errors for each method and different impact at word-level depending of the approach or units. Thus, future work will focus on analysing the orthographic output of these systems in two ways: 1) investigate errors produced by the end-to-end methods and explore several approaches to correct common errors done in French and 2) compare the end-to-end methods in a SLU context and evaluate the semantic value of the partially correct produced words. | character unit the RNN-transducer with additional attention module, For subword units, classic RNN-transducer, RNN-transducer with attention and joint CTC-attention show comparable performance |
285858416b1583aa3d8ba0494fd01c0d4332659f | 285858416b1583aa3d8ba0494fd01c0d4332659f_0 | Q: What will be in focus for future work?
Text: Introduction
Automatic Speech Recognition (ASR) has traditionally used Hidden Markov Models (HMM), describing temporal variability, combined with Gaussian Mixture Models (GMM), computing emission probabilities from HMM states, to model and map acoustic features to phones. In recent years, the introduction of deep neural networks replacing GMM for acoustic modeling showed huge improvements compared to previous state-of-the-art systems BIBREF0, BIBREF1. However, building and training such systems can be complex and a lot of preprocessing steps are involved. Traditional ASR systems are also factorized in several modules, the acoustic model representing only one of them along with lexicon and language models.
Recently, more direct approaches – called end-to-end methods – in which neural architectures are trained to directly model sequences of features as characters have been proposed BIBREF2, BIBREF3, BIBREF4. Predicting context independent targets such as characters using a single neural network architecture, drained a lot of interest from the research community as well as non-experts developers. This is caused by the simplicity of the pipeline and the possibility to create a complete ASR system without the need for expert knowledge. Moreover having an orthographic-based output allows to freely construct words, making it interesting against the Out-Of-Vocabulary problem encountered in traditional ASR systems.
End-to-end systems are nowadays extensively used and studied for multiple tasks and languages such as English, Mandarin or Japanese. However, for a language such as French, ASR performance and results with the existing methods have been scarcely studied, although the large number of silents letters, homophones or argot make comparing the assumptions made by each method very attractive.
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15.
End-to-end systems for Speech Recognition ::: Connectionist Temporal Classification
The CTC BIBREF5 can be seen as a direct translation of conventional HMM-DNN ASR systems into lexicon-free systems. Thus, the CTC follows the general ASR formulation, training the model to maximize $P(Y|X)$ the probability distribution over all possible label sequences: Y = arg Y A* p(Y|X) Here, $X$ denotes the observations, $Y$ is a sequence of acoustic units of length $L$ such that $Y = \lbrace y_{l} \in \, \mathcal {A} | l = 1, ..., L\rbrace $, where $\mathcal {A}$ is an alphabet containing all distinct units. As in traditional HMM-DNN systems, the CTC model makes conditional independence assumptions between output predictions at different time steps given aligned inputs and it uses the probabilistic chain rule to factorize the posterior distribution $p(Y|X)$ into three distributions (i.e. framewise posterior distribution, transition probability and prior distribution of units). However, unlike HMM-based models, the framewise posterior distribution is defined here as a framewise acoustic unit sequence $B$ with an additional blank label ${<}blank{>}$ such as $B = \lbrace b_{t} \in \, \mathcal {A}\, \cup \, {<}blank{>} | t = 1, ..., T\rbrace $. p(Y|X) = b=1Bt=1T p(bt | bt-1, Y) p(bt|X)pctc(Y|X) p(Y)
Here, ${<}blank{>}$ introduces two contraction rules for the output labels, allowing to repeat or collapse successive acoustic units.
End-to-end systems for Speech Recognition ::: Attention-based model
As opposed to CTC, the attention-based approach BIBREF7, BIBREF8 does not assume conditional independence between predictions at different time steps and does not marginalize over all alignments. Thus the posterior distribution $p(Y|X)$ is directly computed by picking a soft alignment between each output step and every input step as follows: patt(Y|X) = l=1U p(yl | y1, ..., yl-1, X) Here $p(y_{l}|y_{1},...,y_{l-1}, X)$, – our attention-based objective function –, is obtained according to a probability distribution, typically a softmax, applied to the linear projection of the output of a recurrent neural network (or long-short term memory network), called decoder, such as: p(yl|y1,...,yl-1, X) = softmax(lin(RNN())) The decoder output is conditioned by the previous output $y_{l-1}$, a hidden vector $d_{l-1}$ and a context vector $c_{l}$. Here $d_{l-1}$ denotes the high level representation (i.e. hidden states) of the decoder at step $l-1$, encoding the target input, and $c_{l}$ designate the context – or symbol-wise vector in our case – for decoding step $l$, which is computed as the sum of the complete high representation $h$ of another recurrent neural network, encoding the source input $X$, weighted by $\alpha $ the attention weight: cl = s=1S l, s hs , l, s = (et, s)s'=1S (el, s') where $e_{t}$, also referred to as energy, measures how well the inputs around position $s$ and the output at position $l$ match, given the decoder states at decoding step $l-1$ and $h$ the encoder states for input $X$. In the following, we report the standard content-based mechanism and its location-aware variant which takes into account the alignment produced at the previous step using convolutional features: el, s = {ll content-based:
wT (W dl - 1 + Vhs + b)
location-based:
fu = F - 1
wT (W dl - 1 + Vhs + Ufl, s + b) . where $w$ and $b$ are vectors, $W$ the matrix for the decoder, $V$ the matrix for the high representation $h$ and $U$ the matrix for the convolutional filters, that takes the previous alignment for location-based attention mechanism into account.
End-to-end systems for Speech Recognition ::: RNN transducer
The RNN transducer architecture was first introduced by Graves and al. BIBREF9 to address the main limitation of the proposed CTC network: it cannot model interdependencies as it assumes conditional independence between predictions at different time steps.
To tackle this issue, the authors introduced a CTC-like network augmented with a separate RNN network predicting each label given the previous ones, analogous to a language model. With the addition of another network taking into account both encoder and decoder outputs, the system can jointly model interdependencies between both inputs and outputs and within the output label sequence.
Although the CTC and RNN-transducer are similar, it should be noted that unlike CTC which represent a loss function, RNN-transducer defines a model structure composed of the following subnetworks :
The encoder or transcription network: from an input value $x_{t}$ at timestep $t$ this network yields an output vector $h_{t}$ of dimension $|\mathcal {A}+1|$, where $+1$ denotes the ${<}blank{>}$ label which acts similarly as in CTC model.
The prediction network: given as input the previous label prediction $y_{u-1} \in \mathcal {A}$, this network compute an output vector $d_{u}$ dependent of the entire label sequence $y_{0}, ..., y_{u-1}$.
The joint network: using both encoder outputs $h_{t}^{enc}$ and prediction outputs $d_{u}^{dec}$, it computes $z_{t,u}$ for each input $t$ in the encoder sequence and label $u$ in prediction network such as: ht, ujoint = tanh(htenc + hudec)
zt,u = lin(ht,ujoint)
The output from the joint network is then passed to a softmax layer which defines a probability distribution over the set of possible target labels, including the blank symbol.
It should be noted that we made a small modification compared to the last proposed version BIBREF0: instead of feeding the hidden activations of both networks into a separate linear layer, whose outputs are then normalised, we include another linear layer and feed each hidden activations to its corresponding linear layer which yields a vector of dimension $J$, the defined joint-space.
Similarly to CTC, the marginalized alignments are local and monotonic, and the label likelihood can be computed using dynamic programming. However, unlike CTC, the RNN transducer allows the prediction of multiple characters at a single time step, alongside their vertical probability transitions.
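As an illustration of the joint network described above, the NumPy sketch below combines encoder and prediction outputs into the $(T, U{+}1, |\mathcal{A}|{+}1)$ output lattice, following the variant with one linear layer per subnetwork. Shapes, dimensions and random parameters are illustrative assumptions, not the configuration of our trained models.

```python
import numpy as np

rng = np.random.default_rng(1)

T, U, H, J, n_labels = 100, 20, 16, 12, 42     # frames, output labels, hidden dim, joint dim, |A| + blank
h_enc = rng.normal(size=(T, H))                # encoder (transcription network) outputs
d_dec = rng.normal(size=(U + 1, H))            # prediction network outputs (with an initial blank state)

# One linear projection per subnetwork into the joint space of dimension J, then an output projection
W_enc = rng.normal(size=(H, J)); W_dec = rng.normal(size=(H, J))
W_out = rng.normal(size=(J, n_labels)); b_out = np.zeros(n_labels)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# h_joint[t, u] = tanh(lin(h_enc[t]) + lin(d_dec[u]));  z[t, u] = lin(h_joint[t, u])
h_joint = np.tanh((h_enc @ W_enc)[:, None, :] + (d_dec @ W_dec)[None, :, :])   # (T, U+1, J)
z = h_joint @ W_out + b_out                                                    # (T, U+1, n_labels)
p = softmax(z)                      # distribution over labels (incl. blank) at every lattice node (t, u)
print(p.shape, bool(np.allclose(p.sum(-1), 1.0)))                              # (100, 21, 42) True
```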
End-to-end systems for Speech Recognition ::: Other notable approaches
Joint CTC-attention The key idea behind the joint CTC-attention BIBREF11 learning approach is simple. By simultaneously training the encoder of the attention-based model with a standard CTC objective function as an auxiliary task, monotonic alignments between speech and label sequences can be enforced, reducing the irregular alignments caused by large jumps or by looping on the same frame in the attention-based model. The objective function below formulates the multi-task learning of the network, where $0\le \lambda \le 1$ is a tunable parameter weighting the contribution of each loss function:

$\mathcal{L}_{MTL} = \lambda \mathcal{L}_{ctc} + (1 - \lambda) \mathcal{L}_{att} = \lambda \log p_{ctc}(Y|x) + (1 - \lambda) \log p_{att}(Y|x)$

The approach proposed in BIBREF12 introduced a joint-decoding method to take the CTC predictions into account in the beam-search based decoding process of the attention-based model. Because their respective scores are difficult to combine (the attention-based decoder performs the beam search character-synchronously whereas the CTC performs it frame-synchronously), two methods were proposed.
The first one is a two-pass decoding process where the complete hypotheses from the attention model are computed and then rescored according to the following equation, where $p_{\textrm{ctc}}(Y|x)$ is computed using the standard CTC forward-backward algorithm:

$\hat{Y} = \underset{Y \in \mathcal{A}^{*}}{\arg \max } \lbrace \lambda \log p_{ctc}(Y|x) + (1 - \lambda) \log p_{att}(Y|x) \rbrace$

The second method is a one-pass decoding method where the probability of each partial hypothesis in the beam search process is computed directly using both the CTC and the attention model: given $h$ a partial hypothesis and $\alpha$ its score, defined as the log probability of the hypothesized sequence, the combined score is

$\alpha(h) = \lambda \, \alpha_{ctc}(h) + (1 - \lambda) \, \alpha_{att}(h)$
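As a toy illustration of how these $\lambda$-weighted combinations behave, the snippet below rescores a small list of complete hypotheses as in the two-pass variant; the hypotheses and their scores are invented for the example, and the weight value is only an example setting.

```python
def joint_score(logp_ctc, logp_att, lam=0.2):
    """lambda-weighted combination of CTC and attention log-probabilities."""
    return lam * logp_ctc + (1.0 - lam) * logp_att

# Invented complete hypotheses with their (log p_ctc, log p_att) scores
nbest = {
    "le ministre de la santé": (-12.4, -10.1),
    "le ministre de la senté": (-15.0, -9.8),
    "les ministres de la santé": (-13.1, -11.9),
}
rescored = {hyp: joint_score(c, a) for hyp, (c, a) in nbest.items()}
best = max(rescored, key=rescored.get)
print(best, round(rescored[best], 2))   # le ministre de la santé -10.56
```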
End-to-end lattice-free MMI The end-to-end lattice-free MMI BIBREF15 is the end-to-end version of the method introduced by Povey et al. BIBREF14. In this version, a flat-start manner is adopted in order to remove the need to train an initial HMM-GMM for alignments and the tree-building pipeline. Although, in terms of pipeline, the approach looks more like a flat-start adaptation of the state-of-the-art method than a truly end-to-end one, and it does not benefit from the open-vocabulary property of the previously presented methods to construct unseen words, we use it in our experiments as it showed only small degradation over the original lattice-free MMI with different acoustic units. We can therefore contrast the orthographic differences in productions between open systems and more constrained ones, where the relationship between acoustic units and a word-level representation is restricted.
RNN-transducer with attention The RNN transducer architecture augmented with attention mechanisms was first mentioned, to the best of our knowledge, in BIBREF13. Here, the prediction network described in SECREF3 is replaced by an attention-based decoder similar to the one described in SECREF2 and used in the joint CTC-attention. This modification allows the decoder to access acoustic information alongside the sequence of previous predictions. As the decoder output computation is not affected by this change (neither the decoder nor the joint output computation depends on a particular choice of segmentation), the architecture can be trained with the same forward-backward algorithm used for the standard RNN-transducer. Finally, unlike the previous hybrid procedure, inference can be performed frame-synchronously with an unmodified greedy or beam search algorithm.
Database
We carried out our experiments using the data provided during the ESTER evaluation campaign (Evaluation of Broadcast News enriched transcription systems) BIBREF16, which is one of the most commonly used corpora for the evaluation of French ASR. Evaluations are done on the test set, corresponding to 6h34 of speech; the details of the dataset are described in BIBREF16. We use the same normalization and scoring rules as in the evaluation plan of the ESTER 2 campaign, except that we do not use the equivalence dictionary and partially pronounced words are scored as full words.
To train the acoustic models we use the 90h of the ESTER2 training set, augmented with 75h from the ESTER1 training set and 90h from the additional subset provided in ESTER1, with their transcriptions provided in the EPAC corpus BIBREF17. We removed segments containing less than 1.5 seconds of transcribed speech, and for the end-to-end models we excluded utterances corresponding to segments with more than 3000 input frames or sentences of more than 400 characters. Because some irregular segment-utterance pairs remained, we re-segmented the training data using the GMM-HMM model (with LDA-MLLT-SAT features) upon which we build our phone-based chain model. During re-segmentation, only the audio parts matching the transcripts are kept. This brings the training data to approximately 231h. For neural network training, we applied 3-fold speed perturbation BIBREF18 and volume perturbation with a random volume scale factor between 0.25 and 2, leading to a total of about 700h of training data.
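As an illustration of the filtering step applied before re-segmentation, the sketch below applies such length criteria to a Kaldi-style data directory. The file names and the 10 ms frame-shift value are assumptions for the example; the actual pipeline relies on the standard Kaldi data-preparation scripts.

```python
# Minimal sketch: drop utterances shorter than 1.5 s, longer than 3000 frames,
# or with transcripts longer than 400 characters, from Kaldi-style 'segments' and 'text' files.
FRAME_SHIFT = 0.01  # assumed 10 ms frame shift

def load_kv(path, maxsplit=-1):
    """Yield (utterance-id, remaining fields) for each line of a Kaldi table file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split(maxsplit=maxsplit)
            yield parts[0], parts[1:]

segments = {u: (float(v[1]), float(v[2])) for u, v in load_kv("segments")}      # utt -> (start, end)
texts = {u: (v[0] if v else "") for u, v in load_kv("text", maxsplit=1)}        # utt -> transcript

kept = []
for utt, (start, end) in segments.items():
    duration = end - start
    n_frames = duration / FRAME_SHIFT
    n_chars = len(texts.get(utt, ""))
    if duration >= 1.5 and n_frames <= 3000 and n_chars <= 400:
        kept.append(utt)

print(f"kept {len(kept)} / {len(segments)} utterances")
```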
For language modeling, we use the manual transcripts from the training set. We extend this set with manually selected transcriptions from other speech sources (BREF corpus BIBREF19, oral interventions in EuroParl from '96-'06 BIBREF20 and a small portion of transcriptions from internal projects). The final corpus is composed of 2 041 916 sentences, for a total of 46 840 583 words.
Implementations
All our systems share equivalent optimization (no rescoring technique or post-processing is applied) as well as equivalent resource usage. Each system is kept in its initial form (i.e. no further training is done on top of the reported system).
Implementations ::: Acoustic units
For our experiments, three kinds of acoustic units were chosen: phones, characters and subwords. The baseline phone-based systems use the standard 36 phones of French. The CTC, attention and hybrid systems each have two versions: one for characters, with 41 classes (26 letters from the Latin alphabet, 14 letters with a diacritic, and the apostrophe), and another for subwords, where the number of classes is set to 500; the final set of subword units used in our training is selected with a subword segmentation algorithm based on a unigram language model BIBREF21 and implemented in Google's SentencePiece toolkit BIBREF22. For the end-to-end variant of the chain model, character units are used with the same 41-class set.
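For reference, training and applying such a 500-class unigram subword model with SentencePiece could look like the sketch below. The corpus path and model prefix are placeholders, and the options shown are only the ones relevant to this setup, not the exact configuration used for our systems.

```python
import sentencepiece as spm

# Train a unigram subword model with 500 classes on the training transcripts (path is a placeholder).
spm.SentencePieceTrainer.Train(
    "--input=train_transcripts.txt --model_prefix=sp_unigram_500 "
    "--vocab_size=500 --model_type=unigram --character_coverage=1.0"
)

sp = spm.SentencePieceProcessor()
sp.Load("sp_unigram_500.model")

# The segmentation depends on the training corpus, e.g. ['▁le', '▁pré', 'sident', '▁de', '▁la', ...]
print(sp.EncodeAsPieces("le président de la république"))
```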
Implementations ::: Baseline systems
We used the Kaldi toolkit BIBREF23 to train the chain model and its end-to-end variant.
The chain model is a TDNN-HMM model trained with the LF-MMI objective function. The neural network is a sub-sampled time-delay neural network (TDNN) with 7 TDNN layers of 1024 units each, the time-stride value being set to 1 in the first three layers, 0 in the fourth layer and 3 in the final ones. The end-to-end version of the chain model is trained in the same way as the original model but with a different architecture. When using characters as units, the network is composed of one projected LSTM layer BIBREF24 with 512 units followed by 2 TDNN layers of 512 units (these first three layers being repeated twice) and another projected LSTM layer with 512 units. The time-delay value in the recurrent connections of the projected LSTM layers is set to 3.
As input to our models, we use a 40-dimensional high-resolution MFCC vector (i.e. a linear transform of the filterbanks) with CMVN, for both the chain model trained with lattice-free MMI and its end-to-end variant. We also trained separately a phone-based chain model with the previous 40-dimensional MFCC vector concatenated with a 100-dimensional i-vector BIBREF25 as input, to assess the impact of speaker-dependent features.
For the linguistic part, we also trained a 3-gram word language model using SRILM's n-gram counting method BIBREF26 with Kneser-Ney discounting. As the lexicon we use the phonetic dictionary provided by the LIUM; the vocabulary of our language model is thus limited to the 50k most frequent words found in our training texts that are also present in this dictionary.
For the end-to-end version modeling characters, we replace the phonetic lexicon by an orthographic lexicon with the same entries, where the orthographic representation of a word is its character sequence with a space inserted between each character.
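The conversion from a phonetic to an orthographic lexicon is mechanical; the following sketch shows the idea on a couple of made-up entries (the format mirrors a Kaldi-style lexicon, one word per line followed by its pronunciation).

```python
# Sketch: turn a phonetic lexicon into an orthographic (character-level) one.
phonetic_lexicon = {
    "bonjour": "b on j ou r",     # made-up phone sequence, for illustration only
    "l'état": "l e t a",          # made-up phone sequence, for illustration only
}

orthographic_lexicon = {word: " ".join(word) for word in phonetic_lexicon}

for word, chars in orthographic_lexicon.items():
    print(f"{word}\t{chars}")
# bonjour    b o n j o u r
# l'état     l ' é t a t
```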
Implementations ::: End-to-end systems
We use the ESPNET toolkit BIBREF27 to train the five end-to-end systems. For each method two acoustic units are used: character and subword. Ten epochs are used to train each model.
The acoustic models for all methods share the same architecture, composed of a VGG bottleneck BIBREF12 followed by a 3-layer bidirectional LSTM with 1024 units in each layer and each direction. For the models using an attention mechanism, the decoder is a 1-layer LSTM with 1024 units and a location-based mechanism with 10 centered convolution filters of width 100 for the convolutional feature extraction. When training jointly with CTC and attention, $\lambda$ was set to $0.3$ based on preliminary experiments. For the RNN-transducer, the joint space between encoder and decoder was set to 1024 dimensions.
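A minimal PyTorch sketch of such an encoder-decoder skeleton is given below; it only illustrates the layer dimensions mentioned above (VGG-style bottleneck, 3-layer BLSTM encoder, 1-layer LSTM decoder) and leaves out the attention and output layers, so it is not the ESPNET implementation we actually trained. The filter counts in the VGG block and the 80-dimensional input are assumptions for the example.

```python
import torch
import torch.nn as nn

class VGGBLSTMEncoder(nn.Module):
    """VGG-style 2D-conv bottleneck followed by a 3-layer BLSTM."""
    def __init__(self, feat_dim=80, hidden=1024):
        super().__init__()
        self.vgg = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.blstm = nn.LSTM(128 * (feat_dim // 4), hidden, num_layers=3,
                             bidirectional=True, batch_first=True)

    def forward(self, feats):                       # feats: (batch, time, feat_dim)
        x = self.vgg(feats.unsqueeze(1))            # (batch, 128, time/4, feat_dim/4)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        h, _ = self.blstm(x)                        # (batch, time/4, 2*hidden)
        return h

encoder = VGGBLSTMEncoder()
decoder = nn.LSTM(2 * 1024, 1024, num_layers=1, batch_first=True)  # decoder skeleton; attention omitted

h = encoder(torch.randn(2, 160, 80))
print(h.shape)                                      # torch.Size([2, 40, 2048])
```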
The input features for these models are 80-dimensional raw filterbank vectors with their first and second derivatives, with cepstral mean normalization (CMN).
For the experiments involving language models, we trained three different models using the RNNLM module available in ESPNET: one with characters, one with subwords, and one with full words for multi-level combination when dealing with characters as units. Each model is incorporated at inference time using shallow fusion BIBREF28, except for the word LM, which relies on multi-level decoding BIBREF29. The main architecture of our RNNLMs is a 1-layer RNN, the number of units per layer depending on the target unit: 650 units for subwords and characters, and 1024 units for words. Unlike the systems described above, the vocabulary of the word-based RNNLM was limited according to the training texts only.
In order to directly compare the baseline systems to the end-to-end systems relying on different word-based LMs (i.e. N-gram and RNN-based), another RNNLM was trained using the tools available in Kaldi. This language model shares the same architecture as the word RNNLM described above and was trained with equivalent parameters. Following the lattice rescoring approach proposed in BIBREF30, decoding was then performed with this RNNLM for all baseline systems. We observe a maximal WER improvement of 0.12% on the dev set and 0.16% on the test set compared to the systems relying on the original 3-gram. Given, in addition, a difference of less than 1.3% between the words in the language model vocabularies of the baseline and end-to-end systems, we consider the impact on our comparison minimal.
Implementations ::: Decoding
To measure the best performance, we set the beam size to 30 in decoding under all conditions and for all models. When decoding with the attention-only model, we do not use sequence-length control parameters such as a coverage term or length normalization BIBREF31. When joint-decoding, $\lambda$ is set to 0.2 based on our preliminary experiments. For the CTC and attention experiments involving an RNNLM, the language model weight during decoding is set to $0.3$ for the character and subword LMs, and to $1.0$ for the word LM. For the RNN-transducer, we downscale the external language model when performing multi-level LM decoding, setting the weight to $0.3$.
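To illustrate how the language model weight enters the search, the sketch below scores candidate extensions of a beam-search hypothesis with shallow fusion; the log-probabilities are made-up numbers and the function is not part of any toolkit.

```python
def shallow_fusion_score(logp_am, logp_lm, lm_weight=0.3):
    """Combined score of a hypothesis extension: acoustic/decoder score plus weighted LM score."""
    return logp_am + lm_weight * logp_lm

# Made-up partial-hypothesis scores for the candidate next characters of "bonjou"
candidates = {"r": (-0.20, -0.10), "t": (-1.60, -2.30), "<space>": (-2.10, -3.00)}
scored = {c: shallow_fusion_score(am, lm) for c, (am, lm) in candidates.items()}
best = max(scored, key=scored.get)
print(best, round(scored[best], 3))   # r -0.23
```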
Results
The results of our experiments in terms of Character Error Rate (CER) and Word Error Rate (WER) on the test set are gathered in Table TABREF12. For CER, we also report the breakdown of the metric: correct, substituted, inserted and deleted characters.
It should be noted that the default CER computation in all frameworks does not use a special character for the space during scoring. As the important information carried by this character, namely word-boundary errors, can still be observed through the WER variation, we kept the default CER computation. Thus, for small CER variations, bigger WER differences are expected, notably between traditional and end-to-end systems.
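The distinction matters in practice: the toy computation below scores the same hypothesis at character level (spaces ignored, as in the default CER) and at word level, using a plain Levenshtein distance. The sentences are invented examples.

```python
def levenshtein(ref, hyp):
    """Edit distance between two sequences of symbols."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, 1):
            curr[j] = min(prev[j] + 1,               # deletion
                          curr[j - 1] + 1,           # insertion
                          prev[j - 1] + (r != h))    # substitution (or match)
        prev = curr
    return prev[-1]

ref = "le ministre de la santé"
hyp = "le ministre de la sante"

chars_ref, chars_hyp = ref.replace(" ", ""), hyp.replace(" ", "")
cer = levenshtein(chars_ref, chars_hyp) / len(chars_ref)
wer = levenshtein(ref.split(), hyp.split()) / len(ref.split())
print(f"CER = {cer:.3f}, WER = {wer:.3f}")   # CER = 0.053, WER = 0.200
```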
Results ::: Baseline systems
The phone-based chain model trained with the lattice-free MMI criterion has a WER of 14.2% on the test set. Compared to the best system reported during the ESTER campaign (12.1% WER BIBREF16), this is a relative degradation of 14.8%. Although the compared system relies on an HMM-GMM architecture, it should be noted that a triple-pass rescoring (plus post-processing) is applied, a substantially larger number of parameters is used, and a much larger amount of data is used for training the language model (more than 11 times our volume). Adding i-vector features, the performance of our model is further improved, leading to a WER of 13.7%.
For the end-to-end phone-based system we observe a small WER degradation of 0.2% compared to the original system without i-vectors, which is a good trade-off considering the removal of the initial HMM-GMM training. Switching to characters as acoustic units, we obtain a WER of 14.8%, corresponding to a CER of 7.6%. The detailed report shows that all types of errors are quite balanced, with however a higher number of deletions. The system remains competitive even with orthographic units, despite the low correspondence between phonemes and letters in French. On the same note, a plain conversion of the phonetic lexicon to a grapheme-based one does not negatively impact the performance. This was not expected, considering that French phonetic lexicons use alternative pronunciations to denote possible liaisons (the pronunciation of the final consonant of a word immediately before a following word starting with a vowel sound).
Results ::: End-to-end systems
Character-based models While, without a language model, the attention-based model outperforms the CTC model as expected, the RNN-transducer performance exceeds our initial estimations, surpassing the previous models in terms of both CER and WER. The RNN-transducer even outperforms these models coupled with a language model, regardless of the level of knowledge included (character or word level). The CER obtained with this model is 8.5 and the WER is 19.7. This represents a relative decrease of almost 40% in CER and 17% in WER against the attention-based model with word LM, the second best classic end-to-end system. Compared to the end-to-end chain model modeling characters, we observe a small CER difference of 0.9, which corresponds to a WER difference of 4.9. While the CER is competitive, the errors at word level seem to indicate difficulties in modeling word boundaries compared to the baseline systems.
Extending the comparison to hybrid models, only the RNN-transducer with an attention mechanism could achieve similar or better results than its vanilla version. Although the joint CTC-attention procedure is beneficial for correcting some limitations of the individual approaches, the system can only reach a CER of 10.4, equivalent to a WER of 22.1. However, by adding the word LM and using multi-level decoding, the system can achieve closer WER performance (18.6) despite the significant difference in terms of CER (9.6).

For the hybrid transducer relying on an additional attention module, performance is further improved in all experiments compared to the standard transducer, reaching 8.2% CER and 19.1% WER without a language model.

Concerning the best systems, it should be noted that the RNN-transducer performance is further improved by the use of a language model, obtaining with a word LM a CER of 8.0, close to our baseline score (7.6). In terms of WER this represents a relative improvement of 8.5% over the previous results, which is however still far from the performance of the baseline system on this metric (14.8%). For the RNN-transducer with an attention decoder, we achieve even better performance, with a CER of 7.8 equivalent to a WER of 17.6. This is our best model with characters as acoustic units.
Focusing on the CER report, several observations can be made:
Insertion errors are lower for CTC models than for attention-based systems, even when language models are added. Attention-based systems are expected to have a higher number of deletions or insertions depending on the length difference between input and output sequences; it is however unanticipated to observe such a high number of deletion errors.
Following the last observation, we investigated the deletion errors made by the attention-only model. From what we found, the main reason is the existence of irregular segment-utterance pairs in the dataset (i.e. pairs with very low correspondence). Using coverage, penalty or length-ratio terms helped on the problematic pairs but degraded the global performance, as regular short or long pairs were impacted.
Adding a language model decreases all error types in the CTC systems, while only deletion errors decrease in the attention-only system. Coupled with a word language model, substitution errors are even higher for the attention model.
Similar observations can be made for the RNN transducer. While we observe a small decrease in insertion errors with the addition of a language model, we also see a small increase in deletion errors. However, the system is more impacted by the insertion changes, as the number of substitutions decreases and the number of correct words increases.
Despite similar CER performance between, for example, the CTC model with word LM and the attention-only model with character or word LM, the first system cannot reach the word error rate of the second ones. It is beneficial to model linguistic information alongside acoustic information rather than only in an external language model, whether at character or word level, although both can be combined to reach better performance. However, we should also consider that the training data for the acoustic model is the same as the data used to train the LM, augmented with a volume equivalent to less than a quarter of the initial training sentences.
Comparing the end-to-end chain model modeling characters to the RNN-transducer with language models, we can extract several useful observations. The deletion errors made by the transducer are more influential at word level than the insertion errors made by the baseline system. From the hypotheses we observed that the insertion errors of the baseline system mostly happen on ambiguous verbal forms, gender forms or singular/plural forms. The same behaviour is observed for the transducer; however, its deletion errors at character level mostly happen on short words (such as articles), common nouns and proper nouns, which are numerous in the corpus.
Although we observe a smaller number of substitutions at character level for the RNN-transducer, with or without attention, compared to the baseline system, its substitution errors impact more words than those of the baseline system. These errors are mostly due to the same problems described previously, while the substitutions in the baseline systems are more localized, particularly due to the presence of OOV and ambiguous words.
Considering all the previous observations, further investigation should be done to compare and categorize errors at character and word level in each system, and also to assess the value of these errors. The character errors reported for the RNN-transducer with attention should be sufficient motivation: against the baseline system, we report a lower number of substitution and insertion errors coupled with an equivalent number of correct words, despite a significant gap in WER performance.
Subword-based models Replacing characters with subword units improves the overall performance of all end-to-end methods. The gain is particularly important for CTC, lowering the WER from $42.3$ to $28.4$ without a language model. The gain observed when adding the language model to CTC is impressive, with a relative improvement of almost 28% in WER (from $28.4$ to $21.2$). For the system relying only on attention, the WER is further improved with and without a language model but, unlike with characters, the model is outperformed on both CER and WER by the model relying on CTC. Although we observe a similar CER for both methods, we also note a significant difference in terms of correct characters and WER (almost $6\%$): the attention model mostly makes consecutive mistakes on the same words or groups of words (particularly at the beginning and end of utterances), while the CTC tends to recognize parts of words as independent units, thus incorrectly recognizing word boundaries. Adding the RNN-transducer to the comparison, both previous methods are surpassed, on CER ($20.1$ for CTC, $17.5$ for attention and $15.2$ for the transducer) and on WER ($21.1$ for CTC, $21.8$ for attention and $18.4$ for the transducer). Decoding with an external language model, the CER and WER are further improved by about $5.5\%$ and $6.0\%$. It should be noted that the transducer model without a language model exceeds CTC and attention coupled with the subword LM.
Adding the hybrid systems to the comparison, we note some differences compared to the character-based systems. The RNN-transducer is not improved by the attention mechanism, and is even slightly degraded on both CER and WER. The same observations can be made with and without the addition of an LM. It seems the attention mechanism has more difficulty modeling intra-subword relations than intra-character relations. However, further work should be allocated to extending the comparison with different attention mechanisms, such as multi-head attention, and to estimating the influence of the architecture depending on output dimensions and representations.
Concerning the last hybrid system, joint CTC-attention is better suited to subwords than to characters, reaching performance comparable to the transducer even without a language model: 18.7% against $18.4$ for the RNN-transducer and $18.5$ for its attention variant. Although the transducers are reported as our best systems, it should be noted that joint CTC-attention reaches equal or better performance on subword errors. Considering only the conventional ASR metrics, we consider the two hybrid systems and the vanilla transducer equivalent for subword units.
As in the previous section, we also focused on the detailed error report and noted some differences compared to the previous observations:
Akin to the previous observations with characters, insertion errors are lower for the CTC model ($1.4\%$) than for the attention-based model ($3.6\%$) with subwords. However, here the number of insertions for CTC is even lower than for all other methods, the transducer and hybrid systems showing an average insertion error of $2.5\%$.
Previously, we noted that a higher number of deletions or insertions should be expected with the attention-only model. With subword units, we observe a balanced number of deletions and insertions, although we also note a significant number of substitutions. Following this new observation, we also investigated the orthographic output of both models. We found that the previous limitation of the attention model was mostly removed: word sequences were no longer needlessly unrolled or stopped early. However, this translated into a really large number of substitutions, some subwords within the word structure being repeated or cut.
Although we report a higher number of correct words and a lower number of errors for joint CTC-attention, the hybrid method obtains a higher WER than the RNN-transducer and its hybrid version. Analyzing the hypotheses formulated and the error distributions of both systems, we could not extract any relevant information to explain the number of words impacted by the errors at character level.
On the same note, the following difference should still be noted: transducer-based models have a lower number of substitutions and an equivalent or lower number of insertions, whereas joint CTC-attention has a lower number of deletions and an equivalent or higher number of correct characters. Apart from the correct labels, only the CTC has a similar error distribution.
In the case of joint CTC-attention, we can see that CTC as an auxiliary function brings some benefits: the numbers of substitutions and insertions are further reduced compared to the attention-only model. Additionally, the number of deletions stays in the same range despite the high number of deletions of the CTC-only model. In the case of the additional attention module for the RNN-transducer, although the attention-only model has a lower number of deletion errors ($3.6$ versus $4.1$ for the RNN-transducer), the inclusion of the attention mechanism did not help to reduce this number. The error distribution is the same with and without attention. It should also be noted that the RNN-transducer with attention has equivalent performance with character and subword units.
Adding language models, all error types are lowered. The only exceptions are the number of insertions for CTC ($1.4\%$ rising to $2.3$), the number of deletions for the RNN-transducer (from $4.1\%$ to $4.3$) and for its hybrid counterpart (from $4.1$ to $4.4$). In these cases, and similarly to when we use character units, we observe that one error type (e.g. insertions) decreases when the other (e.g. deletions) increases.
Conclusion
In this paper, we experimentally showed that end-to-end approaches and different orthographic units are rather well suited to modeling the French language. The RNN-transducer was found especially competitive with character units compared to the other end-to-end approaches. Among the two orthographic units, subwords were found beneficial for most methods to address the problems described in section SECREF14 and to retain information on ambiguous patterns in French. Extending the systems with language models, we could obtain promising results compared to traditional phone-based systems. The best performing system for character units is the RNN-transducer with an additional attention module, achieving 7.8% CER and 17.6% WER. For subword units, the classic RNN-transducer, the RNN-transducer with attention and the joint CTC-attention show comparable performance on subword error rate and WER, the first being slightly better on WER ($17.4\%$) and the last having a lower subword error rate ($14.5\%$).
However, we also showed differences in the produced errors for each method and different impacts at word level depending on the approach or unit. Thus, future work will focus on analysing the orthographic output of these systems in two ways: 1) investigate the errors produced by the end-to-end methods and explore several approaches to correct common errors made in French, and 2) compare the end-to-end methods in a SLU context and evaluate the semantic value of the partially correct produced words.
Automatic Speech Recognition (ASR) has traditionally used Hidden Markov Models (HMM), describing temporal variability, combined with Gaussian Mixture Models (GMM), computing emission probabilities from HMM states, to model and map acoustic features to phones. In recent years, the introduction of deep neural networks replacing GMM for acoustic modeling showed huge improvements compared to previous state-of-the-art systems BIBREF0, BIBREF1. However, building and training such systems can be complex and a lot of preprocessing steps are involved. Traditional ASR systems are also factorized in several modules, the acoustic model representing only one of them along with lexicon and language models.
Recently, more direct approaches – called end-to-end methods – in which neural architectures are trained to directly model sequences of features as characters have been proposed BIBREF2, BIBREF3, BIBREF4. Predicting context independent targets such as characters using a single neural network architecture, drained a lot of interest from the research community as well as non-experts developers. This is caused by the simplicity of the pipeline and the possibility to create a complete ASR system without the need for expert knowledge. Moreover having an orthographic-based output allows to freely construct words, making it interesting against the Out-Of-Vocabulary problem encountered in traditional ASR systems.
End-to-end systems are nowadays extensively used and studied for multiple tasks and languages such as English, Mandarin or Japanese. However, for a language such as French, ASR performance and results with the existing methods have been scarcely studied, although the large number of silents letters, homophones or argot make comparing the assumptions made by each method very attractive.
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15.
End-to-end systems for Speech Recognition ::: Connectionist Temporal Classification
The CTC BIBREF5 can be seen as a direct translation of conventional HMM-DNN ASR systems into lexicon-free systems. Thus, the CTC follows the general ASR formulation, training the model to maximize $P(Y|X)$ the probability distribution over all possible label sequences: Y = arg Y A* p(Y|X) Here, $X$ denotes the observations, $Y$ is a sequence of acoustic units of length $L$ such that $Y = \lbrace y_{l} \in \, \mathcal {A} | l = 1, ..., L\rbrace $, where $\mathcal {A}$ is an alphabet containing all distinct units. As in traditional HMM-DNN systems, the CTC model makes conditional independence assumptions between output predictions at different time steps given aligned inputs and it uses the probabilistic chain rule to factorize the posterior distribution $p(Y|X)$ into three distributions (i.e. framewise posterior distribution, transition probability and prior distribution of units). However, unlike HMM-based models, the framewise posterior distribution is defined here as a framewise acoustic unit sequence $B$ with an additional blank label ${<}blank{>}$ such as $B = \lbrace b_{t} \in \, \mathcal {A}\, \cup \, {<}blank{>} | t = 1, ..., T\rbrace $. p(Y|X) = b=1Bt=1T p(bt | bt-1, Y) p(bt|X)pctc(Y|X) p(Y)
Here, ${<}blank{>}$ introduces two contraction rules for the output labels, allowing to repeat or collapse successive acoustic units.
End-to-end systems for Speech Recognition ::: Attention-based model
As opposed to CTC, the attention-based approach BIBREF7, BIBREF8 does not assume conditional independence between predictions at different time steps and does not marginalize over all alignments. Thus the posterior distribution $p(Y|X)$ is directly computed by picking a soft alignment between each output step and every input step as follows: patt(Y|X) = l=1U p(yl | y1, ..., yl-1, X) Here $p(y_{l}|y_{1},...,y_{l-1}, X)$, – our attention-based objective function –, is obtained according to a probability distribution, typically a softmax, applied to the linear projection of the output of a recurrent neural network (or long-short term memory network), called decoder, such as: p(yl|y1,...,yl-1, X) = softmax(lin(RNN())) The decoder output is conditioned by the previous output $y_{l-1}$, a hidden vector $d_{l-1}$ and a context vector $c_{l}$. Here $d_{l-1}$ denotes the high level representation (i.e. hidden states) of the decoder at step $l-1$, encoding the target input, and $c_{l}$ designate the context – or symbol-wise vector in our case – for decoding step $l$, which is computed as the sum of the complete high representation $h$ of another recurrent neural network, encoding the source input $X$, weighted by $\alpha $ the attention weight: cl = s=1S l, s hs , l, s = (et, s)s'=1S (el, s') where $e_{t}$, also referred to as energy, measures how well the inputs around position $s$ and the output at position $l$ match, given the decoder states at decoding step $l-1$ and $h$ the encoder states for input $X$. In the following, we report the standard content-based mechanism and its location-aware variant which takes into account the alignment produced at the previous step using convolutional features: el, s = {ll content-based:
wT (W dl - 1 + Vhs + b)
location-based:
fu = F - 1
wT (W dl - 1 + Vhs + Ufl, s + b) . where $w$ and $b$ are vectors, $W$ the matrix for the decoder, $V$ the matrix for the high representation $h$ and $U$ the matrix for the convolutional filters, that takes the previous alignment for location-based attention mechanism into account.
End-to-end systems for Speech Recognition ::: RNN transducer
The RNN transducer architecture was first introduced by Graves and al. BIBREF9 to address the main limitation of the proposed CTC network: it cannot model interdependencies as it assumes conditional independence between predictions at different time steps.
To tackle this issue, the authors introduced a CTC-like network augmented with a separate RNN network predicting each label given the previous ones, analogous to a language model. With the addition of another network taking into account both encoder and decoder outputs, the system can jointly model interdependencies between both inputs and outputs and within the output label sequence.
Although the CTC and RNN-transducer are similar, it should be noted that unlike CTC which represent a loss function, RNN-transducer defines a model structure composed of the following subnetworks :
The encoder or transcription network: from an input value $x_{t}$ at timestep $t$ this network yields an output vector $h_{t}$ of dimension $|\mathcal {A}+1|$, where $+1$ denotes the ${<}blank{>}$ label which acts similarly as in CTC model.
The prediction network: given as input the previous label prediction $y_{u-1} \in \mathcal {A}$, this network compute an output vector $d_{u}$ dependent of the entire label sequence $y_{0}, ..., y_{u-1}$.
The joint network: using both encoder outputs $h_{t}^{enc}$ and prediction outputs $d_{u}^{dec}$, it computes $z_{t,u}$ for each input $t$ in the encoder sequence and label $u$ in prediction network such as: ht, ujoint = tanh(htenc + hudec)
zt,u = lin(ht,ujoint)
The output from the joint network is then passed to a softmax layer which defines a probability distribution over the set of possible target labels, including the blank symbol.
It should be noted that we made a small modification compared to the last proposed version BIBREF0: instead of feeding the hidden activations of both networks into a separate linear layer, whose outputs are then normalised, we include another linear layer and feed each hidden activations to its corresponding linear layer which yields a vector of dimension $J$, the defined joint-space.
Similarly to the CTC, the marginalized alignments are local and monotonic and the label likelihood can be computed using dynamic programming. However, unlike CTC, RNN transducer allows prediction of multiple characters at one time step, alongside their vertical probability transitions.
End-to-end systems for Speech Recognition ::: Other notable approaches
Joint CTC-attention The key idea behind the joint CTC-Attention BIBREF11 learning approach is simple. By training simultaneously the encoder using the attention mechanism with a standard CTC objective function as an auxiliary task, monotonic alignments between speech and label sequences can be enforced to reduce the irregular alignments caused by large jumps or loops on the same frame in the attention-based model. The objective function below formulates the multi-task learning of the network, where $0\le \lambda \le 1$ is a tunable parameter weighting the contribution of each loss function: LMTL = Lctc + (1 - ) Latt
= log pctc(Y|x) + (1 - ) log patt(Y|x) The approach proposed in BIBREF12 introduced a joint-decoding method to take into account the CTC predictions in the beam-search based decoding process of the attention-based model. Considering the difficulty to combine their respective scores, the attention-based decoder performs the beam search character-synchronously whereas the CTC performs it frame-synchronously, two methods were proposed.
The first one is a two-pass decoding process where the complete hypotheses from the attention model are computed and then rescored according to the following equation, where $p_{\textrm {ctc}}(Y|x)$ is computed using the standard CTC forward-backward algorithm: Y = arg C A* { log pctc(Y|x) + (1 - ) log patt(Y|x)} The second method is a one-pass decoding method where the probability of each partial hypothesis in the beam search process is computed directly using both CTC and attention model such as, given $h$ the partial hypothesis and $\alpha $ the score defined as the log probability of the hypothesized sequence:
End-to-end lattice-free MMI The end-to-end Lattice-Free MMI BIBREF15 is the end-to-end version of the method introduced by Povey et al. BIBREF14. In this version, a flat-start manner is adopted in order to remove the need of training an initial HMM-GMM for alignments and the tree-building pipeline. Although the approach seems more like a flat-start adaptation of the state-of-art method than end-to-end in terms of pipeline and it does not benefit from the open-vocabulary property to construct unseen words compared to previously presented methods, we use it in our experiments as it showed small degradation over the original lattice-free MMI with different acoustic units. We can therefore contrast the orthographic differences in productions between open systems and more constrained ones where the relationship between acoustic units and a word-level representation is restricted.
RNN-transducer with attention The RNN transducer architecture augmented with attention mechanisms was first mentioned, to the best of our knowledge, in BIBREF13. Here, the prediction network described in SECREF3 is replaced by an attention-based decoder similar to the one described in SECREF2 and used in the joint CTC-attention. This modification allows the decoder to access acoustic information alongside the sequence of previous predictions. As the decoder output computation is not affected by this change (the decoder and joint outputs computation are not dependent on a particular choice of segmentation), the architecture can be trained with the same forward-backward algorithm used for standard RNN-transducer. Finally, unlike the previous hybrid procedure, the inference procedure can be performed frame-synchronously with an unmodified greedy or beam search algorithm.
Database
We carried out our experiments using the data provided during the ESTER evaluation campaign (Evaluation of Broadcast News enriched transcription systems) BIBREF16 which is one of the most commonly used corpus for the evaluation of French ASR. Evaluations are done on test set. The details of the dataset, corresponding to 6h34 of speech, are described in BIBREF16. We use the same normalization and scoring rules as in the evaluation plan of the ESTER 2 campaign except that we do not use equivalence dictionary and partially pronounced words are scored as full words.
To train the acoustic models we use the 90h of the training set from ESTER2 augmented by 75h from ESTER1 training set and 90h from the additional subset provided in ESTER1 with their transcriptions provided in the corpus EPAC BIBREF17. We removed segments containing less than 1,5 seconds of transcribed speech and we excluded the utterances corresponding to segments with more than 3000 input frames or sentences of more than 400 characters for the end-to-end models. Because some irregulars segment-utterance pairs remained, we re-segmented the training data using the GMM-HMM model (with LDA-MLLT-SAT features) we build our phone-based chain model upon. During re-segmentation, only the audio parts matching the transcripts are selected. This brings the training data to approximately 231h. For neural networks training, we have applied 3-fold speed perturbation BIBREF18 and volume perturbation with random volume scale factor between 0.25 and 2, leading to a total of training data of 700h.
For language modeling, we use the manual transcripts from the training set. We extend this set with manually selected transcriptions from other speech sources (BREF corpus BIBREF19, oral interventions in EuroParl from '96-'06 BIBREF20 and a small portion of transcriptions from internal projects). The final corpus is composed of 2 041 916 sentences, for a total of 46 840 583 words.
Implementations
All our systems share equivalent optimization – no rescoring technique or post-processing is done – as well as equivalent resource usage. Each system is kept to its initial form (i.e. no further training on top of the reported system).
Implementations ::: Acoustic units
For our experiments, three kind of acoustic units were chosen: phones, characters and subwords. The baseline phone-based systems use the standard 36 phones used in French. The CTC, attention and hybrid systems each have two versions: one for characters with 41 classes (26 letters from the Latin alphabet, 14 letters with a diacritic and apostrophe) and another version for subwords where the number of classes is set to 500, the final set of subword units used in our training being selected by using a subword segmentation algorithm based on a unigram language model BIBREF21 and implemented in Google's toolkit SentencePiece BIBREF22. For the end-to-end variant of the chain model, characters units are used with the 41 classes set.
Implementations ::: Baseline systems
We used the Kaldi toolkit BIBREF23 to train the chain model and its end-to-end variant.
The chain model is a TDNN-HMM model trained with the LF-MMI objective function. The neural network is based on a sub-sampled time-delay neural network (TDNN) with 7 TDNN layers and 1024 units in each, time stride value being set to 1 in the first three layers, 0 in the fourth layer and 3 in the final ones. The end-to-end version of the chain model is trained in the same way as the original model but with a different architecture. The network is composed of a 1 LSTM Projected layer BIBREF24 with 512 units followed by 2 TDNN layers of 512 units - these first three layers being repeated twice - and another 1 LSTM Projected layer with 512 units when using character as unit. The time delay value in the recurrent connections of the projected LSTM layers is set to 3.
As the input for our models, we use a 40-dimensional high resolution MFCC vector (i.e. linear transform of the filterbanks) and CMVN for both the chain model trained with lattice free-MMI and its end-to-end variant. We also trained separately a phone-based chain model with the previous 40-dimensional MFCC vector concatenated with a 100-dimensional i-vector BIBREF25 as input to assess the impact of speaker-dependant features.
For the linguistic part, we also trained a word 3-gram language model using SRILM's n-gram counting method BIBREF26 with KN discounting. As lexicon we use the phonetic dictionary provided by the LIUM, thus the vocabulary of our language model is limited to the most frequent 50k words found in our training texts and also present in their dictionary.
For the end-to-end version modeling characters, we replace the phonetic lexicon by an orthographic lexicon with the same entries, where the orthographic representation is the word sequence with space inserted between each character.
Implementations ::: End-to-end systems
We use the ESPNET toolkit BIBREF27 to train the five end-to-end systems. For each method two acoustic units are used: character and subword. Ten epochs are used to train each model.
The acoustic models for all methods share the same architecture composed of VGG bottleneck BIBREF12 followed by a 3-layer bidirectional LSTM with 1024 units in each layer and each direction. For the models using attention mechanism we use a 1-layer LSTM with 1024 units and location-based mechanism with 10 centered convolution filters of width 100 for the convolutional feature extraction as decoder. When training jointly CTC and attention, $\lambda $ was set to $0.3$ based on preliminary experiments. For RNN-transducer the joint space between encoder and decoder was set to 1024 dimensions.
The input features for these models are a 80-dimensional raw filterbanks vector with their first and second derivatives with cepstral mean normalization (CMN).
For the experiments involving language models, we trained three different models using the RNNLM module available in ESPNET: one with characters, another with subwords and the last one with full words for multi-level combination when dealing with characters as units. Each model is incorporated at inference time using shallow fusion BIBREF28, except for the word-LM relying on multi-level decoding BIBREF29. The main architecture of our RNNLMs is a 1-layer RNN, the number of units in each layer depending of the target unit: 650 units for subwords and characters, and 1024 units for words. Unlike the systems described above, the vocabulary for the word-based RNNLM was limited according to the training texts only.
In order to directly compare the baseline systems to the end-to-end systems relying on different word-based LM (i.e. N-gram and RNN-based), another RNNLM was trained using available tools in Kaldi. The language model shares the same architecture as the word-RNNLM described in this subsection and was trained with equivalent training parameters. Following lattice rescoring approach proposed in BIBREF30, decoding was then performed with the RNNLM for all baseline systems. We observe a maximal WER improvement of 0.12% on the dev set and 0.16% on the test set compared to the systems relying on the original 3-gram. Adding to that a difference of less than 1.3% between words in language model vocabularies for baseline and end-to-end systems, we thus consider minimal the impact for our comparison.
Implementations ::: Decoding
To measure the best performance, we set the beam size to 30 in decoding under all conditions and for all models. When decoding with the attention-only model, we do not use sequence length control parameters such as coverage term or length normalization parameters BIBREF31. When joint-decoding, $\lambda $ is set to 0.2 based on our preliminary experiments. For CTC and attention experiments involving a RNNLM, the language model weight during decoding is set to respectively $0.3$ for character and subword LM, and $1.0$ for the word LM. For RNN-transducer, we downscale the use of external language model when performing multi-level LM decoding, setting the value to $0.3$.
Results
The results of our experiments in terms of Character Error Rate (CER) and Word Error Rate (WER) on the test set are gathered in the Table TABREF12. For CER we also report errors in the metric: correct, substituted, inserted and deleted characters.
It should be noted that the default CER computation in all frameworks does not use a special character for space during scoring. As important information relative to this character, denoting word-boundary errors, can be observed through the WER variation during comparison, we kept the initial computation for CER. Thus, for low CER variations, bigger WER differences are expected notably between traditional and end-to-end systems.
Results ::: Baseline systems
The phone-based chain model trained with lattice-free MMI criterion has a WER of 14.2 on the test set. Compared to the best reported system during the ESTER campaign (WER 12.1% BIBREF16), the performance show a relative degradation of 14.8%. Although the compared system rely on a HMM-GMM architecture, it should be noted that a triple-pass rescoring (+ post-processing) is applied, a consequent number of parameters is used, and a substantial amount of data is used for training the language model (more than 11 times our volume). Adding i-vectors features the performance of our model is further improved, leading to a WER of 13.7.
For the end-to-end phone-based system we denote a small WER degradation of 0.2% compared to the original system without i-vectors, which is a good trade-off considering the removal of the initial HMM-GMM training. Switching to characters as acoustic units we obtain a WER of 14.8, corresponding to a CER of 7.6. The detailed report show that all types of errors are quite balanced, with however a higher number of deletions. The system remains competitive even with orthographic units, despite the low correspondence between phonemes and letters in French. On the same note, a plain conversion of phonetic lexicon to a grapheme-based one does not negatively impact the performances. This was not excepted considering the use of alternative phonetic representation in French to denotes possible liaisons (the pronunciation of the final consonant of a word immediately before a following vowel sound in preceding word).
Results ::: End-to-end systems
Character-based models While, without language model, the attention-based model outperforms CTC model as expected, RNN-transducer performances exceed our initial estimations, surpassing previous models in terms of CER and WER. RNN-transducer even outperforms these models coupled with language model, regardless of the level of knowledge included (character and word-level). The CER obtained with this model is 8.5 while the WER is 19.7. This represent a relative decrease of almost 40% for the CER and 17% for the WER against the attention-based model with word LM, the second best system for classic end-to-end. Compared to the end-to-end chain model system modeling characters, we observe a small CER difference of 0.9 which corresponds to a WER difference of 4.9. While the CER is competitive, errors at word-level seem to indicate difficulties to model word boundaries compared to baseline systems.
Extending the comparison to hybrid models, only the RNN-transducer with attention mechanism could achieve similar or better results than its vanilla version. Although the joint CTC-attention procedure is beneficial to correct some limitations from individual approaches, the system can only reach a CER of 10.4 equivalent to a WER of 22.1. However, by adding word LM and using multi-level decoding, the system can achieve closer WER performance (18.6) despite the significant difference in terms of CER (9.6).
For the hybrid transducer relying on additional attention module, performances in all experiments are further improved compared to standard, reaching 8.2% CER and 19.1% WER without language model.
Concerning the best systems, it should be noted that the RNN-transducer performance is further improved with the use of language model, obtaining a CER of 8.0, close to our baseline score (7.6), with a word LM. In terms of WER it represents a relative improvement of 8.5% against previous results, which is however still far from the performance denoted with the baseline system for this metric (14.8%). For the RNN-transducer with an attention decoder, we achieve even better performance with a CER of 7.8 equivalent to a WER of 17.6. This is our best model with characters as acoustic units.
Focusing on the CER report, several observations can be made :
Insertion errors are lower for CTC models than attention-based systems, with the addition of language models included. Attention-based are expected to have higher number of deletions or insertions depending of the length difference between input and output sequences, it is however unanticipated to observe such a high number of deletion errors.
Following the last observation, we investigated the deletion errors done by the attention-only model. From what we found, the main reason is the existence of irregular segment-utterance pairs in the dataset (i.e: really low correspondence). Using coverage, penalty or length ratio terms helped on problematic pairs but degraded the global performances, regular short or long pairs being impacted.
Adding a language model decreases all errors in CTC systems while only deletion errors decrease in the attention-only system. Coupled with a word language model, substitution errors are even higher for attention model.
Similar observations can be made for RNN transducer. While we observe a small decrease of insertion errors with the addition of a language model, we also see a small increase in deletion errors. However the system is more impacted by the insertion changes as the number of substitutions decrease and the number of correct words increase.
Despite similar CER performances between CTC model with word LM and attention-only model with character or word LM for example, the first system cannot reach the word error rate of the second systems. It is beneficial to model linguistic information alongside acoustic information rather than in an external language model being at character or word level, although both can be combined to reach better performances. However, we should also consider that the training data for the acoustic model is the same as the data used to train the LM, augmented with a volume equivalent to less than a quarter of the initial training sentences.
Comparing the end-to-end chain model modeling characters to the RNN-transducer with language models, we can extract several useful information. Deletion errors made by the transducer are more influential at word level than the insertion errors made by the baseline system. From the hypotheses we observed that the insertion errors mostly happen on ambiguous verbal forms, gender forms or singular/plural forms in the baseline system. For the transducer, the same behaviour is observed however deletion errors at character level mostly happen on small words (such as article), common names and proper names which are numerous in the corpus.
Although we observe a smaller number of substitutions at character level for the RNN-transducer with or without attention compared to the baseline system, substitution errors impact more words than the baseline system. These errors are mostly due to the same problems described previously, while substitutions in baseline systems are more localized due particularly to the presence of OOV and ambiguous words.
Considering all the previous observations, further investigation should be done to compare and categorize errors at character and word level in each system and also assess the value of these errors. The character errors reported for RNN-transducer with attention should be sufficient motivation as we report, against the baseline system, a lower number of substitution and insertion errors coupled to an equivalent number of correct words despite a significant gap in WER performance.
Subword-based models Replacing characters with subword units improves the overall performance of all end-to-end methods. The gain is particularly important for CTC lowering the WER from $42.3$ to $28.4$ without language model. The gain observed when adding the language model to CTC is impressive with a relative improvement of almost 28% on WER (from $28.4$ to $21.2$). For the system relying only on attention, the WER is further improved without and with language model but, unlike when we used characters, the model is outperformed on both CER and WER by the model relying on CTC. Although we observe a similar CER for both methods we also note a significant difference in terms of correct characters and WER (almost $6\%$). The attention making mostly consecutive mistakes on the same words or groups of words (particularly at the beginning and end of utterances) while the CTC tends to recognize part of words as independent, thus incorrectly recognizing word boundaries. Adding RNN-transducer to the comparison, both previous methods are surpassed, on CER ($20.1$ for CTC, $17.5$ for attention and $15.2$ for transducer) and on WER ($21.1$ for CTC, $21.8$ for attention and $18.4$ for transducer). Decoding with an external language model, the CER and WER are further improved by about $5.5\%$ and $6.0\%$. It should be noted that the transducer model without language model exceed CTC and attention coupled to the subword LM.
Adding the hybrid systems to the comparison, we note some differences compared to character-based systems. The RNN-transducer is not improved by the attention mechanism and is even slightly degraded on both CER and WER. The same observation holds with and without the LM. It seems the attention mechanism has more difficulty modeling intra-subword relations than intra-character relations. Further work is needed to extend the comparison to different attention mechanisms, such as multi-head attention, and to estimate the influence of the architecture depending on output dimensions and representations.
Concerning the last hybrid system, joint CTC-attention is better suited to subwords than to characters, reaching performance comparable to the transducers even without a language model: 18.7% against $18.4$ and $18.5$ for the transducer-based systems. Although the transducer is reported as our best system, it should be noted that joint CTC-attention reaches equal or better performance on subword errors. In terms of the conventional ASR metric only, we consider the two hybrid systems and the vanilla transducer equivalent for subword units.
As in the previous section, we also examined the detailed error report and noted some differences compared to the previous observations:
As with characters, insertion errors with subwords are lower for the CTC model ($1.4\%$) than for attention-based models ($3.6\%$). Here, however, the number of insertions for CTC is even lower than for all other methods, the transducer and hybrid systems showing an average insertion error of $2.5\%$.
Previously, we noted that a higher number of deletions or insertions should be expected with the attention-only model. With subword units, we observe a balanced number of deletions and insertions, although we also note a significant number of substitutions. Following this new observation, we investigated the orthographic output of both models. We found that the earlier limitation of the attention model was mostly removed, with word sequences now being properly unrolled and terminated. However, this translated into a very large number of substitutions, some subwords within a word being repeated or cut.
Although we report a higher number of correct words and a lower number of errors for joint CTC-attention, the hybrid method obtains a higher WER than the RNN-transducer and its hybrid version. Analyzing the hypotheses and error distributions of both systems, we could not extract any relevant information explaining the number of words impacted by the character-level errors.
On the same note, the following difference should still be highlighted: transducer-based models have fewer substitutions and an equivalent or lower number of insertions, whereas joint CTC-attention has fewer deletions and an equivalent or higher number of correct characters. Apart from correct labels, only CTC shows a similar error distribution.
In the case of joint CTC-attention, using CTC as an auxiliary objective brings clear benefits: the numbers of substitutions and insertions are further reduced compared to the attention-only model. Additionally, the number of deletions stays in the same range despite the high number of deletions of the CTC-only model. In the case of the additional attention module for the RNN-transducer, although the attention-only model has fewer deletion errors ($3.6$ versus $4.1$ for the RNN-transducer), including the attention mechanism did not help to reduce this number: the error distribution is the same with and without attention. It should also be noted that the RNN-transducer with attention performs equivalently with character and subword units.
Adding language models lowers all error types. The only exceptions are the number of insertions for CTC (raised from $1.4\%$ to $2.3$), and the number of deletions for the RNN-transducer (from $4.1\%$ to $4.3$) and its hybrid counterpart (from $4.1$ to $4.4$). In these cases, and similarly to what we observed with character units, one error type (e.g. insertion) increases while the other (e.g. deletion) decreases.
Conclusion
In this paper, we experimentally showed that end-to-end approaches with different orthographic units are well suited to modeling the French language. The RNN-transducer was found especially competitive with character units compared to other end-to-end approaches. Between the two orthographic units, subwords proved beneficial for most methods to address the problems described in section SECREF14 and to retain information on ambiguous patterns in French. Extended with language models, these systems obtain promising results compared to traditional phone-based systems. The best performing system with character units is the RNN-transducer with an additional attention module, achieving 7.8% CER and 17.6% WER. For subword units, the classic RNN-transducer, the RNN-transducer with attention and joint CTC-attention show comparable subword error rate and WER, the first being slightly better on WER ($17.4\%$) and the last having a lower subword error rate ($14.5\%$).
However, we also showed that each method produces different errors, with a different impact at word level depending on the approach and the units. Thus, future work will focus on analysing the orthographic output of these systems in two ways: 1) investigate the errors produced by the end-to-end methods and explore several approaches to correct common errors made in French, and 2) compare the end-to-end methods in an SLU context and evaluate the semantic value of the partially correct produced words. | Unanswerable
acc512c57aef4d5a15c15e3593f0a9b3e7e7e8b8 | acc512c57aef4d5a15c15e3593f0a9b3e7e7e8b8_0 | Q: What are the existing end-to-end ASR approaches for the French language?
Text: Introduction
Automatic Speech Recognition (ASR) has traditionally used Hidden Markov Models (HMM), describing temporal variability, combined with Gaussian Mixture Models (GMM), computing emission probabilities from HMM states, to model and map acoustic features to phones. In recent years, the introduction of deep neural networks replacing GMM for acoustic modeling showed huge improvements compared to previous state-of-the-art systems BIBREF0, BIBREF1. However, building and training such systems can be complex and a lot of preprocessing steps are involved. Traditional ASR systems are also factorized in several modules, the acoustic model representing only one of them along with lexicon and language models.
Recently, more direct approaches – called end-to-end methods – in which neural architectures are trained to directly model sequences of features as characters have been proposed BIBREF2, BIBREF3, BIBREF4. Predicting context independent targets such as characters using a single neural network architecture, drained a lot of interest from the research community as well as non-experts developers. This is caused by the simplicity of the pipeline and the possibility to create a complete ASR system without the need for expert knowledge. Moreover having an orthographic-based output allows to freely construct words, making it interesting against the Out-Of-Vocabulary problem encountered in traditional ASR systems.
End-to-end systems are nowadays extensively used and studied for multiple tasks and languages such as English, Mandarin or Japanese. However, for a language such as French, ASR performance and results with the existing methods have been scarcely studied, although the large number of silent letters, homophones or argot makes comparing the assumptions made by each method very attractive.
In this context, we decided to study the three main types of architectures which have demonstrated promising results over traditional systems: 1) Connectionist Temporal Classification (CTC) BIBREF5, BIBREF6 which uses Markov assumptions (i.e. conditional independence between predictions at each time step) to efficiently solve sequential problems by dynamic programming, 2) Attention-based methods BIBREF7, BIBREF8 which rely on an attention mechanism to perform non-monotonic alignment between acoustic frames and recognized acoustic units and 3) RNN-tranducer BIBREF0, BIBREF9, BIBREF10 which extends CTC by additionally modeling the dependencies between outputs at different steps using a prediction network analogous to a language model. We extend our experiments by adding two hybrid end-to-end methods: a multi-task method called joint CTC-attention BIBREF11, BIBREF12 and a RNN-transducer extended with attention mechanisms BIBREF13. To complete our review, we build a state-of-art phone-based system based on lattice-free MMI criterion BIBREF14 and its end-to-end counterpart with both phonetic and orthographic units BIBREF15.
End-to-end systems for Speech Recognition ::: Connectionist Temporal Classification
The CTC BIBREF5 can be seen as a direct translation of conventional HMM-DNN ASR systems into lexicon-free systems. Thus, the CTC follows the general ASR formulation, training the model to maximize $P(Y|X)$, the probability distribution over all possible label sequences: $\hat{Y} = \arg \max _{Y \in \mathcal {A}^{*}} p(Y|X)$. Here, $X$ denotes the observations, $Y$ is a sequence of acoustic units of length $L$ such that $Y = \lbrace y_{l} \in \, \mathcal {A} \mid l = 1, ..., L\rbrace $, where $\mathcal {A}$ is an alphabet containing all distinct units. As in traditional HMM-DNN systems, the CTC model makes conditional independence assumptions between output predictions at different time steps given aligned inputs, and it uses the probabilistic chain rule to factorize the posterior distribution $p(Y|X)$ into three distributions (i.e. framewise posterior distribution, transition probability and prior distribution of units). However, unlike HMM-based models, the framewise posterior distribution is defined here over a framewise acoustic unit sequence $B$ with an additional blank label ${<}blank{>}$, such that $B = \lbrace b_{t} \in \, \mathcal {A}\, \cup \, {<}blank{>} \mid t = 1, ..., T\rbrace $: $p(Y|X) = \sum _{B} \prod _{t=1}^{T} p(b_{t} \mid b_{t-1}, Y)\, \underbrace{p(b_{t} \mid X)}_{p_{ctc}(Y|X)}\, p(Y)$.
Here, ${<}blank{>}$ introduces two contraction rules for the output labels, allowing to repeat or collapse successive acoustic units.
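A minimal sketch of these two contraction rules, assuming a plain Python representation of the framewise sequence $B$ (the label strings are illustrative):

```python
# Map a framewise CTC output B to a label sequence Y: collapse repeated labels,
# then drop <blank>; a blank between two identical labels preserves a true repetition.
BLANK = "<blank>"

def ctc_collapse(frame_labels):
    out, prev = [], None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

assert ctc_collapse(["b", "b", BLANK, "o", "o", BLANK, BLANK, "n"]) == ["b", "o", "n"]
assert ctc_collapse(["l", BLANK, "l"]) == ["l", "l"]
```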
End-to-end systems for Speech Recognition ::: Attention-based model
As opposed to CTC, the attention-based approach BIBREF7, BIBREF8 does not assume conditional independence between predictions at different time steps and does not marginalize over all alignments. Thus the posterior distribution $p(Y|X)$ is directly computed by picking a soft alignment between each output step and every input step as follows: $p_{att}(Y|X) = \prod _{l=1}^{U} p(y_{l} \mid y_{1}, ..., y_{l-1}, X)$. Here $p(y_{l}|y_{1},...,y_{l-1}, X)$, our attention-based objective function, is obtained from a probability distribution, typically a softmax, applied to the linear projection of the output of a recurrent neural network (or long short-term memory network), called the decoder: $p(y_{l}|y_{1},...,y_{l-1}, X) = \mathrm{softmax}(\mathrm{lin}(\mathrm{RNN}(\cdot )))$. The decoder output is conditioned on the previous output $y_{l-1}$, a hidden vector $d_{l-1}$ and a context vector $c_{l}$. Here $d_{l-1}$ denotes the high-level representation (i.e. hidden states) of the decoder at step $l-1$, encoding the target input, and $c_{l}$ designates the context, or symbol-wise vector in our case, for decoding step $l$, which is computed as the sum of the complete high-level representation $h$ of another recurrent neural network, encoding the source input $X$, weighted by the attention weights $\alpha $: $c_{l} = \sum _{s=1}^{S} \alpha _{l, s} h_{s}$, with $\alpha _{l, s} = \frac{\exp (e_{l, s})}{\sum _{s^{\prime }=1}^{S} \exp (e_{l, s^{\prime }})}$, where $e_{l, s}$, also referred to as energy, measures how well the inputs around position $s$ and the output at position $l$ match, given the decoder state at decoding step $l-1$ and the encoder states $h$ for input $X$. In the following, we report the standard content-based mechanism and its location-aware variant, which takes into account the alignment produced at the previous step using convolutional features: $e_{l, s} = \begin{cases} w^{T}(W d_{l-1} + V h_{s} + b) & \text{content-based} \\ w^{T}(W d_{l-1} + V h_{s} + U f_{l, s} + b), \quad f_{l} = F \ast \alpha _{l-1} & \text{location-aware} \end{cases}$
where $w$ and $b$ are vectors, $W$ the matrix for the decoder state, $V$ the matrix for the high-level representation $h$, and $U$ the matrix applied to the convolutional features, which take the previous alignment into account for the location-aware attention mechanism.
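A hedged sketch of the two energy functions above in PyTorch-style Python; the tensor shapes, module names and exact convolution setup are assumptions, and many implementations additionally wrap the sum in a tanh before projecting with $w$:

```python
import torch
import torch.nn.functional as F

S, H = 120, 1024                       # assumed encoder length and hidden size
h = torch.randn(S, H)                  # encoder states h_s
d_prev = torch.randn(H)                # decoder state d_{l-1}
w, b = torch.randn(H), torch.zeros(H)
W, V = torch.randn(H, H), torch.randn(H, H)
U = torch.randn(10, H)                 # 10 convolutional features -> hidden size

# content-based energy e_{l,s}
e_content = (d_prev @ W + h @ V + b) @ w                        # (S,)

# location-aware energy: convolve the previous alignment with 10 centered filters
alpha_prev = torch.full((S,), 1.0 / S)
conv = torch.nn.Conv1d(1, 10, kernel_size=101, padding=50)       # width ~100, centered
f = conv(alpha_prev.view(1, 1, S)).squeeze(0).transpose(0, 1)    # (S, 10)
e_location = (d_prev @ W + h @ V + f @ U + b) @ w                # (S,)

alpha = F.softmax(e_location, dim=0)                             # attention weights
context = (alpha.unsqueeze(1) * h).sum(dim=0)                    # context vector c_l
```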
End-to-end systems for Speech Recognition ::: RNN transducer
The RNN transducer architecture was first introduced by Graves et al. BIBREF9 to address the main limitation of the CTC network: it cannot model output interdependencies, as it assumes conditional independence between predictions at different time steps.
To tackle this issue, the authors introduced a CTC-like network augmented with a separate RNN network predicting each label given the previous ones, analogous to a language model. With the addition of another network taking into account both encoder and decoder outputs, the system can jointly model interdependencies between both inputs and outputs and within the output label sequence.
Although the CTC and RNN-transducer are similar, it should be noted that unlike CTC which represent a loss function, RNN-transducer defines a model structure composed of the following subnetworks :
The encoder or transcription network: from an input value $x_{t}$ at timestep $t$ this network yields an output vector $h_{t}$ of dimension $|\mathcal {A}+1|$, where $+1$ denotes the ${<}blank{>}$ label which acts similarly as in CTC model.
The prediction network: given as input the previous label prediction $y_{u-1} \in \mathcal {A}$, this network compute an output vector $d_{u}$ dependent of the entire label sequence $y_{0}, ..., y_{u-1}$.
The joint network: using both encoder outputs $h_{t}^{enc}$ and prediction outputs $d_{u}^{dec}$, it computes $z_{t,u}$ for each frame $t$ of the encoder sequence and each label position $u$ of the prediction network as: $h_{t, u}^{joint} = \tanh (h_{t}^{enc} + d_{u}^{dec})$
$z_{t,u} = \mathrm{lin}(h_{t,u}^{joint})$
The output from the joint network is then passed to a softmax layer which defines a probability distribution over the set of possible target labels, including the blank symbol.
It should be noted that we made a small modification compared to the last proposed version BIBREF0: instead of feeding the hidden activations of both networks into a separate linear layer, whose outputs are then normalised, we include another linear layer and feed each hidden activations to its corresponding linear layer which yields a vector of dimension $J$, the defined joint-space.
Similarly to the CTC, the marginalized alignments are local and monotonic and the label likelihood can be computed using dynamic programming. However, unlike CTC, RNN transducer allows prediction of multiple characters at one time step, alongside their vertical probability transitions.
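A hedged sketch of the modified joint network described above, written with PyTorch modules; the dimensions and class names are assumptions rather than the exact implementation used here:

```python
import torch
import torch.nn as nn

class JointNetwork(nn.Module):
    def __init__(self, enc_dim, dec_dim, joint_dim, n_labels):
        super().__init__()
        self.enc_proj = nn.Linear(enc_dim, joint_dim)   # separate layer for h_t^enc
        self.dec_proj = nn.Linear(dec_dim, joint_dim)   # separate layer for d_u^dec
        self.out = nn.Linear(joint_dim, n_labels + 1)   # +1 for <blank>

    def forward(self, h_enc, d_dec):
        # h_enc: (T, enc_dim), d_dec: (U, dec_dim) -> logits z: (T, U, n_labels + 1)
        joint = torch.tanh(self.enc_proj(h_enc).unsqueeze(1)
                           + self.dec_proj(d_dec).unsqueeze(0))
        return self.out(joint)

# e.g. 1024-dim encoder/decoder states, 1024-dim joint space, 41 character classes
z = JointNetwork(1024, 1024, 1024, 41)(torch.randn(200, 1024), torch.randn(30, 1024))
```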
End-to-end systems for Speech Recognition ::: Other notable approaches
Joint CTC-attention. The key idea behind the joint CTC-attention BIBREF11 learning approach is simple. By simultaneously training the attention-based encoder with a standard CTC objective function as an auxiliary task, monotonic alignments between speech and label sequences can be enforced, reducing the irregular alignments caused by large jumps or loops on the same frame in the attention-based model. The objective function below formulates the multi-task learning of the network, where $0\le \lambda \le 1$ is a tunable parameter weighting the contribution of each loss function: $\mathcal {L}_{MTL} = \lambda \mathcal {L}_{ctc} + (1 - \lambda ) \mathcal {L}_{att} = \lambda \log p_{ctc}(Y|x) + (1 - \lambda ) \log p_{att}(Y|x)$
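A hedged sketch of this multi-task objective with PyTorch loss functions; target preparation (padding masks, start/end symbols) is omitted and the function signature is an assumption, with the 0.3 weight taken from the experimental setup described later:

```python
import torch.nn.functional as F

lam = 0.3   # lambda weighting the CTC term

def joint_ctc_attention_loss(ctc_log_probs, enc_lens, att_logits,
                             targets, target_lens, blank=0):
    # CTC branch: log-probs of shape (T, batch, vocab) against the label sequences
    ctc = F.ctc_loss(ctc_log_probs, targets, enc_lens, target_lens, blank=blank)
    # attention branch: per-step cross-entropy of the decoder outputs
    att = F.cross_entropy(att_logits.view(-1, att_logits.size(-1)), targets.view(-1))
    return lam * ctc + (1.0 - lam) * att
```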
The approach proposed in BIBREF12 introduced a joint-decoding method to take the CTC predictions into account in the beam-search decoding of the attention-based model. Because their respective scores are difficult to combine directly (the attention-based decoder performs the beam search character-synchronously whereas CTC performs it frame-synchronously), two methods were proposed.
The first one is a two-pass decoding process where the complete hypotheses from the attention model are computed and then rescored according to the following equation, where $p_{\textrm {ctc}}(Y|x)$ is computed using the standard CTC forward-backward algorithm: $\hat{Y} = \arg \max _{Y \in \mathcal {A}^{*}} \lbrace \lambda \log p_{ctc}(Y|x) + (1 - \lambda ) \log p_{att}(Y|x)\rbrace $. The second method is a one-pass decoding method where the score of each partial hypothesis in the beam search is computed directly using both the CTC and attention models: given a partial hypothesis $h$ and its score $\alpha (h)$, defined as the log probability of the hypothesized sequence, $\alpha (h) = \lambda \alpha _{ctc}(h) + (1 - \lambda ) \alpha _{att}(h)$.
End-to-end lattice-free MMI The end-to-end Lattice-Free MMI BIBREF15 is the end-to-end version of the method introduced by Povey et al. BIBREF14. In this version, a flat-start manner is adopted in order to remove the need of training an initial HMM-GMM for alignments and the tree-building pipeline. Although the approach seems more like a flat-start adaptation of the state-of-art method than end-to-end in terms of pipeline and it does not benefit from the open-vocabulary property to construct unseen words compared to previously presented methods, we use it in our experiments as it showed small degradation over the original lattice-free MMI with different acoustic units. We can therefore contrast the orthographic differences in productions between open systems and more constrained ones where the relationship between acoustic units and a word-level representation is restricted.
RNN-transducer with attention The RNN transducer architecture augmented with attention mechanisms was first mentioned, to the best of our knowledge, in BIBREF13. Here, the prediction network described in SECREF3 is replaced by an attention-based decoder similar to the one described in SECREF2 and used in the joint CTC-attention. This modification allows the decoder to access acoustic information alongside the sequence of previous predictions. As the decoder output computation is not affected by this change (the decoder and joint outputs computation are not dependent on a particular choice of segmentation), the architecture can be trained with the same forward-backward algorithm used for standard RNN-transducer. Finally, unlike the previous hybrid procedure, the inference procedure can be performed frame-synchronously with an unmodified greedy or beam search algorithm.
Database
We carried out our experiments using the data provided during the ESTER evaluation campaign (Evaluation of Broadcast News enriched transcription systems) BIBREF16 which is one of the most commonly used corpus for the evaluation of French ASR. Evaluations are done on test set. The details of the dataset, corresponding to 6h34 of speech, are described in BIBREF16. We use the same normalization and scoring rules as in the evaluation plan of the ESTER 2 campaign except that we do not use equivalence dictionary and partially pronounced words are scored as full words.
To train the acoustic models we use the 90h of the training set from ESTER2 augmented by 75h from ESTER1 training set and 90h from the additional subset provided in ESTER1 with their transcriptions provided in the corpus EPAC BIBREF17. We removed segments containing less than 1,5 seconds of transcribed speech and we excluded the utterances corresponding to segments with more than 3000 input frames or sentences of more than 400 characters for the end-to-end models. Because some irregulars segment-utterance pairs remained, we re-segmented the training data using the GMM-HMM model (with LDA-MLLT-SAT features) we build our phone-based chain model upon. During re-segmentation, only the audio parts matching the transcripts are selected. This brings the training data to approximately 231h. For neural networks training, we have applied 3-fold speed perturbation BIBREF18 and volume perturbation with random volume scale factor between 0.25 and 2, leading to a total of training data of 700h.
For language modeling, we use the manual transcripts from the training set. We extend this set with manually selected transcriptions from other speech sources (BREF corpus BIBREF19, oral interventions in EuroParl from '96-'06 BIBREF20 and a small portion of transcriptions from internal projects). The final corpus is composed of 2 041 916 sentences, for a total of 46 840 583 words.
Implementations
All our systems share equivalent optimization – no rescoring technique or post-processing is done – as well as equivalent resource usage. Each system is kept to its initial form (i.e. no further training on top of the reported system).
Implementations ::: Acoustic units
For our experiments, three kinds of acoustic units were chosen: phones, characters and subwords. The baseline phone-based systems use the standard 36 French phones. The CTC, attention and hybrid systems each have two versions: one for characters with 41 classes (26 letters from the Latin alphabet, 14 letters with a diacritic, and the apostrophe) and another for subwords where the number of classes is set to 500, the final set of subword units being selected with a subword segmentation algorithm based on a unigram language model BIBREF21, as implemented in Google's SentencePiece toolkit BIBREF22. For the end-to-end variant of the chain model, character units are used with the 41-class set.
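A hedged sketch of training such a 500-unit unigram SentencePiece model; the file names and the character_coverage setting are assumptions, not the exact configuration used here:

```python
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="train_text.txt",        # training transcripts, one sentence per line
    model_prefix="subword500",
    vocab_size=500,
    model_type="unigram",          # unigram-LM segmentation (Kudo, 2018)
    character_coverage=1.0,        # keep all French characters (accents, apostrophe)
)

sp = spm.SentencePieceProcessor(model_file="subword500.model")
print(sp.encode("c'était une bonne journée", out_type=str))
```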
Implementations ::: Baseline systems
We used the Kaldi toolkit BIBREF23 to train the chain model and its end-to-end variant.
The chain model is a TDNN-HMM model trained with the LF-MMI objective function. The neural network is based on a sub-sampled time-delay neural network (TDNN) with 7 TDNN layers and 1024 units in each, time stride value being set to 1 in the first three layers, 0 in the fourth layer and 3 in the final ones. The end-to-end version of the chain model is trained in the same way as the original model but with a different architecture. The network is composed of a 1 LSTM Projected layer BIBREF24 with 512 units followed by 2 TDNN layers of 512 units - these first three layers being repeated twice - and another 1 LSTM Projected layer with 512 units when using character as unit. The time delay value in the recurrent connections of the projected LSTM layers is set to 3.
As the input for our models, we use a 40-dimensional high resolution MFCC vector (i.e. linear transform of the filterbanks) and CMVN for both the chain model trained with lattice free-MMI and its end-to-end variant. We also trained separately a phone-based chain model with the previous 40-dimensional MFCC vector concatenated with a 100-dimensional i-vector BIBREF25 as input to assess the impact of speaker-dependant features.
For the linguistic part, we also trained a word 3-gram language model using SRILM's n-gram counting method BIBREF26 with KN discounting. As lexicon we use the phonetic dictionary provided by the LIUM, thus the vocabulary of our language model is limited to the most frequent 50k words found in our training texts and also present in their dictionary.
For the end-to-end version modeling characters, we replace the phonetic lexicon by an orthographic lexicon with the same entries, where the orthographic representation is the word sequence with space inserted between each character.
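A minimal sketch of this lexicon conversion, with an illustrative word:

```python
# Replace a phonetic lexicon entry by its orthographic form: the "pronunciation"
# becomes the word spelled out character by character.
def orthographic_entry(word):
    return f"{word}\t{' '.join(word)}"

print(orthographic_entry("bonjour"))   # bonjour	b o n j o u r
```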
Implementations ::: End-to-end systems
We use the ESPNET toolkit BIBREF27 to train the five end-to-end systems. For each method two acoustic units are used: character and subword. Ten epochs are used to train each model.
The acoustic models for all methods share the same architecture composed of VGG bottleneck BIBREF12 followed by a 3-layer bidirectional LSTM with 1024 units in each layer and each direction. For the models using attention mechanism we use a 1-layer LSTM with 1024 units and location-based mechanism with 10 centered convolution filters of width 100 for the convolutional feature extraction as decoder. When training jointly CTC and attention, $\lambda $ was set to $0.3$ based on preliminary experiments. For RNN-transducer the joint space between encoder and decoder was set to 1024 dimensions.
The input features for these models are 80-dimensional raw filterbank vectors with their first and second derivatives and cepstral mean normalization (CMN).
For the experiments involving language models, we trained three different models using the RNNLM module available in ESPNET: one with characters, another with subwords and the last one with full words for multi-level combination when dealing with characters as units. Each model is incorporated at inference time using shallow fusion BIBREF28, except for the word-LM relying on multi-level decoding BIBREF29. The main architecture of our RNNLMs is a 1-layer RNN, the number of units in each layer depending of the target unit: 650 units for subwords and characters, and 1024 units for words. Unlike the systems described above, the vocabulary for the word-based RNNLM was limited according to the training texts only.
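A minimal sketch of shallow fusion at scoring time, with illustrative hypotheses and scores; real decoders combine these scores inside the beam search rather than on complete candidates, and the 0.3 weight follows the decoding setup described below:

```python
# Interpolate the acoustic-model score of a hypothesis with an external LM score.
def shallow_fusion_score(am_log_prob, lm_log_prob, beta):
    return am_log_prob + beta * lm_log_prob

candidates = [("bon jour", -4.2, -1.1), ("bonjour", -4.4, -0.3)]  # (text, AM, LM)
beta = 0.3
best = max(candidates, key=lambda c: shallow_fusion_score(c[1], c[2], beta))
```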
In order to directly compare the baseline systems to the end-to-end systems relying on different word-based LM (i.e. N-gram and RNN-based), another RNNLM was trained using available tools in Kaldi. The language model shares the same architecture as the word-RNNLM described in this subsection and was trained with equivalent training parameters. Following lattice rescoring approach proposed in BIBREF30, decoding was then performed with the RNNLM for all baseline systems. We observe a maximal WER improvement of 0.12% on the dev set and 0.16% on the test set compared to the systems relying on the original 3-gram. Adding to that a difference of less than 1.3% between words in language model vocabularies for baseline and end-to-end systems, we thus consider minimal the impact for our comparison.
Implementations ::: Decoding
To measure the best performance, we set the beam size to 30 in decoding under all conditions and for all models. When decoding with the attention-only model, we do not use sequence length control parameters such as coverage term or length normalization parameters BIBREF31. When joint-decoding, $\lambda $ is set to 0.2 based on our preliminary experiments. For CTC and attention experiments involving a RNNLM, the language model weight during decoding is set to respectively $0.3$ for character and subword LM, and $1.0$ for the word LM. For RNN-transducer, we downscale the use of external language model when performing multi-level LM decoding, setting the value to $0.3$.
Results
The results of our experiments in terms of Character Error Rate (CER) and Word Error Rate (WER) on the test set are gathered in the Table TABREF12. For CER we also report errors in the metric: correct, substituted, inserted and deleted characters.
It should be noted that the default CER computation in all frameworks does not use a special character for space during scoring. As important information relative to this character, denoting word-boundary errors, can be observed through the WER variation during comparison, we kept the initial computation for CER. Thus, for low CER variations, bigger WER differences are expected notably between traditional and end-to-end systems.
Results ::: Baseline systems
The phone-based chain model trained with the lattice-free MMI criterion has a WER of 14.2 on the test set. Compared to the best system reported during the ESTER campaign (WER 12.1% BIBREF16), this performance shows a relative degradation of 14.8%. Although the compared system relies on an HMM-GMM architecture, it should be noted that a triple-pass rescoring (plus post-processing) is applied, a considerable number of parameters is used, and a substantial amount of data is used to train the language model (more than 11 times our volume). Adding i-vector features further improves the performance of our model, leading to a WER of 13.7.
For the end-to-end phone-based system we note a small WER degradation of 0.2% compared to the original system without i-vectors, which is a good trade-off considering the removal of the initial HMM-GMM training. Switching to characters as acoustic units, we obtain a WER of 14.8, corresponding to a CER of 7.6. The detailed report shows that all types of errors are quite balanced, with however a higher number of deletions. The system remains competitive even with orthographic units, despite the low correspondence between phonemes and letters in French. On the same note, a plain conversion of the phonetic lexicon to a grapheme-based one does not negatively impact the performance. This was not expected, considering the use of alternative phonetic representations in French to denote possible liaisons (the pronunciation of a word's final consonant before the initial vowel sound of the following word).
Results ::: End-to-end systems
Character-based models While, without language model, the attention-based model outperforms CTC model as expected, RNN-transducer performances exceed our initial estimations, surpassing previous models in terms of CER and WER. RNN-transducer even outperforms these models coupled with language model, regardless of the level of knowledge included (character and word-level). The CER obtained with this model is 8.5 while the WER is 19.7. This represent a relative decrease of almost 40% for the CER and 17% for the WER against the attention-based model with word LM, the second best system for classic end-to-end. Compared to the end-to-end chain model system modeling characters, we observe a small CER difference of 0.9 which corresponds to a WER difference of 4.9. While the CER is competitive, errors at word-level seem to indicate difficulties to model word boundaries compared to baseline systems.
Extending the comparison to hybrid models, only the RNN-transducer with attention mechanism could achieve similar or better results than its vanilla version. Although the joint CTC-attention procedure is beneficial to correct some limitations from individual approaches, the system can only reach a CER of 10.4 equivalent to a WER of 22.1. However, by adding word LM and using multi-level decoding, the system can achieve closer WER performance (18.6) despite the significant difference in terms of CER (9.6).
For the hybrid transducer relying on additional attention module, performances in all experiments are further improved compared to standard, reaching 8.2% CER and 19.1% WER without language model.
Concerning the best systems, it should be noted that the RNN-transducer performance is further improved with the use of language model, obtaining a CER of 8.0, close to our baseline score (7.6), with a word LM. In terms of WER it represents a relative improvement of 8.5% against previous results, which is however still far from the performance denoted with the baseline system for this metric (14.8%). For the RNN-transducer with an attention decoder, we achieve even better performance with a CER of 7.8 equivalent to a WER of 17.6. This is our best model with characters as acoustic units.
Focusing on the CER report, several observations can be made :
Insertion errors are lower for CTC models than attention-based systems, with the addition of language models included. Attention-based are expected to have higher number of deletions or insertions depending of the length difference between input and output sequences, it is however unanticipated to observe such a high number of deletion errors.
Following the last observation, we investigated the deletion errors done by the attention-only model. From what we found, the main reason is the existence of irregular segment-utterance pairs in the dataset (i.e: really low correspondence). Using coverage, penalty or length ratio terms helped on problematic pairs but degraded the global performances, regular short or long pairs being impacted.
Adding a language model decreases all errors in CTC systems while only deletion errors decrease in the attention-only system. Coupled with a word language model, substitution errors are even higher for attention model.
Similar observations can be made for RNN transducer. While we observe a small decrease of insertion errors with the addition of a language model, we also see a small increase in deletion errors. However the system is more impacted by the insertion changes as the number of substitutions decrease and the number of correct words increase.
Despite similar CER performances between CTC model with word LM and attention-only model with character or word LM for example, the first system cannot reach the word error rate of the second systems. It is beneficial to model linguistic information alongside acoustic information rather than in an external language model being at character or word level, although both can be combined to reach better performances. However, we should also consider that the training data for the acoustic model is the same as the data used to train the LM, augmented with a volume equivalent to less than a quarter of the initial training sentences.
Comparing the end-to-end chain model modeling characters to the RNN-transducer with language models, we can extract several useful information. Deletion errors made by the transducer are more influential at word level than the insertion errors made by the baseline system. From the hypotheses we observed that the insertion errors mostly happen on ambiguous verbal forms, gender forms or singular/plural forms in the baseline system. For the transducer, the same behaviour is observed however deletion errors at character level mostly happen on small words (such as article), common names and proper names which are numerous in the corpus.
Although we observe a smaller number of substitutions at character level for the RNN-transducer with or without attention compared to the baseline system, substitution errors impact more words than the baseline system. These errors are mostly due to the same problems described previously, while substitutions in baseline systems are more localized due particularly to the presence of OOV and ambiguous words.
Considering all the previous observations, further investigation should be done to compare and categorize errors at character and word level in each system and also assess the value of these errors. The character errors reported for RNN-transducer with attention should be sufficient motivation as we report, against the baseline system, a lower number of substitution and insertion errors coupled to an equivalent number of correct words despite a significant gap in WER performance.
Subword-based models Replacing characters with subword units improves the overall performance of all end-to-end methods. The gain is particularly important for CTC lowering the WER from $42.3$ to $28.4$ without language model. The gain observed when adding the language model to CTC is impressive with a relative improvement of almost 28% on WER (from $28.4$ to $21.2$). For the system relying only on attention, the WER is further improved without and with language model but, unlike when we used characters, the model is outperformed on both CER and WER by the model relying on CTC. Although we observe a similar CER for both methods we also note a significant difference in terms of correct characters and WER (almost $6\%$). The attention making mostly consecutive mistakes on the same words or groups of words (particularly at the beginning and end of utterances) while the CTC tends to recognize part of words as independent, thus incorrectly recognizing word boundaries. Adding RNN-transducer to the comparison, both previous methods are surpassed, on CER ($20.1$ for CTC, $17.5$ for attention and $15.2$ for transducer) and on WER ($21.1$ for CTC, $21.8$ for attention and $18.4$ for transducer). Decoding with an external language model, the CER and WER are further improved by about $5.5\%$ and $6.0\%$. It should be noted that the transducer model without language model exceed CTC and attention coupled to the subword LM.
Adding the hybrid systems to the comparison, we denote some differences compared to character-based systems. The RNN-transducer is not improved with attention mechanism and even slightly degraded for both CER and WER. The same observations can be done with and without LM addition. It seems the attention mechanism has more difficulty to model intra-subwords relations than intra-characters relations. However further work should be allocated to extend the comparison with different attention mechanisms, such as multi-head attention, and estimate the influence of architecture depending on output dimensions and representations.
Concerning the last hybrid system, joint CTC-attention is better suited to subword than characters, reaching comparable performances to transducer even without language model: 18.7% against $18.4$ for RNN-transducer and $18.5$. Although transducer are reported as our best system, it should be noted that joint CTC-attention reach equal or better performance on subword errors. Talking only about conventional ASR metric, we consider the two hybrid systems and vanilla transducer equivalent for subword units.
As in the previous section, we also made a focus on the detailed error report and denoted some differences compared to previous observations:
Akin to previous observations with characters, insertion errors are lower for CTC models ($1.4\%$) than attention-based models ($3.6\%$) with subwords. However, here, the number of insertions for CTC is even lower than for all other methods, transducer and hybrid systems showing an average insertion error of $2.5\%$.
Previously, we noted that a higher number of deletions or insertions should be expected with attention-only model. With subword units, we can observe a balanced number of deletions and insertions although we also denote a significant number of substitutions. Following this new observation, we also investigated the orthographic output from both models. We denoted that the limitation of attention model was mostly removed and word sequence was unrolled or stopped. However it translated to a really large number of substitutions, some subwords within the word structure being repeated or cut.
Although, we report a higher number of correct words and a lower number of errors for joint CTC-attention, the hybrid method obtains a higher WER than RNN-transducer and its hybrid version. Analyzing the hypothesis formulated and error distribution by both systems we could not extract any relevant information to explain the number of words impacted by the errors at character level.
On the same note, the following difference should still be noted: transducer-based models have a lower number of substitutions and equivalent or lower insertion whereas joint CTC-attention has a lower number of deletions and an equivalent or higher number of correct characters. Outside correct labels, only the CTC has a similar error distribution.
In the case of joint CTC-attention, using CTC as an auxiliary objective brings clear benefits: the numbers of substitutions and insertions are further reduced compared to the attention-only model. Additionally, the number of deletions stays in the same range despite the high number of deletions of the CTC-only model. In the case of the additional attention module for the RNN-transducer, although the attention-only model has fewer deletion errors ($3.6$ versus $4.1$ for the RNN-transducer), including the attention mechanism did not help to reduce this number: the error distribution is the same with and without attention. It should also be noted that the RNN-transducer with attention performs equivalently with character and subword units.
Adding language models, all errors are lowered. The only exceptions being the number of insertions for CTC ($1.4\%$ raised to $2.3$), the number of deletions for RNN-transducer (from $4.1\%$ to $4.3$) and its hybrid counterpart (from $4.1$ to $4.4$). In these cases, and similarly as when we use character units, we can observe that the error rate (e.g.: insertion) decreases when the other (e.g.: deletion) increases.
Conclusion
In this paper, we experimentally showed that end-to-end approaches with different orthographic units are well suited to modeling the French language. The RNN-transducer was found especially competitive with character units compared to other end-to-end approaches. Between the two orthographic units, subwords proved beneficial for most methods to address the problems described in section SECREF14 and to retain information on ambiguous patterns in French. Extended with language models, these systems obtain promising results compared to traditional phone-based systems. The best performing system with character units is the RNN-transducer with an additional attention module, achieving 7.8% CER and 17.6% WER. For subword units, the classic RNN-transducer, the RNN-transducer with attention and joint CTC-attention show comparable subword error rate and WER, the first being slightly better on WER ($17.4\%$) and the last having a lower subword error rate ($14.5\%$).
However, we also showed difference in produced errors for each method and different impact at word-level depending of the approach or units. Thus, future work will focus on analysing the orthographic output of these systems in two ways: 1) investigate errors produced by the end-to-end methods and explore several approaches to correct common errors done in French and 2) compare the end-to-end methods in a SLU context and evaluate the semantic value of the partially correct produced words. | 1) Connectionist Temporal Classification (CTC), 2) Attention-based methods, 3) RNN-tranducer |
e75f5bd7cc7107f10412d61e3202a74b082b0934 | e75f5bd7cc7107f10412d61e3202a74b082b0934_0 | Q: How much is decoding speed increased by increasing encoder and decreasing decoder depth?
Text: Introduction
Neural Machine Translation (NMT) has achieved great success in the last few years BIBREF0, BIBREF1, BIBREF2. The popular Transformer BIBREF2 model, which outperforms previous RNN/CNN based translation models BIBREF0, BIBREF1, is based on multi-layer self-attention networks and can be paralleled effectively.
Recently, a wide range of analyses BIBREF3, BIBREF4, BIBREF5, BIBREF6, BIBREF7, BIBREF8, BIBREF9, BIBREF10 related to the Transformer have been conducted. For example, bisazza2018lazy perform a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder; they find no correlation between the accuracy of source morphology encoding and translation quality, and that morphological features are captured only in context and only to the extent that they are directly transferable to the target words. voita2019bottom study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation. However, to our knowledge, how the Transformer translation model transforms individual source tokens into corresponding target tokens (word translations, as shown in Figure FIGREF1), and specifically what the role of each Transformer layer is in translation and at which layer a target word is translated, has not been studied.
To detect roles of Transformer layers in translation, in this paper, we follow previous probing approaches BIBREF11, BIBREF12, BIBREF13, and propose to measure the word translation accuracy of output representations of individual Transformer layers by probing corresponding target translation tokens in these representations. In addition to analyzing the role of each encoder / decoder layer, we also analyze the contribution of the source context and the decoding history in translation by testing the effects of the self-attention sub-layer and the cross-attention sub-layer in decoder layers.
Our analysis reveals that the translation already starts at the source embedding layer, which offers an explanation for bisazza2018lazy. It also demonstrates how the word translation evolves across encoder / decoder layers and the effects of the source “encoding” and the decoding history on the translation of target tokens.
Based on the observations from our analysis, we find that: 1) the proper use of more encoder layers with fewer decoder layer can significantly boost decoding speed without harming quality; 2) inserting a linear projection layer before the decoder classifier can provide small but significant and consistent improvements in our experiments on the WMT 14 English-German, English-French and WMT 15 Czech-English news translation tasks ($+0.42$, $+0.37$ and $+0.47$ BLEU respectively).
Word Translation Accuracy Analysis
To analyze word translation accuracy of the Transformer, we first freeze a trained Transformer model so its behavior is consistent in how it performs in translation during our analysis, then we compute the forward pass and extract output representations of the layer analyzed. Finally, we apply a linear projection layer to extract and enhance features related to translation and feed projected representations to the frozen decoder classifier of the converged Transformer. The linear projection layer is the only module trained and updated on the training set with the original Transformer being frozen, thus it will only transform between vector spaces without generating new features for the word translation. An illustration of our analysis approach for encoder / decoder layers is shown in Figure FIGREF2.
Word Translation Accuracy Analysis ::: Analysis of Encoder Layers
Analyzing word translation accuracy of encoder layers requires us to align source tokens with corresponding target token. We use the alignment matrices computed by cross-attention sub-layers in decoder layers to align source tokens with target tokens. As there are multiple matrices produced by each sub-layer (due to the multi-head attention mechanism) and multiple decoder layers, we have to ensemble them into one matrix of high alignment accuracy using weights. Assume there are $d$ decoder layers with $h$ attention heads in each multi-head attention sub-layer, which results in $d * h$ alignment matrices $A_1, ... A_{d * h}$. We use a $d * h$ dimension weight vector $w$ to combine all these attention matrices. The weight vector is first normalized by softmax to a probability distribution $p$:
where $i$ indicates the $i$th element in $w$.
Then we use $p$ as the weights of corresponding attention matrices and merge them into 1 alignment matrix $A$.
$w$ can be trained during backpropagation together with the linear projection layer.
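A hedged sketch of this combination step in PyTorch; the shapes and the random attention matrices are placeholders:

```python
import torch
import torch.nn.functional as F

d, h = 6, 8                                            # decoder layers, attention heads
src_len, tgt_len = 25, 22
A_heads = torch.rand(d * h, src_len, tgt_len)          # cross-attention matrices A_1..A_{d*h}
w = torch.nn.Parameter(torch.zeros(d * h))             # trainable 48-element weight vector

p = F.softmax(w, dim=0)                                # probability distribution p
A = (p.view(-1, 1, 1) * A_heads).sum(dim=0)            # merged alignment matrix A
```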
After we obtain the alignment matrix $A$, instead of selecting the target token with the highest alignment weight as the translation of a source token, we perform matrix multiplication between the encoded source representations $E$ (size: source sentence length $*$ input dimension) and the alignment matrix $A$ (size: source sentence length $*$ target sentence length) to transform / re-order source representations to the target side $T_E$: $T_E = A^{T} \times E$
where $A^T$ and $\times $ indicate the transpose of $A$ and matrix multiplication.
Thus $T_E$ has the same length as the gold translation sequence, and the target sequence can be used directly as translations representing by $T_E$.
Though source representations are transformed to the target side, we suggest this does not involve any target side information as the pre-trained Transformer is frozen and the transformation does not introduce any representation from the decoder side. We do not retrieve target tokens with highest alignment score as word translations of corresponding source tokens because translation may involve one/none/multiple source token(s) to one/none/multiple target token(s) alignment, and we suggest that using a soft alignment (attention weights) may lead to more reliable gradients than the hard alignment.
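A hedged sketch of the probing computation for one encoder layer, with assumed dimensions; only the projection layer would receive gradients:

```python
import torch

src_len, tgt_len, dim, vocab = 25, 22, 512, 32000
E = torch.randn(src_len, dim)                         # frozen output of the analyzed encoder layer
A = torch.softmax(torch.rand(src_len, tgt_len), 0)    # merged alignment matrix (see above)
proj = torch.nn.Linear(dim, dim)                      # the only trained module
classifier = torch.nn.Linear(dim, vocab, bias=False)  # frozen decoder classifier (tied embeddings)
classifier.weight.requires_grad_(False)

T_E = A.transpose(0, 1) @ E                           # (tgt_len, dim): soft re-ordering to target side
logits = classifier(proj(T_E))                        # per-target-token word predictions
```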
Word Translation Accuracy Analysis ::: Analysis of Decoder Layers
Analyzing the prediction accuracy of the decoder is simpler than for the encoder, as we can directly use the shifted target sequence, without the need to bridge the different lengths of the source and target sequences as when analyzing the encoder. We can simply take the output representations of the analyzed layer and evaluate their prediction accuracy after projection.
However, as studied by li2019word, the decoder involves 2 kinds of “translation”, one (performed by the self-attention sub-layer) translates the history token sequence to the next token, another (performed by the cross-attention sub-layer) translates by attending source tokens. We additionally analyze the effects of these 2 kinds of translation on predicting accuracy by dropping the corresponding sub-layer of the analyzed decoder layer (i.e. we only compute the other sub-layer and the feed-forward layer with only the residual connection kept as the computation of the skipped sub-layer).
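A minimal sketch of bypassing one sub-layer while keeping its residual connection; layer normalization and masking are omitted and the callables are placeholders:

```python
# Skip the chosen sub-layer's transformation but keep its residual path,
# so its input passes through unchanged when skip is "self" or "cross".
def decoder_layer_forward(x, enc_out, self_attn, cross_attn, ffn, skip=None):
    x = x if skip == "self" else x + self_attn(x)
    x = x if skip == "cross" else x + cross_attn(x, enc_out)
    return x + ffn(x)
```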
Analysis Experiments ::: Settings
We conducted experiments based on the Neutron implementation of the Transformer BIBREF14. We first trained a Transformer base model for our analysis following all settings of vaswani2017attention on the WMT 14 English to German news translation task. The input dimension of the model and the hidden dimension of the feed-forward sub-layer were 512 and $2,048$ respectively. We employed a $512 * 512$ parameter matrix as the linear projection layer. The source embedding matrix, the target embedding matrix and the weight of the classifier were bound.
We applied joint Byte-Pair Encoding (BPE) BIBREF15 with $32k$ merge operations to address the unknown word issue. We only kept sentences with a maximum of 256 sub-word tokens for training. We removed repeated data in the training set, and the training set was randomly shuffled in every training epoch. The concatenation of newstest 2012 and newstest 2013 was used for validation and newstest 2014 as the test set.
The number of warm-up steps was set to $8k$ . Each training batch contained at least $25k$ target tokens, and the model was trained for $100k$ training steps. The large batch size is achieved by gradient accumulation. We used a dropout of $0.1$ and employed a label smoothing BIBREF16 value of $0.1$. We used the Adam optimizer BIBREF17 with $0.9$, $0.98$ and $10^{-9}$ as $\beta _{1}$, $\beta _{2}$ and $\epsilon $. Parameters were uniformly initialized under the Lipschitz constraint BIBREF18.
We averaged the last 5 checkpoints saved with an interval of $1,500$ training steps. For decoding, we used a beam size of 4, and evaluated tokenized case-sensitive BLEU . The averaged model achieved a BLEU score of $27.96$ on the test set.
The linear projection layer and the weight vector $w$ of 48 elements for alignment during the analysis of encoder layers were trained on the training set. We monitored the accuracy on the development set during their training, and reported results on the test set.
Analysis Experiments ::: Analysis
The analysis results of the trained Transformer are shown in Table TABREF8. Layer 0 stands for the embedding layer. “Acc” indicates the prediction accuracy. “-Self attention” and “-Cross attention” in the decoder layer analysis mean bypassing the computation of the self-attention sub-layer and the cross-attention sub-layer respectively of the analyzed decoder layer. In layer analysis of the encoder and decoder, “$\Delta $” indicates improvements in word translation accuracy of the analyzed layer over the previous layer. While analyzing the self-attention and cross-attention sub-layers, “$\Delta $” is the accuracy loss when we remove the computation of the corresponding sub-layer.
The results for encoder layers in Table TABREF8 show that: 1) surprisingly but reasonably, the translation already starts at the embedding layer, and a remarkably good word translation accuracy is obtained at the source embedding layer! This indicates that the translation already begins at the very beginning of "encoding" (specifically, the source embedding layer) rather than in the decoder. 2) With the stacking of encoder layers, the word translation accuracy improves (i.e. encoder layers gradually fix the word translations of the source embedding layer), and the improvements brought by different layers are relatively similar.
While analyzing decoder layers, Table TABREF8 shows that: 1) shallow decoder layers (0, 1, 2 and 3) perform significantly worse compared to corresponding encoder layers (until reaching the 4th decoder layer, where a word translation accuracy which surpasses the embedding layer of the encoder is achieved); 2) The improvements brought by different decoder layers are quite different. Specifically, layer 4 and 5 bring more improvements than the others.
While analyzing the effects of the source context (the self-attention sub-layer is responsible for the target language re-ordering, and “-Self attention” prevents using the decoding history in the analyzed decoder layer) and the decoding history (“-Cross attention” prevents copying translation from the source “encoding”), Table TABREF8 shows that in shallow decoder layers (layer 1-3), the decoding history plays a similarly important role like the source “encoding”, while in deep layers, the source “encoding” plays a more vital role than the decoding history. Thus, we suggest our comparison sheds light on the importance of translation performed by the encoder.
Analysis Experiments ::: Translation from Encoder Layers
Since our approach extracts features for translation from output representations of encoder layers while analyzing them, is it possible to perform word translation with only these features from encoder layers without using the decoder?
To achieve this goal, we feed output representations from an encoder layer to the corresponding linear projection layer, and feed the output of the linear projection layer directly to the decoder classifier, and retrieve tokens with highest probabilities as “translations”. Even though such “translations” from encoder layers have a same length and a same word-order as source sentences, individual source tokens are translated to the target language to some extent. We evaluated BPEized case-insensitive BLEU and BLEU 1 (1-gram BLEU, indicates the word translation quality), and results are shown in Table TABREF13. “FULL” is the performance of the whole Transformer model (decoding with a beam size of 4). “$\Delta $” means the improvements obtained by the introduced layer (or the decoder for “FULL”) over the previous layer.
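A minimal sketch of this greedy word translation from an encoder layer; `itos` is an assumed index-to-token list:

```python
# Project the frozen encoder-layer states, feed the frozen classifier, and take the
# argmax per position: one "translated" token per source token, in source order.
def greedy_word_translation(enc_states, proj, classifier, itos):
    logits = classifier(proj(enc_states))        # (src_len, vocab)
    return [itos[i] for i in logits.argmax(dim=-1).tolist()]
```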
Table TABREF13 shows that though there is a significant gap in BLEU scores between encoder layers and the full Transformer, the gap in BLEU 1 is relatively smaller than in BLEU. It is reasonable that encoder layers achieve a comparably high BLEU 1 score while a low BLEU score, as they perform word translation in the same order as the source sentence without any word re-ordering of the target language. We suggest the BLEU 1 score achieved by only the source embedding layer (i.e. translating with only embeddings) surprising and worth noting.
Findings Based on Observations ::: Trade Decoder Layers for Encoder Layers
From our analysis of the 6-layer Transformer base model (Table TABREF8), we find that in contrast to the improvements of the word translation accuracy with increasing depth on the encoder side, some decoder layers contribute significantly fewer improvements than the others (i.e. Layer 4 and 5 bring more word translation accuracy improvements than that from layer 1, 2, 3 and 6 in Table TABREF8). We suggest there might be more “lazy” layers in the decoder than in the encoder, which means that it might be easier to compress the decoder than the encoder, and further conjecture that simply removing some decoder layers while adding the same number of encoder layers may improve the performance of the Transformer. The other motivations for doing so are:
Each decoder layer has one more cross-attention sub-layer than an encoder layer, and increasing encoder layers while decreasing the same number of decoder layers will reduce the number of parameters and computational cost;
The decoder has to compute the forward pass for every decoding step (the decoding of each target token), and the acceleration of reducing decoder layers will be more significant in decoding, which is of productive value.
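A hedged illustration of the layer trade-off described in this subsection, using the stock PyTorch Transformer API; the paper's Neutron implementation is not shown, only the 10-encoder/2-decoder configuration:

```python
import torch.nn as nn

deep_enc_shallow_dec = nn.Transformer(
    d_model=512, nhead=8,
    num_encoder_layers=10, num_decoder_layers=2,   # 10+2 instead of the usual 6+6
    dim_feedforward=2048, dropout=0.1,
)
```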
Findings Based on Observations ::: Linear Projection Layer before Classifier
We compare the word translation accuracy achieved by the last decoder layer (with the linear projection layer) during analysis and that of the pre-trained standard Transformer (without the projection layer before the decoder classifier), and results are shown in Table TABREF20.
Table TABREF20 shows that feeding the representations from the last decoder layer after the linear projection to the decoder classifier leads to slightly higher word prediction accuracy than feeding them directly to the classifier. We conjecture potential reasons might be:
We follow vaswani2017attention in tying the weight matrix of the classifier to the embedding matrix. Applying the inserted linear projection layer followed by the classifier is equivalent to using only a classifier but with a new weight matrix (the matrix product of the linear projection layer's weight matrix and the embedding matrix), which indirectly detaches the classifier weight matrix from the embedding matrix;
As described in our analysis approach, the linear projection layer is expected to enhance the part of its input representations which relates to the classification while fading the other parts irrelevant to the word prediction, which may benefit the performance.
Thus, we suggest that inserting, before the decoder classifier, a linear projection layer which simply multiplies the input representations by a weight matrix may help improve word translation accuracy and further lead to improved translation quality.
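A minimal sketch of such a projected, weight-tied output layer is given below; the class and attribute names are our own illustrative choices, not the paper's released code.

```python
import torch.nn as nn

class ProjectedClassifier(nn.Module):
    """Decoder output layer with an extra linear projection; weights tied to the embedding."""
    def __init__(self, embedding: nn.Embedding):
        super().__init__()
        d_model = embedding.weight.size(1)
        self.proj = nn.Linear(d_model, d_model, bias=False)  # the inserted projection
        self.embedding = embedding                            # tied classifier weights

    def forward(self, dec_out):
        # Equivalent to a plain classifier whose weight is the product of proj.weight and
        # the embedding matrix, which loosens the direct tie between classifier and embedding.
        return self.proj(dec_out).matmul(self.embedding.weight.t())
```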
Findings Based on Observations ::: Results and Analysis ::: Effects of Encoder/Decoder Depth
We examine the effects of reducing decoder depth while adding corresponding numbers of encoder layers, and results are shown in Table TABREF24. The decoding speed is measured on the test set which contains $3,003$ sentences with a beam size of 4. “Speed up” stands for the decoding acceleration compared to the 6-layer Transformer.
Table TABREF24 shows that while the training speed-up from trading decoder layers for encoder layers is small, the decoding speed-up is significant. Specifically, the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer while achieving a slightly higher BLEU.
Though the Transformer with 11 encoder layers and only 1 decoder layer fails to achieve performance comparable to the 6-layer Transformer, our results still suggest that using more encoder layers with fewer but sufficient decoder layers can significantly boost the decoding speed, a simple but effective change that is valuable for production applications.
We demonstrate the word accuracy analysis results of the 10 encoder layer 2 decoder layer Transformer in Table TABREF27.
Comparing Table TABREF27 with Table TABREF8, we find that: 1) The differences in improvements ($1.17$ vs. $0.11$) brought by individual layers of the 10-layer encoder are larger than those of the 6-layer encoder ($1.90$ vs. $0.87$), indicating that there might be some “lazy” layers in the 10-layer encoder; 2) Decreasing the depth of the decoder removes those “lazy” decoder layers in the 6-layer decoder and makes decoder layers rely more on the source “encoding” (by comparing the effects of skipping the self-attention sub-layer and cross-attention sub-layer on performance).
Findings Based on Observations ::: Results and Analysis ::: Effects of the Projection Layer
To study the effects of the linear projection layer on performance, we conducted experiments on the WMT 14 English-French and WMT 15 Czech-English news translation tasks in addition to the WMT 14 English-German task. We also conducted significance tests BIBREF19. Results are tested on newstest 2014 and 2015 respectively and shown in Table TABREF28.
Table TABREF28 shows that the linear projection layer is able to provide small but consistent and significant improvements in all 3 tasks.
Related Work ::: Analysis of NMT Models.
li2019word analyze the word alignment quality in NMT with prediction difference, and further analyze the effect of alignment errors on translation errors, which demonstrates that NMT captures good word alignment for those words mostly contributed from source, while their word alignment is much worse for those words mostly contributed from target. voita2019analyzing evaluate the contribution of individual attention heads to the overall performance of the model and analyze the roles played by them in the encoder. yang2019assessing propose a word reordering detection task to quantify how well the word order information is learned by Self-Attention Networks (SAN) and RNN, and reveal that although recurrence structure makes the model more universally effective at learning word order, learning objectives matter more in downstream tasks such as machine translation. tsai2019transformer regard attention as applying a kernel smoother over the inputs with the kernel scores being the similarities between inputs, and analyze individual components of the Transformer’s attention with the new formulation via the lens of the kernel. tang2019encoders find that encoder hidden states outperform word embeddings significantly in word sense disambiguation. he2019towards measure word importance by attributing the NMT output to every input word and reveal that words of certain syntactic categories have higher importance, while the categories vary across language pairs. voita2019bottom use canonical correlation analysis and mutual information estimators to study how information flows across Transformer layers and find that representations differ significantly depending on the objectives (MT, LM and MLM). An early work BIBREF3 performs a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder. While they are unable to find any correlation between the accuracy of source morphology encoding and translation quality, they discover that morphological features are only captured in context and only to the extent that they are directly transferable to the target words; thus they suggest that encoder layers are “lazy”. Our analysis offers an explanation for their results: translation already starts at the source embedding layer, and source embeddings possibly represent linguistic features of their translations more than features of the source words themselves.
Related Work ::: Analysis of BERT.
BERT BIBREF20 uses the Transformer encoder, and analysis of BERT may provide valuable references for analyzing the Transformer. jawahar2019bert provide novel support that BERT networks capture structural information, and perform a series of experiments to unpack the elements of English language structure learned by BERT. tenney2019bert employ the edge probing task suite to explore how the different layers of the BERT network can resolve syntactic and semantic structure within a sentence, and find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. pires2019multilingual present a large number of probing experiments, and show that Multilingual-BERT’s robust ability to generalize cross-lingually is underpinned by a multilingual representation.
Related Work ::: Accelerating Decoding.
zhang2018accelerating propose average attention as an alternative to the self-attention network in the Transformer decoder to accelerate its decoding. wu2018pay introduce lightweight convolution and dynamic convolutions which are simpler and more efficient than self-attention. The number of operations required by their approach scales linearly in the input length, whereas self-attention is quadratic. zhang2018speeding apply cube pruning to neural machine translation to speed up the translation. zhang2018exploring propose to adapt an n-gram suffix based equivalence function into beam search decoding, which obtains similar translation quality with a smaller beam size, making NMT decoding more efficient. Non-Autoregressive Translation (NAT) BIBREF21, BIBREF22, BIBREF23, BIBREF24, BIBREF25, BIBREF26, BIBREF27 enables parallelized decoding; while there is still a significant quality drop compared to traditional autoregressive beam search, our findings on using more encoder layers might also be adapted to NAT.
Conclusion
We propose approaches for the analysis of word translation accuracy of Transformer layers to investigate how translation is performed. To measure word translation accuracy, our approaches train a linear projection layer which bridges representations from the analyzing layer and the pre-trained classifier. While analyzing encoder layers, our approach additionally learns a weight vector to merge multiple attention matrices into one, and transforms the source “encoding” to the target shape by multiplying the merged alignment matrix. For the analysis of decoder layers, we additionally analyze the effects of the source context and the decoding history in word prediction through bypassing the corresponding sub-layers.
Two main findings of our analysis are: 1) the translation starts at the very beginning of “encoding” (specifically at the source word embedding layer), and evolves further with the forward computation of layers; 2) translation performed by the encoder is very important for the evolution of word translation of decoder layers, especially for Transformers with few decoder layers.
Based on our analysis, we propose to increase encoder depth while removing the same number of decoder layers to boost the decoding speed. We further show that simply inserting a linear projection layer before the decoder classifier which shares the weight matrix with the embedding layer can effectively provide small but consistent and significant improvements.
Acknowledgments
Hongfei XU acknowledges the support of China Scholarship Council ([2018]3101, 201807040056). This work is also supported by the German Federal Ministry of Education and Research (BMBF) under the funding code 01IW17001 (Deeplee). | the Transformer with 10 encoder layers and 2 decoder layers is $2.32$ times as fast as the 6-layer Transformer |
675f28958c76623b09baa8ee3c040ff0cf277a5a | 675f28958c76623b09baa8ee3c040ff0cf277a5a_0 | Q: What is the size of the dataset?
Text: Introduction
With the advent of the Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging on this kind of data.
In particular, quizzes based on multiple choice questions (MCQs) have proved efficient for judging students’ knowledge. However, manual construction of such questions often turns out to be a time-consuming and labor-intensive task.
Fill-in-the-blank questions, where a sentence is given with one or more blanks in it, either with or without alternatives to fill in those blanks, have gained research attention recently. In this kind of question, as opposed to MCQs, there is no need to generate a WH-style question derived from the text. This means that the target sentence could simply be picked from a document on a corresponding topic of interest, which makes the process easier to automate.
Fill-in-the-blank questions in their multiple-choice answer version, often referred to as cloze questions (CQ), are commonly used for evaluating the proficiency of language learners, including official tests such as TOEIC and TOEFL BIBREF0 . They have also been used to test students’ knowledge of English in using the correct verbs BIBREF1 , prepositions BIBREF2 and adjectives BIBREF3 . BIBREF4 and BIBREF5 generated questions to evaluate students’ vocabulary.
The main problem in CQ generation is that it is generally not easy to come up with appropriate distractors —incorrect options— without rich experience. Existing approaches are mostly based on domain-specific templates, whose elaboration relies on experts. Lately, approaches based on discriminative methods, which rely on annotated training data, have also appeared. Ultimately, these settings prevent end-users from participating in the elaboration process, limiting the diversity and variation of quizzes that the system may offer.
In this work we formalize the problem of automatic fill-in-the-blank question generation and present an empirical study using deep learning models for it in the context of language learning. Our study is based on data obtained from our language learning platform BIBREF6 , BIBREF7 , BIBREF8 where users can create their own quizzes by utilizing freely available and open-licensed video content on the Web. In the platform, the automatic quiz creation currently relies on hand-crafted features and rules, making the process difficult to adapt. Our goal is to effectively provide an adaptive learning experience in terms of style and difficulty, and thus better serve users' needs BIBREF9 . In this context, we study the ability of our proposed architectures in learning to generate quizzes based on data derived of the interaction of users with the platform.
Related Work
The problem of fill-in-the-blank question generation has been studied in the past by several authors. Perhaps the earliest approach is by BIBREF1 , who proposed a cloze question generation system which focuses on distractor generation using search engines to automatically measure English proficiency. In the same research line, we also find the work of BIBREF2 , BIBREF3 and BIBREF4 . In this context, the work of BIBREF10 probably represents the first effort in applying machine learning techniques for multiple-choice cloze question generation. The authors propose an approach that uses conditional random fields BIBREF11 based on hand-crafted features such as word POS tags.
More recent approaches also focus on the problem of distractor selection or generation but apply it to different domains. For example, BIBREF12 present a system which adopts a semi-structured approach to generate CQs by making use of a knowledge base extracted from a Cricket portal. On the other hand, BIBREF9 present a generic semi-automatic system for quiz generation using linked data and textual descriptions of RDF resources. The system seems to be the first that can be controlled by difficulty level. The authors tested it using an on-line dataset about wildlife provided by the BBC. BIBREF13 present an automatic approach for CQ generation for student self-assessment.
Finally, the work of BIBREF0 presents a discriminative approach based on SVM classifiers for distractor generation and selection using a large-scale language learners’ corpus. The SVM classifier works at the word level and takes a sentence in which the target word appears, choosing a verb as the best distractor given the context. Again, the SVM is based on human-engineered features such as n-grams, lemmas and dependency tags.
Compared to approaches above, our take is different since we work on fill-in-the-blank question generation without multiple-choice answers. Therefore, our problem focuses on word selection —the word to blank— given a sentence, rather than on distractor generation. To the best of our knowledge, our system is also the first to use representation learning for this task.
Proposed Approach
We formalize the problem of automatic fill-in-the-blank quiz generation using two different perspectives. These are designed to match specific machine learning schemes that are well-defined in the literature. In both cases, we consider a training corpus of INLINEFORM0 pairs INLINEFORM1 where INLINEFORM2 is a sequence of INLINEFORM3 tokens and INLINEFORM4 is an index that indicates the position that should be blanked inside INLINEFORM5 .
This setting allows us to train from examples of single blank-annotated sentences. In this way, in order to obtain a sentence with several blanks, multiple passes over the model are required. This approach works in a way analogous to humans, where blanks are provided one at a time.
AQG as Sequence Labeling
Firstly, we model the AQG as a sequence labeling problem. Formally, for an embedded input sequence INLINEFORM0 we build the corresponding label sequence by simply creating a one-hot vector of size INLINEFORM1 for the given class INLINEFORM2 . This vector can be seen as a sequence of binary classes, INLINEFORM3 , where only one item (the one in position INLINEFORM4 ) belongs to the positive class. Given this setting, the conditional probability of an output label is modeled as follows: DISPLAYFORM0
Where, in our case, function INLINEFORM0 is modeled using a bidirectional LSTM BIBREF14 . Each predicted label distribution INLINEFORM1 is then calculated using the following formulas. DISPLAYFORM0
The loss function is the average cross entropy for the mini-batch. Figure FIGREF5 summarizes the proposed model. DISPLAYFORM0
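To make the labeling setup concrete, the following is a minimal PyTorch sketch of such a BiLSTM tagger trained with averaged cross entropy; the class and variable names are our own illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BlankLabeler(nn.Module):
    """BiLSTM tagger that scores every token as blank / not-blank."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, 2)       # two classes: blank vs. keep

    def forward(self, tokens):                    # tokens: [batch, seq_len]
        h, _ = self.lstm(self.emb(tokens))        # [batch, seq_len, 2*hidden]
        return self.out(h)                        # unnormalized per-token class scores

# one training step; the loss is cross entropy averaged over the mini-batch
model = BlankLabeler(vocab_size=66431)
tokens = torch.randint(0, 66431, (8, 20))
labels = torch.zeros(8, 20, dtype=torch.long)
labels[:, 5] = 1                                  # exactly one positive label per sentence
loss = nn.CrossEntropyLoss()(model(tokens).reshape(-1, 2), labels.reshape(-1))
```

At prediction time, the position with the highest positive-class score would be taken as the word to blank.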
AQG as Sequence Classification
In this case, since the output of the model is a position in the input sequence INLINEFORM0 , the size of output dictionary for INLINEFORM1 is variable and depends on INLINEFORM2 . Regular sequence classification models use a softmax distribution over a fixed output dictionary to compute INLINEFORM3 ) and therefore are not suitable for our case. Therefore, we propose to use an attention-based approach that allows us to have a variable size dictionary for the output softmax, in a way akin to Pointer Networks BIBREF15 . More formally, given an embedded input vector sequence INLINEFORM4 , we use a bidirectional LSTM to first obtain a dense representation of each input token. DISPLAYFORM0
We later use pooling techniques including INLINEFORM0 and INLINEFORM1 to obtain a summarized representation INLINEFORM2 of the input sequence, or simply take the INLINEFORM3 hidden state as a drop-in replacement to do so. After this, we add a global content-based attention layer, which we use to compare that summarized vector to each hidden state INLINEFORM4 . Concretely, DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are learnable parameters of the model, and the softmax normalizes the vector INLINEFORM2 to be an output distribution over a dictionary of size INLINEFORM3 . Figure FIGREF9 summarizes the proposed model graphically. Then, for a given sentence INLINEFORM4 , the goal of our model is to predict the most likely position INLINEFORM5 of the next word to be blanked.
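The attention-based position classifier can be sketched as below; tensor shapes and names are illustrative assumptions rather than the paper's code, and training would minimize the negative log-likelihood of the gold blank position.

```python
import torch
import torch.nn as nn

class BlankPointer(nn.Module):
    """Scores every input position and normalizes over the (variable-length) sentence."""
    def __init__(self, vocab_size, emb_dim=300, hidden=300):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.att = nn.Linear(4 * hidden, hidden)   # compares summary vs. each hidden state
        self.v = nn.Linear(hidden, 1, bias=False)

    def forward(self, tokens):                     # tokens: [batch, seq_len]
        h, _ = self.lstm(self.emb(tokens))         # [batch, seq_len, 2*hidden]
        summary = h[:, -1, :]                      # last hidden state as sentence summary
        expanded = summary.unsqueeze(1).expand_as(h)
        scores = self.v(torch.tanh(self.att(torch.cat([h, expanded], dim=-1)))).squeeze(-1)
        return torch.log_softmax(scores, dim=-1)   # distribution over positions
        # train with nn.NLLLoss against the gold position index
```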
Empirical Study
Although the hand-crafted rule-based system currently used in our language learning platform offers us good results in general, we are interested in developing a more flexible approach that is easier to tailor depending on the case. In particular, in an adaptive learning setting where the goal is resource allocation according to the unique needs of each learner, rule-based methods for AQG appear to have insufficient flexibility and adaptability to accurately model the features of each learner or teacher.
With this point in mind, this section presents an empirical study using state-of-the-art Deep Learning approaches for the problem of AQG. In particular, the objective is to test to what extent our proposed models are able to encode the behavior of the rule-based system. Ultimately, we hope that these can be used for a smooth transition from the current human-engineered feature-based system to a fully user-experience-based regime.
In Natural Language Processing, deep models have succeeded in large part because they learn and use their own continuous numeric representational systems for words and sentences. In particular, distributed representations BIBREF16 applied to words BIBREF17 have meant a major breakthrough. All our models start with random word embeddings, we leave the usage of other pre-trained vectors for future work.
Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing.
As the system required the input sentences to be tokenized and makes use of features such as word pos-tags and such, the sentences in our dataset are processed using CoreNLP BIBREF18 . We also extract user-specific and quiz-specific information, including word-level learning records of the user, such as the number of times the learner made a mistake on that word, or whether the learner looked up the word in the dictionary. In this study, however, we restrain our model to only look at word embeddings as input.
We use the same data pre-processing for all of our models. We build the vocabulary using the train partition of our dataset with a minimum frequency of 1. We do not keep cases and obtain an unknown vocabulary of size 2,029, and a total vocabulary size of 66,431 tokens.
Sequence Labeling
We use a 2-layer bidirectional LSTM, which we train using Adam BIBREF19 with a learning rate of INLINEFORM0 , clipping the gradient of our parameters to a maximum norm of 5. We use a word embedding size and hidden state size of 300 and add dropout BIBREF20 before and after the LSTM, using a drop probability of 0.2. We train our model for up to 10 epochs. Training lasts for about 3 hours.
For evaluation, as accuracy would be extremely unbalanced given the nature of the blanking scheme —there is only one positive-class example on each sentence— we use Precision, Recall and F1-Score over the positive class for development and evaluation. Table TABREF11 summarizes our obtained results.
Sequence Classification
In this case, we again use a 2-layer bidirectional LSTM, which we train using Adam with a learning rate of INLINEFORM0 , also clipping the gradient of our parameters to a maximum norm of 5. Even with these limits, convergence is faster than in the previous model, so we only trained the classifier for up to 5 epochs. Again we use a word embedding size and hidden state size of 300, and add dropout with drop probability of 0.2 before and after the LSTM. Our results for different pooling strategies showed no noticeable performance difference in preliminary experiments, so we report results using the last hidden state.
For development and evaluation we used accuracy over the validation and test set, respectively. Table TABREF13 below summarizes our obtained result, we can see that model was able to obtain a maximum accuracy of approximately 89% on the validation and testing sets.
Conclusions
In this paper we have formalized the problem of automatic fill-in-the-blank quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases.
We have presented an empirical study in which we test the proposed architectures in the context of a language learning platform. Our results show that both of the proposed training schemes seem to offer fairly good results, with an Accuracy/F1-score of nearly 90%. We think this sets a clear future research direction, showing that it is possible to transition from a heavily hand-crafted approach for AQG to a learning-based approach on the basis of examples derived from the platform on unlabeled data. This is especially important in the context of adaptive learning, where the goal is to effectively provide a tailored and flexible experience in terms of style and difficulty.
For future work, we would like to use different pre-trained word embeddings as well as other features derived from the input sentence to further improve our results. We would also like to test the power of the models in capturing different quiz styles from real questions created by professors. | 300,000 sentences with 1.5 million single-quiz questions |
47b00652ac66039aafe886780e86961bfc5b466e | 47b00652ac66039aafe886780e86961bfc5b466e_0 | Q: What language platform does the data come from?
Text: Introduction
With the advent of the Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging on this kind of data.
In particular, quizzes based on multiple choice questions (MCQs) have proved efficient for judging students’ knowledge. However, manual construction of such questions often turns out to be a time-consuming and labor-intensive task.
Fill-in-the-blank questions, where a sentence is given with one or more blanks in it, either with or without alternatives to fill in those blanks, have gained research attention recently. In this kind of question, as opposed to MCQs, there is no need to generate a WH-style question derived from the text. This means that the target sentence could simply be picked from a document on a corresponding topic of interest, which makes the process easier to automate.
Fill-in-the-blank questions in their multiple-choice answer version, often referred to as cloze questions (CQ), are commonly used for evaluating the proficiency of language learners, including official tests such as TOEIC and TOEFL BIBREF0 . They have also been used to test students’ knowledge of English in using the correct verbs BIBREF1 , prepositions BIBREF2 and adjectives BIBREF3 . BIBREF4 and BIBREF5 generated questions to evaluate students’ vocabulary.
The main problem in CQ generation is that it is generally not easy to come up with appropriate distractors —incorrect options— without rich experience. Existing approaches are mostly based on domain-specific templates, whose elaboration relies on experts. Lately, approaches based on discriminative methods, which rely on annotated training data, have also appeared. Ultimately, these settings prevent end-users from participating in the elaboration process, limiting the diversity and variation of quizzes that the system may offer.
In this work we formalize the problem of automatic fill-in-the-blank question generation and present an empirical study using deep learning models for it in the context of language learning. Our study is based on data obtained from our language learning platform BIBREF6 , BIBREF7 , BIBREF8 where users can create their own quizzes by utilizing freely available and open-licensed video content on the Web. In the platform, the automatic quiz creation currently relies on hand-crafted features and rules, making the process difficult to adapt. Our goal is to effectively provide an adaptive learning experience in terms of style and difficulty, and thus better serve users' needs BIBREF9 . In this context, we study the ability of our proposed architectures in learning to generate quizzes based on data derived of the interaction of users with the platform.
Related Work
The problem of fill-in-the-blank question generation has been studied in the past by several authors. Perhaps the earliest approach is by BIBREF1 , who proposed a cloze question generation system which focuses on distractor generation using search engines to automatically measure English proficiency. In the same research line, we also find the work of BIBREF2 , BIBREF3 and BIBREF4 . In this context, the work of BIBREF10 probably represents the first effort in applying machine learning techniques for multiple-choice cloze question generation. The authors propose an approach that uses conditional random fields BIBREF11 based on hand-crafted features such as word POS tags.
More recent approaches also focus on the problem of distractor selection or generation but apply it to different domains. For example, BIBREF12 present a system which adopts a semi-structured approach to generate CQs by making use of a knowledge base extracted from a Cricket portal. On the other hand, BIBREF9 present a generic semi-automatic system for quiz generation using linked data and textual descriptions of RDF resources. The system seems to be the first that can be controlled by difficulty level. The authors tested it using an on-line dataset about wildlife provided by the BBC. BIBREF13 present an automatic approach for CQ generation for student self-assessment.
Finally, the work of BIBREF0 presents a discriminative approach based on SVM classifiers for distractor generation and selection using a large-scale language learners’ corpus. The SVM classifier works at the word level and takes a sentence in which the target word appears, choosing a verb as the best distractor given the context. Again, the SVM is based on human-engineered features such as n-grams, lemmas and dependency tags.
Compared to approaches above, our take is different since we work on fill-in-the-blank question generation without multiple-choice answers. Therefore, our problem focuses on word selection —the word to blank— given a sentence, rather than on distractor generation. To the best of our knowledge, our system is also the first to use representation learning for this task.
Proposed Approach
We formalize the problem of automatic fill-in-the-blank quiz generation using two different perspectives. These are designed to match specific machine learning schemes that are well-defined in the literature. In both cases, we consider a training corpus of INLINEFORM0 pairs INLINEFORM1 where INLINEFORM2 is a sequence of INLINEFORM3 tokens and INLINEFORM4 is an index that indicates the position that should be blanked inside INLINEFORM5 .
This setting allows us to train from examples of single blank-annotated sentences. In this way, in order to obtain a sentence with several blanks, multiple passes over the model are required. This approach works in a way analogous to humans, where blanks are provided one at a time.
AQG as Sequence Labeling
Firstly, we model the AQG as a sequence labeling problem. Formally, for an embedded input sequence INLINEFORM0 we build the corresponding label sequence by simply creating a one-hot vector of size INLINEFORM1 for the given class INLINEFORM2 . This vector can be seen as a sequence of binary classes, INLINEFORM3 , where only one item (the one in position INLINEFORM4 ) belongs to the positive class. Given this setting, the conditional probability of an output label is modeled as follows: DISPLAYFORM0
Where, in our case, function INLINEFORM0 is modeled using a bidirectional LSTM BIBREF14 . Each predicted label distribution INLINEFORM1 is then calculated using the following formulas. DISPLAYFORM0
The loss function is the average cross entropy for the mini-batch. Figure FIGREF5 summarizes the proposed model. DISPLAYFORM0
AQG as Sequence Classification
In this case, since the output of the model is a position in the input sequence INLINEFORM0 , the size of output dictionary for INLINEFORM1 is variable and depends on INLINEFORM2 . Regular sequence classification models use a softmax distribution over a fixed output dictionary to compute INLINEFORM3 ) and therefore are not suitable for our case. Therefore, we propose to use an attention-based approach that allows us to have a variable size dictionary for the output softmax, in a way akin to Pointer Networks BIBREF15 . More formally, given an embedded input vector sequence INLINEFORM4 , we use a bidirectional LSTM to first obtain a dense representation of each input token. DISPLAYFORM0
We later use pooling techniques including INLINEFORM0 and INLINEFORM1 to obtain a summarized representation INLINEFORM2 of the input sequence, or simply take the INLINEFORM3 hidden state as a drop-in replacement to do so. After this, we add a global content-based attention layer, which we use to compare that summarized vector to each hidden state INLINEFORM4 . Concretely, DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are learnable parameters of the model, and the softmax normalizes the vector INLINEFORM2 to be an output distribution over a dictionary of size INLINEFORM3 . Figure FIGREF9 summarizes the proposed model graphically. Then, for a given sentence INLINEFORM4 , the goal of our model is to predict the most likely position INLINEFORM5 of the next word to be blanked.
Empirical Study
Although the hand-crafted rule-based system currently used in our language learning platform offers us good results in general, we are interested in developing a more flexible approach that is easier to tailor depending on the case. In particular, in an adaptive learning setting where the goal is resource allocation according to the unique needs of each learner, rule-based methods for AQG appear to have insufficient flexibility and adaptability to accurately model the features of each learner or teacher.
With this point in mind, this section presents an empirical study using state-of-the-art Deep Learning approaches for the problem of AQG. In particular, the objective is to test to what extent our proposed models are able to encode the behavior of the rule-based system. Ultimately, we hope that these can be used for a smooth transition from the current human-engineered feature-based system to a fully user-experience-based regime.
In Natural Language Processing, deep models have succeeded in large part because they learn and use their own continuous numeric representational systems for words and sentences. In particular, distributed representations BIBREF16 applied to words BIBREF17 have meant a major breakthrough. All our models start with random word embeddings, we leave the usage of other pre-trained vectors for future work.
Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing.
As the system required the input sentences to be tokenized and makes use of features such as word pos-tags and such, the sentences in our dataset are processed using CoreNLP BIBREF18 . We also extract user-specific and quiz-specific information, including word-level learning records of the user, such as the number of times the learner made a mistake on that word, or whether the learner looked up the word in the dictionary. In this study, however, we restrain our model to only look at word embeddings as input.
We use the same data pre-processing for all of our models. We build the vocabulary using the train partition of our dataset with a minimum frequency of 1. We do not keep cases and obtain an unknown vocabulary of size 2,029, and a total vocabulary size of 66,431 tokens.
Sequence Labeling
We use a 2-layer bidirectional LSTM, which we train using Adam BIBREF19 with a learning rate of INLINEFORM0 , clipping the gradient of our parameters to a maximum norm of 5. We use a word embedding size and hidden state size of 300 and add dropout BIBREF20 before and after the LSTM, using a drop probability of 0.2. We train our model for up to 10 epochs. Training lasts for about 3 hours.
For evaluation, as accuracy would be extremely unbalanced given the nature of the blanking scheme —there is only one positive-class example on each sentence— we use Precision, Recall and F1-Score over the positive class for development and evaluation. Table TABREF11 summarizes our obtained results.
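For concreteness, positive-class precision, recall and F1 over per-token blank/no-blank decisions can be computed as in this short sketch (illustrative code, not the evaluation script used in the study):

```python
def blank_prf(gold, pred):
    """Precision/recall/F1 over the positive (blank) class.

    gold, pred: parallel lists of 0/1 token labels flattened over the corpus;
    plain accuracy would be dominated by the overwhelming negative class.
    """
    tp = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, pred) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```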
Sequence Classification
In this case, we again use a 2-layer bidirectional LSTM, which we train using Adam with a learning rate of INLINEFORM0 , also clipping the gradient of our parameters to a maximum norm of 5. Even with these limits, convergence is faster than in the previous model, so we only trained the classifier for up to 5 epochs. Again we use a word embedding size and hidden state size of 300, and add dropout with drop probability of 0.2 before and after the LSTM. Our results for different pooling strategies showed no noticeable performance difference in preliminary experiments, so we report results using the last hidden state.
For development and evaluation we used accuracy over the validation and test set, respectively. Table TABREF13 below summarizes our obtained result, we can see that model was able to obtain a maximum accuracy of approximately 89% on the validation and testing sets.
Conclusions
In this paper we have formalized the problem of automatic fill-in-the-blank quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases.
We have presented an empirical study in which we test the proposed architectures in the context of a language learning platform. Our results show that both of the proposed training schemes seem to offer fairly good results, with an Accuracy/F1-score of nearly 90%. We think this sets a clear future research direction, showing that it is possible to transition from a heavily hand-crafted approach for AQG to a learning-based approach on the basis of examples derived from the platform on unlabeled data. This is especially important in the context of adaptive learning, where the goal is to effectively provide a tailored and flexible experience in terms of style and difficulty.
For future work, we would like to use different pre-trained word embeddings as well as other features derived from the input sentence to further improve our results. We would also like to test the power of the models in capturing different quiz styles from real questions created by professors. | Unanswerable |
79443bf3123170da44396b0481364552186abb91 | 79443bf3123170da44396b0481364552186abb91_0 | Q: Which two schemes are used?
Text: Introduction
With the advent of the Web 2.0, regular users were able to share, remix and distribute content very easily. As a result of this process, the Web became a rich interconnected set of heterogeneous data sources. Being in a standard format, it is suitable for many tasks involving knowledge extraction and representation. For example, efforts have been made to design games with the purpose of semi-automating a wide range of knowledge transfer tasks, such as educational quizzes, by leveraging on this kind of data.
In particular, quizzes based on multiple choice questions (MCQs) have proved efficient for judging students’ knowledge. However, manual construction of such questions often turns out to be a time-consuming and labor-intensive task.
Fill-in-the-blank questions, where a sentence is given with one or more blanks in it, either with or without alternatives to fill in those blanks, have gained research attention recently. In this kind of question, as opposed to MCQs, there is no need to generate a WH-style question derived from the text. This means that the target sentence could simply be picked from a document on a corresponding topic of interest, which makes the process easier to automate.
Fill-in-the-blank questions in their multiple-choice answer version, often referred to as cloze questions (CQ), are commonly used for evaluating the proficiency of language learners, including official tests such as TOEIC and TOEFL BIBREF0 . They have also been used to test students’ knowledge of English in using the correct verbs BIBREF1 , prepositions BIBREF2 and adjectives BIBREF3 . BIBREF4 and BIBREF5 generated questions to evaluate students’ vocabulary.
The main problem in CQ generation is that it is generally not easy to come up with appropriate distractors —incorrect options— without rich experience. Existing approaches are mostly based on domain-specific templates, whose elaboration relies on experts. Lately, approaches based on discriminative methods, which rely on annotated training data, have also appeared. Ultimately, these settings prevent end-users from participating in the elaboration process, limiting the diversity and variation of quizzes that the system may offer.
In this work we formalize the problem of automatic fill-in-the-blank question generation and present an empirical study using deep learning models for it in the context of language learning. Our study is based on data obtained from our language learning platform BIBREF6 , BIBREF7 , BIBREF8 where users can create their own quizzes by utilizing freely available and open-licensed video content on the Web. In the platform, the automatic quiz creation currently relies on hand-crafted features and rules, making the process difficult to adapt. Our goal is to effectively provide an adaptive learning experience in terms of style and difficulty, and thus better serve users' needs BIBREF9 . In this context, we study the ability of our proposed architectures in learning to generate quizzes based on data derived of the interaction of users with the platform.
Related Work
The problem of fill-in-the-blank question generation has been studied in the past by several authors. Perhaps the earliest approach is by BIBREF1 , who proposed a cloze question generation system which focuses on distractor generation using search engines to automatically measure English proficiency. In the same research line, we also find the work of BIBREF2 , BIBREF3 and BIBREF4 . In this context, the work of BIBREF10 probably represents the first effort in applying machine learning techniques for multiple-choice cloze question generation. The authors propose an approach that uses conditional random fields BIBREF11 based on hand-crafted features such as word POS tags.
More recent approaches also focus on the problem of distractor selection or generation but apply it to different domains. For example, BIBREF12 present a system which adopts a semi-structured approach to generate CQs by making use of a knowledge base extracted from a Cricket portal. On the other hand, BIBREF9 present a generic semi-automatic system for quiz generation using linked data and textual descriptions of RDF resources. The system seems to be the first that can be controlled by difficulty level. The authors tested it using an on-line dataset about wildlife provided by the BBC. BIBREF13 present an automatic approach for CQ generation for student self-assessment.
Finally, the work of BIBREF0 presents a discriminative approach based on SVM classifiers for distractor generation and selection using a large-scale language learners’ corpus. The SVM classifier works at the word level and takes a sentence in which the target word appears, choosing a verb as the best distractor given the context. Again, the SVM is based on human-engineered features such as n-grams, lemmas and dependency tags.
Compared to approaches above, our take is different since we work on fill-in-the-blank question generation without multiple-choice answers. Therefore, our problem focuses on word selection —the word to blank— given a sentence, rather than on distractor generation. To the best of our knowledge, our system is also the first to use representation learning for this task.
Proposed Approach
We formalize the problem of automatic fill-in-the-blank quiz generation using two different perspectives. These are designed to match specific machine learning schemes that are well-defined in the literature. In both cases, we consider a training corpus of INLINEFORM0 pairs INLINEFORM1 where INLINEFORM2 is a sequence of INLINEFORM3 tokens and INLINEFORM4 is an index that indicates the position that should be blanked inside INLINEFORM5 .
This setting allows us to train from examples of single blank-annotated sentences. In this way, in order to obtain a sentence with several blanks, multiple passes over the model are required. This approach works in a way analogous to humans, where blanks are provided one at a time.
AQG as Sequence Labeling
Firstly, we model the AQG as a sequence labeling problem. Formally, for an embedded input sequence INLINEFORM0 we build the corresponding label sequence by simply creating a one-hot vector of size INLINEFORM1 for the given class INLINEFORM2 . This vector can be seen as a sequence of binary classes, INLINEFORM3 , where only one item (the one in position INLINEFORM4 ) belongs to the positive class. Given this setting, the conditional probability of an output label is modeled as follows: DISPLAYFORM0
Where, in our case, function INLINEFORM0 is modeled using a bidirectional LSTM BIBREF14 . Each predicted label distribution INLINEFORM1 is then calculated using the following formulas. DISPLAYFORM0
The loss function is the average cross entropy for the mini-batch. Figure FIGREF5 summarizes the proposed model. DISPLAYFORM0
AQG as Sequence Classification
In this case, since the output of the model is a position in the input sequence INLINEFORM0 , the size of output dictionary for INLINEFORM1 is variable and depends on INLINEFORM2 . Regular sequence classification models use a softmax distribution over a fixed output dictionary to compute INLINEFORM3 ) and therefore are not suitable for our case. Therefore, we propose to use an attention-based approach that allows us to have a variable size dictionary for the output softmax, in a way akin to Pointer Networks BIBREF15 . More formally, given an embedded input vector sequence INLINEFORM4 , we use a bidirectional LSTM to first obtain a dense representation of each input token. DISPLAYFORM0
We later use pooling techniques including INLINEFORM0 and INLINEFORM1 to obtain a summarized representation INLINEFORM2 of the input sequence, or simply take the INLINEFORM3 hidden state as a drop-in replacement to do so. After this, we add a global content-based attention layer, which we use to compare that summarized vector to each hidden state INLINEFORM4 . Concretely, DISPLAYFORM0
Where INLINEFORM0 and INLINEFORM1 are learnable parameters of the model, and the softmax normalizes the vector INLINEFORM2 to be an output distribution over a dictionary of size INLINEFORM3 . Figure FIGREF9 summarizes the proposed model graphically. Then, for a given sentence INLINEFORM4 , the goal of our model is to predict the most likely position INLINEFORM5 of the next word to be blanked.
Empirical Study
Although the hand-crafted rule-based system currently used in our language learning platform offers us good results in general, we are interested in developing a more flexible approach that is easier to tailor depending on the case. In particular, in an adaptive learning setting where the goal is resource allocation according to the unique needs of each learner, rule-based methods for AQG appear to have insufficient flexibility and adaptability to accurately model the features of each learner or teacher.
With this point in mind, this section presents an empirical study using state-of-the-art Deep Learning approaches for the problem of AQG. In particular, the objective is to test to what extent our proposed models are able to encode the behavior of the rule-based system. Ultimately, we hope that these can be used for a smooth transition from the current human-engineered feature-based system to a fully user-experience-based regime.
In Natural Language Processing, deep models have succeeded in large part because they learn and use their own continuous numeric representational systems for words and sentences. In particular, distributed representations BIBREF16 applied to words BIBREF17 have meant a major breakthrough. All our models start with random word embeddings, we leave the usage of other pre-trained vectors for future work.
Using our platform, we extracted anonymized user interaction data in the manner of real quizzes generated for a collection of several input video sources. We obtained a corpus of approximately 300,000 sentences, from which roughly 1.5 million single-quiz question training examples were derived. We split this dataset using the regular 70/10/20 partition for training, validation and testing.
As the system required the input sentences to be tokenized and makes use of features such as word pos-tags and such, the sentences in our dataset are processed using CoreNLP BIBREF18 . We also extract user-specific and quiz-specific information, including word-level learning records of the user, such as the number of times the learner made a mistake on that word, or whether the learner looked up the word in the dictionary. In this study, however, we restrain our model to only look at word embeddings as input.
We use the same data pre-processing for all of our models. We build the vocabulary using the train partition of our dataset with a minimum frequency of 1. We do not keep cases and obtain an unknown vocabulary of size 2,029, and a total vocabulary size of 66,431 tokens.
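A vocabulary built under these settings can be sketched as follows; the special-token names are assumptions made only for illustration:

```python
from collections import Counter

def build_vocab(train_sentences, min_freq=1, specials=("<unk>", "<pad>")):
    """Lowercased vocabulary from the training split; rare or unseen words map to <unk>."""
    counts = Counter(tok.lower() for sent in train_sentences for tok in sent)
    itos = list(specials) + [w for w, c in counts.most_common() if c >= min_freq]
    stoi = {w: i for i, w in enumerate(itos)}
    return stoi, stoi["<unk>"]

# usage: ids = [stoi.get(tok.lower(), unk_id) for tok in tokenized_sentence]
```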
Sequence Labeling
We use a 2-layer bidirectional LSTM, which we train using Adam BIBREF19 with a learning rate of INLINEFORM0 , clipping the gradient of our parameters to a maximum norm of 5. We use a word embedding size and hidden state size of 300 and add dropout BIBREF20 before and after the LSTM, using a drop probability of 0.2. We train our model for up to 10 epochs. Training lasts for about 3 hours.
For evaluation, as accuracy would be extremely unbalanced given the nature of the blanking scheme —there is only one positive-class example on each sentence— we use Precision, Recall and F1-Score over the positive class for development and evaluation. Table TABREF11 summarizes our obtained results.
Sequence Classification
In this case, we again use a 2-layer bidirectional LSTM, which we train using Adam with a learning rate of INLINEFORM0 , also clipping the gradient of our parameters to a maximum norm of 5. Even with these limits, convergence is faster than in the previous model, so we only trained the classifier for up to 5 epochs. Again we use a word embedding size and hidden state size of 300, and add dropout with drop probability of 0.2 before and after the LSTM. Our results for different pooling strategies showed no noticeable performance difference in preliminary experiments, so we report results using the last hidden state.
For development and evaluation we used accuracy over the validation and test set, respectively. Table TABREF13 below summarizes our obtained result, we can see that model was able to obtain a maximum accuracy of approximately 89% on the validation and testing sets.
Conclusions
In this paper we have formalized the problem of automatic fill-in-the-blank quiz generation using two well-defined learning schemes: sequence classification and sequence labeling. We have also proposed concrete architectures based on LSTMs to tackle the problem in both cases.
We have presented an empirical study in which we test the proposed architectures in the context of a language learning platform. Our results show that both of the proposed training schemes seem to offer fairly good results, with an Accuracy/F1-score of nearly 90%. We think this sets a clear future research direction, showing that it is possible to transition from a heavily hand-crafted approach for AQG to a learning-based approach on the basis of examples derived from the platform on unlabeled data. This is especially important in the context of adaptive learning, where the goal is to effectively provide a tailored and flexible experience in terms of style and difficulty.
For future work, we would like to use different pre-trained word embeddings as well as other features derived from the input sentence to further improve our results. We would also like to test the power of the models in capturing different quiz styles from real questions created by professors. | sequence classification, sequence labeling |
2a46db1b91de4b583d4a5302b2784c091f9478cc | 2a46db1b91de4b583d4a5302b2784c091f9478cc_0 | Q: How many examples do they have in the target domain?
Text: Introduction
Since Neural Machine Translation (NMT) is reaching performance comparable to or even better than traditional statistical machine translation (SMT) models BIBREF0 , BIBREF1 , it has become very popular in recent years BIBREF2 , BIBREF3 , BIBREF4 . With the great success of NMT, new challenges arise which have already been addressed with reasonable success in traditional SMT. One of these challenges is domain adaptation. In a typical domain adaptation setup such as ours, we have a large amount of out-of-domain bilingual training data for which we already have a trained neural network model (baseline). Given only an additional small amount of in-domain data, the challenge is to improve the translation performance on the new domain without significantly deteriorating the performance on the general domain. One approach one might take is to combine the in-domain data with the out-of-domain data and train the NMT model from scratch. However, there are two main problems with that approach. First, training a neural machine translation system on large data sets can take several weeks, so training a new model based on the combined training data is time consuming. Second, since the in-domain data is relatively small, the out-of-domain data will tend to dominate the training data and hence the learned model will not perform as well on the in-domain test data. In this paper, we reuse the already trained out-of-domain system and continue training only on the small portion of in-domain data, similar to BIBREF5 . While doing this, we adapt the parameters of the neural network model to the new domain. Instead of relying completely on the adapted (further-trained) model and overfitting on the in-domain data, we decode using an ensemble of the baseline model and the adapted model, which tends to perform well on the in-domain data without deteriorating the performance on the general domain.
Related Work
Domain adaptation has been an active research topic for the traditional SMT approach in the last few years. The existing domain adaptation methods can be roughly divided into three different categories.
First, the out-of-domain training data can be scored by a model built only on the in-domain training data. Based on the scores, we can either use a certain amount of the best-scoring out-of-domain training data to build a new translation system or assign a weight to each sentence which determines its contribution towards training a new system. In SMT, this has been done for language model training BIBREF6 , BIBREF7 and translation model training BIBREF8 , BIBREF9 , BIBREF10 , BIBREF11 . In contrast to SMT, training an NMT system from scratch is time consuming and can easily take several weeks.
Second, different methods of interpolating in-domain and out-of-domain models BIBREF12 , BIBREF13 , BIBREF14 have been proposed. A widely used approach is to train an additional SMT system based only on the in-domain data in addition to the existing out-of-domain SMT system. By interpolating the phrase tables, the in-domain data can be integrated into the general system. In NMT, we do not have any phrase tables and can not use this method. Nevertheless, integrating the in-domain data with interpolation is faster than building a system from scratch.
The third approach is called semi-supervised training, where a large amount of in-domain monolingual data is first translated with a machine translation engine into a different language to generate parallel data. The automatic translations have been used for retraining the language model and/or the translation model BIBREF15 , BIBREF16 , BIBREF17 . Parallel data can also be created by back-translating monolingual target-language text into the source language BIBREF18 . The additional parallel training data can then be used to train the NMT system. BIBREF18 report substantial improvements when a large amount of back-translated parallel data is used. However, as we mentioned before, retraining the NMT model with large training data takes time, and in this case it is even more time consuming since we first need to back-translate the target monolingual data and then build a system based on the combination of both the original parallel data and the back-translated data.
For neural machine translation, BIBREF5 proposed to adapt an already existing NMT system to a new domain with further training on the in-domain data only. The authors report an absolute gain of 3.8 Bleu points compared to using the original model without further training. In our work, we utilize the same approach but ensemble the further-trained model with the original model. In addition, we report results on the out-of-domain test sets and show how degradation of the translation performance on the out-of-domain data can be avoided. We further show how to avoid over-fitting on the in-domain training data and analyze how many additional epochs are needed to adapt the model to the new domain. We compare our adapted models with models trained either on the combined training data or on the in-domain training data only, and report results for different amounts of in-domain training data.
Neural Machine Translation
In all our experiments, we use our in-house attention-based NMT implementation which is similar to BIBREF4 , BIBREF19 . The approach is based on an encoder-decoder network. The encoder employs a bi-directional RNN to encode the source sentence ${\bf {x}}=({x_1, ... , x_l})$ into a sequence of hidden states ${\bf {h}}=({h_1, ..., h_l})$ , where $l$ is the length of the source sentence. Each $h_i$ is a concatenation of a left-to-right $\overrightarrow{h_i}$ and a right-to-left $\overleftarrow{h_i}$ RNN: $ h_{i} = \begin{bmatrix} \overleftarrow{h}_i \\ \overrightarrow{h}_i \\ \end{bmatrix} = \begin{bmatrix} \overleftarrow{f}(x_i, \overleftarrow{h}_{i+1}) \\ \overrightarrow{f}(x_i, \overrightarrow{h}_{i-1}) \\ \end{bmatrix} $
where $\overleftarrow{f}$ and $\overrightarrow{f}$ are two gated recurrent units (GRU) proposed by BIBREF20 .
Given the encoded ${\bf h}$ , the decoder predicts the target translation by maximizing the conditional log-probability of the correct translation ${\bf y^*} = (y^*_1, ... y^*_m)$ , where $m$ is the length of the target. At each time $t$ , the probability of each word $y_t$ from a target vocabulary $V_y$ is:
$$p(y_t|{\bf h}, y^*_{t-1}..y^*_1) = g(s_t, y^*_{t-1}, H_{t}),$$ (Eq. 1)
where $g$ is a two layer feed-forward neural network over the embedding of the previous target word $y^*_{t-1}$ , the hidden state $s_t$ , and the weighted sum of ${\bf h}$ ( $H_{t}$ ).
Before we compute $s_t$ and $H_t$ , we first convert $s_{t-1}$ and the embedding of $y^*_{t-1}$ into an intermediate state $s^{\prime }_t$ with a GRU $u$ as:
$$s^{\prime }_t = u(s_{t-1}, y^*_{t-1}).$$ (Eq. 2)
Then we have $s_t$ as:
$$s_t = q(s^{\prime }_{t}, H_{t})$$ (Eq. 3)
where $q$ is a GRU, and $H_{t}$ is computed as:
$$H_t = \begin{bmatrix} \sum _{i=1}^{l}{(\alpha _{t,i} \cdot \overleftarrow{h}_i)} \\ \sum _{i=1}^{l}{(\alpha _{t,i} \cdot \overrightarrow{h}_i)} \\ \end{bmatrix},$$ (Eq. 4)
The alignment weights, $\alpha $ in $H_t$ , are computed with a two layer feed-forward neural network $r$ :
$$\alpha _{t,i} = \frac{\exp \lbrace r(s^{\prime }_{t}, h_{i})\rbrace }{\sum _{j=1}^{l}{\exp \lbrace r(s^{\prime }_{t}, h_{j})\rbrace }}$$ (Eq. 5)
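To make the attention computation above concrete, the following minimal numpy sketch performs one attention step: it scores every encoder state against the intermediate decoder state with a small feed-forward scorer, normalises the scores with a softmax as in Eq. 5, and returns the weighted sum of Eq. 4 (weighting the concatenated bidirectional states is equivalent to concatenating the two weighted sums). The parameter names W_r and v_r and the exact parametrisation of the scorer are assumptions made for illustration, not the configuration used in the experiments.

```python
import numpy as np

def attention_step(s_prime_t, h, W_r, v_r):
    """Score every source state h_i against the intermediate decoder state
    s'_t, softmax-normalise the scores (Eq. 5) and return the alignment
    weights together with the context vector H_t (Eq. 4)."""
    l = h.shape[0]
    inputs = np.concatenate([np.repeat(s_prime_t[None, :], l, axis=0), h], axis=1)
    scores = np.tanh(inputs @ W_r) @ v_r          # r(s'_t, h_i) for every i
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                   # alignment weights alpha_{t,i}
    H_t = (alpha[:, None] * h).sum(axis=0)        # weighted sum of encoder states
    return alpha, H_t

# toy dimensions: source length 4, decoder state 3, bidirectional encoder state 6
rng = np.random.default_rng(0)
alpha, H_t = attention_step(rng.normal(size=3), rng.normal(size=(4, 6)),
                            rng.normal(size=(9, 5)), rng.normal(size=5))
print(alpha.round(2), H_t.shape)                  # the weights sum to 1, H_t has size 6
```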
Domain Adaptation
Our objectives in domain adaptation are twofold: (1) to build an adapted system quickly and (2) to build a system that performs well on the in-domain test data without significantly degrading the system on the general domain. One possible approach to domain adaptation is to mix (possibly with a higher weight) the in-domain with the large out-of-domain data and retrain the system from scratch. However, training a NMT system on large amounts of parallel data (typically $>$ 4 million sentence pairs) can take several weeks. Therefore, we propose a method that does not require retraining on the large out-of-domain data and that can be carried out relatively quickly, thus achieving both objectives.
Our approach re-uses the already trained baseline model and continues the training for several additional epochs, but only on the small amount of in-domain training data. We call the model obtained by this kind of further training a continue model. Depending on the amount of in-domain training data, the continue model can overfit on the new training data. In general, overfitting means that the model performs very well on the training data, but worse on any other unseen data. To overcome this problem, we ensemble the continue model with the baseline model. This has the positive side effect that we not only get better translations for the new domain, but also stay close to the baseline model, which performs well in general. As the amount of in-domain training data is usually small, we can quickly adapt our baseline model to a different domain.
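The description above leaves open how exactly the two models are combined at decoding time. A common choice, and the one assumed in the following minimal sketch, is to average the per-step word distributions of the baseline and the continue model; the DummyModel class and its next_word_probs method are stand-ins for illustration, not part of any particular NMT toolkit.

```python
import numpy as np

class DummyModel:
    """Stand-in for an NMT decoder that exposes a next-word distribution."""
    def __init__(self, seed, vocab_size=8):
        self.rng = np.random.default_rng(seed)
        self.vocab_size = vocab_size

    def next_word_probs(self, prefix):
        p = self.rng.random(self.vocab_size)
        return p / p.sum()

def ensemble_next(models, prefix, weights=None):
    """Combine the next-word distributions of the baseline model and the
    continue model (uniform weights unless specified otherwise)."""
    weights = weights or [1.0 / len(models)] * len(models)
    return sum(w * m.next_word_probs(prefix) for m, w in zip(models, weights))

baseline, adapted = DummyModel(seed=0), DummyModel(seed=1)
p = ensemble_next([baseline, adapted], prefix=["<s>"])
print(p.argmax(), round(float(p.sum()), 6))       # greedy pick from the ensemble
```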
Experiments
In all our experiments, we use the NMT approach as described in Section "Neural Machine Translation" . We limit our source and target vocabularies to be the top $N$ most frequent words for each side accordingly. Words not in these vocabularies are mapped into a special unknown token UNK. During translation, we write the alignments (from the attention mechanism) and use these to replace the unknown tokens either with potential targets (obtained from an IBM model 1 dictionary trained on the parallel data or from the SMT phrase table) or with the source word itself or a transliteration of it (if no target was found in the dictionary, i.e., the word is a genuine OOV). We use an embedding dimension of 620 and fix the RNN GRU layers to be of 1000 cells each. For the training procedure, we use SGD BIBREF21 to update the model parameters with a mini-batch size of 64. The training data is shuffled after each epoch. All experiments are evaluated with both Bleu BIBREF22 and Ter BIBREF23 (both are case-sensitive).
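As an illustration of the unknown-word handling described above, the following sketch replaces each UNK token in a hypothesis with the dictionary translation of its most strongly aligned source word, falling back to copying the source word itself; the transliteration fall-back is omitted and all names are hypothetical.

```python
def replace_unks(target_tokens, source_tokens, attention, dictionary):
    """Post-edit a translation hypothesis: for each UNK, look up the source
    position with the highest attention weight and substitute either its
    dictionary translation or the source word itself."""
    output = []
    for j, token in enumerate(target_tokens):
        if token != "UNK":
            output.append(token)
            continue
        i = max(range(len(source_tokens)), key=lambda k: attention[j][k])
        source_word = source_tokens[i]
        output.append(dictionary.get(source_word, source_word))
    return output

hypothesis = ["the", "UNK", "sleeps"]
source = ["die", "Studentin", "schläft"]
attention = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.1, 0.2, 0.7]]
print(replace_unks(hypothesis, source, attention, {"Studentin": "student"}))
# ['the', 'student', 'sleeps']
```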
German $\rightarrow $ English
For the German $\rightarrow $ English translation task, we use an already trained out-of-domain NMT system (vocabulary size $N$ =100K) trained on the WMT 2015 training data BIBREF24 (3.9M parallel sentences). As in-domain training data, we use the TED talks from the IWSLT 2015 evaluation campaign BIBREF25 (194K parallel sentences). Corpus statistics can be found in Table 1 . The data is tokenized and the German text is preprocessed by splitting German compound words with the frequency-based method as described in BIBREF26 . We use our in-house language identification tool to remove sentence pairs where either the source or the target is assigned the wrong language by our language ID.
Experimental results can be found in Table 2 . The translation quality of a NMT system trained only on the in-domain data is not satisfactory. In fact, it performs even worse on both test sets compared to the baseline model, which is trained only on the out-of-domain data. By continuing the training of the baseline model on the in-domain data only, we get a gain of 4.4 points in Bleu and 3.1 points in Ter on the in-domain test set tst2013 after the second epoch. Nevertheless, we lose 2.1 points in Bleu and 3.9 points in Ter on the out-of-domain test set newstest2014. After continuing the training for 20 epochs, the model tends to overfit and the performance on both test sets degrades.
To avoid overfitting and to keep the out-of-domain translation quality close to the baseline, we ensemble the continue model with the baseline model. After 20 epochs, we only lose 0.2 points in Bleu and 0.6 points in Ter on the out-of-domain test set, while we gain 4.2 points in Bleu and 3.7 points in Ter on tst2013. Each epoch of the continue training takes 1.8 hours. In fact, with only two epochs, we already have a system that performs very well on the in-domain data. At the same time, the loss of translation quality on the out-of-domain test set is minimal (i.e., negligible). In fact, we get a gain of 0.7 points in Bleu while losing 0.6 points in Ter on our out-of-domain test set.
Figure 1 illustrates the learning curve of the continue training for different sizes of in-domain training data. For all setups, the translation quality drops massively on the out-of-domain test set. Further, the performance on the in-domain test set degrades as the neural network overfits on the in-domain training data already after epoch 2.
To study the impact of the in-domain data size on the quality of the adapted model, we report results for different sizes of the in-domain data. Figure 2 shows the learning curve of the ensemble of the baseline and the continue model for different sizes of in-domain training data. The in-domain data used is a randomly selected subset of the entire pool of in-domain data available to us. We also report the result when all of the in-domain data in the pool is used. As shown in Figure 2 , the translation quality on the out-of-domain test set only degrades slightly for all the different sizes of the in-domain data we tried. However, the performance on the in-domain data improves significantly, reaching its peak just after the second epoch. We do not lose any translation quality on the in-domain test set by continuing the training for more epochs. Adding more in-domain data improves the score on the in-domain test set without any significant degradation on the out-of-domain test set.
In addition to evaluating on automatic metrics, we also performed a subjective human evaluation where a human annotator assigns a score based on the quality of the translation. The judgments are done by an experienced annotator (a native speaker of German and a fluent speaker of English). We ask our annotator to judge the translation output of the different systems on a randomly selected in-domain sample of 50 sentences (maximum sentence length 50). Each source sentence is presented to the annotator with all 3 different translations (baseline/ continue/ ensemble). The translations are presented in a blind fashion (i.e., the annotator is not aware of which system is which) and shuffled in random order. The evaluation is presented to the annotator via a web-page interface with all these translation pairs randomly ordered to disperse the three translations of each source sentence. The annotator judges each translation from 0 (very bad) to 5 (near perfect). The human evaluation results can be found in Table 3 . Both the continue model and the ensemble of the baseline with the continue model significantly outperform the baseline model on the in-domain data. Contrary to the automatic scores, the ensemble performs better than the continue model.
We compare the training times of our different setups in Table 4 . Based on the automatic scores, it is sufficient to further train the baseline model for 2 epochs to adapt the model to the new domain. For the case of having only 25K parallel in-domain training data, it takes 30 minutes to adapt the model. If we use all available 192K sentences, the total training time is 3 hours and 40 minutes. By using all training data (both in-domain and out-of-domain together), we need 7 epochs which sum up to a training time of 15 days and 11 hours.
Chinese $\rightarrow $ English
For the Chinese $\rightarrow $ English experiments, we utilize a NMT system (vocabulary size $N$ =500K) trained on 11.6 million out-of-domain sentences from the DARPA BOLT project. We use 593k parallel sentences of internal in-domain data that is different to the BOLT informal news domain. Corpus statistics can be found in Table 5 .
Experimental results can be found in Table 6 . Because the in-domain data is relatively large in this case, training a NMT model from scratch only on the in-domain data gives us similar performance on the in-domain test set compared to the baseline model that is trained only on the out-of-domain data. However, the performance on the out-of-domain test set is significantly worse. By continuing the training of the baseline model only on the in-domain data, we get an improvement of 9.5 points in Bleu and 12.2 points in Ter on the in-domain test set after 6 epochs. Unfortunately, the performance significantly drops on the out-of-domain test set. After 20 epochs, the performance on the in-domain data only further improves slightly while losing much more on the out-of-domain test set.
To avoid significant degradation to the translation quality on the out-of-domain test set, we ensemble the continue and the baseline models. After 6 epochs, we get a gain of 7.2 points in Bleu and 10 points in Ter on the in-domain test set while losing only slightly on the out-of-domain test set. After 20 epochs, the performance of the in-domain test set is similar while losing additional 1.5 points in Bleu and 1.1 points in Ter on the out-of-domain test set.
Figure 3 illustrates the learning curves of the continue training for different sizes of in-domain training data. Adding more parallel in-domain training data helps to improve the performance on the in-domain test set. For all different training sizes, the translation quality drops similarly on the out-of-domain test set.
Figure 4 shows the learning curves of the ensemble of the baseline and the continue model for different sizes of in-domain training data. For all training sizes, the translation quality of the out-of-domain test set only degrades slightly. Nevertheless, the performance on the in-domain data significantly improves. We reach a saturation by continuing the training for several epochs on both test sets. Adding more in-domain data improves the score on the in-domain test set.
Human judgment was performed (cf. Table 7 ) by another experienced annotator (a native speaker of Chinese who is also fluent in English) on a randomly selected sample of 50 in-domain sentences. As in the German $\rightarrow $ English case, the annotator assigns a (0-5) score to each translation. Both the continue model and the ensemble of the baseline with the continue model outperform the baseline model. Furthermore, the ensemble of the continue model with the baseline model outperforms the continue training on its own.
A comparison of the training times of our different setups can be found in Table 8 . Based on our experiments, it is sufficient to further train the baseline for 6 epochs to adapt the neural net to our new domain. By using all available in-domain training data, we have a total training time of 23 hours. A system based on both in-domain and out-of-domain training data already needs 77 hours and 30 minutes of training time for a single epoch. We trained the combined system for 8 epochs, which sums up to a total training time of 620 hours (25 days and 20 hours).
Conclusion
We presented an approach for a fast and efficient way to adapt an already existing NMT system to a new domain without degradation of the translation quality on the out-of-domain test set. Our proposed method is based on two main steps: (a) train a model only on the in-domain data, but initialize all parameters of the neural network model with those of the existing baseline model that is trained on the large amount of out-of-domain training data (in other words, we continue the training of the baseline model only on the in-domain data); (b) ensemble the continue model with the baseline model at decoding time. Step (a) can lead to significant gains of up to 9.9 points in Bleu and 12.2 points in Ter on the in-domain test set. However, it comes at the expense of significant degradation of the translation quality on the original out-of-domain test set. Furthermore, the continue model tends to overfit the small amount of in-domain training data and even degrades translation quality on the in-domain test sets if trained beyond one or two epochs.
Step (b) (i.e., ensembling of the baseline model with the continue model) ensures that the performance does not drop significantly on the out-of-domain test set while still getting significant improvements of up to 7.2 points in Bleu and 10 points in Ter on the in-domain test set. Even after only a few epochs of continue training, we get results that are close to the results obtained after 20 epochs. We also show significant improvements in the human evaluation. We presented results on two diverse language pairs, German $\rightarrow $ English and Chinese $\rightarrow $ English (usually very challenging pairs for machine translation). | Around 388k examples, 194k from tst2013 (in-domain) and 194k from newstest2014 (out-of-domain) |
48fa2ccc236e217fcf0e5aab0e7a146faf439b02 | 48fa2ccc236e217fcf0e5aab0e7a146faf439b02_0 | Q: Does Grail accept Prolog inputs?
Text: Introduction
This chapter describes a series of tools for developing and testing type-logical grammars. The Grail family of theorem provers have been designed to work with a variety of modern type-logical frameworks, including multimodal type-logical grammars BIBREF0 , NL $_{cl}$ BIBREF1 , the Displacement calculus BIBREF2 and hybrid type-logical grammars BIBREF3 .
The tools give a transparent way of implementing grammars and testing their consequences, providing a natural deduction proof in the specific type-logical grammar for each of the readings of a sentence. None of this replaces careful reflection by the grammar writer, of course, but in many cases, computational testing of hand-written grammars will reveal surprises, showing unintended consequences of our grammar and such unintended proofs (or unintended absences of proofs) help us improve the grammar. Computational tools also help us speed up grammar development, for example by allowing us to compare several alternative solutions to a problem and investigate where they make different predictions.
This chapter describes the underlying formalism of the theorem provers, as it is visible during an interactive proof trace, and presents the general strategy followed by the theorem provers. The presentation in this chapter is somewhat informal, referring the reader elsewhere for full proofs.
The rest of this chapter is structured as follows. Section "Type-logical grammars" presents a general introduction to type-logical grammars and illustrates its basic concepts using the Lambek calculus, ending the section with some problems at the syntax-semantics interface for the Lambek calculus. Section "Modern type-logical grammars" looks at recent developments in type-logical grammars and how they solve some of the problems at the syntax-semantics interface. Section "Theorem proving" looks at two general frameworks for automated theorem proving for type-logical grammars, describing the internal representation of partial proofs and giving a high-level overview of the proof search mechanism.
Type-logical grammars
Type-logical grammars are a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). Though Lambek built on the work of BIBREF5 , BIBREF6 and others, Lambek's main innovation was to cast the calculus as a logic, giving a sequent calculus and showing decidability by means of cut elimination. This combination of linguistic and computational applications has proved very influential.
In its general form, a type-logical grammar consists of the following components: a logic (a set of formulas together with a proof system determining which statements are derivable), a lexicon $\textit {lex}$ which assigns a finite set of formulas to each word, and a set of goal formulas for complete grammatical expressions.
A sentence $w_1, \ldots , w_n$ is grammatical iff the statement $A_1,\ldots , A_n \vdash C$ is provable in our logic, for some $A_i \in \textit {lex}(w_i)$ and for some goal formula $C$ . In other words, we use the lexicon to map words to formulas and then ask the logic whether the resulting sequence of formulas is a theorem. Parsing in a type-logical grammar is quite literally a form of theorem proving, a very pure realisation of the slogan “parsing as deduction”.
One of the attractive aspects of type-logical grammars is their simple and transparent syntax-semantics interface. Though there is a variety of logics used for the syntax of type-logical grammars (I will discuss the Lambek calculus in Section "The Lambek calculus" and two generalisations of it in Sections "Multimodal grammars" and "First-order linear logic" ), there is a large consensus over the syntax-semantics interface. Figure 1 gives a picture of the standard architecture of type-logical grammars.
The “bridge” between syntax and semantics in the figure is the Curry-Howard isomorphism between linear lambda terms and proofs in multiplicative intuitionistic linear logic.
Theorem proving occurs in two places of the picture: first when parsing a sentence in a given type-logical grammar and also at the end when we use the resulting semantics for inferences. I will have little to say about this second type of theorem proving BIBREF9 , BIBREF10 ; theorem proving for parsing will be discussed in Section "Theorem proving" .
The lexicon plays the role of translating words to syntactic formulas but also specifies the semantic term which is used to compute the semantics later. The lexicon of a categorial grammar is “semantically informed”. The desired semantics of a sentence allows us to reverse-engineer the formula and lexical lambda-term which produce it.
Many current semantic theories do not provide a semantic formula directly, but first provide a proto-semantics on which further computations are performed to produce the final semantics (eg. for anaphora resolution, presuppositions projection etc.). In the current context this means at least some inference is necessary to determine semantic and pragmatic wellformedness.
The Lambek calculus
To make things more concrete, I will start by presenting the Lambek calculus BIBREF4 . Lambek introduced his calculus as a way to “obtain an effective rule (or algorithm) for distinguishing sentences from nonsentences”, which would be applicable both to formal and to (at least fragments of) natural languages BIBREF4 . The simplest formulas used in the Lambek calculus are atomic formulas, which normally include $s$ for sentence, $n$ for common noun and $np$ for noun phrase. We then inductively define the set of formulas of the Lambek calculus by saying that they include the atomic formulas and that, if $A$ and $B$ are formulas (atomic or not), then $A/B$ , $A\bullet B$ and $B\backslash A$ are also formulas.
The intended meaning of a formula $A/B$ — called $A$ over $B$ — is that it is looking for an expression of syntactic type $B$ to its right to produce an expression of syntactic type $A$ . An example would be a word like “the” which is assigned the formula $np/n$ in the lexicon, indicating that it is looking for a common noun (like “student”) to its right to form a noun phrase, meaning “the student” would be assigned syntactic type $np$ . Similarly, the intended meaning of a formula $B\backslash A$ — called $B$ under $A$ — is that it is looking for an expression of syntactic type $B$ to its left to produce an expression of type $A$ . This means an intransitive verb like “slept”, when assigned the formula $np\backslash s$ in the lexicon, combines with a noun phrase to its left to form a sentence $s$ . We therefore predict that “the student slept” is a sentence, given the earlier assignment of $np$ to “the student”. Finally, a formula $A\bullet B$ denotes the concatenation of an expression of type $A$ to an expression of type $B$ .
Basic statements of the Lambek calculus are of the form $A_1,\ldots ,A_n \vdash C$ (with $n \ge 1$ ), indicating a claim that the sequence of formulas $A_1,\ldots , A_n$ is of type $C$ ; the sequent comma `,' is implicitly associative and non-commutative. Table 1 shows the natural deduction rules for the Lambek calculus. $\Gamma $ , $\Delta $ , etc. denote non-empty sequences of formulas.
A simple Lambek calculus lexicon is shown in Table 2 . I have adopted the standard convention in type-logical grammars of not using set notation for the lexicon, but instead listing multiple lexical entries for a word separately. This corresponds to treating $\textit {lex}$ as a non-deterministic function rather than as a set-valued function.
Proper names, such as “Alyssa” and “Emory” are assigned the category $np$ . Common nouns, such as “student” and “exam” are assigned the category $n$ . Adjectives, such as “difficult” or “erratic” are not assigned a basic syntactic category but rather the category $n/n$ , indicating they are looking for a common noun to their right to form a new common noun, so we predict that both “difficult exam” and “exam” can be assigned category $n$ . For more complex entries, “someone” is looking to its right for a verb phrase to produce a sentence, where $np\backslash s$ is the Lambek calculus equivalent of verb phrase, whereas “whom” is first looking to its right for a sentence which is itself missing a noun phrase to its right and then to its left for a noun.
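As a concrete, purely illustrative encoding, Lambek calculus formulas and a few entries of the lexicon of Table 2 can be represented with some small Python classes; this is one possible data structure for a toy implementation, not the representation used by any particular theorem prover.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    name: str                           # np, n, s

@dataclass(frozen=True)
class Over:                             # A / B: looks for a B to its right
    num: object
    den: object

@dataclass(frozen=True)
class Under:                            # B \ A: looks for a B to its left
    den: object
    num: object

np_, n, s = Atom("np"), Atom("n"), Atom("s")

lexicon = {
    "Alyssa":    [np_],
    "student":   [n],
    "difficult": [Over(n, n)],                        # n/n
    "the":       [Over(np_, n)],                      # np/n
    "slept":     [Under(np_, s)],                     # np\s
    "aced":      [Over(Under(np_, s), np_)],          # (np\s)/np
    "every":     [Over(Over(s, Under(np_, s)), n)],   # (s/(np\s))/n
}

print(lexicon["aced"][0])               # Over(num=Under(...), den=Atom(name='np'))
```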
Given the lexicon of Table 2 , we can already derive some fairly complex sentences, such as the following, and, as we will see in the next section, obtain the correct semantics.
. Every student aced some exam.
. The student who slept during the exam loves Alyssa.
One of the two derivations of the sentence “Every student aced some exam.” is shown in Figure 2 . To improve readability, the figure uses a “sugared” notation: instead of writing the lexical hypothesis corresponding to “exam” as $n \vdash n$ , we have written it as $\textit {exam} \vdash n$ . The withdrawn $np$ 's corresponding to the object and the subject are given the labels $p_0$ and $q_0$ respectively; the introduction rules are coindexed with the withdrawn hypotheses, even though this information can be inferred from the rule instantiation.
We can always uniquely reconstruct the antecedent from the labels. For example, the sugared statement “ $p_0\ \textrm {aced}\ q_0 \vdash s$ ” in the proof corresponds to $np, (np\backslash s)/np, np \vdash s$ .
Although it is easy to verify that the proof of Figure 2 has correctly applied the rules of the Lambek calculus, finding such a proof from scratch may look a bit complicated (the key steps at the beginning of the proof involve introducing two $np$ hypotheses and then deriving $s/np$ to allow the object quantifier to take narrow scope). We will defer the question “given a statement $\Gamma \vdash C$ , how do we decide whether or not it is derivable?” to Section "Theorem proving" but will first discuss how this proof corresponds to the following logical formula. $ \forall x. [\mathit {student}(x) \Rightarrow \exists y. [\mathit {exam}(y) \wedge \mathit {ace}(x,y) ] ] $
The syntax-semantics interface
For the Lambek calculus, specifying the homomorphism to multiplicative intuitionistic linear logic is easy: we replace the two implications ` $\backslash $ ' and ` $/$ ' by the linear implication ` $\multimap $ ' and the product ` $\bullet $ ' by the tensor ` $\otimes $ '. In a statement $\Gamma \vdash C$ , $\Gamma $ is now a multiset of formulas instead of a sequence. In other words, the sequent comma `,' is now associative and commutative instead of associative and non-commutative. For the proof of Figure 2 of the previous section, this mapping gives the proof shown in Figure 3 .
We have kept the order of the premisses of the rules as they were in Figure 2 to allow for an easier comparison. This deep structure still uses the same atomic formulas as the Lambek calculus, it just forgets about the order of the formulas and therefore can no longer distinguish between the leftward looking implication ` $\backslash $ ' and the rightward looking implication ` $/$ '.
To obtain a semantics in the tradition of BIBREF11 , we use the following mapping from syntactic types to semantic types, using Montague's atomic types $e$ (for entity) and $t$ (for truth value). $ np^* & = e\\ n^* & = e\rightarrow t\\ s^* & = t\\ (A \multimap B)^* & = A^* \rightarrow B^* $
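This type map is a simple structural recursion and is easy to implement; the sketch below encodes deep structure formulas as nested tuples (an encoding assumed here only for illustration) and computes their semantic types.

```python
def sem_type(formula):
    """Map a deep structure formula (an atom or ('-o', A, B)) to its
    semantic type, with function types written as ('->', A*, B*)."""
    atomic = {'np': 'e', 'n': ('->', 'e', 't'), 's': 't'}
    if isinstance(formula, str):
        return atomic[formula]
    _, a, b = formula                     # (A -o B)* = A* -> B*
    return ('->', sem_type(a), sem_type(b))

def show(t):
    return t if isinstance(t, str) else f"({show(t[1])} -> {show(t[2])})"

# "every": (s/(np\s))/n forgets direction and becomes n -o ((np -o s) -o s)
every = ('-o', 'n', ('-o', ('-o', 'np', 's'), 's'))
print(show(sem_type(every)))              # ((e -> t) -> ((e -> t) -> t))
```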
Applying this mapping to the deep structure proof of Figure 3 produces the intuitionistic proof and the corresponding (linear) lambda term as shown in Figure 4 .
The computed term corresponds to the derivational semantics of the proof. To obtain the complete meaning, we need to substitute, for each of $z_0, \ldots , z_4$ , the meaning assigned in the lexicon.
For example, “every” has syntactic type $(s/(np\backslash s))/n$ and its semantic type is $(e\rightarrow t)\rightarrow (e\rightarrow t)\rightarrow t$ . The corresponding lexical lambda term of this type is $\lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. (\forall (\lambda x^e. ((\Rightarrow (P\, x)) (Q\, x))))$ , with ` $\forall $ ' a constant of type $(e\rightarrow t)\rightarrow t$ and ` $\Rightarrow $ ' a constant of type $t\rightarrow (t\rightarrow t)$ . In the more familiar Montague formulation, this lexical term corresponds to $\lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. \forall x. [ (P\, x) \Rightarrow (Q\,x)]$ , where we can see the formula in higher-order logic we are constructing more clearly. Although the derivational semantics is a linear lambda term, the lexical term assigned to “every” is not, since the variable $x$ has two bound occurrences.
The formula assigned to “some” has the same semantic type but a different term $\lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. (\exists (\lambda x^e. ((\wedge (P\, x)) (Q\, x))))$ .
The other words are simple, “exam” is assigned $\mathit {exam}^{e\rightarrow t}$ , “student” is assigned $\mathit {student}^{e\rightarrow t}$ , and “aced” is assigned $\mathit {ace}^{e\rightarrow (e\rightarrow t)}$ .
So to compute the meaning, we start with the derivational semantics, repeated below. $ ((z_0\,z_1) \,(\lambda x. ((z_3\,z_4)\,\lambda y. ((z_2\,y)\,x)))) $
Then we substitute the lexical meanings, for $z_0,\ldots ,z_4$ . $ z_0& := \lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. (\forall (\lambda x^e. ((\Rightarrow (P\, x)) (Q\, x))))\\ z_1&:= \mathit {student}^{e\rightarrow t}\\ z_2& := \mathit {ace}^{e\rightarrow (e\rightarrow t)}\\ z_3& := \lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. (\exists (\lambda x^e. ((\wedge (P\, x)) (Q\, x))))\\ z_4& := \mathit {exam}^{e\rightarrow t}\\ $
This produces the following lambda term. $ ((\lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. & (\forall (\lambda x^e. ((\Rightarrow (P\, x)) (Q\, x))))\,\mathit {student}^{e\rightarrow t}) \\ \,(\lambda x. ((\lambda P^{e\rightarrow t}.\lambda Q^{e\rightarrow t}. & (\exists (\lambda x^e. ((\wedge (P\, x)) (Q\, x))))\,\mathit {exam}^{e\rightarrow t})\\ &\lambda y. ((\mathit {ace}^{e\rightarrow (e\rightarrow t)}\,y)\,x)))) $
Finally, when we normalise this lambda term, we obtain the following semantics for this sentence. $ (\forall (\lambda x^e. ((\Rightarrow (\mathit {student}^{e\rightarrow t}\, x))\,(\exists (\lambda y^e. ((\wedge (\mathit {exam}^{e\rightarrow t}\, y))\,((\mathit {ace}^{e\rightarrow (e\rightarrow t)}\,y)\,x))))))) $
This lambda term represents the more readable higher-order logic formula. $ \forall x. [\mathit {student}(x) \Rightarrow \exists y. [\mathit {exam}(y) \wedge \mathit {ace}(x,y) ] ] $
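The substitution and normalisation steps can be mimicked executably with ordinary Python functions that build formula strings. The toy sketch below hard-codes the bound variable names and performs no capture-avoiding substitution, so it only illustrates how the lexical meanings compose to the formula above; it is not a lambda calculus implementation.

```python
# lexical meanings as higher-order functions producing formula strings
every   = lambda P: lambda Q: f"∀x.({P('x')} ⇒ {Q('x')})"
some    = lambda P: lambda Q: f"∃y.({P('y')} ∧ {Q('y')})"
student = lambda x: f"student({x})"
exam    = lambda y: f"exam({y})"
ace     = lambda y: lambda x: f"ace({x},{y})"     # curried, object argument first

# the derivational semantics ((z0 z1) (λx.((z3 z4) (λy.((z2 y) x)))))
meaning = every(student)(lambda x: some(exam)(lambda y: ace(y)(x)))
print(meaning)          # ∀x.(student(x) ⇒ ∃y.(exam(y) ∧ ace(x,y)))
```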
Proofs in the Lambek calculus, and in type-logical grammars are subsets of the proofs in intuitionistic (linear) logic and these proofs are compatible with formal semantics in the tradition initiated by BIBREF11 .
For the example in this section, we have calculated the semantics of a simple example in “slow motion”: many authors assign a lambda term directly to a proof in their type-logical grammar, leaving the translation to intuitionistic linear logic implicit.
Given a semantic analysis without a corresponding syntactic proof, we can try to reverse engineer the syntactic proof. For example, suppose we want to assign the reflexive “himself” the lambda term $\lambda R^{(e\rightarrow e\rightarrow t)}\lambda x^e. ((R\,x)\, x)$ , that is, a term of type $(e\rightarrow e\rightarrow t)\rightarrow e\rightarrow t$ . Then, using some syntactic reasoning to eliminate implausible candidates like $(np\multimap n)\multimap n$ , the only reasonable deep structure formula is $(np\multimap np\multimap s)\multimap (np\multimap s)$ and, reasoning a bit further about which of the implications is left and right, we quickly end up with the quite reasonable (though far from perfect) Lambek calculus formula $((np\backslash s)/np)\backslash (np\backslash s)$ .
Going further
Though the Lambek calculus is a beautiful and simple logic and though it gives a reasonable account of many interesting phenomena on the syntax-semantics interface, the Lambek calculus has a number of problems, which I will discuss briefly below. The driving force of research in type-logical grammars since the eighties has been to find solutions to these problems and some of these solutions will be the main theme of the next section.
The Lambek calculus generates only context-free languages BIBREF12 . There is a rather large consensus that natural languages are best described by a class of languages at least slightly larger than the context-free languages. Classical examples of phenomena better analysed using so-called mildly context-sensitive language include verb clusters in Dutch and in Swiss German BIBREF13 , BIBREF14 .
Though our example grammar correctly predicted two readings for the sentence “Every student aced some exam.” above, our treatment of quantifiers doesn't scale well. For example, if we want to predict two readings for the following sentence (which is just that sentence with “some” and “every” exchanged)
. Some student aced every exam.
then we need to add an additional lexical entry both for “some” and for “every”; this is easily done, but we end up with two lexical formulas for both words. However, this would still not be enough. For example, the following sentence is also grammatical.
. Alyssa gave every student a difficult exam.
. Alyssa believes a student committed perjury.
In the first of these two sentences, “every student” does not occur in a peripheral position, and though it is possible to add a more complex formula with the correct behaviour, we would need yet another formula for the second sentence. This second sentence is generally considered to have two readings: a de dicto reading, where Alyssa doesn't have a specific student in mind (she could conclude this, for example, when two students make contradictory statements under oath; this reading can be felicitously followed by “but she doesn't know which”), and a de re reading where Alyssa believes a specific student perjured. The Lambek calculus cannot generate this second reading without adding yet another formula for “a”.
It seems we are on the wrong track when we need to add a new lexical entry for each different context in which a quantifier phrase occurs. Ideally, we would like a single formula for “every”, “some” and “a” which applied in all these different cases.
Another way to see this is that we want to keep the deep structure formula $n\multimap ((np\multimap s) \multimap s)$ and that we need to replace the Lambek calculus by another logic such that the correct deep structures for the desired readings of sentences like UID18 and UID18 are produced.
The grammar above also overgenerates in several ways.
“ace” implies a (very positive) form of evaluation with respect to the object. “aced the exam” is good, whereas “aced Emory”, outside of the context of a tennis match is bad. “aced logic” can only mean something like “aced the exam for the logic course”.
“during” and similar temporal adverbs imply its argument is a temporal interval: “during the exam” is good, but “during the student” is bad, and “during logic” can only mean something like “during the contextually understood logic lecture”
In the literature on semantics, there has been an influential movement towards a richer ontology of types (compared to the “flat” Montagovian picture presented above) but also towards a richer set of operations for combining terms of specific types, notably allowing type coercions BIBREF15 , BIBREF16 . So an “exam” can be “difficult” (its subject matter, or informational content) but also “take a long time” (the event of taking the exam). The theory of semantics outlined in the previous section needs to be extended if we want to take these and other observations into account.
Modern type-logical grammars
We ended the last section with some problems with using the Lambek calculus as a theory of the syntax-semantics interface. The problems are of two different kinds.
Multimodal grammars
Multimodal type-logical grammars BIBREF0 take the non-associative Lambek calculus as its base, but allow multiple families of connectives.
For the basic statements $\Gamma \vdash C$ of the Lambek calculus, we ask the question whether we can derive formula $C$ , the succedent, from a sequence of formulas $\Gamma $ , the antecedent. In the multimodal Lambek calculus, the basic objects are labeled binary trees. The labels come from a separate set of indices or modes $I$ . Multimodal formulas are then of the form $A/_i B$ , $A\bullet _i B$ and $A\backslash _i B$ , and antecedent terms are of the form $\Gamma \circ _{i} \Delta $ , with $i$ an index from $I$ (we have omitted the outer brackets for the rules, but the operator $\circ _i$ is non-associative). Sequents are still written as $\Gamma \vdash C$ , but $\Gamma $ is now a binary branching, labeled tree with formulas as its leaves.
Given a set of words $w_1,\ldots ,w_n$ and a goal formula $C$ , the question is now: is there a labeled tree $\Gamma $ with formulas $A_1,\ldots ,A_n$ as its yield, such that $\Gamma \vdash C$ is derivable and $A_i \in \textit {lex}(w_i)$ for all $i$ (the implementation of Section "Multimodal proof nets" will automatically compute such a $\Gamma $ ).
The rules of multimodal type-logical grammars are shown in Table 3 . In the rules, $\Gamma [\Delta ]$ denotes an antecedent tree $\Gamma $ with distinguished subtree $\Delta $ — the subtree notation is a non-associative version of the Lambek calculus antecedent $\Gamma ,\Delta ,\Gamma ^{\prime }$ , where $\Delta $ is a subsequence instead of a subtree as it is in $\Gamma [\Delta ]$ .
Each logical connective with mode $i$ uses a structural connective $\circ _i$ in its rule. For the $/ E$ , $\bullet I$ and $\backslash E$ rules, reading from premisses to conclusions, we build structure. For the $/I$ , $\bullet E$ and $\backslash I$ rules we remove a structural connective with the same mode as the logical connective. The natural deduction rules use explicit antecedents, although, for convenience, we will again use coindexation between the introduction rules for the implications ` $/$ ' and ` $\backslash $ ' and its withdrawn premiss (and similarly for the $\bullet E$ rule and its two premisses).
The main advantage of adding modes to the logic is that modes allow us to control the application of structural rules lexically. This gives us fine-grained control over the structural rules in our logic.
For example, the base logic is non-associative. Without structural rules, the sequent $a/b, b/c \vdash a/c$ , which is derivable in the Lambek calculus, is not derivable in its multimodal incarnation $a/_a b, b/_a c \vdash a/_a c$ . The proof attempt below, with the failed rule application left without a rule label, shows us that the elimination rules and the introduction rule for this sequent do not match up correctly. $ [[/ I]]{a/_ab \circ _{a} b/_ac\vdash a/_a c }{[\text{}]{(a/_ab \circ _{a} b/_ac) \circ _{a} c \vdash a}{[[/ E]]{a/_a b \circ _{a} (b/_a c \circ _{a} c)\vdash a}{a/_a b\vdash a/_a b & [[/ E]]{b/_a c \circ _{a} c \vdash b}{b/_a c \vdash b/_a c & c\vdash c}}}} $
This is where the structural rules, shown at the bottom of Table 3 come in. The general form, read from top to bottom, states that we take a structure $\Gamma $ containing a distinguished subtree $\Xi $ which itself has $n$ subtrees $\Delta _1,\ldots ,\Delta _n$ , and we replace this subtree $\Xi $ with a subtree $\Xi ^{\prime }$ which has the same number of subtrees, though not necessarily in the same order ( $\pi $ is a permutation on the leaves). In brief, we replace a subtree $\Xi $ by another subtree $\Xi ^{\prime }$ and possibly rearrange the leaves (subtrees) of $\Xi $ , without deleting or copying any subtrees. Examples of structural rules are the following.
The first structural rule is one of the structural rules for associativity. It is the simplest rule which will make the proof attempt above valid (with $\Gamma []$ the empty context, $\Delta _1 = a/_a b$ , $\Delta _2 = b/_a c$ and $\Delta _3 = c$ ). This structural rule keeps the order of the $\Delta _i$ the same.
The rule above on the right is slightly more complicated. There, the positions of $\Delta _2$ and $\Delta _3$ are swapped, as are the relative positions of modes 0 and 1. Rules like this are called “mixed commutativity”; they permit controlled access to permutation. One way to see this rule, read from top to bottom, is that it “moves out” a $\Delta _3$ constituent which is on the right branch of mode 1. Rules of this kind are part of the solution to phenomena like Dutch verb clusters BIBREF17 .
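To illustrate the reading of structural rules as rewrites on antecedent trees, the sketch below applies the associativity rule for mode $a$ to a nested-tuple encoding of labeled trees; the encoding ('a', left, right) is an assumption made here for illustration, not the internal representation of any prover.

```python
def assoc(tree):
    """Rewrite ((d1 ∘a d2) ∘a d3) into (d1 ∘a (d2 ∘a d3)), one application
    of the associativity rule for mode a; leaves are formulas as strings."""
    if isinstance(tree, tuple) and tree[0] == 'a':
        _, left, d3 = tree
        if isinstance(left, tuple) and left[0] == 'a':
            _, d1, d2 = left
            return ('a', d1, ('a', d2, d3))
    return tree

# the rebracketing needed to complete the proof attempt for a/_a b, b/_a c |- a/_a c
print(assoc(('a', ('a', 'a/_a b', 'b/_a c'), 'c')))
# ('a', 'a/_a b', ('a', 'b/_a c', 'c'))
```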
Many modern type-logical grammars, such as the Displacement calculus and NL $_{cl}$ can be seen as multimodal grammars BIBREF18 , BIBREF1 .
First-order linear logic
We have seen that multimodal type-logical grammars generalise the Lambek calculus by offering the possibility of fine-grained control over the application of structural rules. In this section, I will introduce a second way of extending the Lambek calculus.
Many parsing algorithms use pairs of integers to represent the start and end position of substrings of the input string. For example, we can represent the sentence
. Alyssa believes someone committed perjury.
as follows (this is a slightly simplified version of Sentence UID18 from Section "Going further" ); we have treated “committed perjury” as a single word.
(Diagram: the words “Alyssa”, “believes”, “someone” and “committed perjury” label the edges between the string positions 0, 1, 2, 3 and 4.)
The basic idea of first-order linear logic as a type-logical grammar is that we can code strings as pairs (or, more generally, tuples) of integers representing string positions. So for deciding the grammaticality of a sequence of words $w_1,\ldots , w_n \vdash C$ , with a goal formula $C$ , we now give a parametric translation, using $\Vert A_i \Vert ^{i-1,i}$ for each lexical entry $w_i$ and $\Vert C\Vert ^{0,n}$ for the conclusion formula.
Given these string positions, we can assign the noun phrase “Alyssa” the formula $np(0,1)$ , that is a noun phrase from position 0 to position 1. The verb “believes”, which occurs above between position 1 and 2, can then be assigned the complex formula $\forall x_2. [ s(2,x_2) \multimap \forall x_1. [ np(x_1,1) \multimap s(x_1,x_2)] ]$ , meaning that it first selects a sentence to its right (that is, starting at its right edge, position 2 and ending anywhere) and then a noun phrase to its left (that is, starting anywhere and ending at its left edge, position 1) to produce a sentence from the left position of the noun phrase argument to the right position of the sentence argument.
We can systematise this translation, following BIBREF19 , and obtain the following translation from Lambek calculus formulas to first-order linear logic formulas. $ \Vert p \Vert ^{x,y} & = p(x,y) \\ \Vert A / B \Vert ^{x,y} &= \forall z. \Vert B \Vert ^{y,z} \multimap \Vert A \Vert ^{x,z} \\ \Vert B\backslash A \Vert ^{y,z} &= \forall x. \Vert B \Vert ^{x,y} \multimap \Vert A \Vert ^{x,z} \\ \Vert A \bullet B \Vert ^{x,z} &= \exists y. \Vert A \Vert ^{x,y} \otimes \Vert B \Vert ^{y,z} $
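This translation is a straightforward structural recursion; the sketch below encodes Lambek formulas as nested tuples, produces formula strings and generates fresh variables on the fly (again only an illustrative encoding). The example reproduces the lexical entry for “believes” given above.

```python
from itertools import count

fresh = (f"x{i}" for i in count())

def tr(formula, l, r):
    """Translate a Lambek formula spanning positions l..r into a
    first-order linear logic formula, following the clauses above."""
    if isinstance(formula, str):                    # atomic: p(l, r)
        return f"{formula}({l},{r})"
    op, a, b = formula
    z = next(fresh)
    if op == '/':                                   # A/B: ∀z. ||B||^{r,z} -o ||A||^{l,z}
        return f"∀{z}.({tr(b, r, z)} ⊸ {tr(a, l, z)})"
    if op == '\\':                                  # B\A: ∀z. ||B||^{z,l} -o ||A||^{z,r}
        return f"∀{z}.({tr(a, z, l)} ⊸ {tr(b, z, r)})"
    if op == '*':                                   # A•B: ∃z. ||A||^{l,z} ⊗ ||B||^{z,r}
        return f"∃{z}.({tr(a, l, z)} ⊗ {tr(b, z, r)})"

# "believes": (np\s)/s between positions 1 and 2
print(tr(('/', ('\\', 'np', 's'), 's'), 1, 2))
# ∀x0.(s(2,x0) ⊸ ∀x1.(np(x1,1) ⊸ s(x1,x0)))
```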
Given this translation, the lexical entry for “believes” discussed above is simply the translation of the Lambek calculus formula $(np\backslash s)/s$ , with position pair $1,2$ , to first-order linear logic. Doing the same for “committed perjury” with formula $np\backslash s$ and positions $3,4$ gives $\forall z. [np(z,3) \multimap s(z,4)]$ . For “someone” we would simply translate the Lambek calculus formula $s/(np\backslash s)$ , but we can do better than that: when we translate “someone” as $\forall y_1. \forall y_2. [ (np(2,3) \multimap s(y_1,y_2)) \multimap s(y_1,y_2) ]$ , we improve upon the Lambek calculus analysis.
As we noted in Section "Going further" , the Lambek calculus cannot generate the “de re” reading, where the existential quantifier has wide scope. Figure 5 shows how the simple first-order linear logic analysis does derive this reading.
Besides the Lambek calculus, first-order linear logic has many other modern type-logical grammars as fragments. Examples include lambda grammars BIBREF20 , the Displacement calculus BIBREF2 and hybrid type-logical grammars BIBREF3 . We can see first-order linear logic as a sort of “machine language” underlying these different formalisms, with each formalism introducing its own set of abbreviations convenient for the grammar writer. Seeing first-order linear logic as an underlying language allows us to compare the analyses proposed for different formalisms and find, in spite of different starting points, a lot of convergence. In addition, as discussed in Section "First-order proof nets" , we can use first-order linear logic as a uniform proof strategy for these formalisms.
As usual, we obtain the deep structure of a syntactic derivation by defining a homomorphism from the syntactic proof to a proof in multiplicative intuitionistic linear logic. For first-order linear logic, the natural mapping simply forgets all first-order quantifiers and replaces all atomic predicates $p(x_1,\ldots ,x_n)$ by propositions $p$ . Since the first-order variables have, so far, only been used to encode string positions, such a forgetful mapping makes sense.
However, other solutions are possible. When we add semantically meaningful terms to first-order linear logic, the Curry-Howard isomorphism for the first-order quantifiers will give us dependent types and this provides a natural connection to the work using dependent types for formal semantics BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 .
The Montagovian Generative Lexicon
In the previous sections, we have discussed two general solutions to the problems of the syntax-semantics interface of the Lambek calculus. Both solutions proposed a more flexible syntactic logic. In this section, we will discuss a different type of added flexibility, namely in the syntax-semantics interface itself.
The basic motivating examples for a more flexible composition have been amply debated in the literature BIBREF15 , BIBREF16 . Our solution is essentially the one proposed by BIBREF25 , called the Montagovian Generative Lexicon. I will only give a brief presentation of this framework. More details can be found in Chapter 6.
Like many other solutions, the first step consists of splitting Montague's type $e$ for entities into several types: physical objects, locations, informational objects, eventualities, etc. Although there are different opinions with respect to the correct granularity of types BIBREF15 , BIBREF16 , BIBREF26 , nothing much hinges on this for the present discussion.
The second key element is the move to the second-order lambda calculus, system F BIBREF27 , which allows abstraction over types as well as over terms. In our Lambek calculus, the determiner “the” was assigned the formula $np/n$ and the type of its lexical semantics was therefore $(e\rightarrow t) \rightarrow e$ , which we implement using the $\iota $ operator of type $(e\rightarrow t) \rightarrow e$ , which, roughly speaking, selects a contextually salient entity from (a characteristic function of) a set. When we replace the single type $e$ by several different types, we want to avoid listing several separate syntactically identical but semantically different entries for “the” in the lexicon, and therefore assign it a polymorphic term $\Lambda \alpha . \iota ^{(\alpha \rightarrow t)\rightarrow \alpha }$ of type $\Pi \alpha . ((\alpha \rightarrow t)\rightarrow \alpha )$ , quantifying over all types $\alpha $ . Though this looks problematic, the problem is resolved once we realise that only certain function words (quantifiers, conjunctions like “and”) are assigned polymorphic terms and that we simply use universal instantiation to obtain the value of the quantifier variable. So if “student” is a noun of type human, that is of type $h\rightarrow t$ , then “the student” will be of type $h$ , instantiating $\alpha $ to $h$ . Formally, we use $\beta $ reduction as follows (this is substitution of types instead of terms, substituting type $h$ for $\alpha $ ). $ (\Lambda \alpha . \iota ^{(\alpha \rightarrow t)\rightarrow \alpha })\, h \rightarrow _{\beta } \iota ^{(h\rightarrow t)\rightarrow h} $
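A minimal sketch of this universal instantiation at the type level: types are encoded as strings and tuples, and instantiation is plain substitution of a type for the bound type variable (no capture issues arise in this toy encoding, which is assumed here only for illustration).

```python
def inst(ty, var, arg):
    """Substitute the type `arg` for the type variable `var` in `ty`."""
    if isinstance(ty, str):
        return arg if ty == var else ty
    if ty[0] == '->':
        return ('->', inst(ty[1], var, arg), inst(ty[2], var, arg))
    if ty[0] == 'forall':                 # stop at a rebinding of the same variable
        return ty if ty[1] == var else ('forall', ty[1], inst(ty[2], var, arg))

# the type of the polymorphic determiner: Πα.((α -> t) -> α)
the_type = ('forall', 'α', ('->', ('->', 'α', 't'), 'α'))
print(inst(the_type[2], the_type[1], 'h'))
# ('->', ('->', 'h', 't'), 'h')   i.e. (h -> t) -> h
```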
The final component of the Montagovian Generative Lexicon is a set of lexically specified, optional transformations. In case of a type mismatch, an optional transformation can “repair” the term.
As an example from BIBREF28 and BIBREF29 , one of the classic puzzles in semantics are plurals and collective and distributive readings. For example, verbs like “meet” have collective readings, they apply to groups of individuals collectively, so we have the following contrast, where collectives like committees and plurals like students can meet, but not singular or distributively quantified noun phrases. The contrast with verbs like “sneeze”, which force a distributive reading is clear.
. The committee met.
. All/the students met
. *A/each/the student met.
. All/the students sneezed.
. A/each/the student sneezed.
In the Montagovian Generative Lexicon, we can model these facts as follows. First, we assign the plural morphology “-s” the semantics $\Lambda \alpha \lambda P^{\alpha \rightarrow t} \lambda Q^{\alpha \rightarrow t}. | Q | > 1 \wedge \forall x^{\alpha }. Q(x) \Rightarrow P(x)$ ; then “students” is assigned the following term $\lambda Q^{h\rightarrow t}. | Q | > 1 \wedge \forall x^h. Q(x) \Rightarrow \textit {student}(x)$ , that is, the sets of cardinality greater than one such that all their members are students. Unlike “student”, which was assigned a term of type $h\rightarrow t$ , roughly a property of humans, the plural “students” is assigned a term of type $(h\rightarrow t)\rightarrow t$ , roughly a property of sets of humans. Consequently, the contrast between “the student” and “the students” is that the first is of type $h$ (a human) and the second of type $h\rightarrow t$ (a set of humans), as indicated below.
Therefore, the meaning of “the students” is the contextually determined set of humans, from the sets of more than one human such that all of them are students.
Then we distinguish the verbs “meet” and “sneeze” as follows, with the simpler verb “sneeze” simply selecting for a human subject and the collective verb “meet” selecting for a set of humans (of cardinality greater than one) as its subject.
Given these basic lexical entries, we already correctly predict that “the student met” is ill-formed semantically (there is an unresolvable type mismatch) but “the students met” and “the student sneezed” are given the correct semantics.
The interesting case is “the students sneezed” which has as its only reading that each student sneezed individually. Given that “the students” is of type $h\rightarrow t$ and that “sneezed” requires an argument of type $h$ , there is a type mismatch when we apply the two terms. However, “sneeze” has the optional distributivity operator `*', which when we apply it to the lexical semantics for “sneeze” produces the term $\lambda P^{h\rightarrow t}. \forall x^h. P(x) \Rightarrow \textit {sneeze}(x)$ , which combines with “the students” to produce the reading. $ \forall x^h. (\iota (\lambda Q^{h\rightarrow t}. | Q | > 1 \wedge \forall y^h Q(y) \Rightarrow \textit {student}(y))\, x) \Rightarrow \textit {sneeze}(x) $
In other words, all of the members of the contextually determined set of more than one human, all of whom are students, sneeze.
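The contrast between the collective and the distributive reading can be mimicked in a few lines of Python: individuals are strings, a “set of humans” is a frozenset, and the optional `*' operator is a higher-order function lifting a predicate on individuals to a predicate on sets. Types are only implicit in this toy model, which is offered purely as an illustration.

```python
sneezers = {"ada", "bob"}

def sneeze(x):                     # h -> t: a predicate on individuals
    return x in sneezers

def meet(group):                   # a collective predicate on sets of individuals
    return len(group) > 1

def distribute(p):                 # the optional '*' operator
    return lambda group: all(p(x) for x in group)

the_students = frozenset({"ada", "bob"})
print(meet(the_students))                  # collective reading: True
print(distribute(sneeze)(the_students))    # distributive reading: True
# applying sneeze directly to the_students would be the type mismatch discussed above
```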
The basic idea for the Montagovian Generative Lexicon is that lexical entries specify optional transformations which can repair certain sorts of type mismatches in the syntax-semantics interface. This adaptability allows the framework to solve many semantic puzzles.
Though a proof-of-concept application of these ideas exists, more robust and scalable applications, as well as efforts to incorporate these ideas into wide-coverage semantics, are ongoing research.
Theorem proving
When looking at the rules and examples for the different logics, the reader may have wondered: how do we actually find proofs for type-logical grammars? This question becomes especially urgent once our grammars become more complex and the consequences of our lexical entries, given our logic, become hard to oversee. Though pen and paper generally suffice to show that a given sentence is derivable for the desired reading, it is generally much more laborious to show that a given sentence is underivable or that it has only the desired readings. This is where automated theorem provers are useful: they allow more extensive and intensive testing of your grammars, producing results more quickly and with less errors (though we should be careful about too naively assuming the implementation we are using is correct: when a proof is found it is generally easy to verify its correctness by hand, but when a proof isn't found because of a programming error this can be hard to detect).
Though the natural deduction calculi we have seen so far can be used for automated theorem proving BIBREF30 , BIBREF31 , and though BIBREF4 already gave a sequent calculus decision procedure, both logics have important drawbacks for proof search.
Natural deduction has a 1-1 correspondence between proofs and readings, though this is somewhat complicated to enforce for a logic with the $\bullet \textit {E}$ rule (and the related $\Diamond \textit {E}$ rule). For the sequent calculus, the product rule is just like the other rules, but the sequent calculus suffers from the so-called “spurious ambiguity” problem, which means that it generates many more proofs than readings.
Fortunately, there are proof systems which combine the good aspects of natural deduction and sequent calculus, and which eliminate their respective drawbacks. Proof nets are a graphical representation of proofs first introduced for linear logic BIBREF32 . Proof nets suffer neither from spurious ambiguity nor from complications for the product rules.
Proof nets are usually defined as a subset of a larger class, called proof structures. Proof structures are “candidate proofs”: part of the search space of a naive proof search procedure which need not correspond to actual proofs. Proof nets are those proof structures which correspond to sequent proofs. Perhaps surprisingly, we can distinguish proof nets from other proof structures by looking only at graph-theoretical properties of these structures.
Proof search for type-logical grammars using proof nets uses the following general procedure: first, lexical lookup replaces the words by formulas; second, we unfold these formulas until we reach their atomic subformulas; third, we identify atomic subformulas with each other; and finally, we check whether the resulting proof structure is a proof net, that is, whether it corresponds to a proof.
In Sections "Multimodal proof nets" and "First-order proof nets" we will instantiate this general procedure for multimodal type-logical grammar and for first-order linear logic respectively.
Multimodal proof nets
Table 5 presents the links for multimodal proof nets. The top row list the links corresponding to the elimination rules of natural deduction, the bottom row those corresponding to the introduction rules. There are two types of links: tensor links, with an open center, and par links, with a filled center. Par links have a single arrow pointing to the main formula of the link (the complex formula containing the principal connective). The top and bottom row are up-down symmetric with tensor and par reversed. The tensor links correspond to the logical rules which build structure when we read them from top to bottom, the par links to those rules which remove structure.
The formulas written above the central node of a link are its premisses, whereas the formulas written below it are its conclusions. Left-to-right order of the premisses as well as the conclusions is important.
A proof structure is a set of formula occurrences and a set of links such that:
each formula is at most once the premiss of a link,
each formula is at most once the conclusion of a link.
A formula which is not the premiss of any link is a conclusion of the proof structure. A formula which is not the conclusion of any link is a hypothesis of the proof structure. We say a proof structure with hypotheses $\Gamma $ and conclusions $\Delta $ is a proof structure of $\Gamma \vdash \Delta $ (we are overloading the ` $\vdash $ ' symbol here, though this use should always be clear from the context; note that $\Delta $ can contain multiple formulas).
After the first step of lexical lookup we have a sequent $\Gamma \vdash C$ , and we can enumerate its proof structures as follows: unfold the formulas in $\Gamma , C$ , so that the formulas in $\Gamma $ are hypotheses and the formula $C$ is a conclusion of the resulting structure, until we reach the atomic subformulas (this is step 2 of the general procedure), then identify atomic subformulas (step 3 of the general procedure; we turn to the last step, checking correctness, below). This identification step can, by the conditions on proof structures, only identify hypotheses with conclusions and must leave all formulas of $\Gamma $ , including atomic formulas, as hypotheses and $C$ as a conclusion.
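Step 3 amounts to enumerating, for every atomic formula name, the bijections between its hypothesis occurrences and its conclusion occurrences. The sketch below does just that, with occurrences encoded as (name, id) pairs (an encoding assumed here for illustration); the later correctness check is left aside.

```python
from itertools import permutations, product

def matchings(hyp_atoms, con_atoms):
    """Yield every way of identifying atomic hypotheses with atomic
    conclusions that carry the same name."""
    names = {name for name, _ in hyp_atoms}
    if names != {name for name, _ in con_atoms}:
        return
    per_name = []
    for name in names:
        hs = [i for n, i in hyp_atoms if n == name]
        cs = [i for n, i in con_atoms if n == name]
        if len(hs) != len(cs):
            return
        per_name.append([list(zip(hs, p)) for p in permutations(cs)])
    for combination in product(*per_name):
        yield [pair for group in combination for pair in group]

# atomic hypotheses and conclusions of the unfolding of a/_a b, b/_a c |- a/_a c
hyps = [('a', 'h1'), ('b', 'h2'), ('c', 'h3')]
cons = [('a', 'c1'), ('b', 'c2'), ('c', 'c3')]
print(list(matchings(hyps, cons)))         # a single matching in this case
```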
Figure 6 shows the lexical unfolding of the sequent $a/_a b, b/_a c \vdash a/_a c$ . It is already a proof structure, though a proof structure of $a, a/_a b, b, b/_a c, c \vdash a, a/_a c, b, c$ (to the reader familiar with the proof nets of linear logic: some other presentations of proof nets use more restricted definitions of proof structures where a “partial proof structure” such as shown in the figure is called a module).
To turn this proof structure into a proof structure of $a/_a b, b/_a c \vdash a/_a c$ , we identify the atomic formulas. In this case, there is only a single way to do this, since $a$ , $b$ and $c$ all occur once as a hypothesis and once as a conclusion, though in general there may be many possible matchings. Figure 7 shows, on the left, the proof structure after identifying the $a$ and $b$ formulas. Since left and right (linear order), up and down (premiss, conclusion) have meaning in the graph, connecting the $c$ formulas is less obvious: $c$ is a conclusion of the $/I$ link and must therefore be below it, but a premiss of the $/E$ link and must therefore be above it. This is hard to achieve in the figure shown on the left. Though a possible solution would be to draw the figure on a cylinder, where “going up” from the topmost $c$ we arrive at the bottom one, for ease of type-setting and reading the figure, I have chosen the representation shown in Figure 7 on the right. The curved line goes up from the $c$ premiss of the $/E$ link and arrives from below at the $/I$ link, as desired. One way to see this strange curved connection is as a graphical representation of the coindexation of a premiss with a rule in the natural deduction rule for the implication.
Figure 7 therefore shows, on the right, a proof structure for $a/_a b, b/_a c \vdash a/_a c$ . However, is it also a proof net, that is, does it correspond to a proof? In a multimodal logic, the answer depends on the available structural rules. For example, if no structural rules are applicable to mode $a$ then $a/_a b, b/_a c \vdash a/_a c$ is underivable, but if mode $a$ is associative, then it is derivable.
We decide whether a proof structure is a proof net based only on properties of the graph. As a first step, we erase all formula information from the internal nodes of the graph; for administrative reasons, we still need to be able to identify which of the hypotheses and conclusion of the structure correspond to which formula occurrence. All relevant information for correctness is present in this graph, which we call an abstract proof structure.
We talked about how the curved line in proof structures (and abstract proof structures) corresponds to the coindexation of discharged hypotheses with rule names for the implication introduction rules. However, the introduction rules for multimodal type-logical grammars actually do more than just discharge a hypothesis, they also check whether the discharged hypothesis is the immediate left (for $\backslash I$ ) or right (for $/ I$ ) daughter of the root node, that is, that the withdrawn hypothesis $A$ occurs as $A\circ _i \Gamma $ (for $\backslash I$ and mode $i$ ) or $\Gamma \circ _i A$ (for $/I$ and mode $i$ ). The par links in the (abstract) proof structure represent a sort of “promise” that the required structure will be produced. We check whether this promise is satisfied by means of contractions on the abstract proof structure.
The multimodal contractions are shown in Table 6 . All portrayed configurations contract to a single vertex: we erase the two internal vertices and the paired links and we identify the two external vertices, keeping all connections of the external vertices to the rest of the abstract proof structure as they were: the vertex which is the result of the contraction will be a conclusion of the same link as the top external vertex (or a hypothesis of the abstract proof structure in case it wasn't) and it will be a premiss of the same link as the bottom external vertex (or a conclusion of the abstract proof structure in case it wasn't).
The contraction for $/I$ checks if the withdrawn hypothesis is the right daughter of a tensor link with the same mode information $i$ , and symmetrically for the $\backslash I$ contraction. The $\bullet E$ contraction contracts two hypotheses occurring as sister nodes.
All contractions are instantiations of the same pattern: a tensor link and a par link are connected, respecting left-right and up-down, at the two vertices of the par link without the arrow.
To get a better feel for the contractions, we will start with their simplest instances. When we do pattern matching on the contraction for $/ I$ , we see that it corresponds to the following patterns, depending on our choice for the tensor link (the par link is always $/ I$ ). $ C/_i B &\vdash C/_i B \\ A & \vdash (A\bullet _i B)/_i B \\ A & \vdash C/_i (A\backslash _i C) $
A proof structure is a proof net iff it contracts to a tree containing only tensor links using the contractions of Table 6 and any structural rewrites, discussed below — BIBREF33 present full proofs. In other words, we need to contract all par links in the proof structure according to their contraction, each contraction ensuring the correct application of the rule after which it is named. The abstract proof structure on the right of Figure 8 does not contract, since there is no substructure corresponding to the $/I$ contraction: for a valid contraction, a par link is connected to both “tentacles” of a single tensor link, and in the figure the two tentacles without arrow are connected to different tensor links. This is correct, since $a/_a b, b/_a c\vdash a/_a c$ is underivable in a logic without structural rules for $a$ .
However, we have seen that this statement becomes derivable once we add associativity of $a$ and it is easily verified to be a theorem of the Lambek calculus. How can we add a modally controlled version of associativity to the proof net calculus? We can add such a rule by adding a rewrite from a tensor tree to another tensor tree with the same set of leaves. The rewrite for associativity is shown in Figure 9 . To apply a structural rewrite, we replace the tree on the left hand side of the arrow by the one on the right hand side, reattaching the leaves and the root to the rest of the proof net.
Just like the structural rules, a structural rewrite always has the same leaves on both sides of the arrow — neither copying nor deletion is allowed, though we can reorder the leaves in any way (the associativity rule doesn't reorder the leaves).
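As a small illustration of how such a rewrite can be implemented, the sketch below encodes tensor trees as nested tuples (mode, left, right) — an encoding assumed only for this example — and applies the associativity rule of Figure 9 at the root of a tree.

```python
# A tensor tree is a leaf (a string) or a triple (mode, left, right).

def associativity(tree):
    """Rebracket (x o_a y) o_a z into x o_a (y o_a z); None if no match at the root."""
    if isinstance(tree, tuple) and tree[0] == 'a':
        _, left, z = tree
        if isinstance(left, tuple) and left[0] == 'a':
            _, x, y = left
            return ('a', x, ('a', y, z))             # same leaves, same order
    return None

# The antecedent of a/_a b, b/_a c, c bracketed to the left:
print(associativity(('a', ('a', 'a/b', 'b/c'), 'c')))
# -> ('a', 'a/b', ('a', 'b/c', 'c'))
```

A full implementation would of course try the rewrite at every subtree, and in both directions where the structural rule permits it.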
Figure 10 shows how the contractions and the structural rewrites work together to derive $a/_a b, b/_a c \vdash a/_a c$ .
We start with a structural rewrite, which rebrackets the pair of tensor links. The two hypotheses are now the premisses of the same link, and this also produces a contractible structure for the $/I$ link. Hence, we have shown the proof structure to be a proof net.
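Putting the pieces together, the correctness check is a search over contraction and rewrite steps; since the multimodal contractions are not confluent (a point discussed again at the end of the next section), a backtracking search over the possible orders is the safe default. The sketch below assumes hashable graph objects and three helper operations, and is not the actual Grail code.

```python
def is_proof_net(structure, find_redexes, contract, structural_rewrites):
    """Search for a way of contracting `structure` to a tree of tensor links.

    `find_redexes` lists the contractible par/tensor pairs, `contract` performs
    one contraction, and `structural_rewrites` lists the rewrites licensed by
    the grammar; all three are stand-ins for the graph operations in the text."""
    agenda, seen = [structure], set()
    while agenda:
        s = agenda.pop()
        if s in seen:
            continue
        seen.add(s)
        if not s.par_links and s.is_tensor_tree():
            return True                        # every par link contracted: a proof net
        agenda.extend(contract(s, redex) for redex in find_redexes(s))
        agenda.extend(structural_rewrites(s))  # rewrites may create new redexes
    return False
```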
In the Grail theorem prover, the representation of abstract proof structures looks as shown in Figure 11 (this is an automatically produced subgraph close to the graph on the left of Figure 10 , though with a non-associative mode $n$ and therefore not derivable). This graph is used during user interaction. The graphs are drawn using GraphViz, an external graph drawing program which does not guarantee respecting our desires for left, right and top/bottom, so tentacles are labeled 1, 2 and 3 (for left, right and top/bottom respectively) to allow us to make these distinctions regardless of the visual representation. Vertices are given unique identifiers for user interaction, for example to allow specifying which pair of atoms should be identified or which par link should be contracted.
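Producing such a picture is mostly a matter of serialising the graph; the sketch below writes a (heavily simplified) GraphViz file with tentacles labeled 1, 2 and 3 as above. The attribute choices are illustrative and do not reproduce Grail's actual output.

```python
def to_dot(vertices, tensor_links):
    """vertices: iterable of vertex ids; tensor_links: triples
    (left premiss, right premiss, conclusion) of vertex ids."""
    lines = ['graph abstract_proof_structure {']
    for v in vertices:
        lines.append(f'  "{v}" [shape=circle, label="{v}"];')
    for i, (left, right, concl) in enumerate(tensor_links):
        centre = f'link{i}'
        lines.append(f'  "{centre}" [shape=point];')
        lines.append(f'  "{centre}" -- "{left}"  [label="1"];')   # left premiss
        lines.append(f'  "{centre}" -- "{right}" [label="2"];')   # right premiss
        lines.append(f'  "{centre}" -- "{concl}" [label="3"];')   # conclusion
    lines.append('}')
    return '\n'.join(lines)

print(to_dot(['x0', 'x1', 'x2'], [('x0', 'x1', 'x2')]))
```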
Although the structural rules give the grammar writer a great deal of flexibility, such flexibility complicates proof search. As discussed at the beginning of Section "Theorem proving" , theorem proving using proof nets is a four step process, which in the current situation looks as follows: 1) lexical lookup, 2) unfolding, 3) identification of atoms, 4) graph rewriting. In the current case, both the graph rewriting and the identification of atoms are complicated and since we can interleave the atom connections and the graph rewriting it is not a priori clear which strategy is optimal for which set of structural rules. The current implementation does graph rewriting only once all atoms have been connected.
The Grail theorem prover implements some strategies for early failure. Since all proofs in multimodal type-logical grammars are a subset of the proofs in multiplicative linear logic, we can reject (partial) proof structures which are invalid in multiplicative linear logic, a condition which is both powerful and easy to check.
As a compromise between efficiency and flexibility, Grail allows the grammar writer to specify a first-order approximation of her structural rules. Unlike the test for validity in multiplicative linear logic which is valid for any set of structural rules, specifying such a first-order approximation is valid only when there is a guarantee that all derivable sequents in the multimodal grammar are a subset of their approximations derivable in first-order linear logic. Errors made here can be rather subtle and hard to detect. It is recommended to use such methods to improve parsing speed only when a grammar has been sufficiently tested and where it is possible to verify whether no valid readings are excluded, or, ideally, to prove that the subset relation holds between the multimodal logic and its first-order approximation.
The next section will discuss first-order proof nets in their own right. Though these proof nets have been used as an underlying mechanism in Grail for a long time, we have seen in Section "First-order linear logic" that many modern type-logical grammars are formulated in a way which permits a direct implementation without an explicit set of structural rules.
As to the proof search strategy used by Grail, it is an instance of the “dancing links” algorithm BIBREF35 : when connecting atomic formulas, we always link a formula which has the fewest possibilities and we rewrite the abstract proof structures only once a fully linked proof structure has been produced. Though the parser is not extremely fast, evaluation both on randomly generated statements and on multimodal statements extracted from corpora shows that the resulting algorithm performs more than well enough BIBREF36 .
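The heart of this strategy is the choice of which atom to link next: always the most constrained one, so that failure is detected as early as possible. A minimal sketch, with a caller-supplied `candidates` function, could look as follows.

```python
def link_atoms(unlinked, candidates, linked=()):
    """Yield complete sets of axiom links.

    `unlinked` is a list of atom occurrences still needing a link and
    `candidates(atom, linked)` returns the opposite-polarity occurrences
    it may still be linked to; both are assumptions for the example."""
    if not unlinked:
        yield list(linked)
        return
    # Expand an atom with the fewest possibilities; zero candidates fails at once.
    atom = min(unlinked, key=lambda a: len(candidates(a, linked)))
    for partner in candidates(atom, linked):
        rest = [a for a in unlinked if a not in (atom, partner)]
        yield from link_atoms(rest, candidates, linked + ((atom, partner),))
```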
First-order proof nets
Proof nets for first-order linear logic BIBREF37 are a simple extension of the proof nets for standard, multiplicative linear logic BIBREF38 . Compared to the multimodal proof nets of the previous section, all logical links have the main formula of the link as their conclusion but there is now a notion of polarity, corresponding to whether or not the formula occurs on the left hand side of the turnstile (negative polarity) or on the right hand side (positive polarity).
We unfold a sequent $A_1,\ldots ,A_n \vdash C$ by using the negative unfolding for each of the $A_i$ and the positive unfolding for $C$ . The links for first-order proof nets are shown in Table 7 .
Contrary to multimodal proof nets, where a tensor link was drawn with an open central node and a par link with a filled central node, here par links are drawn as a connected pair of dotted lines and tensor links as a pair of solid lines.
As before, premisses are drawn above the link and conclusions are drawn below it. With the exception of the cut and axiom links, the order of the premisses and the conclusions is important. We assume without loss of generality that every quantifier link uses a distinct eigenvariable.
A set of formula occurrences connected by links is a proof structure if every formula is at most once the premiss of a link and if every formula is exactly once the conclusion of a link. Those formulas which are not the premiss of any link are the conclusions of the proof structure — note the difference with multimodal proof nets: a proof structure has conclusions but no hypotheses and, as a consequence, each formula in the proof net must be the conclusion of exactly one (instead of at most one) link.
For polarised proof nets, unfolding the formulas according to the links of Table 7 no longer produces a proof structure, since the atomic formulas after unfolding are not the conclusions of any link. Such “partial proof structures” are called modules. To turn a module into a proof structure, we connect atomic formulas of opposite polarity by axiom links until we obtain a complete matching of the atomic formulas, that is until every atomic formula is the conclusion of an axiom link.
The negative $\forall $ and the positive $\exists $ links are defined using substitution of an arbitrary term $t$ for the eigenvariable of the link. In actual proof search, we use unification of these variables when the axiom links are performed.
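The unification involved is ordinary first-order unification. For concreteness, here is a textbook (occurs-check-free) unifier over a toy term representation — variables as strings, constants as integers, complex terms as tuples — which is an assumption for the example rather than the prover's internal representation.

```python
def walk(t, subst):
    while isinstance(t, str) and t in subst:          # variables are strings
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a substitution unifying t1 and t2, or None if they don't unify."""
    subst = dict(subst or {})
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                           # bind a variable
        subst[t1] = t2
        return subst
    if isinstance(t2, str):
        subst[t2] = t1
        return subst
    if not (isinstance(t1, tuple) and isinstance(t2, tuple)) \
            or t1[0] != t2[0] or len(t1) != len(t2):
        return None                                   # functor or arity clash
    for a, b in zip(t1[1:], t2[1:]):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# Linking np(x1, 1) with np(0, y) instantiates x1 := 0 and y := 1.
print(unify(('np', 'x1', 1), ('np', 0, 'y')))         # {'x1': 0, 'y': 1}
```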
As usual, not all proof structures are proof nets. However, since the logical rules for the quantifiers make essential use of the notion of “free occurrence of a variable”, this should be reflected in our correctness condition. BIBREF37 uses a notion of switching for proof structures which extends the switchings of BIBREF38 .
A switching is, for each of the binary par links a choice of its left or right premiss and for each of the unary par links with eigenvariable $x$ a choice of one of the formulas in the structure with a free occurrence of $x$ or of the premiss of the rule.
Given a switching, a correction graph replaces a binary par link by a connection from the conclusion of the link to the premiss chosen by the switching, and it replaces a unary par link by a link from the conclusion to the formula chosen by the switching.
Finally, a proof structure is a proof net when all its correction graphs are both acyclic and connected BIBREF37 .
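Checking this condition literally means enumerating all switchings and testing each correction graph; the following brute-force sketch does exactly that (an undirected graph is acyclic and connected precisely when it is a tree). The edge-list representation is an assumption for the example, and the procedure is exponential — it is meant to make the definition concrete, not to be used as is.

```python
from itertools import product

def is_tree(vertices, edges):
    """Acyclic and connected, checked with a union-find over the edge list."""
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    components = len(parent)
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False                      # this edge closes a cycle
        parent[ra] = rb
        components -= 1
    return components == 1                    # connected

def is_proof_net(vertices, solid_edges, par_switchings):
    """`solid_edges` come from tensor and axiom links; `par_switchings` holds,
    for every par link, the list of edges it may contribute (one per switching)."""
    return all(
        is_tree(vertices, solid_edges + list(choice))
        for choice in product(*par_switchings)
    )
```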
As an example, look at the proof structure of $a\multimap \exists x.b(x) \vdash \exists y. [a\multimap b(y)]$ shown in Figure 12 . This statement is not derivable in first-order linear logic (nor in intuitionistic logic). Consider therefore the switching connecting the binary par link to its left premiss $a$ and the link for $x$ to the formula $a\multimap b(x)$ (it has a free occurrence of $x$ , so this is a valid switching).
This switching produces the correction graph shown in Figure 13 . It contains a cycle, drawn with bold edges, so the proof structure is not a proof net (in addition, the $b$ axiom is disconnected from the rest of the structure, giving a second reason for rejecting the proof structure).
Though switching conditions for proof nets are simple and elegant, they don't lend themselves to naive application: already for the example proof structure of Figure 12 there are six possible switchings to consider and, as the reader can verify, only the switching shown in Figure 13 is cyclic (and disconnected). In general, it is often the case that all switchings but one are acyclic and connected, as it is here.
Though there are efficient ways of testing acyclicity and connectedness for multiplicative proof nets BIBREF39 , BIBREF40 and it seems these can be adapted to the first-order case (though some care needs to be taken when we allow complex terms), the theorem prover for first-order linear logic uses an extension of the contraction criterion of BIBREF41 .
Given a proof structure we erase all formulas from the vertices and keep only a set of the free variables at this vertex. We then use the contractions of Table 8 to contract the edges of the graph. The resulting vertex of each contraction has the union of the free variables of the two vertices of the redex (we remove the eigenvariable $x$ of a $\forall $ contraction, “ $\Rightarrow _u$ ”). A proof structure is a proof net iff it contracts to a single vertex using the contractions of Table 8 .
To give an example of the contractions, Figure 14 shows the contractions for the underivable proof structure of Figure 12 . The initial structure, which simply takes the proof structure of Figure 12 and replaces the formulas by the corresponding set of free variables, is shown on the left. Contracting the five solid edges using the $c$ contraction produces the structure shown in the figure on the right.
No further contractions apply: the two connected dotted links from the binary par link do not end in the same vertex, so the par contraction $p$ cannot apply. In addition, the universal contraction $u$ cannot apply either, since it requires all vertices with its eigenvariable $x$ to occur at the node from which the arrow is leaving and there is another occurrence of $x$ at the bottom node of the structure. We have therefore shown that this is not a proof net.
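The bookkeeping for these contractions is light: every vertex carries a set of free variables, a contraction merges two vertices and unions their sets, and the universal contraction additionally checks that its eigenvariable occurs nowhere else before removing it. The sketch below assumes the graph is given as a map from vertex ids to such sets, and only shows the vertex-merging step, not the link administration.

```python
def contract(free_vars, v1, v2, eigenvariable=None):
    """Merge v2 into v1 in a map from vertex id to its set of free variables.

    For the universal (u) contraction, pass the eigenvariable: it must not
    occur free at any other vertex, and it is removed from the merged set."""
    if eigenvariable is not None:
        elsewhere = (vs for v, vs in free_vars.items() if v not in (v1, v2))
        if any(eigenvariable in vs for vs in elsewhere):
            return None                       # the u contraction does not apply
    merged = (free_vars[v1] | free_vars[v2]) - {eigenvariable}
    result = {v: vs for v, vs in free_vars.items() if v not in (v1, v2)}
    result[v1] = merged                       # v1 names the contracted vertex
    return result
```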
Since there are no structural rewrites, the contractions for first-order linear logic are easier to apply than those for multimodal type-logical grammars: it is rather easy to show confluence for the contractions (the presence of structural rules, but also the unary versions of the multimodal contractions, means confluence is not guaranteed for multimodal proof nets). We already implicitly used confluence when we argued that the proof structure in Figure 14 was not a proof net. The theorem prover uses a maximally contracted representation of the proof structure to represent the current state of proof search and this means less overhead and more opportunities for early failure during proof search.
Like before, theorem proving uses four steps, which look as follows in the first-order case: 1) lexical lookup, 2) unfolding, 3) axiom links with unification, 4) graph contraction. Unlike the multimodal proof nets of the previous section, the graph contractions are now confluent and can be performed efficiently (the linear time solutions for the multiplicative case may be adaptable, but a naive implementation already has an $O(n^2)$ worst-case performance). After lexical lookup, theorem proving for first-order linear logic unfolds the formulas as before, but uses a greedy contraction strategy. This maximally contracted partial proof net constrains further axiom links: for example, a vertex containing a free variable $x$ cannot be linked to the conclusion of the edge of its eigenvariable (the vertex to which the arrow of the edge with variable $x$ points) or to one of its descendants, since such a structure would fail to satisfy the condition that the two vertices of a $\forall $ link for the $u$ contraction of Table 8 are distinct. Another easily verified constraint is that two atomic formulas can only be connected by an axiom link if these formulas unify. Like for multimodal proof nets, the first-order linear logic theorem prover chooses an axiom link for one of the atoms with the fewest possibilities.
Tools
Table 9 lists the different theorem provers which are available. Grail 0 BIBREF42 and Grail 3 BIBREF43 use the multimodal proof net calculus of Section "Multimodal proof nets" , whereas LinearOne BIBREF44 uses the first-order proof nets of Section "First-order proof nets" . GrailLight BIBREF45 is a special-purpose chart parser, intended for use with an automatically extracted French grammar for wide-coverage parsing and semantics BIBREF34 , BIBREF46 . All provers are provided under the GNU Lesser General Public License — this means, notably, there is no warranty, though I am committed to making all software as useful as possible; so contact me for any comments, feature requests or bug reports. All theorem provers can be downloaded from the author's GitHub site.
https://github.com/RichardMoot/
The columns of Table 9 indicate whether the theorem provers provide natural deduction output, graph output (of the partial proof nets), whether there is an interactive mode for proof search, whether the implementation is complete and whether the grammar can specify its own set of structural rules; “NA” means the question doesn't apply to the given system (GrailLight doesn't use graphs to represent proofs and first-order linear logic does not have a grammar-specific set of structural rules). The table should help you select the most adequate tool for your purposes.
LinearOne provides natural deduction output not only for first-order linear logic, but also for the Displacement calculus, hybrid type-logical grammars and lambda grammars. That is, the grammar writer can write a grammar in any of these formalisms, LinearOne will do proof search of the translation of this grammar in first-order linear logic and then translate any resulting proofs back to the source language.
The syntactic example proofs in this chapter have been automatically generated using these tools, and the corresponding grammar files, as well as many other example grammars, are included in the repository.
Fortunately, there are proof systems which combine the good aspects of natural deduction and sequent calculus, and which eliminate their respective drawbacks. Proof nets are a graphical representation of proofs first introduced for linear logic BIBREF32 . Proof nets suffer neither from spurious ambiguity nor from complications for the product rules.
Proof nets are usually defined as a subset of a larger class, called proof structures. Proof structures are “candidate proofs”: part of the search space of a naive proof search procedure which need not correspond to actual proofs. Proof nets are those proof structures which correspond to sequent proofs. Perhaps surprisingly, we can distinguish proof nets from other proof structures by looking only at graph-theoretical properties of these structures.
Proof search for type-logical grammars using proof nets uses the following general procedure: 1) lexical lookup, 2) unfolding of the formulas down to their atomic subformulas, 3) identification of atomic formulas, and 4) checking correctness of the resulting structure.
In Sections "Multimodal proof nets" and "First-order proof nets" we will instantiate this general procedure for multimodal type-logical grammar and for first-order linear logic respectively.
Multimodal proof nets
Table 5 presents the links for multimodal proof nets. The top row lists the links corresponding to the elimination rules of natural deduction, the bottom row those corresponding to the introduction rules. There are two types of links: tensor links, with an open center, and par links, with a filled center. Par links have a single arrow pointing to the main formula of the link (the complex formula containing the principal connective). The top and bottom rows are up-down symmetric with tensor and par reversed. The tensor links correspond to the logical rules which build structure when we read them from top to bottom, the par links to those rules which remove structure.
The formulas written above the central node of a link are its premisses, whereas the formulas written below it are its conclusions. Left-to-right order of the premisses as well as the conclusions is important.
A proof structure is a set of formula occurrences and a set of links such that:
each formula is at most once the premiss of a link,
each formula is at most once the conclusion of a link.
A formula which is not the premiss of any link is a conclusion of the proof structure. A formula which is not the conclusion of any link is a hypothesis of the proof structure. We say a proof structure with hypotheses $\Gamma $ and conclusions $\Delta $ is a proof structure of $\Gamma \vdash \Delta $ (we are overloading the ` $\vdash $ ' symbol here, though this use should always be clear from the context; note that $\Delta $ can contain multiple formulas).
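A minimal sketch of this definition as a data structure (illustrative only; these class and function names are assumptions, not part of any of the provers discussed below) could look as follows, where each formula occurrence must be a distinct object so that two occurrences of the same formula are not conflated.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Link:
    premisses: tuple    # formula occurrences written above the central node, left to right
    conclusions: tuple  # formula occurrences written below it, left to right
    kind: str           # e.g. a tensor or par link for a given connective and mode

def is_proof_structure(formulas, links):
    """Each formula occurrence is at most once the premiss and at most once the
    conclusion of a link (the two defining conditions above)."""
    as_premiss = Counter(f for link in links for f in link.premisses)
    as_conclusion = Counter(f for link in links for f in link.conclusions)
    return all(as_premiss[f] <= 1 and as_conclusion[f] <= 1 for f in formulas)

def hypotheses(formulas, links):
    """Formulas which are not the conclusion of any link."""
    concluded = {f for link in links for f in link.conclusions}
    return [f for f in formulas if f not in concluded]

def conclusions(formulas, links):
    """Formulas which are not the premiss of any link."""
    used = {f for link in links for f in link.premisses}
    return [f for f in formulas if f not in used]
```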
After the first step of lexical lookup we have a sequent $\Gamma \vdash C$ , and we can enumerate its proof structures as follows: unfold the formulas in $\Gamma , C$ , unfolding them so that the formulas in $\Gamma $ are hypotheses and the formula $C$ is a conclusion of the resulting structure, until we reach the atomic subformulas (this is step 2 of the general procedure), then identify atomic subformulas (step 3 of the general procedure, we turn to the last step, checking correctness, below). This identification step can, by the conditions on proof structures only identify hypotheses with conclusions and must leave all formulas of $\Gamma $ , including atomic formulas, as hypotheses and $C$ as a conclusion.
Figure 6 shows the lexical unfolding of the sequent $a/_a b, b/_a c \vdash a/_a c$ . It is already a proof structure, though a proof structure of $a, a/_a b, b, b/_a c, c \vdash a, a/_a c, b, c$ (to the reader familiar with the proof nets of linear logic: some other presentations of proof nets use more restricted definitions of proof structures where a “partial proof structure” such as shown in the figure is called a module).
To turn this proof structure into a proof structure of $a/_a b, b/_a c \vdash a/_a c$ , we identify the atomic formulas. In this case, there is only a single way to do this, since $a$ , $b$ and $c$ all occur once as a hypothesis and once as a conclusion, though in general there may be many possible matchings. Figure 7 shows, on the left, the proof structure after identifying the $a$ and $b$ formulas. Since left and right (linear order), up and down (premiss, conclusion) have meaning in the graph, connecting the $c$ formulas is less obvious: $c$ is a conclusion of the $/I$ link and must therefore be below it, but a premiss of the $/E$ link and must therefore be above it. This is hard to achieve in the figure shown on the left. Though a possible solution would be to draw the figure on a cylinder, where “going up” from the topmost $c$ we arrive at the bottom one, for ease of type-setting and reading the figure, I have chosen the representation shown in Figure 7 on the right. The curved line goes up from the $c$ premiss of the $/E$ link and arrives from below at the $/I$ link, as desired. One way to see this strange curved connection is as a graphical representation of the coindexation of a premiss with a rule in the natural deduction rule for the implication.
Figure 7 therefore shows, on the right, a proof structure for $a/_a b, b/_a c \vdash a/_a c$ . However, is it also a proof net, that is, does it correspond to a proof? In a multimodal logic, the answer depends on the available structural rules. For example, if no structural rules are applicable to mode $a$ then $a/_a b, b/_a c \vdash a/_a c$ is underivable, but if mode $a$ is associative, then it is derivable.
We decide whether a proof structure is a proof net based only on properties of the graph. As a first step, we erase all formula information from the internal nodes of the graph; for administrative reasons, we still need to be able to identify which of the hypotheses and conclusion of the structure correspond to which formula occurrence. All relevant information for correctness is present in this graph, which we call an abstract proof structure.
We talked about how the curved line in proof structures (and abstract proof structure) corresponds to the coindexation of discharged hypotheses with rule names for the implication introduction rules. However, the introduction rules for multimodal type-logical grammars actually do more than just discharge a hypothesis, they also check whether the discharged hypothesis is the immediate left (for $\backslash I$ ) or right (for $/ I$ ) daughter of the root node, that is, that the withdrawn hypothesis $A$ occurs as $A\circ _i \Gamma $ (for $\backslash I$ and mode $i$ ) or $\Gamma \circ _i A$ (for $/I$ and mode $i$ ). The par links in the (abstract) proof structure represent a sort of “promise” that will produce the required structure. We check whether it is satisfied by means of contractions on the abstract proof structure.
The multimodal contractions are shown in Table 6 . All portrayed configurations contract to a single vertex: we erase the two internal vertices and the paired links and we identify the two external vertices, keeping all connections of the external vertices to the rest of the abstract proof structure as they were: the vertex which is the result of the contraction will be a conclusion of the same link as the top external vertex (or a hypothesis of the abstract proof structure in case it wasn't) and it will be a premiss of the same link as the bottom external vertex (or a conclusion of the abstract proof structure in case it wasn't).
The contraction for $/I$ checks if the withdrawn hypothesis is the right daughter of a tensor link with the same mode information $i$ , and symmetrically for the $\backslash I$ contraction. The $\bullet E$ contraction contracts two hypotheses occurring as sister nodes.
All contractions are instantiations of the same pattern: a tensor link and a par link are connected, respecting left-right and up-down, at the two vertices of the par link without the arrow.
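As a rough illustration of the operation described above — erasing the two internal vertices and the paired links and identifying the two external vertices — the following sketch performs one such contraction on a simple dictionary-and-set representation of an abstract proof structure. The representation, and the assumption that the geometric side conditions (same mode, correct left/right and up/down attachment) have already been checked, are my own simplifications, not Grail's internals.

```python
def contract_redex(vertices, links, par, tensor):
    """Contract one par/tensor redex: erase the two shared (internal) vertices and the
    two paired links, and identify the two remaining (external) vertices, keeping all
    their other connections. A link is a dict with a 'tentacles' tuple of vertices."""
    internal = set(par['tentacles']) & set(tensor['tentacles'])
    (ext_par,) = set(par['tentacles']) - internal       # the tentacle carrying the arrow
    (ext_tensor,) = set(tensor['tentacles']) - internal
    new_links = []
    for link in links:
        if link is par or link is tensor:
            continue  # the paired links are erased
        tentacles = tuple(ext_par if v == ext_tensor else v
                          for v in link['tentacles'])    # identify the external vertices
        new_links.append({**link, 'tentacles': tentacles})
    new_vertices = {ext_par if v == ext_tensor else v
                    for v in vertices if v not in internal}
    return new_vertices, new_links
```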
To get a better feel for the contractions, we will start with its simplest instances. When we do pattern matching on the contraction for $/ I$ , we see that it corresponds to the following patterns, depending on our choice for the tensor link (the par link is always $/ I$ ). $ C/_i B &\vdash C/_i B \\ A & \vdash (A\bullet _i B)/_i B \\ A & \vdash C/_i (A\backslash _i C) $
A proof structure is a proof net iff it contracts to a tree containing only tensor links using the contractions of Table 6 and any structural rewrites, discussed below — BIBREF33 present full proofs. In other words, we need to contract all par links in the proof structure according to their contraction, each contraction ensuring the correct application of the rule after which it is named. The abstract proof structure on the right of Figure 8 does not contract, since there is no substructure corresponding to the $/I$ contraction: for a valid contraction, a par link is connected to both “tentacles” of a single tensor link, and in the figure the two tentacles without arrow are connected to different tensor links. This is correct, since $a/_a b, b/_a c\vdash a/_a c$ is underivable in a logic without structural rules for $a$ .
However, we have seen that this statement becomes derivable once we add associativity of $a$ and it is easily verified to be a theorem of the Lambek calculus. How can we add a modally controlled version of associativity to the proof net calculus? We can add such a rule by adding a rewrite from a tensor tree to another tensor tree with the same set of leaves. The rewrite for associativity is shown in Figure 9 . To apply a structural rewrite, we replace the tree on the left hand side of the arrow by the one on the right hand side, reattaching the leaves and the root to the rest of the proof net.
Just like the structural rules, a structural rewrite always has the same leaves on both sides of the arrow — neither copying nor deletion is allowed, though we can reorder the leaves in any way (the associativity rule doesn't reorder the leaves).
Figure 10 shows how the contractions and the structural rewrites work together to derive $a/_a b, b/_a c \vdash a/_a c$ .
We start with a structural rewrite, which rebrackets the pair of tensor links. The two hypotheses are now the premisses of the same link, and this also produces a contractible structure for the $/I$ link. Hence, we have shown the proof structure to be a proof net.
In the Grail theorem prover, the representation of abstract proof structures looks as shown in Figure 11 (this is an automatically produced subgraph close to the graph on the left of Figure 10 , though with a non-associative mode $n$ and therefore not derivable). This graph is used during user interaction. The graphs are drawn using GraphViz, an external graph drawing program which does not guarantee respecting our desires for left, right and top/bottom, so tentacles are labeled 1, 2 and 3 (for left, right and top/bottom respectively) to allow us to make these distinctions regardless of the visual representation. Vertices are given unique identifiers for user interaction, for example to allow specifying which pair of atoms should be identified or which par link should be contracted.
Although the structural rules give the grammar writer a great deal of flexibility, such flexibility complicates proof search. As discussed at the beginning of Section "Theorem proving" , theorem proving using proof nets is a four step process, which in the current situation looks as follows: 1) lexical lookup, 2) unfolding, 3) identification of atoms, 4) graph rewriting. In the current case, both the graph rewriting and the identification of atoms are complicated and since we can interleave the atom connections and the graph rewriting it is not a priori clear which strategy is optimal for which set of structural rules. The current implementation does graph rewriting only once all atoms have been connected.
The Grail theorem prover implements some strategies for early failure. Since all proofs in multimodal type-logical grammars are a subset of the proofs in multiplicative linear logic, we can reject (partial) proof structures which are invalid in multiplicative linear logic, a condition which is both powerful and easy to check.
As a compromise between efficiency and flexibility, Grail allows the grammar writer to specify a first-order approximation of her structural rules. Unlike the test for validity in multiplicative linear logic which is valid for any set of structural rules, specifying such a first-order approximation is valid only when there is a guarantee that all derivable sequents in the multimodal grammar are a subset of their approximations derivable in first-order linear logic. Errors made here can be rather subtle and hard to detect. It is recommended to use such methods to improve parsing speed only when a grammar has been sufficiently tested and where it is possible to verify whether no valid readings are excluded, or, ideally, to prove that the subset relation holds between the multimodal logic and its first-order approximation.
The next section will discuss first-order proof nets in their own right. Though these proof nets have been used as an underlying mechanism in Grail for a long time, we have seen in Section "First-order linear logic" that many modern type-logical grammars are formulated in a way which permits a direct implementation without an explicit set of structural rules.
As to the proof search strategy used by Grail, it is an instance of the “dancing links” algorithm BIBREF35 : when connecting atomic formulas, we always link a formula which has the least possibilities and we rewrite the abstract proof structures only once a fully linked proof structure has been produced. Though the parser is not extremely fast, evaluation both on randomly generated statements and on multimodal statements extracted from corpora show that the resulting algorithm performs more than well enough BIBREF36 .
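The strategy of always extending the atom with the fewest remaining possibilities, with early failure when an atom has no options left, can be sketched as follows (an illustration of the idea only, not Grail's source; the representation of atom occurrences and candidate sets is an assumption).

```python
def link_atoms(unlinked, candidates):
    """Enumerate complete matchings of atomic formulas, always picking the atom with
    the fewest remaining options next.

    `unlinked` is a frozenset of atom occurrences still to be linked; `candidates[a]`
    is the set of opposite occurrences that `a` could in principle be linked to."""
    if not unlinked:
        yield ()
        return
    atom = min(unlinked, key=lambda a: len(candidates[a] & unlinked))
    options = candidates[atom] & unlinked
    if not options:
        return  # early failure: no possible axiom link for this atom
    for partner in options:
        for rest in link_atoms(unlinked - {atom, partner}, candidates):
            yield ((atom, partner),) + rest
```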
First-order proof nets
Proof nets for first-order linear logic BIBREF37 are a simple extension of the proof nets for standard, multiplicative linear logic BIBREF38 . Compared to the multimodal proof nets of the previous section, all logical links have the main formula of the link as their conclusion but there is now a notion of polarity, corresponding to whether or not the formula occurs on the left hand side of the turnstile (negative polarity) or on the right hand side (positive polarity).
We unfold a sequent $A_1,\ldots ,A_n \vdash C$ by using the negative unfolding for each of the $A_i$ and the positive unfolding for $C$ . The links for first-order proof nets are shown in Table 7 .
Contrary to multimodal proof nets, where a tensor link was drawn with an open central node and a par link with a filled central node, here par links are drawn as a connected pair of dotted lines and tensor links as a pair of solid lines.
As before, premisses are drawn above the link and conclusions are drawn below it. With the exception of the cut and axiom links, the order of the premisses and the conclusions is important. We assume without loss of generality that every quantifier link uses a distinct eigenvariable.
A set of formula occurrences connected by links is a proof structure if every formula is at most once the premiss of a link and if every formula is exactly once the conclusion of a link. Those formulas which are not the premiss of any link are the conclusions of the proof structure — note the difference with multimodal proof nets: a proof structure has conclusions but no hypotheses and, as a consequence, each formula in the proof net must be the conclusion of exactly one (instead of at most one) link.
For polarised proof nets, unfolding the formulas according to the links of Table 7 no longer produces a proof structure, since the atomic formulas after unfolding are not the conclusions of any link. Such “partial proof structures” are called modules. To turn a module into a proof structure, we connect atomic formulas of opposite polarity by axiom links until we obtain a complete matching of the atomic formulas, that is, until every atomic formula is the conclusion of an axiom link.
The negative $\forall $ and the positive $\exists $ links are defined using substitution of an arbitrary term $t$ for the eigenvariable of the link. In actual proof search, we use unification of these variables when the axiom links are performed.
As usual, not all proof structures are proof nets. However, since the logical rules for the quantifiers make essential use of the notion of “free occurrence of a variable”, this should be reflected in our correctness condition. BIBREF37 uses a notion of switching for proof structures which extends the switchings of BIBREF38 .
A switching is, for each of the binary par links a choice of its left or right premiss and for each of the unary par links with eigenvariable $x$ a choice of one of the formulas in the structure with a free occurrence of $x$ or of the premiss of the rule.
Given a switching, a correction graph replaces a binary par link by a connection from the conclusion of the link to the premiss chosen by the switching, and it replaces a unary par link by a link from the conclusion to the formula chosen by the switching.
Finally, a proof structure is a proof net when all its correction graphs are both acyclic and connected BIBREF37 .
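To make the switching condition concrete, here is a deliberately naive sketch that enumerates all switchings and checks that each correction graph is acyclic and connected (equivalently, that it is a tree); it is exponential in the number of par links and serves only to illustrate the definition — the actual prover uses the contraction criterion discussed below. The graph representation is an assumption.

```python
from itertools import product

def is_first_order_proof_net(vertices, fixed_edges, par_choices):
    """Brute-force check of the switching condition.

    `fixed_edges` are undirected edges contributed by tensor and axiom links;
    `par_choices` maps each par link to the list of edges it may contribute,
    one per possible switching of that link."""
    for chosen in product(*par_choices.values()):
        if not is_tree(vertices, list(fixed_edges) + list(chosen)):
            return False
    return True

def is_tree(vertices, edges):
    """Acyclic and connected == connected with exactly |V| - 1 edges."""
    if len(edges) != len(vertices) - 1:
        return False
    adjacency = {v: set() for v in vertices}
    for u, v in edges:
        adjacency[u].add(v)
        adjacency[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adjacency[v] - seen)
    return len(seen) == len(vertices)
```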
As an example, look at the proof structure of $a\multimap \exists x.b(x) \vdash \exists y. [a\multimap b(y)]$ shown in Figure 12 . This statement is not derivable in first-order linear logic (nor in intuitionistic logic). Consider therefore the switching connecting the binary par link to its left premiss $a$ and the link for $x$ to the formula $a\multimap b(x)$ (it has a free occurrence of $x$ , so this is a valid switching).
This switching produces the correction graph shown in Figure 13 . It contains a cycle, drawn with bold edges, so the proof structure is not a proof net (in addition, the $b$ axiom is disconnected from the rest of the structure, giving a second reason for rejecting the proof structure).
Though switching conditions for proof nets are simple and elegant, they don't lend themselves to naive application: already for the example proof structure of Figure 12 there are six possible switchings to consider and, as the reader can verify, only the switching shown in Figure 13 is cyclic (and disconnected). In general, it is often the case that all switchings but one are acyclic and connected, as it is here.
Though there are efficient ways of testing acyclicity and connectedness for multiplicative proof nets BIBREF39 , BIBREF40 and it seems these can be adapted to the first-order case (though some care needs to be taken when we allow complex terms), the theorem prover for first-order linear logic uses an extension of the contraction criterion of BIBREF41 .
Given a proof structure we erase all formulas from the vertices and keep only a set of the free variables at this vertex. We then use the contractions of Table 8 to contract the edges of the graph. The resulting vertex of each contraction has the union of the free variables of the two vertices of the redex (we remove the eigenvariable $x$ of a $\forall $ contraction, “ $\Rightarrow _u$ ”). A proof structure is a proof net iff it contracts to a single vertex using the contractions of Table 8 .
To give an example of the contractions, Figure 14 shows the contractions for the underivable proof structure of Figure 12 . The initial structure, which simply takes the proof structure of Figure 12 and replaces the formulas by the corresponding set of free variables, is shown on the left. Contracting the five solid edges using the $c$ contraction produces the structure shown in the figure on the right.
No further contractions apply: the two connected dotted links from the binary par link do not end in the same vertex, so the par contraction $p$ cannot apply. In addition, the universal contraction $u$ cannot apply either, since it requires all vertices with its eigenvariable $x$ to occur at the node from which the arrow is leaving and there is another occurrence of $x$ at the bottom node of the structure. We have therefore shown that this is not a proof net.
Since there are no structural rewrites, the contractions for first-order linear logic are easier to apply than those for multimodal type-logical grammars: it is rather easy to show confluence for the contractions (the presence of structural rules, but also the unary versions of the multimodal contractions, means confluence is not guaranteed for multimodal proof nets). We already implicitly used confluence when we argued that the proof structure in Figure 14 was not a proof net. The theorem prover uses a maximally contracted representation of the proof structure to represent the current state of proof search and this means less overhead and more opportunities for early failure during proof search.
Like before, the theorem proving uses four steps, which look as follows in the first-order case: 1) lexical lookup, 2) unfolding, 3) axiom links with unification, 4) graph contraction. Unlike the multimodal proof nets of the previous section, the graph contractions are now confluent and can be performed efficiently (the linear time solutions for the multiplicative case may be adaptable, but a naive implementation already has an $O(n^2)$ worst-case performance). After lexical lookup, theorem proving for first-order linear logic unfolds the formulas as before, but uses a greedy contraction strategy. This maximally contracted partial proof net constrains further axiom links: for example, a vertex containing a free variable $x$ cannot be linked to the conclusion of the edge of its eigenvariable (the vertex to which the arrow of the edge with variable $x$ points) or to one of its descendants, since such a structure would fail to satisfy the condition that the two vertices of a $\forall $ link for the $u$ contraction of Figure 8 are distinct. Another easily verified constraint is that two atomic formulas can only be connected by an axiom link if these formulas unify. Like for multimodal proof nets, the first-order linear logic theorem prover chooses an axiom link for one of the atoms with the fewest possibilities.
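The requirement that two atomic formulas can only be linked if they unify can be made concrete with the standard textbook first-order unification procedure below; it is shown only to illustrate the constraint and is not the prover's code (the term representation — variables as capitalised strings, compound terms as tuples — is an assumption).

```python
def unify(term1, term2, subst=None):
    """Syntactic first-order unification with occurs check.
    Returns a substitution dict, or None when the terms do not unify."""
    if subst is None:
        subst = {}
    term1, term2 = walk(term1, subst), walk(term2, subst)
    if term1 == term2:
        return subst
    if is_var(term1):
        return bind(term1, term2, subst)
    if is_var(term2):
        return bind(term2, term1, subst)
    if (isinstance(term1, tuple) and isinstance(term2, tuple)
            and len(term1) == len(term2) and term1[0] == term2[0]):
        for a, b in zip(term1[1:], term2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def bind(var, term, subst):
    if occurs(var, term, subst):
        return None  # occurs check: a variable may not be bound to a term containing it
    return {**subst, var: term}

def occurs(var, term, subst):
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, t, subst) for t in term[1:])
    return False

# e.g. unify(('b', 'X'), ('b', ('f', 'Y'))) == {'X': ('f', 'Y')}
```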
Tools
Table 9 lists the different theorem provers which are available. Grail 0 BIBREF42 and Grail 3 BIBREF43 use the multimodal proof net calculus of Section "Multimodal proof nets" , whereas LinearOne BIBREF44 uses the first-order proof nets of Section "First-order proof nets" . GrailLight BIBREF45 is a special-purpose chart parser, intended for use with an automatically extracted French grammar for wide-coverage parsing and semantics BIBREF34 , BIBREF46 . All provers are provided under the GNU Lesser General Public License — this means, notably, there is no warranty, though I am committed to making all software as useful as possible; so contact me for any comments, feature requests or bug reports. All theorem provers can be downloaded from the author's GitHub site.
https://github.com/RichardMoot/
The columns of Table 9 indicate whether the theorem provers provide natural deduction output, graph output (of the partial proof nets), whether there is an interactive mode for proof search, whether the implementation is complete and whether the grammar can specify its own set of structural rules; “NA” means the question doesn't apply to the given system (GrailLight doesn't use graphs to represent proofs and first-order linear logic does not have a grammar-specific set of structural rules). The table should help you select the most adequate tool for your purposes.
LinearOne provides natural deduction output not only for first-order linear logic, but also for the Displacement calculus, hybrid type-logical grammars and lambda grammars. That is, the grammar writer can write a grammar in any of these formalisms, LinearOne will do proof search of the translation of this grammar in first-order linear logic and then translate any resulting proofs back to the source language.
The syntactic example proofs in this chapter have been automatically generated using these tools, and the corresponding grammar files, as well as many other example grammars, are included in the repository. | a family of grammar formalisms built on a foundation of logic and type theory. Type-logical grammars originated when BIBREF4 introduced his Syntactic Calculus (called the Lambek calculus, L, by later authors). |
0fa81adf00662694e1dc74475ae2b9283c50748c | 0fa81adf00662694e1dc74475ae2b9283c50748c_0 | Q: Which components of QA and QG models are shared during training?
Text: Introduction
Question answering (QA) is the task of automatically producing an answer to a question given a corresponding document. It not only provides humans with efficient access to vast amounts of information, but also acts as an important proxy task to assess machine literacy via reading comprehension. Thanks to the recent release of several large-scale machine comprehension/QA datasets BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , the field has undergone significant advancement, with an array of neural models rapidly approaching human parity on some of these benchmarks BIBREF5 , BIBREF6 , BIBREF7 . However, previous models do not treat QA as a task of natural language generation (NLG), but of pointing to an answer span within a document.
Alongside QA, question generation has also gained increased popularity BIBREF8 , BIBREF9 . The task is to generate a natural-language question conditioned on an answer and the corresponding document. Among its many applications, question generation has been used to improve QA systems BIBREF10 , BIBREF11 , BIBREF12 . A recurring theme among previous studies is to augment existing labeled data with machine-generated questions; to our knowledge, the direct (though implicit) effect of asking questions on answering questions has not yet been explored.
In this work, we propose a joint model that both asks and answers questions, and investigate how this joint-training setup affects the individual tasks. We hypothesize that question generation can help models achieve better QA performance. This is motivated partly by observations made in psychology that devising questions while reading can increase scores on comprehension tests BIBREF13 . Our joint model also serves as a novel framework for improving QA performance outside of the network-architectural engineering that characterizes most previous studies.
Although the question answering and asking tasks appear symmetric, there are some key differences. First, answering the questions in most existing QA datasets is extractive — it requires selecting some span of text within the document — while question asking is comparatively abstractive — it requires generation of text that may not appear in the document. Furthermore, a (document, question) pair typically specifies a unique answer. Conversely, a typical (document, answer) pair may be associated with multiple questions, since a valid question can be formed from any information or relations which uniquely specify the given answer.
To tackle the joint task, we construct an attention-based BIBREF14 sequence-to-sequence model BIBREF15 that takes a document as input and generates a question (answer) conditioned on an answer (question) as output. To address the mixed extractive/abstractive nature of the generative targets, we use the pointer-softmax mechanism BIBREF16 that learns to switch between copying words from the document and generating words from a prescribed vocabulary. Joint training is realized by alternating the input data between question-answering and question-generating examples for the same model. We demonstrate empirically that this model's QA performance on SQuAD, while not state of the art, improves by about 10% with joint training. A key novelty of our joint model is that it can generate (partially) abstractive answers.
Related Work
Joint-learning on multiple related tasks has been explored previously BIBREF17 , BIBREF18 . In machine translation, for instance, BIBREF18 demonstrated that translation quality clearly improves over models trained with a single language pair when the attention mechanism in a neural translation model is shared and jointly trained on multiple language pairs.
In question answering, BIBREF19 proposed one of the first neural models for the SQuAD dataset. SQuAD defines an extractive QA task wherein answers consist of word spans in the corresponding document. BIBREF19 demonstrated that learning to point to answer boundaries is more effective than learning to point sequentially to the tokens making up an answer span. Many later studies adopted this boundary model and achieved near-human performance on the task BIBREF5 , BIBREF6 , BIBREF7 . However, the boundary-pointing mechanism is not suitable for more open-ended tasks, including abstractive QA BIBREF4 and question generation. While “forcing” the extractive boundary model onto abstractive datasets currently yields state-of-the-art results BIBREF5 , this is mainly because current generative models are poor and NLG evaluation is unsolved.
Earlier work on question generation has resorted to either rule-based reordering methods BIBREF20 , BIBREF21 , BIBREF22 or slot-filling with question templates BIBREF23 , BIBREF24 , BIBREF25 . These techniques often involve pipelines of independent components that are difficult to tune for final performance measures. Partly to address this limitation, end-to-end-trainable neural models have recently been proposed for question generation in both vision BIBREF26 and language. For example, BIBREF8 used a sequence-to-sequence model with an attention mechanism derived from the encoder states. BIBREF9 proposed a similar architecture but in addition improved model performance through policy gradient techniques.
Several neural models with a questioning component have been proposed for the purpose of improving QA models, an objective shared by this study. BIBREF12 devised a semi-supervised training framework that trained a QA model BIBREF27 on both labeled data and artificial data generated by a separate generative component. BIBREF10 used policy gradient with a QA reward to train a sequence-to-sequence paraphrase model to reformulate questions in an existing QA dataset BIBREF2 . The generated questions were then used to further train an existing QA model BIBREF7 . A key distinction of our model is that we harness the process of asking questions to benefit question answering, without training the model to answer the generated questions.
Model Description
Our proposed model adopts a sequence-to-sequence framework BIBREF15 with an attention mechanism BIBREF14 and a pointer-softmax decoder BIBREF16 . Specifically, the model takes a document (i.e., a word sequence) $D = (w^d_1,\dots ,w^d_{n_d})$ and a condition sequence $C = (w^c_1,\dots ,w^c_{n_c})$ as input, and outputs a target sequence $Y^{\lbrace q,a\rbrace } = (\hat{w}_1,\dots ,\hat{w}_{n_p})$ . The condition corresponds to the question word sequence in answer-generation mode (a-gen), and the answer word sequence in question-generation mode (q-gen). We also attach a binary variable to indicate whether a data-point is intended for a-gen or q-gen. Intuitively, this should help the model learn the two modalities more easily. Empirically, QA performance improves slightly with this addition.
Encoder
A word $w_i$ in an input sequence is first embedded with an embedding layer into vector ${\bf e}^w_i$ . Character-level information is captured with the final states ${\bf e}^{ch}_i$ of a bidirectional Long Short-Term Memory model BIBREF28 on the character sequences of $w_i$ . The final representation for a word token ${\bf e}_i=\langle {\bf e}^w_i,{\bf e}^{ch}_i\rangle $ concatenates the word- and character-level embeddings. These are subsequently encoded with another BiLSTM into annotation vectors ${\bf h}^d_i$ and ${\bf h}^c_j$ (for the document and the condition sequence, respectively).
To better encode the condition, we also extract the encodings of the document words that appear in the condition sequence. This procedure is particularly helpful in q-gen mode, where the condition (answer) sequence is typically extractive. These extracted vectors are then fed into a condition aggregation BiLSTM to produce the extractive condition encoding ${\bf h}^e_k$ . We specifically take the final states of the condition encodings ${\bf h}^c_J$ and ${\bf h}^e_K$ . To account for the different extractive vs. abstractive nature of questions vs. answers, we use ${\bf h}^c_J$ in a-gen mode (for encoding questions) and ${\bf h}^e_K$ in q-gen mode (for encoding answers).
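The authors' implementation is in Keras/Theano (see the implementation details below); purely to make the encoder description concrete, here is a minimal PyTorch-style sketch of the word-plus-character encoding. The dimensions follow the hyperparameters reported later, but the class itself and its interface are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class WordCharEncoder(nn.Module):
    """Illustrative sketch: word embeddings concatenated with the final states of a
    character BiLSTM, then encoded by a word-level BiLSTM into annotation vectors."""
    def __init__(self, vocab_size, char_vocab_size, word_dim=300, char_dim=32, hidden=384):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.char_emb = nn.Embedding(char_vocab_size, char_dim)
        self.char_rnn = nn.LSTM(char_dim, char_dim, bidirectional=True, batch_first=True)
        self.word_rnn = nn.LSTM(word_dim + 2 * char_dim, hidden,
                                bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_ids):
        # word_ids: (batch, seq); char_ids: (batch, seq, max_word_len)
        batch, seq, max_len = char_ids.shape
        chars = self.char_emb(char_ids.reshape(batch * seq, max_len))
        _, (h_n, _) = self.char_rnn(chars)                       # h_n: (2, batch*seq, char_dim)
        char_feat = h_n.transpose(0, 1).reshape(batch, seq, -1)  # final fwd/bwd states per word
        tokens = torch.cat([self.word_emb(word_ids), char_feat], dim=-1)
        annotations, _ = self.word_rnn(tokens)                   # (batch, seq, 2*hidden)
        return annotations
```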
Decoder
The RNN-based decoder employs the pointer-softmax mechanism BIBREF16 . At each generation step, the decoder decides adaptively whether (a) to generate from a decoder vocabulary or (b) to point to a word in the source sequence (and copy over). Recurrence of the pointing decoder is implemented with two LSTM cells $c_1$ and $c_2$ :
$${\bf s}_1^{(t)} & = & c_1({\bf y}^{(t-1)}, {\bf s}_2^{(t-1)})\\ {\bf s}_2^{(t)} & = & c_2({\bf v}^{(t)}, {\bf s}_1^{(t)}),$$ (Eq. 1)
where ${\bf s}_1^{(t)}$ and ${\bf s}_2^{(t)}$ are the recurrent states, ${\bf y}^{(t-1)}$ is the embedding of decoder output from the previous time step, and ${\bf v}^{(t)}$ is the context vector (to be defined shortly in Equation ( 2 )).
The pointing decoder computes a distribution $\alpha ^{(t)}$ over the document word positions (i.e., a document attention, BIBREF14 ). Each element is defined as: $ \alpha ^{(t)}_i = f({\bf h}^d_i, {\bf h}^c, {\bf h}^e, {\bf s_1}^{(t-1)}), $
where $f$ is a two-layer MLP with tanh and softmax activation, respectively. The context vector ${\bf v}^{(t)}$ used in Equation ( 1 ) is the sum of the document encoding weighted by the document attention:
$${\bf v}^{(t)}=\sum _{i=1}^n \alpha ^{(t)}_i{\bf h}^d_i.$$ (Eq. 2)
The generative decoder, on the other hand, defines a distribution over a prescribed decoder vocabulary with a two-layer MLP $g$ :
$${\bf o}^{(t)}=g({\bf y}^{(t-1)},{\bf s}_2^{(t)},{\bf v}^{(t)},{\bf h}^c,{\bf h}^e).$$ (Eq. 3)
Finally, the switch scalar $s^{(t)}$ at each time step is computed by a three-layer MLP $h$ : $ s^{(t)}=h({\bf s}_2^{(t)},{\bf v}^{(t)},\alpha ^{(t)},{\bf o}^{(t)}), $
The first two layers of $h$ use tanh activation and the final layer uses sigmoid activation, and highway connections are present between the first and the second layer. We also attach the entropy of the softmax distributions to the input of the final layer, postulating that the quantities should help guide the switching mechanism by indicating the confidence of pointing vs generating. The addition is empirically observed to improve model performance.
The resulting switch is used to interpolate the pointing and the generative probabilities for predicting the next word: $ p(\hat{w}_t)\sim s^{(t)} \alpha ^{(t)} + (1-s^{(t)}){\bf o}^{(t)}. $
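As a small illustration of this interpolation (a numpy sketch under my own conventions, not the authors' code): assume document tokens have been mapped into an extended vocabulary whose first entries are the decoder vocabulary, so that copying and generating can be combined in a single distribution. The accumulation over repeated indices also matches the aggregation of multiply-occurring words mentioned in the training section below.

```python
import numpy as np

def mix_pointer_and_generator(switch, doc_attention, doc_token_ids, gen_probs):
    """p(w_t) = switch * pointer + (1 - switch) * generator, over an extended vocabulary.

    switch: scalar in [0, 1]; doc_attention: (doc_len,) pointer distribution;
    doc_token_ids: (doc_len,) extended-vocabulary id of each document token;
    gen_probs: (decoder_vocab_size,) generative distribution."""
    extended_size = max(int(doc_token_ids.max()) + 1, gen_probs.shape[0])
    probs = np.zeros(extended_size)
    probs[:gen_probs.shape[0]] += (1.0 - switch) * gen_probs
    # np.add.at accumulates, so repeated occurrences of a word pool their copy probability
    np.add.at(probs, doc_token_ids, switch * doc_attention)
    return probs
```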
Training and Inference
The optimization objective for updating the model parameters $\theta $ is to minimize the negative log likelihood of the generated sequences with respect to the training data $\mathcal {D}$ : $ \mathcal {L}=-\sum _{x\in \mathcal {D}}\log p(\hat{w}_t|w_{<t},x;\theta ). $
Here, $w_{<t}$ corresponds to the embeddings ${\bf y}^{(t-1)}$ in Equation ( 1 ) and ( 3 ). During training, gold targets are used to teacher-force the sequence generation for training, i.e., $w_{<t}=w^{\lbrace q,a\rbrace }_{<t}$ , while during inference, generation is conditioned on the previously generated words, i.e., $w_{<t}=\hat{w}_{<t}$ .
For words with multiple occurrences, since their exact references in the document cannot be reliably determined, we aggregate the probability of these words in the encoder and the pointing decoder (similar to BIBREF29 ). At test time, beam search is used to enhance fluency in the question-generation output. The decoder also keeps an explicit history of previously generated words to avoid repetition in the output.
Dataset
We conduct our experiments on the SQuAD corpus BIBREF1 , a machine comprehension dataset consisting of over 100k crowd-sourced question-answer pairs on 536 Wikipedia articles. Simple preprocessing is performed, including lower-casing all texts in the dataset and using NLTK BIBREF30 for word tokenization. The test split of SQuAD is hidden from the public. We therefore take 5,158 question-answer pairs (self-contained in 23 Wikipedia articles) from the training set as validation set, and use the official development data to report test results. Note that answers in this dataset are strictly extractive, and we therefore constrain the pointer-softmax module to point at all decoding steps in answer generation mode.
Baseline Models
We first establish two baselines without multi-task training. Specifically, model A-gen is trained only to generate an answer given a document and a question, i.e., as a conventional QA model. Analogously, model Q-gen is trained only to generate questions from documents and answers. Joint-training (in model JointQA) is realized by feeding answer-generation and question-generation data to the model in an alternating fashion between mini-batches.
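The alternation between the two tasks can be pictured with the following training-loop sketch; `model.loss(batch, mode=...)` is a hypothetical interface standing in for the shared network with its task-indicator flag, and the optimizer calls follow PyTorch conventions purely for illustration (the authors used Keras/Theano).

```python
from itertools import cycle

def joint_training(model, qa_batches, qg_batches, optimizer, num_steps):
    """Feed answer-generation and question-generation mini-batches to the same model
    in alternation, as in the JointQA setup described above (illustrative sketch)."""
    modes = cycle(["a-gen", "q-gen"])
    streams = {"a-gen": cycle(qa_batches), "q-gen": cycle(qg_batches)}
    for _ in range(num_steps):
        mode = next(modes)
        batch = next(streams[mode])
        loss = model.loss(batch, mode=mode)  # the mode flag is the binary indicator variable
        loss.backward()                      # PyTorch-style update, purely illustrative
        optimizer.step()
        optimizer.zero_grad()
```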
In addition, we compare answer-generation performance with the sequence model variant of the match-LSTM (mLSTM) model BIBREF19 . As mentioned earlier, in contrast to existing neural QA models that point to the start and end boundaries of extractive answers, this model predicts a sequence of document positions as the answer. This makes it most comparable to our QA setup. Note, however, that our model has the additional capacity to generate abstractively from the decoder vocabulary.
Quantitative Evaluation
We use F1 and Exact Match (EM, BIBREF1 ) against the gold answer sequences to evaluate answer generation, and BLEU BIBREF31 against the gold question sequences to evaluate question generation. However, existing studies have shown that the task of question generation often exhibits linguistic variance that is semantically admissible; this renders it inappropriate to judge a generated question solely by matching against a gold sequence BIBREF9 . We therefore opt to assess the quality of generated questions $Y^q$ with two pretrained neural models as well: we use a language model to compute the perplexity of $Y^q$ , and a QA model to answer $Y^q$ . We measure the F1 score of the answer produced by this QA model.
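For reference, the token-level F1 and Exact Match used for answer evaluation can be computed as below (standard SQuAD-style definitions; the usual answer normalisation — lower-casing, stripping punctuation and articles — is omitted for brevity).

```python
from collections import Counter

def exact_match(prediction, gold):
    """Exact Match on (normalised) token sequences."""
    return float(prediction == gold)

def token_f1(prediction, gold):
    """Token-level F1 between a predicted and a gold answer sequence."""
    common = Counter(prediction) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(prediction)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# token_f1("the free city of danzig".split(), "free city of danzig".split()) ~= 0.89
```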
We choose mLSTM as the pretrained QA model and train it on SQuAD with the same split as mentioned in Section "Dataset" . Performance on the test set (i.e., the official validation set of SQuAD) is 73.78 F1 and 62.7 EM. For the pretrained language model, we train a single-layer LSTM language model on the combination of the text8 corpus, the Quora Question Pairs corpus, and the gold questions from SQuAD. The latter two corpora were included to tailor to our purpose of assessing question fluency, and for this reason, we ignore the semantic equivalence labels in the Quora dataset. Validation perplexity is 67.2 for the pretrained language model.
Analysis and Discussion
Evaluation results are provided in Table 1 . We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. Performance of q-gen worsens after joint training, but the decrease is relatively small. Furthermore, as pointed out by earlier studies, automatic metrics often do not correlate well with the generation quality assessed by humans BIBREF9 . We thus consider the overall outcome to be positive.
Meanwhile, although our model does not perform as well as mLSTM on the QA task, it has the added capability of generating questions. mLSTM uses a more advanced encoder tailored to QA, while our model uses only a bidirectional LSTM for encoding. Our model, however, uses a more advanced decoder based on the pointer-softmax that enables it to generate both abstractively and extractively.
For a finer grained analysis, we first categorize test set answers based on their entity types, then stratify the QA performance comparison between A-gen and JointQA. The categorization relies on Stanford CoreNLP BIBREF32 to generate constituency parses, POS tags, and NER tags for answer spans (see BIBREF1 for more details). As seen in Figure 1 , the joint model significantly outperforms the single model in all categories. Interestingly, the moving average of the performance gap (dashed curve above bars) exhibits an upward trend as the A-gen model performance decreases across answer types, suggesting that the joint model helps most where the single model performance is weakest.
Qualitative Examples
Qualitatively, we have observed interesting “shifts” in attention before and after joint training. For example, in the positive case in Table 2 , the gold question asks about the direct object, Nixon, of the verb endorse, but the A-gen model predicts the indirect object, Kennedy, instead. In contrast, the joint model asks about the appositive of vice president during question generation, which presumably “primes” the model attention towards the correct answer Nixon. Analogously, in the negative example, QA attention in the joint model appears to be shifted by joint training towards an answer that is incorrect but closer to the generated question.
Note that the examples from Table 2 come from the validation set, and it is thus not possible for the joint model to memorize the gold answers from question-generation mode — the priming effect must come from some form of knowledge transfer between q-gen and a-gen via joint training.
Implementation Details
Implementation details of the proposed model are as follows. The encoder vocabulary indexes all words in the dataset. The decoder vocabulary uses the top 100 words sorted by their frequency in the gold questions in the training data. This encourages the model to generate frequent words (e.g. wh-words and function words) from the decoder vocabulary and copy less frequent ones (e.g., topical words and entities) from the document.
The word embedding matrix is initialized with the 300-dimensional GloVe vectors BIBREF33 . The dimensionality of the character representations is 32. The number of hidden units is 384 for both of the encoder/decoder RNN cells. Dropout is applied at a rate of 0.3 to all embedding layers as well as between the hidden states in the encoder/decoder RNNs across time steps.
We use adam BIBREF34 as the step rule for optimization with mini-batch size 32. The initial learning rate is $2e-4$ , which is decayed at a rate of 0.5 when the validation loss increases for two consecutive epochs.
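One way to read the stated schedule — halve the learning rate whenever the validation loss has gone up for two consecutive epochs — is the following small helper; the exact reset behaviour after a decay is my assumption, since the paper does not spell it out.

```python
def decayed_learning_rate(initial_lr, val_losses, factor=0.5):
    """Apply the decay rule to a history of per-epoch validation losses (sketch)."""
    lr, consecutive_increases = initial_lr, 0
    for previous, current in zip(val_losses, val_losses[1:]):
        consecutive_increases = consecutive_increases + 1 if current > previous else 0
        if consecutive_increases >= 2:
            lr *= factor
            consecutive_increases = 0
    return lr
```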
The model is implemented using Keras BIBREF35 with the Theano BIBREF36 backend.
Conclusion
We proposed a neural machine comprehension model that can jointly ask and answer questions given a document. We hypothesized that question answering can benefit from synergistic interaction between the two tasks through parameter sharing and joint training under this multitask setting. Our proposed model adopts an attention-based sequence-to-sequence architecture that learns to dynamically switch between copying words from the document and generating words from a vocabulary. Experiments with the model confirm our hypothesis: the joint model outperforms its QA-only counterpart by a significant margin on the SQuAD dataset.
Although evaluation scores are still lower than the state-of-the-art results achieved by dedicated QA models, the proposed model nonetheless demonstrates the effectiveness of joint training between QA and question generation, and thus offers a novel perspective and a promising direction for advancing the study of QA. | parameter sharing |
4ade72bfa28bd1f6b75cc7fa687fa634717782f2 | 4ade72bfa28bd1f6b75cc7fa687fa634717782f2_0 | Q: How much improvement does jointly learning QA and QG give, compared to only training QA?
Text: Introduction
Question answering (QA) is the task of automatically producing an answer to a question given a corresponding document. It not only provides humans with efficient access to vast amounts of information, but also acts as an important proxy task to assess machine literacy via reading comprehension. Thanks to the recent release of several large-scale machine comprehension/QA datasets BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , the field has undergone significant advancement, with an array of neural models rapidly approaching human parity on some of these benchmarks BIBREF5 , BIBREF6 , BIBREF7 . However, previous models do not treat QA as a task of natural language generation (NLG), but of pointing to an answer span within a document.
Alongside QA, question generation has also gained increased popularity BIBREF8 , BIBREF9 . The task is to generate a natural-language question conditioned on an answer and the corresponding document. Among its many applications, question generation has been used to improve QA systems BIBREF10 , BIBREF11 , BIBREF12 . A recurring theme among previous studies is to augment existing labeled data with machine-generated questions; to our knowledge, the direct (though implicit) effect of asking questions on answering questions has not yet been explored.
In this work, we propose a joint model that both asks and answers questions, and investigate how this joint-training setup affects the individual tasks. We hypothesize that question generation can help models achieve better QA performance. This is motivated partly by observations made in psychology that devising questions while reading can increase scores on comprehension tests BIBREF13 . Our joint model also serves as a novel framework for improving QA performance outside of the network-architectural engineering that characterizes most previous studies.
Although the question answering and asking tasks appear symmetric, there are some key differences. First, answering the questions in most existing QA datasets is extractive — it requires selecting some span of text within the document — while question asking is comparatively abstractive — it requires generation of text that may not appear in the document. Furthermore, a (document, question) pair typically specifies a unique answer. Conversely, a typical (document, answer) pair may be associated with multiple questions, since a valid question can be formed from any information or relations which uniquely specify the given answer.
To tackle the joint task, we construct an attention-based BIBREF14 sequence-to-sequence model BIBREF15 that takes a document as input and generates a question (answer) conditioned on an answer (question) as output. To address the mixed extractive/abstractive nature of the generative targets, we use the pointer-softmax mechanism BIBREF16 that learns to switch between copying words from the document and generating words from a prescribed vocabulary. Joint training is realized by alternating the input data between question-answering and question-generating examples for the same model. We demonstrate empirically that this model's QA performance on SQuAD, while not state of the art, improves by about 10% with joint training. A key novelty of our joint model is that it can generate (partially) abstractive answers.
Related Work
Joint-learning on multiple related tasks has been explored previously BIBREF17 , BIBREF18 . In machine translation, for instance, BIBREF18 demonstrated that translation quality clearly improves over models trained with a single language pair when the attention mechanism in a neural translation model is shared and jointly trained on multiple language pairs.
In question answering, BIBREF19 proposed one of the first neural models for the SQuAD dataset. SQuAD defines an extractive QA task wherein answers consist of word spans in the corresponding document. BIBREF19 demonstrated that learning to point to answer boundaries is more effective than learning to point sequentially to the tokens making up an answer span. Many later studies adopted this boundary model and achieved near-human performance on the task BIBREF5 , BIBREF6 , BIBREF7 . However, the boundary-pointing mechanism is not suitable for more open-ended tasks, including abstractive QA BIBREF4 and question generation. While “forcing” the extractive boundary model onto abstractive datasets currently yields state-of-the-art results BIBREF5 , this is mainly because current generative models are poor and NLG evaluation is unsolved.
Earlier work on question generation has resorted to either rule-based reordering methods BIBREF20 , BIBREF21 , BIBREF22 or slot-filling with question templates BIBREF23 , BIBREF24 , BIBREF25 . These techniques often involve pipelines of independent components that are difficult to tune for final performance measures. Partly to address this limitation, end-to-end-trainable neural models have recently been proposed for question generation in both vision BIBREF26 and language. For example, BIBREF8 used a sequence-to-sequence model with an attention mechanism derived from the encoder states. BIBREF9 proposed a similar architecture but in addition improved model performance through policy gradient techniques.
Several neural models with a questioning component have been proposed for the purpose of improving QA models, an objective shared by this study. BIBREF12 devised a semi-supervised training framework that trained a QA model BIBREF27 on both labeled data and artificial data generated by a separate generative component. BIBREF10 used policy gradient with a QA reward to train a sequence-to-sequence paraphrase model to reformulate questions in an existing QA dataset BIBREF2 . The generated questions were then used to further train an existing QA model BIBREF7 . A key distinction of our model is that we harness the process of asking questions to benefit question answering, without training the model to answer the generated questions.
Model Description
Our proposed model adopts a sequence-to-sequence framework BIBREF15 with an attention mechanism BIBREF14 and a pointer-softmax decoder BIBREF16 . Specifically, the model takes a document (i.e., a word sequence) $D = (w^d_1,\dots ,w^d_{n_d})$ and a condition sequence $C = (w^c_1,\dots ,w^c_{n_c})$ as input, and outputs a target sequence $Y^{\lbrace q,a\rbrace } = (\hat{w}_1,\dots ,\hat{w}_{n_p})$ . The condition corresponds to the question word sequence in answer-generation mode (a-gen), and the answer word sequence in question-generation mode (q-gen). We also attach a binary variable to indicate whether a data-point is intended for a-gen or q-gen. Intuitively, this should help the model learn the two modalities more easily. Empirically, QA performance improves slightly with this addition.
Encoder
A word $w_i$ in an input sequence is first embedded with an embedding layer into vector ${\bf e}^w_i$ . Character-level information is captured with the final states ${\bf e}^{ch}_i$ of a bidirectional Long Short-Term Memory model BIBREF28 on the character sequences of $w_i$ . The final representation for a word token ${\bf e}_i=\langle {\bf e}^w_i,{\bf e}^{ch}_i\rangle $ concatenates the word- and character-level embeddings. These are subsequently encoded with another BiLSTM into annotation vectors ${\bf h}^d_i$ and ${\bf h}^c_j$ (for the document and the condition sequence, respectively).
To better encode the condition, we also extract the encodings of the document words that appear in the condition sequence. This procedure is particularly helpful in q-gen mode, where the condition (answer) sequence is typically extractive. These extracted vectors are then fed into a condition aggregation BiLSTM to produce the extractive condition encoding ${\bf h}^e_k$ . We specifically take the final states of the condition encodings ${\bf h}^c_J$ and ${\bf h}^e_K$ . To account for the different extractive vs. abstractive nature of questions vs. answers, we use ${\bf h}^c_J$ in a-gen mode (for encoding questions) and ${\bf h}^e_K$ in q-gen mode (for encoding answers).
Decoder
The RNN-based decoder employs the pointer-softmax mechanism BIBREF16 . At each generation step, the decoder decides adaptively whether (a) to generate from a decoder vocabulary or (b) to point to a word in the source sequence (and copy over). Recurrence of the pointing decoder is implemented with two LSTM cells $c_1$ and $c_2$ :
$${\bf s}_1^{(t)} & = & c_1({\bf y}^{(t-1)}, {\bf s}_2^{(t-1)})\\ {\bf s}_2^{(t)} & = & c_2({\bf v}^{(t)}, {\bf s}_1^{(t)}),$$ (Eq. 1)
where ${\bf s}_1^{(t)}$ and ${\bf s}_2^{(t)}$ are the recurrent states, ${\bf y}^{(t-1)}$ is the embedding of decoder output from the previous time step, and ${\bf v}^{(t)}$ is the context vector (to be defined shortly in Equation ( 2 )).
The pointing decoder computes a distribution $\alpha ^{(t)}$ over the document word positions (i.e., a document attention, BIBREF14 ). Each element is defined as: $ \alpha ^{(t)}_i = f({\bf h}^d_i, {\bf h}^c, {\bf h}^e, {\bf s_1}^{(t-1)}), $
where $f$ is a two-layer MLP with tanh and softmax activation, respectively. The context vector ${\bf v}^{(t)}$ used in Equation ( 1 ) is the sum of the document encoding weighted by the document attention:
$${\bf v}^{(t)}=\sum _{i=1}^n \alpha ^{(t)}_i{\bf h}^d_i.$$ (Eq. 2)
The generative decoder, on the other hand, defines a distribution over a prescribed decoder vocabulary with a two-layer MLP $g$ :
$${\bf o}^{(t)}=g({\bf y}^{(t-1)},{\bf s}_2^{(t)},{\bf v}^{(t)},{\bf h}^c,{\bf h}^e).$$ (Eq. 3)
Finally, the switch scalar $s^{(t)}$ at each time step is computed by a three-layer MLP $h$ : $ s^{(t)}=h({\bf s}_2^{(t)},{\bf v}^{(t)},\alpha ^{(t)},{\bf o}^{(t)}), $
The first two layers of $h$ use tanh activation and the final layer uses sigmoid activation, and highway connections are present between the first and the second layer. We also attach the entropy of the softmax distributions to the input of the final layer, postulating that the quantities should help guide the switching mechanism by indicating the confidence of pointing vs generating. The addition is empirically observed to improve model performance.
The resulting switch is used to interpolate the pointing and the generative probabilities for predicting the next word: $ p(\hat{w}_t)\sim s^{(t)} \alpha ^{(t)} + (1-s^{(t)}){\bf o}^{(t)}. $
Training and Inference
The optimization objective for updating the model parameters $\theta $ is to minimize the negative log likelihood of the generated sequences with respect to the training data $\mathcal {D}$ : $ \mathcal {L}=-\sum _{x\in \mathcal {D}}\log p(\hat{w}_t|w_{<t},x;\theta ). $
Here, $w_{<t}$ corresponds to the embeddings ${\bf y}^{(t-1)}$ in Equation ( 1 ) and ( 3 ). During training, gold targets are used to teacher-force the sequence generation for training, i.e., $w_{<t}=w^{\lbrace q,a\rbrace }_{<t}$ , while during inference, generation is conditioned on the previously generated words, i.e., $w_{<t}=\hat{w}_{<t}$ .
For words with multiple occurrences, since their exact references in the document cannot be reliably determined, we aggregate the probability of these words in the encoder and the pointing decoder (similar to BIBREF29 ). At test time, beam search is used to enhance fluency in the question-generation output. The decoder also keeps an explicit history of previously generated words to avoid repetition in the output.
Dataset
We conduct our experiments on the SQuAD corpus BIBREF1 , a machine comprehension dataset consisting of over 100k crowd-sourced question-answer pairs on 536 Wikipedia articles. Simple preprocessing is performed, including lower-casing all texts in the dataset and using NLTK BIBREF30 for word tokenization. The test split of SQuAD is hidden from the public. We therefore take 5,158 question-answer pairs (self-contained in 23 Wikipedia articles) from the training set as validation set, and use the official development data to report test results. Note that answers in this dataset are strictly extractive, and we therefore constrain the pointer-softmax module to point at all decoding steps in answer generation mode.
Baseline Models
We first establish two baselines without multi-task training. Specifically, model A-gen is trained only to generate an answer given a document and a question, i.e., as a conventional QA model. Analogously, model Q-gen is trained only to generate questions from documents and answers. Joint-training (in model JointQA) is realized by feeding answer-generation and question-generation data to the model in an alternating fashion between mini-batches.
In addition, we compare answer-generation performance with the sequence model variant of the match-LSTM (mLSTM) model BIBREF19 . As mentioned earlier, in contrast to existing neural QA models that point to the start and end boundaries of extractive answers, this model predicts a sequence of document positions as the answer. This makes it most comparable to our QA setup. Note, however, that our model has the additional capacity to generate abstractively from the decoder vocabulary.
Quantitative Evaluation
We use F1 and Exact Match (EM, BIBREF1 ) against the gold answer sequences to evaluate answer generation, and BLEU BIBREF31 against the gold question sequences to evaluate question generation. However, existing studies have shown that the task of question generation often exhibits linguistic variance that is semantically admissible; this renders it inappropriate to judge a generated question solely by matching against a gold sequence BIBREF9 . We therefore opt to assess the quality of generated questions $Y^q$ with two pretrained neural models as well: we use a language model to compute the perplexity of $Y^q$ , and a QA model to answer $Y^q$ . We measure the F1 score of the answer produced by this QA model.
We choose mLSTM as the pretrained QA model and train it on SQuAD with the same split as mentioned in Section "Dataset" . Performance on the test set (i.e., the official validation set of SQuAD) is 73.78 F1 and 62.7 EM. For the pretrained language model, we train a single-layer LSTM language model on the combination of the text8 corpus, the Quora Question Pairs corpus, and the gold questions from SQuAD. The latter two corpora were included to tailor to our purpose of assessing question fluency, and for this reason, we ignore the semantic equivalence labels in the Quora dataset. Validation perplexity is 67.2 for the pretrained language model.
Analysis and Discussion
Evaluation results are provided in Table 1 . We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. Performance of q-gen worsens after joint training, but the decrease is relatively small. Furthermore, as pointed out by earlier studies, automatic metrics often do not correlate well with the generation quality assessed by humans BIBREF9 . We thus consider the overall outcome to be positive.
Meanwhile, although our model does not perform as well as mLSTM on the QA task, it has the added capability of generating questions. mLSTM uses a more advanced encoder tailored to QA, while our model encodes with only a bidirectional LSTM; on the other hand, our decoder is the more advanced of the two, using the pointer-softmax to generate both abstractively and extractively.
For a finer grained analysis, we first categorize test set answers based on their entity types, then stratify the QA performance comparison between A-gen and JointQA. The categorization relies on Stanford CoreNLP BIBREF32 to generate constituency parses, POS tags, and NER tags for answer spans (see BIBREF1 for more details). As seen in Figure 1 , the joint model significantly outperforms the single model in all categories. Interestingly, the moving average of the performance gap (dashed curve above bars) exhibits an upward trend as the A-gen model performance decreases across answer types, suggesting that the joint model helps most where the single model performance is weakest.
Qualitative Examples
Qualitatively, we have observed interesting “shifts” in attention before and after joint training. For example, in the positive case in Table 2 , the gold question asks about the direct object, Nixon, of the verb endorse, but the A-gen model predicts the indirect object, Kennedy, instead. In contrast, the joint model asks about the appositive of vice president during question generation, which presumably “primes” the model attention towards the correct answer Nixon. Analogously, in the negative example, QA attention in the joint model appears to be shifted by joint training towards an answer that is incorrect but closer to the generated question.
Note that the examples from Table 2 come from the validation set, and it is thus not possible for the joint model to memorize the gold answers from question-generation mode — the priming effect must come from some form of knowledge transfer between q-gen and a-gen via joint training.
Implementation Details
Implementation details of the proposed model are as follows. The encoder vocabulary indexes all words in the dataset. The decoder vocabulary uses the top 100 words sorted by their frequency in the gold questions in the training data. This encourages the model to generate frequent words (e.g. wh-words and function words) from the decoder vocabulary and copy less frequent ones (e.g., topical words and entities) from the document.
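A minimal sketch of how such a decoder vocabulary can be built, assuming the gold training questions are available as token lists:

from collections import Counter

def build_decoder_vocab(gold_questions, size=100):
    # gold_questions: iterable of token lists from the training questions.
    counts = Counter(token for question in gold_questions for token in question)
    return [token for token, _ in counts.most_common(size)]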
The word embedding matrix is initialized with the 300-dimensional GloVe vectors BIBREF33 . The dimensionality of the character representations is 32. The number of hidden units is 384 for both of the encoder/decoder RNN cells. Dropout is applied at a rate of 0.3 to all embedding layers as well as between the hidden states in the encoder/decoder RNNs across time steps.
We use adam BIBREF34 as the step rule for optimization with mini-batch size 32. The initial learning rate is $2e-4$ , which is decayed at a rate of 0.5 when the validation loss increases for two consecutive epochs.
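The decay rule can be sketched as follows; it is a plain restatement of the sentence above, with the epoch loop and the optimizer itself omitted.

def decayed_lr(lr, val_losses, factor=0.5, patience=2):
    # Halve the learning rate once validation loss has increased for
    # `patience` consecutive epochs; `val_losses` is the history so far.
    increased = (
        len(val_losses) > patience
        and all(val_losses[-i] > val_losses[-i - 1] for i in range(1, patience + 1))
    )
    return lr * factor if increased else lr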
The model is implemented using Keras BIBREF35 with the Theano BIBREF36 backend.
Conclusion
We proposed a neural machine comprehension model that can jointly ask and answer questions given a document. We hypothesized that question answering can benefit from synergistic interaction between the two tasks through parameter sharing and joint training under this multitask setting. Our proposed model adopts an attention-based sequence-to-sequence architecture that learns to dynamically switch between copying words from the document and generating words from a vocabulary. Experiments with the model confirm our hypothesis: the joint model outperforms its QA-only counterpart by a significant margin on the SQuAD dataset.
Although evaluation scores are still lower than the state-of-the-art results achieved by dedicated QA models, the proposed model nonetheless demonstrates the effectiveness of joint training between QA and question generation, and thus offers a novel perspective and a promising direction for advancing the study of QA. | We see that A-gen performance improves significantly with the joint model: both F1 and EM increase by about 10 percentage points. |
fb381a59732474dc71a413e25cac37e239547b55 | fb381a59732474dc71a413e25cac37e239547b55_0 | Q: Do they test their word embeddings on downstream tasks?
Text: Introduction
Word embeddings have been used to improve the performance of many NLP tasks including language modelling BIBREF1 , machine translation BIBREF2 , and sentiment analysis BIBREF3 . The broad applicability of word embeddings to NLP implies that improvements to their quality will likely have widespread benefits for the field.
The word embedding problem is to learn a mapping INLINEFORM0 ( INLINEFORM1 100-300 in most applications) that encodes meaningful semantic and/or syntactic information. For instance, in many word embeddings, INLINEFORM2 car INLINEFORM3 truck INLINEFORM4 , since the words are semantically similar.
More complex relationships than similarity can also be encoded in word embeddings. For example, we can answer analogy queries of the form INLINEFORM0 ? using simple arithmetic in many state-of-the-art embeddings BIBREF4 . The answer to bed INLINEFORM1 sleep INLINEFORM2 chair INLINEFORM3 INLINEFORM4 is given by the word whose vector representation is closest to INLINEFORM5 sleep INLINEFORM6 bed INLINEFORM7 chair INLINEFORM8 ( INLINEFORM9 sit INLINEFORM10 ). Other embeddings may encode such information in a nonlinear way BIBREF5 .
BIBREF4 demonstrates the additive compositionality of their word2vec vectors: one can sum vectors produced by their embedding to compute vectors for certain phrases rather than just vectors for words. Later in this paper, we will show that our embeddings naturally give rise to a form of multiplicative compositionality that has not yet been explored in the literature.
Almost all recent word embeddings rely on the distributional hypothesis BIBREF6 , which states that a word's meaning can be inferred from the words that tend to surround it. To utilize the distributional hypothesis, many embeddings are given by a low-rank factor of a matrix derived from co-occurrences in a large unsupervised corpus, see BIBREF7 , BIBREF8 , BIBREF9 and BIBREF10 .
Approaches that rely on matrix factorization only utilize pairwise co-occurrence information in the corpus. We aim to extend this approach by creating word embeddings given by factors of tensors containing higher order co-occurrence data.
Related work
Some common word embeddings related to co-occurrence based matrix factorization include GloVe BIBREF7 , word2vec BIBREF9 , LexVec BIBREF10 , and NNSE BIBREF8 . In contrast, our work studies word embeddings given by factorization of tensors. An overview of tensor factorization methods is given in BIBREF11 .
Our work uses factorization of symmetric nonnegative tensors, which has been studied in the past BIBREF12 , BIBREF13 . In general, factorization of tensors has been applied to NLP in BIBREF14 and factorization of nonnegative tensors BIBREF15 . Recently, factorization of symmetric tensors has been used to create a generic word embedding BIBREF16 but the idea was not explored extensively. Our work studies this idea in much greater detail, fully demonstrating the viability of tensor factorization as a technique for training word embeddings.
Composition of word vectors to create novel representations has been studied in depth, including additive, multiplicative, and tensor-based methods BIBREF17 , BIBREF18 . Typically, composition is used to create vectors that represent phrases or sentences. Our work, instead, shows that pairs of word vectors can be composed multiplicatively to create different vector representations for the various meanings of a single polysemous word.
Notation
Throughout this paper we will write scalars in lowercase italics INLINEFORM0 , vectors in lowercase bold letters INLINEFORM1 , matrices with uppercase bold letters INLINEFORM2 , and tensors (of order INLINEFORM3 ) with Euler script notation INLINEFORM4 , as is standard in the literature.
Pointwise Mutual Information
Pointwise mutual information (PMI) is a useful property in NLP that quantifies the likelihood that two words co-occur BIBREF9 . It is defined as: INLINEFORM0
where INLINEFORM0 is the probability that INLINEFORM1 and INLINEFORM2 occur together in a given fixed-length context window in the corpus, irrespective of order.
It is often useful to consider the positive PMI (PPMI), defined as: INLINEFORM0
since negative PMI values have little grounded interpretation BIBREF19 , BIBREF9 , BIBREF15 .
Given an indexed vocabulary INLINEFORM0 , one can construct a INLINEFORM1 PPMI matrix INLINEFORM2 where INLINEFORM3 . Many existing word embedding techniques involve factorizing this PPMI matrix BIBREF9 , BIBREF8 , BIBREF10 .
PMI can be generalized to INLINEFORM0 variables. While there are many ways to do so BIBREF20 , in this paper we use the form defined by: INLINEFORM1
where INLINEFORM0 is the probability that all of INLINEFORM1 occur together in a given fixed-length context window in the corpus, irrespective of their order.
In this paper we study 3-way PPMI tensors INLINEFORM0 , where INLINEFORM1 , as this is the natural higher-order generalization of the PPMI matrix. We leave the study of creating word embeddings with INLINEFORM2 -dimensional PPMI tensors ( INLINEFORM3 ) to future work.
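As an illustration, the toy sketch below accumulates a sparse 3-way PPMI tensor from fixed-length windows; it assumes the common generalization log p(i,j,k)/(p(i)p(j)p(k)) and a crude window-based probability estimate, both of which may differ in detail from the definition used above.

import numpy as np
from collections import Counter
from itertools import combinations

def ppmi_3way(sentences, window=5):
    # sentences: list of token lists. Returns (vocab, {(i, j, k): ppmi}).
    vocab = sorted({w for s in sentences for w in s})
    idx = {w: i for i, w in enumerate(vocab)}
    uni, tri, n_win = Counter(), Counter(), 0
    for s in sentences:
        for start in range(len(s)):
            win = s[start:start + window]
            n_win += 1
            uni.update(set(win))
            for a, b, c in combinations(sorted(set(win)), 3):
                tri[(idx[a], idx[b], idx[c])] += 1
    ppmi = {}
    for (i, j, k), count in tri.items():
        p_ijk = count / n_win
        p_i, p_j, p_k = (uni[vocab[t]] / n_win for t in (i, j, k))
        val = float(np.log(p_ijk / (p_i * p_j * p_k)))
        if val > 0:
            ppmi[(i, j, k)] = val
    return vocab, ppmi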
Tensor factorization
Just as the rank- INLINEFORM0 matrix decomposition is defined to be the product of two factor matrices ( INLINEFORM1 ), the canonical rank- INLINEFORM2 tensor decomposition for a third order tensor is defined to be the product of three factor matrices BIBREF11 : DISPLAYFORM0
where INLINEFORM0 is the outer product: INLINEFORM1 . This is also commonly referred to as the rank-R CP Decomposition. Elementwise, this is written as: INLINEFORM2
where INLINEFORM0 is elementwise vector multiplication and INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 . In our later section on multiplicative compositionality, we will see this formulation gives rise to a meaningful interpretation of the elementwise product between vectors in our word embeddings.
Symmetric CP Decomposition. In this paper, we will consider symmetric CP decomposition of nonnegative tensors BIBREF21 , BIBREF11 . Since our INLINEFORM0 -way PPMI is nonnegative and invariant under permutation, the PPMI tensor INLINEFORM1 is nonnegative and supersymmetric, i.e. INLINEFORM2 for any permutation INLINEFORM3 .
In the symmetric CP decomposition, instead of factorizing INLINEFORM0 , we factorize INLINEFORM1 as the triple product of a single factor matrix INLINEFORM2 such that INLINEFORM3
In this formulation, we use INLINEFORM0 to be the word embedding so the vector for INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 similar to the formulations in BIBREF9 , BIBREF8 , BIBREF7 .
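Elementwise, the symmetric decomposition approximates each tensor entry by a trilinear product of rows of the single factor matrix, which a short numpy sketch makes explicit:

import numpy as np

def sym_cp_entry(U, i, j, k):
    # Approximate entry (i, j, k) of the PPMI tensor under rank-R symmetric CP,
    # with factor matrix U of shape (|V|, R): sum_r U[i,r] * U[j,r] * U[k,r].
    return float(np.dot(U[i] * U[j], U[k]))

def sym_cp_full(U):
    # Dense reconstruction of the whole tensor; only feasible for a tiny vocabulary.
    return np.einsum('ir,jr,kr->ijk', U, U, U)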
It is known that the optimal rank- INLINEFORM0 CP decomposition exists for symmetric nonnegative tensors such as the PPMI tensor BIBREF21 . However, finding such a decomposition is NP hard in general BIBREF22 so we must consider approximate methods.
In this work, we only consider the symmetric CP decomposition, leaving the study of other tensor decompositions (such as the Tensor Train or HOSVD BIBREF23 , BIBREF11 ) to future work.
Computing the Symmetric CP Decomposition
The INLINEFORM0 size of the third order PPMI tensor presents a number of computational challenges. In practice, INLINEFORM1 can vary from INLINEFORM2 to INLINEFORM3 , resulting in a tensor whose naive representation requires at least INLINEFORM4 bytes = 4 TB of floats. Even the sparse representation of the tensor takes up such a large fraction of memory that standard algorithms such as successive rank-1 approximation BIBREF12 , BIBREF24 and alternating least-squares BIBREF11 are infeasible for our uses. Thus, in this paper we will consider a stochastic online formulation similar to that of BIBREF25 .
We optimize the CP decomposition in an online fashion, using small random subsets INLINEFORM0 of the nonzero tensor entries to update the decomposition at time INLINEFORM1 . In this minibatch setting, we optimize the decomposition based on the current minibatch and the previous decomposition at time INLINEFORM2 . To update INLINEFORM3 (and thus the symmetric decomposition), we first define a decomposition loss INLINEFORM4 and minimize this loss with respect to INLINEFORM5 using Adam BIBREF26 .
At each time INLINEFORM0 , we take INLINEFORM1 to be all co-occurrence triples (weighted by PPMI) in a fixed number of sentences (around 1,000) from the corpus. We continue training until we have depleted the entire corpus.
For INLINEFORM0 to accurately model INLINEFORM1 , we also include a certain proportion of elements with zero PPMI (or “negative samples”) in INLINEFORM2 , similar to that of BIBREF10 . We use an empirically found proportion of negative samples for training, and leave discovery of the optimal negative sample proportion to future work.
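A sketch of assembling one such minibatch is given below; the negative-sample ratio is a placeholder, since the proportion used in this work was found empirically and is not fixed here.

import random

def make_minibatch(ppmi_triples, vocab_size, neg_ratio=1.0):
    # ppmi_triples: dict {(i, j, k): ppmi} built from the current block of
    # sentences. Zero-PPMI "negative" triples are sampled uniformly at random.
    batch = list(ppmi_triples.items())
    needed = int(neg_ratio * len(batch))
    while needed > 0:
        ijk = tuple(sorted(random.randrange(vocab_size) for _ in range(3)))
        if ijk not in ppmi_triples:
            batch.append((ijk, 0.0))
            needed -= 1
    return batch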
Word Embedding Proposals
CP-S. The first embedding we propose is based on symmetric CP decomposition of the PPMI tensor INLINEFORM0 as discussed in the mathematical preliminaries section. The optimal setting for the word embedding INLINEFORM1 is: INLINEFORM2
Since we cannot feasibly compute this exactly, we minimize the loss function defined as the squared error between the values in INLINEFORM0 and their predicted values: INLINEFORM1
using the techniques discussed in the previous section.
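For illustration only, the following sketch performs one update on this squared-error loss; it uses plain SGD with a hand-written gradient in place of the Adam optimizer used in this work.

import numpy as np

def cp_s_sgd_step(U, batch, lr=0.05):
    # One step on the squared-error symmetric CP loss. batch is a list of
    # ((i, j, k), target_ppmi) pairs; U has shape (|V|, R) and is updated in place.
    grad = np.zeros_like(U)
    for (i, j, k), target in batch:
        err = target - float(np.dot(U[i] * U[j], U[k]))
        grad[i] += -2.0 * err * U[j] * U[k]
        grad[j] += -2.0 * err * U[i] * U[k]
        grad[k] += -2.0 * err * U[i] * U[j]
    U -= lr * grad / max(len(batch), 1)
    return U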
JCP-S. A potential problem with CP-S is that it is only trained on third order information. To rectify this issue, we propose a novel joint tensor factorization problem we call Joint Symmetric Rank- INLINEFORM0 CP Decomposition. In this problem, the input is the fixed rank INLINEFORM1 and a list of supersymmetric tensors INLINEFORM2 of different orders but whose axis lengths all equal INLINEFORM3 . Each tensor INLINEFORM4 is to be factorized via rank- INLINEFORM5 symmetric CP decomposition using a single INLINEFORM6 factor matrix INLINEFORM7 .
To produce a solution, we first define the loss at time INLINEFORM0 to be the sum of the reconstruction losses of each different tensor: INLINEFORM1
where INLINEFORM0 is an INLINEFORM1 -dimensional supersymmetric PPMI tensor. We then minimize the loss with respect to INLINEFORM2 . Since we are using at most third order tensors in this work, we assign our word embedding INLINEFORM3 to be: INLINEFORM4
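A minimal sketch of the joint loss, with second- and third-order reconstruction terms sharing a single factor matrix (minibatch sampling and the optimizer are omitted):

import numpy as np

def jcp_s_loss(U, pair_sample, triple_sample):
    # pair_sample: list of ((i, j), ppmi) entries from the 2nd-order PPMI matrix;
    # triple_sample: list of ((i, j, k), ppmi) entries from the 3rd-order tensor.
    # Both reconstruction errors share the single factor matrix U.
    loss2 = sum((m - float(np.dot(U[i], U[j]))) ** 2 for (i, j), m in pair_sample)
    loss3 = sum((m - float(np.dot(U[i] * U[j], U[k]))) ** 2
                for (i, j, k), m in triple_sample)
    return loss2 + loss3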
This problem is a specific instance of Coupled Tensor Decomposition, which has been studied in the past BIBREF27 , BIBREF28 . In this problem, the goal is to factorize multiple tensors using at least one factor matrix in common. A similar formulation to our problem can be found in BIBREF29 , which studies blind source separation using the algebraic geometric aspects of jointly factorizing numerous supersymmetric tensors (to unknown rank). In contrast to our work, they outline some generic rank properties of such a decomposition rather than attacking the problem numerically. Also, in our formulation the rank is fixed and an approximate solution must be found. Exploring the connection between the theoretical aspects of joint decomposition and quality of word embeddings would be an interesting avenue for future work.
To the best of our knowledge this is the first study of Joint Symmetric Rank- INLINEFORM0 CP Decomposition.
Shifted PMI
In the same way BIBREF9 considers factorization of positive shifted PMI matrices, we consider factorization of positive shifted PMI tensors INLINEFORM0 , where INLINEFORM1 for some constant shift INLINEFORM2 . We empirically found that different levels of shifting resulted in different qualities of word embeddings – the best shift we found for CP-S was a shift of INLINEFORM3 , whereas any nonzero shift for JCP-S resulted in a worse embedding across the board. When we discuss evaluation we report the results given by factorization of the PPMI tensors shifted by the best value we found for each specific embedding.
Computational notes
In moving from two dimensions to three, the computational cost of the larger problem size deserves comment. It should be noted, however, that creating pre-trained embeddings can be seen as a pre-processing step for many future NLP tasks: once training has been completed, the embeddings can be reused indefinitely without taking training time into account. Even so, we found that training our embeddings was not considerably slower than training order-2 equivalents such as SGNS. Explicitly, our GPU trained the CBOW vectors (using the experimental settings found below) in 3,568 seconds, whereas training CP-S and JCP-S took 6,786 and 8,686 seconds, respectively.
Evaluation
In this section we present a quantitative evaluation comparing our embeddings to an informationless embedding and two strong baselines. Our baselines are:
For a fair comparison, we trained each model on the same corpus of 10 million sentences gathered from Wikipedia. We removed stopwords and words appearing fewer than 2,000 times (130 million tokens total) to reduce noise and uninformative words. Our word2vec and NNSE baselines were trained using the recommended hyperparameters from their original publications, and all optimizers used their default settings. Hyperparameters are always consistent across evaluations.
Because of the dataset size, the results shown should be considered a proof of concept rather than an objective comparison to state-of-the-art pre-trained embeddings. Due to the natural computational challenges arising from working with tensors, we leave creation of a full-scale production ready embedding based on tensor factorization to future work.
As is common in the literature BIBREF4 , BIBREF8 , we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation.
Quantitative tasks
Outlier Detection. The Outlier Detection task BIBREF0 is to determine which word in a list INLINEFORM0 of INLINEFORM1 words is unrelated to the other INLINEFORM2 which were chosen to be related. For each INLINEFORM3 , one can compute its compactness score INLINEFORM4 , which is the compactness of INLINEFORM5 . INLINEFORM6 is explicitly computed as the mean similarity of all word pairs INLINEFORM7 . The predicted outlier is INLINEFORM8 , as the INLINEFORM9 related words should form a compact cluster with high mean similarity.
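A short sketch of this scoring procedure, assuming unit-normalized vectors so that dot products equal cosine similarities:

import numpy as np
from itertools import combinations

def predict_outlier(vectors):
    # vectors: (n+1, d) array of unit-normalized vectors for one group.
    # The compactness score of a candidate is the mean pairwise similarity of
    # the group with that candidate removed; the predicted outlier is the
    # candidate whose removal maximizes this score.
    scores = []
    for w in range(len(vectors)):
        rest = [v for t, v in enumerate(vectors) if t != w]
        sims = [float(np.dot(a, b)) for a, b in combinations(rest, 2)]
        scores.append(float(np.mean(sims)))
    return int(np.argmax(scores)), scores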
We use the WikiSem500 dataset BIBREF30 which includes sets of INLINEFORM0 words per group gathered based on semantic similarity. Thus, performance on this task is correlated with the amount of semantic information encoded in a word embedding. Performance on this dataset was shown to be well-correlated with performance at the common NLP task of sentiment analysis BIBREF30 .
The two metrics associated with this task are accuracy and Outlier Position Percentage (OPP). Accuracy is the fraction of cases in which the true outlier correctly had the highest compactness score. OPP measures how close the true outlier was to having the highest compactness score, rewarding embeddings more for predicting the outlier to be in 2nd place rather than INLINEFORM0 when sorting the words by their compactness score INLINEFORM1 .
3-way Outlier Detection. As our tensor-based embeddings encode higher order relationships between words, we introduce a new way to compute INLINEFORM0 based on groups of 3 words rather than pairs of words. We define the compactness score for a word INLINEFORM1 to be: INLINEFORM2
where INLINEFORM0 denotes similarity between a group of 3 vectors. INLINEFORM1 is defined as: INLINEFORM2
We call this evaluation method OD3.
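Since the precise formulas are given only in the equations above, the sketch below should be read as one plausible reading rather than the definitive scoring rule: it assumes the 3-way similarity is the trilinear inner product and that the compactness of a candidate averages this similarity over all triples of the remaining words.

import numpy as np
from itertools import combinations

def sim3(a, b, c):
    # Assumed 3-way similarity: trilinear inner product sum_d a_d * b_d * c_d.
    return float(np.sum(a * b * c))

def compactness_od3(vectors, w):
    # Assumed reading: average sim3 over all triples of the group with
    # candidate w removed; the predicted outlier again maximizes this score.
    rest = [v for t, v in enumerate(vectors) if t != w]
    return float(np.mean([sim3(a, b, c) for a, b, c in combinations(rest, 3)]))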
The purpose of OD3 is to evaluate the extent to which an embedding captures 3rd order relationships between words. As we will see in the results of our quantitative experiments, our tensor methods outperform the baselines on OD3, which validates our approach.
This approach can easily be generalized to OD INLINEFORM0 INLINEFORM1 , but again we leave the study of higher order relationships to future work.
Simple supervised tasks. BIBREF5 points out that the primary application of word embeddings is transfer learning to NLP tasks. They argue that to evaluate an embedding's ability to transfer information to a relevant task, one must measure how accessible the information it encodes is to actual downstream tasks. To do so, one reports the performance of simple supervised tasks as the training set size increases, as is commonly done in transfer learning evaluation BIBREF5 . If an algorithm using a word embedding performs well with just a small amount of training data, then the information encoded in the embedding is easily accessible.
The simple supervised downstream tasks we use to evaluate the embeddings are as follows:
Supervised Analogy Recovery. We consider the task of solving queries of the form a : b :: c : ? using a simple neural network as suggested in BIBREF5 . The analogy dataset we use is from the Google analogy testbed BIBREF4 .
Sentiment analysis. We also consider sentiment analysis as described by BIBREF31 . We use the suggested Large Movie Review dataset BIBREF32 , containing 50,000 movie reviews.
All code is implemented using scikit-learn or TensorFlow and uses the suggested train/test split.
Word similarity. To standardize our evaluation methodology, we evaluate the embeddings using word similarity on the common MEN and MTurk datasets BIBREF33 , BIBREF34 . For an overview of word similarity evaluation, see BIBREF31 .
Quantitative results
Outlier Detection results. The results are shown in Table TABREF20 . The first thing to note is that CP-S outperforms the other methods across each Outlier Detection metric. Since the WikiSem500 dataset is semantically focused, performance at this task demonstrates the quality of semantic information encoded in our embeddings.
On OD2, the baselines perform more competitively with our CP Decomposition based models, but when OD3 is considered our methods clearly excel. Since the tensor-based methods are trained directly on third order information and perform much better at OD3, we see that OD3 scores reflect the amount of third order information in a word embedding. This is a validation of OD3, as our 3rd order embeddings would naturally outperform 2nd order embeddings at a task that requires third order information. Still, the superiority of our tensor-based embeddings at OD2 demonstrates the quality of the semantic information they encode.
Supervised analogy results. The results are shown in Figure FIGREF18 . At the supervised semantic analogy task, CP-S vastly outperforms the baselines at all levels of training data, further signifying the amount of semantic information encoded by this embedding technique.
Also, when only 10% of the training data is presented, our tensor methods are the only ones that attain nonzero performance – even in such a limited data setting, use of CP-S's vectors results in nearly 40% accuracy. This phenomenon is also observed in the syntactic analogy tasks: our embeddings consistently outperform the others until 100% of the training data is presented. These two observations demonstrate the accessibility of the information encoded in our word embeddings. We can thus conclude that this relational information encoded in the tensor-based embeddings is more easily accessible than that of CBOW and NNSE. Thus, our methods would likely be better suited for transfer learning to actual NLP tasks, particularly those in data-sparse settings.
Sentiment analysis results. The results are shown in Figure FIGREF19 . In this task, JCP-S is the dominant method across all levels of training data, but the difference is more obvious when training data is limited. This again indicates that for this specific task the information encoded by our tensor-based methods is more readily available than that of the baselines. It is thus evident that exploiting both second and third order co-occurrence data leads to higher quality semantic information being encoded in the embedding. At this point it is not clear why JCP-S so vastly outperforms CP-S at this task, but its superiority to the other strong baselines demonstrates the quality of information encoded by JCP-S. This discrepancy is also illustrative of the fact that there is no single “best word embedding” BIBREF5 – different embeddings encode different types of information, and thus should be used where they shine rather than for every NLP task.
Word Similarity results.
We show the results in Table TABREF21 . As we can see, our embeddings very clearly outperform the random embedding at this task. They even outperform CBOW on both of these datasets. It is worth including these results as the word similarity task is a very common way of evaluating embedding quality in the literature. However, due to the many intrinsic problems with evaluating word embeddings using word similarity BIBREF35 , we do not discuss this further.
Multiplicative Compositionality
We find that even though they are not explicitly trained to do so, our tensor-based embeddings capture polysemy information naturally through multiplicative compositionality. We demonstrate this property qualitatively and provide proper motivation for it, leaving automated utilization to future work.
In our tensor-based embeddings, we found that one can create a vector that represents a word INLINEFORM0 in the context of another word INLINEFORM1 by taking the elementwise product INLINEFORM2 . We call INLINEFORM3 a “meaning vector” for the polysemous word INLINEFORM4 .
For example, consider the word star, which can denote a lead performer or a celestial body. We can create a vector for star in the “lead performer” sense by taking the elementwise product INLINEFORM0 . This produces a vector that lies near vectors for words related to lead performers and far from those related to star's other senses.
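A minimal sketch of constructing such a meaning vector and inspecting its nearest neighbours (the embedding matrix U and the word indices are assumed given):

import numpy as np

def meaning_vector(U, word_idx, context_idx):
    # Elementwise product of two embedding rows, e.g. U[star] * U[actor].
    return U[word_idx] * U[context_idx]

def nearest_neighbors(U, query, k=5):
    # Cosine nearest neighbours of an arbitrary query vector under U.
    Un = U / np.linalg.norm(U, axis=1, keepdims=True)
    q = query / (np.linalg.norm(query) + 1e-12)
    return list(np.argsort(-(Un @ q))[:k])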
To motivate why this works, recall that the values in a third order PPMI tensor INLINEFORM0 are given by: INLINEFORM1
where INLINEFORM0 is the word vector for INLINEFORM1 . If words INLINEFORM2 have a high PPMI, then INLINEFORM3 will also be high, meaning INLINEFORM4 will be close to INLINEFORM5 in the vector space by cosine similarity.
For example, even though galaxy is likely to appear in the context of the word star in the “celestial body” sense, INLINEFORM0 PPMI(star, actor, galaxy) is low whereas INLINEFORM1 PPMI(star, actor, drama) is high. Thus, INLINEFORM2 represents the meaning of star in the “lead performer” sense.
In Table TABREF22 we present the nearest neighbors of multiplicative and additive composed vectors for a variety of polysemous words. As we can see, the words corresponding to the nearest neighbors of the composed vectors for our tensor methods are semantically related to the intended sense both for multiplicative and additive composition. In contrast, for CBOW, only additive composition yields vectors whose nearest neighbors are semantically related to the intended sense. Thus, our embeddings can produce complementary sets of polysemous word representations that are qualitatively valid whereas CBOW (seemingly) only guarantees meaningful additive compositionality. We leave automated usage of this property to future work.
Conclusion
Our key contributions are as follows:
Tensor factorization appears to be a highly applicable and effective tool for learning word embeddings, with many areas of potential future work. Leveraging higher order data in training word embeddings is useful for encoding new types of information and semantic relationships compared to models that are trained using only pairwise data. This indicates that such techniques will prove useful for training word embeddings to be used in downstream NLP tasks. | Yes |
a9b10e3db5902c6142e7d6a83253ad2a6cee77fc | a9b10e3db5902c6142e7d6a83253ad2a6cee77fc_0 | Q: What are the main disadvantages of their proposed word embeddings?
Text: Introduction
Word embeddings have been used to improve the performance of many NLP tasks including language modelling BIBREF1 , machine translation BIBREF2 , and sentiment analysis BIBREF3 . The broad applicability of word embeddings to NLP implies that improvements to their quality will likely have widespread benefits for the field.
The word embedding problem is to learn a mapping INLINEFORM0 ( INLINEFORM1 100-300 in most applications) that encodes meaningful semantic and/or syntactic information. For instance, in many word embeddings, INLINEFORM2 car INLINEFORM3 truck INLINEFORM4 , since the words are semantically similar.
More complex relationships than similarity can also be encoded in word embeddings. For example, we can answer analogy queries of the form INLINEFORM0 ? using simple arithmetic in many state-of-the-art embeddings BIBREF4 . The answer to bed INLINEFORM1 sleep INLINEFORM2 chair INLINEFORM3 INLINEFORM4 is given by the word whose vector representation is closest to INLINEFORM5 sleep INLINEFORM6 bed INLINEFORM7 chair INLINEFORM8 ( INLINEFORM9 sit INLINEFORM10 ). Other embeddings may encode such information in a nonlinear way BIBREF5 .
BIBREF4 demonstrates the additive compositionality of their word2vec vectors: one can sum vectors produced by their embedding to compute vectors for certain phrases rather than just vectors for words. Later in this paper, we will show that our embeddings naturally give rise to a form of multiplicative compositionality that has not yet been explored in the literature.
Almost all recent word embeddings rely on the distributional hypothesis BIBREF6 , which states that a word's meaning can be inferred from the words that tend to surround it. To utilize the distributional hypothesis, many embeddings are given by a low-rank factor of a matrix derived from co-occurrences in a large unsupervised corpus, see BIBREF7 , BIBREF8 , BIBREF9 and BIBREF10 .
Approaches that rely on matrix factorization only utilize pairwise co-occurrence information in the corpus. We aim to extend this approach by creating word embeddings given by factors of tensors containing higher order co-occurrence data.
Related work
Some common word embeddings related to co-occurrence based matrix factorization include GloVe BIBREF7 , word2vec BIBREF9 , LexVec BIBREF10 , and NNSE BIBREF8 . In contrast, our work studies word embeddings given by factorization of tensors. An overview of tensor factorization methods is given in BIBREF11 .
Our work uses factorization of symmetric nonnegative tensors, which has been studied in the past BIBREF12 , BIBREF13 . In general, factorization of tensors has been applied to NLP in BIBREF14 and factorization of nonnegative tensors BIBREF15 . Recently, factorization of symmetric tensors has been used to create a generic word embedding BIBREF16 but the idea was not explored extensively. Our work studies this idea in much greater detail, fully demonstrating the viability of tensor factorization as a technique for training word embeddings.
Composition of word vectors to create novel representations has been studied in depth, including additive, multiplicative, and tensor-based methods BIBREF17 , BIBREF18 . Typically, composition is used to create vectors that represent phrases or sentences. Our work, instead, shows that pairs of word vectors can be composed multiplicatively to create different vector representations for the various meanings of a single polysemous word.
Notation
Throughout this paper we will write scalars in lowercase italics INLINEFORM0 , vectors in lowercase bold letters INLINEFORM1 , matrices with uppercase bold letters INLINEFORM2 , and tensors (of order INLINEFORM3 ) with Euler script notation INLINEFORM4 , as is standard in the literature.
Pointwise Mutual Information
Pointwise mutual information (PMI) is a useful property in NLP that quantifies the likelihood that two words co-occur BIBREF9 . It is defined as: INLINEFORM0
where INLINEFORM0 is the probability that INLINEFORM1 and INLINEFORM2 occur together in a given fixed-length context window in the corpus, irrespective of order.
It is often useful to consider the positive PMI (PPMI), defined as: INLINEFORM0
since negative PMI values have little grounded interpretation BIBREF19 , BIBREF9 , BIBREF15 .
Given an indexed vocabulary INLINEFORM0 , one can construct a INLINEFORM1 PPMI matrix INLINEFORM2 where INLINEFORM3 . Many existing word embedding techniques involve factorizing this PPMI matrix BIBREF9 , BIBREF8 , BIBREF10 .
PMI can be generalized to INLINEFORM0 variables. While there are many ways to do so BIBREF20 , in this paper we use the form defined by: INLINEFORM1
where INLINEFORM0 is the probability that all of INLINEFORM1 occur together in a given fixed-length context window in the corpus, irrespective of their order.
In this paper we study 3-way PPMI tensors INLINEFORM0 , where INLINEFORM1 , as this is the natural higher-order generalization of the PPMI matrix. We leave the study of creating word embeddings with INLINEFORM2 -dimensional PPMI tensors ( INLINEFORM3 ) to future work.
Tensor factorization
Just as the rank- INLINEFORM0 matrix decomposition is defined to be the product of two factor matrices ( INLINEFORM1 ), the canonical rank- INLINEFORM2 tensor decomposition for a third order tensor is defined to be the product of three factor matrices BIBREF11 : DISPLAYFORM0
where INLINEFORM0 is the outer product: INLINEFORM1 . This is also commonly referred to as the rank-R CP Decomposition. Elementwise, this is written as: INLINEFORM2
where INLINEFORM0 is elementwise vector multiplication and INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 . In our later section on multiplicative compositionality, we will see this formulation gives rise to a meaningful interpretation of the elementwise product between vectors in our word embeddings.
Symmetric CP Decomposition. In this paper, we will consider symmetric CP decomposition of nonnegative tensors BIBREF21 , BIBREF11 . Since our INLINEFORM0 -way PPMI is nonnegative and invariant under permutation, the PPMI tensor INLINEFORM1 is nonnegative and supersymmetric, i.e. INLINEFORM2 for any permutation INLINEFORM3 .
In the symmetric CP decomposition, instead of factorizing INLINEFORM0 , we factorize INLINEFORM1 as the triple product of a single factor matrix INLINEFORM2 such that INLINEFORM3
In this formulation, we use INLINEFORM0 to be the word embedding so the vector for INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 similar to the formulations in BIBREF9 , BIBREF8 , BIBREF7 .
It is known that the optimal rank- INLINEFORM0 CP decomposition exists for symmetric nonnegative tensors such as the PPMI tensor BIBREF21 . However, finding such a decomposition is NP hard in general BIBREF22 so we must consider approximate methods.
In this work, we only consider the symmetric CP decomposition, leaving the study of other tensor decompositions (such as the Tensor Train or HOSVD BIBREF23 , BIBREF11 ) to future work.
Computing the Symmetric CP Decomposition
The INLINEFORM0 size of the third order PPMI tensor presents a number of computational challenges. In practice, INLINEFORM1 can vary from INLINEFORM2 to INLINEFORM3 , resulting in a tensor whose naive representation requires at least INLINEFORM4 bytes = 4 TB of floats. Even the sparse representation of the tensor takes up such a large fraction of memory that standard algorithms such as successive rank-1 approximation BIBREF12 , BIBREF24 and alternating least-squares BIBREF11 are infeasible for our uses. Thus, in this paper we will consider a stochastic online formulation similar to that of BIBREF25 .
We optimize the CP decomposition in an online fashion, using small random subsets INLINEFORM0 of the nonzero tensor entries to update the decomposition at time INLINEFORM1 . In this minibatch setting, we optimize the decomposition based on the current minibatch and the previous decomposition at time INLINEFORM2 . To update INLINEFORM3 (and thus the symmetric decomposition), we first define a decomposition loss INLINEFORM4 and minimize this loss with respect to INLINEFORM5 using Adam BIBREF26 .
At each time INLINEFORM0 , we take INLINEFORM1 to be all co-occurrence triples (weighted by PPMI) in a fixed number of sentences (around 1,000) from the corpus. We continue training until we have depleted the entire corpus.
For INLINEFORM0 to accurately model INLINEFORM1 , we also include a certain proportion of elements with zero PPMI (or “negative samples”) in INLINEFORM2 , similar to that of BIBREF10 . We use an empirically found proportion of negative samples for training, and leave discovery of the optimal negative sample proportion to future work.
Word Embedding Proposals
CP-S. The first embedding we propose is based on symmetric CP decomposition of the PPMI tensor INLINEFORM0 as discussed in the mathematical preliminaries section. The optimal setting for the word embedding INLINEFORM1 is: INLINEFORM2
Since we cannot feasibly compute this exactly, we minimize the loss function defined as the squared error between the values in INLINEFORM0 and their predicted values: INLINEFORM1
using the techniques discussed in the previous section.
JCP-S. A potential problem with CP-S is that it is only trained on third order information. To rectify this issue, we propose a novel joint tensor factorization problem we call Joint Symmetric Rank- INLINEFORM0 CP Decomposition. In this problem, the input is the fixed rank INLINEFORM1 and a list of supersymmetric tensors INLINEFORM2 of different orders but whose axis lengths all equal INLINEFORM3 . Each tensor INLINEFORM4 is to be factorized via rank- INLINEFORM5 symmetric CP decomposition using a single INLINEFORM6 factor matrix INLINEFORM7 .
To produce a solution, we first define the loss at time INLINEFORM0 to be the sum of the reconstruction losses of each different tensor: INLINEFORM1
where INLINEFORM0 is an INLINEFORM1 -dimensional supersymmetric PPMI tensor. We then minimize the loss with respect to INLINEFORM2 . Since we are using at most third order tensors in this work, we assign our word embedding INLINEFORM3 to be: INLINEFORM4
This problem is a specific instance of Coupled Tensor Decomposition, which has been studied in the past BIBREF27 , BIBREF28 . In this problem, the goal is to factorize multiple tensors using at least one factor matrix in common. A similar formulation to our problem can be found in BIBREF29 , which studies blind source separation using the algebraic geometric aspects of jointly factorizing numerous supersymmetric tensors (to unknown rank). In contrast to our work, they outline some generic rank properties of such a decomposition rather than attacking the problem numerically. Also, in our formulation the rank is fixed and an approximate solution must be found. Exploring the connection between the theoretical aspects of joint decomposition and quality of word embeddings would be an interesting avenue for future work.
To the best of our knowledge this is the first study of Joint Symmetric Rank- INLINEFORM0 CP Decomposition.
Shifted PMI
In the same way BIBREF9 considers factorization of positive shifted PMI matrices, we consider factorization of positive shifted PMI tensors INLINEFORM0 , where INLINEFORM1 for some constant shift INLINEFORM2 . We empirically found that different levels of shifting resulted in different qualities of word embeddings – the best shift we found for CP-S was a shift of INLINEFORM3 , whereas any nonzero shift for JCP-S resulted in a worse embedding across the board. When we discuss evaluation we report the results given by factorization of the PPMI tensors shifted by the best value we found for each specific embedding.
Computational notes
In moving from two dimensions to three, the computational cost of the larger problem size deserves comment. It should be noted, however, that creating pre-trained embeddings can be seen as a pre-processing step for many future NLP tasks: once training has been completed, the embeddings can be reused indefinitely without taking training time into account. Even so, we found that training our embeddings was not considerably slower than training order-2 equivalents such as SGNS. Explicitly, our GPU trained the CBOW vectors (using the experimental settings found below) in 3,568 seconds, whereas training CP-S and JCP-S took 6,786 and 8,686 seconds, respectively.
Evaluation
In this section we present a quantitative evaluation comparing our embeddings to an informationless embedding and two strong baselines. Our baselines are:
For a fair comparison, we trained each model on the same corpus of 10 million sentences gathered from Wikipedia. We removed stopwords and words appearing fewer than 2,000 times (130 million tokens total) to reduce noise and uninformative words. Our word2vec and NNSE baselines were trained using the recommended hyperparameters from their original publications, and all optimizers used their default settings. Hyperparameters are always consistent across evaluations.
Because of the dataset size, the results shown should be considered a proof of concept rather than an objective comparison to state-of-the-art pre-trained embeddings. Due to the natural computational challenges arising from working with tensors, we leave creation of a full-scale production ready embedding based on tensor factorization to future work.
As is common in the literature BIBREF4 , BIBREF8 , we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation.
Quantitative tasks
Outlier Detection. The Outlier Detection task BIBREF0 is to determine which word in a list INLINEFORM0 of INLINEFORM1 words is unrelated to the other INLINEFORM2 which were chosen to be related. For each INLINEFORM3 , one can compute its compactness score INLINEFORM4 , which is the compactness of INLINEFORM5 . INLINEFORM6 is explicitly computed as the mean similarity of all word pairs INLINEFORM7 . The predicted outlier is INLINEFORM8 , as the INLINEFORM9 related words should form a compact cluster with high mean similarity.
We use the WikiSem500 dataset BIBREF30 which includes sets of INLINEFORM0 words per group gathered based on semantic similarity. Thus, performance on this task is correlated with the amount of semantic information encoded in a word embedding. Performance on this dataset was shown to be well-correlated with performance at the common NLP task of sentiment analysis BIBREF30 .
The two metrics associated with this task are accuracy and Outlier Position Percentage (OPP). Accuracy is the fraction of cases in which the true outlier correctly had the highest compactness score. OPP measures how close the true outlier was to having the highest compactness score, rewarding embeddings more for predicting the outlier to be in 2nd place rather than INLINEFORM0 when sorting the words by their compactness score INLINEFORM1 .
3-way Outlier Detection. As our tensor-based embeddings encode higher order relationships between words, we introduce a new way to compute INLINEFORM0 based on groups of 3 words rather than pairs of words. We define the compactness score for a word INLINEFORM1 to be: INLINEFORM2
where INLINEFORM0 denotes similarity between a group of 3 vectors. INLINEFORM1 is defined as: INLINEFORM2
We call this evaluation method OD3.
The purpose of OD3 is to evaluate the extent to which an embedding captures 3rd order relationships between words. As we will see in the results of our quantitative experiments, our tensor methods outperform the baselines on OD3, which validates our approach.
This approach can easily be generalized to OD INLINEFORM0 INLINEFORM1 , but again we leave the study of higher order relationships to future work.
Simple supervised tasks. BIBREF5 points out that the primary application of word embeddings is transfer learning to NLP tasks. They argue that to evaluate an embedding's ability to transfer information to a relevant task, one must measure how accessible the information it encodes is to actual downstream tasks. To do so, one reports the performance of simple supervised tasks as the training set size increases, as is commonly done in transfer learning evaluation BIBREF5 . If an algorithm using a word embedding performs well with just a small amount of training data, then the information encoded in the embedding is easily accessible.
The simple supervised downstream tasks we use to evaluate the embeddings are as follows:
Supervised Analogy Recovery. We consider the task of solving queries of the form a : b :: c : ? using a simple neural network as suggested in BIBREF5 . The analogy dataset we use is from the Google analogy testbed BIBREF4 .
Sentiment analysis. We also consider sentiment analysis as described by BIBREF31 . We use the suggested Large Movie Review dataset BIBREF32 , containing 50,000 movie reviews.
All code is implemented using scikit-learn or TensorFlow and uses the suggested train/test split.
Word similarity. To standardize our evaluation methodology, we evaluate the embeddings using word similarity on the common MEN and MTurk datasets BIBREF33 , BIBREF34 . For an overview of word similarity evaluation, see BIBREF31 .
Quantitative results
Outlier Detection results. The results are shown in Table TABREF20 . The first thing to note is that CP-S outperforms the other methods across each Outlier Detection metric. Since the WikiSem500 dataset is semantically focused, performance at this task demonstrates the quality of semantic information encoded in our embeddings.
On OD2, the baselines perform more competitively with our CP Decomposition based models, but when OD3 is considered our methods clearly excel. Since the tensor-based methods are trained directly on third order information and perform much better at OD3, we see that OD3 scores reflect the amount of third order information in a word embedding. This is a validation of OD3, as our 3rd order embeddings would naturally outperform 2nd order embeddings at a task that requires third order information. Still, the superiority of our tensor-based embeddings at OD2 demonstrates the quality of the semantic information they encode.
Supervised analogy results. The results are shown in Figure FIGREF18 . At the supervised semantic analogy task, CP-S vastly outperforms the baselines at all levels of training data, further signifying the amount of semantic information encoded by this embedding technique.
Also, when only 10% of the training data is presented, our tensor methods are the only ones that attain nonzero performance – even in such a limited data setting, use of CP-S's vectors results in nearly 40% accuracy. This phenomenon is also observed in the syntactic analogy tasks: our embeddings consistently outperform the others until 100% of the training data is presented. These two observations demonstrate the accessibility of the information encoded in our word embeddings. We can thus conclude that this relational information encoded in the tensor-based embeddings is more easily accessible than that of CBOW and NNSE. Thus, our methods would likely be better suited for transfer learning to actual NLP tasks, particularly those in data-sparse settings.
Sentiment analysis results. The results are shown in Figure FIGREF19 . In this task, JCP-S is the dominant method across all levels of training data, but the difference is more obvious when training data is limited. This again indicates that for this specific task the information encoded by our tensor-based methods is more readily available than that of the baselines. It is thus evident that exploiting both second and third order co-occurrence data leads to higher quality semantic information being encoded in the embedding. At this point it is not clear why JCP-S so vastly outperforms CP-S at this task, but its superiority to the other strong baselines demonstrates the quality of information encoded by JCP-S. This discrepancy is also illustrative of the fact that there is no single “best word embedding” BIBREF5 – different embeddings encode different types of information, and thus should be used where they shine rather than for every NLP task.
Word Similarity results.
We show the results in Table TABREF21 . As we can see, our embeddings very clearly outperform the random embedding at this task. They even outperform CBOW on both of these datasets. It is worth including these results as the word similarity task is a very common way of evaluating embedding quality in the literature. However, due to the many intrinsic problems with evaluating word embeddings using word similarity BIBREF35 , we do not discuss this further.
Multiplicative Compositionality
We find that even though they are not explicitly trained to do so, our tensor-based embeddings capture polysemy information naturally through multiplicative compositionality. We demonstrate this property qualitatively and provide proper motivation for it, leaving automated utilization to future work.
In our tensor-based embeddings, we found that one can create a vector that represents a word INLINEFORM0 in the context of another word INLINEFORM1 by taking the elementwise product INLINEFORM2 . We call INLINEFORM3 a “meaning vector” for the polysemous word INLINEFORM4 .
For example, consider the word star, which can denote a lead performer or a celestial body. We can create a vector for star in the “lead performer” sense by taking the elementwise product INLINEFORM0 . This produces a vector that lies near vectors for words related to lead performers and far from those related to star's other senses.
To motivate why this works, recall that the values in a third order PPMI tensor INLINEFORM0 are given by: INLINEFORM1
where INLINEFORM0 is the word vector for INLINEFORM1 . If words INLINEFORM2 have a high PPMI, then INLINEFORM3 will also be high, meaning INLINEFORM4 will be close to INLINEFORM5 in the vector space by cosine similarity.
For example, even though galaxy is likely to appear in the context of the word star in the “celestial body” sense, INLINEFORM0 PPMI(star, actor, galaxy) is low whereas INLINEFORM1 PPMI(star, actor, drama) is high. Thus, INLINEFORM2 represents the meaning of star in the “lead performer” sense.
In Table TABREF22 we present the nearest neighbors of multiplicative and additive composed vectors for a variety of polysemous words. As we can see, the words corresponding to the nearest neighbors of the composed vectors for our tensor methods are semantically related to the intended sense both for multiplicative and additive composition. In contrast, for CBOW, only additive composition yields vectors whose nearest neighbors are semantically related to the intended sense. Thus, our embeddings can produce complementary sets of polysemous word representations that are qualitatively valid whereas CBOW (seemingly) only guarantees meaningful additive compositionality. We leave automated usage of this property to future work.
Conclusion
Our key contributions are as follows:
Tensor factorization appears to be a highly applicable and effective tool for learning word embeddings, with many areas of potential future work. Leveraging higher order data in training word embeddings is useful for encoding new types of information and semantic relationships compared to models that are trained using only pairwise data. This indicates that such techniques will prove useful for training word embeddings to be used in downstream NLP tasks. | Unanswerable |
54415efa91566d5d7135fa23bce3840d41a6389e | 54415efa91566d5d7135fa23bce3840d41a6389e_0 | Q: What dimensions of word embeddings do they produce using factorization?
Text: Introduction
Word embeddings have been used to improve the performance of many NLP tasks including language modelling BIBREF1 , machine translation BIBREF2 , and sentiment analysis BIBREF3 . The broad applicability of word embeddings to NLP implies that improvements to their quality will likely have widespread benefits for the field.
The word embedding problem is to learn a mapping INLINEFORM0 ( INLINEFORM1 100-300 in most applications) that encodes meaningful semantic and/or syntactic information. For instance, in many word embeddings, INLINEFORM2 car INLINEFORM3 truck INLINEFORM4 , since the words are semantically similar.
More complex relationships than similarity can also be encoded in word embeddings. For example, we can answer analogy queries of the form INLINEFORM0 ? using simple arithmetic in many state-of-the-art embeddings BIBREF4 . The answer to bed INLINEFORM1 sleep INLINEFORM2 chair INLINEFORM3 INLINEFORM4 is given by the word whose vector representation is closest to INLINEFORM5 sleep INLINEFORM6 bed INLINEFORM7 chair INLINEFORM8 ( INLINEFORM9 sit INLINEFORM10 ). Other embeddings may encode such information in a nonlinear way BIBREF5 .
BIBREF4 demonstrates the additive compositionality of their word2vec vectors: one can sum vectors produced by their embedding to compute vectors for certain phrases rather than just vectors for words. Later in this paper, we will show that our embeddings naturally give rise to a form of multiplicative compositionality that has not yet been explored in the literature.
Almost all recent word embeddings rely on the distributional hypothesis BIBREF6 , which states that a word's meaning can be inferred from the words that tend to surround it. To utilize the distributional hypothesis, many embeddings are given by a low-rank factor of a matrix derived from co-occurrences in a large unsupervised corpus, see BIBREF7 , BIBREF8 , BIBREF9 and BIBREF10 .
Approaches that rely on matrix factorization only utilize pairwise co-occurrence information in the corpus. We aim to extend this approach by creating word embeddings given by factors of tensors containing higher order co-occurrence data.
Related work
Some common word embeddings related to co-occurrence based matrix factorization include GloVe BIBREF7 , word2vec BIBREF9 , LexVec BIBREF10 , and NNSE BIBREF8 . In contrast, our work studies word embeddings given by factorization of tensors. An overview of tensor factorization methods is given in BIBREF11 .
Our work uses factorization of symmetric nonnegative tensors, which has been studied in the past BIBREF12 , BIBREF13 . In general, factorization of tensors has been applied to NLP in BIBREF14 and factorization of nonnegative tensors BIBREF15 . Recently, factorization of symmetric tensors has been used to create a generic word embedding BIBREF16 but the idea was not explored extensively. Our work studies this idea in much greater detail, fully demonstrating the viability of tensor factorization as a technique for training word embeddings.
Composition of word vectors to create novel representations has been studied in depth, including additive, multiplicative, and tensor-based methods BIBREF17 , BIBREF18 . Typically, composition is used to create vectors that represent phrases or sentences. Our work, instead, shows that pairs of word vectors can be composed multiplicatively to create different vector representations for the various meanings of a single polysemous word.
Notation
Throughout this paper we will write scalars in lowercase italics INLINEFORM0 , vectors in lowercase bold letters INLINEFORM1 , matrices with uppercase bold letters INLINEFORM2 , and tensors (of order INLINEFORM3 ) with Euler script notation INLINEFORM4 , as is standard in the literature.
Pointwise Mutual Information
Pointwise mutual information (PMI) is a useful property in NLP that quantifies the likelihood that two words co-occur BIBREF9 . It is defined as: INLINEFORM0
where INLINEFORM0 is the probability that INLINEFORM1 and INLINEFORM2 occur together in a given fixed-length context window in the corpus, irrespective of order.
It is often useful to consider the positive PMI (PPMI), defined as: INLINEFORM0
since negative PMI values have little grounded interpretation BIBREF19 , BIBREF9 , BIBREF15 .
Given an indexed vocabulary INLINEFORM0 , one can construct a INLINEFORM1 PPMI matrix INLINEFORM2 where INLINEFORM3 . Many existing word embedding techniques involve factorizing this PPMI matrix BIBREF9 , BIBREF8 , BIBREF10 .
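As an illustration of the construction just described, the sketch below builds a PPMI matrix from a symmetric matrix of context-window co-occurrence counts. The counting step (sliding a fixed-length window over the corpus) is assumed to have been done already, and the variable names are ours rather than the paper's.

```python
import numpy as np

def ppmi_matrix(counts):
    """counts: |V| x |V| symmetric matrix of context-window co-occurrence counts."""
    total = counts.sum()
    p_joint = counts / total                 # estimate of p(w_i, w_j)
    p_word = counts.sum(axis=1) / total      # marginal estimate of p(w_i)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_joint / np.outer(p_word, p_word))
    pmi[~np.isfinite(pmi)] = 0.0             # pairs with zero counts contribute nothing
    return np.maximum(pmi, 0.0)              # clip negative PMI to obtain PPMI
```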
PMI can be generalized to INLINEFORM0 variables. While there are many ways to do so BIBREF20 , in this paper we use the form defined by: INLINEFORM1
where INLINEFORM0 is the probability that all of INLINEFORM1 occur together in a given fixed-length context window in the corpus, irrespective of their order.
In this paper we study 3-way PPMI tensors INLINEFORM0 , where INLINEFORM1 , as this is the natural higher-order generalization of the PPMI matrix. We leave the study of creating word embeddings with INLINEFORM2 -dimensional PPMI tensors ( INLINEFORM3 ) to future work.
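Since the generalized formula above appears only as a placeholder, the following sketch assumes the natural choice log p(w1, w2, w3) / (p(w1) p(w2) p(w3)), clipped at zero; the probability estimates are deliberately rough and the sparse dictionary representation is our own choice, but it shows how entries of the 3-way PPMI tensor can be gathered from fixed-length context windows.

```python
import itertools
import math
from collections import Counter

def ppmi_triples(sentences, window=5):
    """Return {(w_i, w_j, w_k): PPMI} for word triples co-occurring in a context window.
    Assumes the generalized PMI is log p(w1,w2,w3) / (p(w1) p(w2) p(w3))."""
    unigram, triple = Counter(), Counter()
    for sent in sentences:
        unigram.update(sent)
        for start in range(max(1, len(sent) - window + 1)):
            ctx = set(sent[start:start + window])
            for tri in itertools.combinations(sorted(ctx), 3):
                triple[tri] += 1
    n_uni, n_tri = sum(unigram.values()), sum(triple.values())
    ppmi = {}
    for tri, c in triple.items():
        denom = 1.0
        for w in tri:
            denom *= unigram[w] / n_uni
        pmi = math.log((c / n_tri) / denom)
        ppmi[tri] = max(pmi, 0.0)
    return ppmi
```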
Tensor factorization
Just as the rank- INLINEFORM0 matrix decomposition is defined to be the product of two factor matrices ( INLINEFORM1 ), the canonical rank- INLINEFORM2 tensor decomposition for a third order tensor is defined to be the product of three factor matrices BIBREF11 : DISPLAYFORM0
where INLINEFORM0 is the outer product: INLINEFORM1 . This is also commonly referred to as the rank-R CP Decomposition. Elementwise, this is written as: INLINEFORM2
where INLINEFORM0 is elementwise vector multiplication and INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 . In our later section on multiplicative compositionality, we will see this formulation gives rise to a meaningful interpretation of the elementwise product between vectors in our word embeddings.
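A small numerical illustration of the elementwise form above; the factor matrices here are random and serve only to show the indexing, including the symmetric single-factor case used later in the paper.

```python
import numpy as np

n, R = 8, 4
A, B, C = (np.random.rand(n, R) for _ in range(3))

# Full rank-R CP reconstruction: X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
X = np.einsum("ir,jr,kr->ijk", A, B, C)

# A single entry, written via the elementwise (Hadamard) product of factor rows
i, j, k = 1, 2, 3
entry = np.sum(A[i] * B[j] * C[k])
assert np.isclose(X[i, j, k], entry)

# Symmetric CP (used in this paper): a single factor matrix U plays all three roles
U = np.random.rand(n, R)
X_sym = np.einsum("ir,jr,kr->ijk", U, U, U)
```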
Symmetric CP Decomposition. In this paper, we will consider symmetric CP decomposition of nonnegative tensors BIBREF21 , BIBREF11 . Since our INLINEFORM0 -way PPMI is nonnegative and invariant under permutation, the PPMI tensor INLINEFORM1 is nonnegative and supersymmetric, i.e. INLINEFORM2 for any permutation INLINEFORM3 .
In the symmetric CP decomposition, instead of factorizing INLINEFORM0 , we factorize INLINEFORM1 as the triple product of a single factor matrix INLINEFORM2 such that INLINEFORM3
In this formulation, we use INLINEFORM0 to be the word embedding so the vector for INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 similar to the formulations in BIBREF9 , BIBREF8 , BIBREF7 .
It is known that the optimal rank- INLINEFORM0 CP decomposition exists for symmetric nonnegative tensors such as the PPMI tensor BIBREF21 . However, finding such a decomposition is NP hard in general BIBREF22 so we must consider approximate methods.
In this work, we only consider the symmetric CP decomposition, leaving the study of other tensor decompositions (such as the Tensor Train or HOSVD BIBREF23 , BIBREF11 ) to future work.
Computing the Symmetric CP Decomposition
The INLINEFORM0 size of the third order PPMI tensor presents a number of computational challenges. In practice, INLINEFORM1 can vary from INLINEFORM2 to INLINEFORM3 , resulting in a tensor whose naive representation requires at least INLINEFORM4 bytes = 4 TB of floats. Even the sparse representation of the tensor takes up such a large fraction of memory that standard algorithms such as successive rank-1 approximation BIBREF12 , BIBREF24 and alternating least-squares BIBREF11 are infeasible for our uses. Thus, in this paper we will consider a stochastic online formulation similar to that of BIBREF25 .
We optimize the CP decomposition in an online fashion, using small random subsets INLINEFORM0 of the nonzero tensor entries to update the decomposition at time INLINEFORM1 . In this minibatch setting, we optimize the decomposition based on the current minibatch and the previous decomposition at time INLINEFORM2 . To update INLINEFORM3 (and thus the symmetric decomposition), we first define a decomposition loss INLINEFORM4 and minimize this loss with respect to INLINEFORM5 using Adam BIBREF26 .
At each time INLINEFORM0 , we take INLINEFORM1 to be all co-occurrence triples (weighted by PPMI) in a fixed number of sentences (around 1,000) from the corpus. We continue training until we have depleted the entire corpus.
For INLINEFORM0 to accurately model INLINEFORM1 , we also include a certain proportion of elements with zero PPMI (or “negative samples”) in INLINEFORM2 , similar to that of BIBREF10 . We use an empirically found proportion of negative samples for training, and leave discovery of the optimal negative sample proportion to future work.
Word Embedding Proposals
CP-S. The first embedding we propose is based on symmetric CP decomposition of the PPMI tensor INLINEFORM0 as discussed in the mathematical preliminaries section. The optimal setting for the word embedding INLINEFORM1 is: INLINEFORM2
Since we cannot feasibly compute this exactly, we minimize the loss function defined as the squared error between the values in INLINEFORM0 and their predicted values: INLINEFORM1
using the techniques discussed in the previous section.
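A minimal sketch of this online optimization, assuming minibatches of (i, j, k, PPMI) entries plus zero-PPMI negative samples have already been prepared as described earlier. The paper specifies Adam but not a framework, so the use of PyTorch here is our own choice and the hyperparameter values are illustrative only.

```python
import torch

def train_cp_s(minibatches, vocab_size, rank=300, lr=1e-3):
    """minibatches yields (idx, vals): idx is a LongTensor of shape (m, 3) holding word
    index triples (including negative samples), vals their PPMI values as a float tensor."""
    U = torch.randn(vocab_size, rank, requires_grad=True)
    opt = torch.optim.Adam([U], lr=lr)   # lr is illustrative, not the paper's setting
    for idx, vals in minibatches:
        pred = (U[idx[:, 0]] * U[idx[:, 1]] * U[idx[:, 2]]).sum(dim=1)
        loss = ((pred - vals) ** 2).sum()    # squared reconstruction error on the batch
        opt.zero_grad()
        loss.backward()
        opt.step()
    return U.detach()                        # rows are the word vectors
```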
JCP-S. A potential problem with CP-S is that it is only trained on third order information. To rectify this issue, we propose a novel joint tensor factorization problem we call Joint Symmetric Rank- INLINEFORM0 CP Decomposition. In this problem, the input is the fixed rank INLINEFORM1 and a list of supersymmetric tensors INLINEFORM2 of different orders but whose axis lengths all equal INLINEFORM3 . Each tensor INLINEFORM4 is to be factorized via rank- INLINEFORM5 symmetric CP decomposition using a single INLINEFORM6 factor matrix INLINEFORM7 .
To produce a solution, we first define the loss at time INLINEFORM0 to be the sum of the reconstruction losses of each different tensor: INLINEFORM1
where INLINEFORM0 is an INLINEFORM1 -dimensional supersymmetric PPMI tensor. We then minimize the loss with respect to INLINEFORM2 . Since we are using at most third order tensors in this work, we assign our word embedding INLINEFORM3 to be: INLINEFORM4
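A sketch of the corresponding joint loss, reusing the setup of the CP-S sketch above: the same factor matrix U reconstructs both a batch of second-order and a batch of third-order PPMI entries. Again, this is an illustrative PyTorch formulation rather than the authors' code.

```python
import torch

def jcp_s_loss(U, pair_idx, pair_vals, triple_idx, triple_vals):
    """Joint reconstruction loss over a minibatch of 2nd- and 3rd-order PPMI entries."""
    pred2 = (U[pair_idx[:, 0]] * U[pair_idx[:, 1]]).sum(dim=1)
    pred3 = (U[triple_idx[:, 0]] * U[triple_idx[:, 1]] * U[triple_idx[:, 2]]).sum(dim=1)
    return ((pred2 - pair_vals) ** 2).sum() + ((pred3 - triple_vals) ** 2).sum()
```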
This problem is a specific instance of Coupled Tensor Decomposition, which has been studied in the past BIBREF27 , BIBREF28 . In this problem, the goal is to factorize multiple tensors using at least one factor matrix in common. A similar formulation to our problem can be found in BIBREF29 , which studies blind source separation using the algebraic geometric aspects of jointly factorizing numerous supersymmetric tensors (to unknown rank). In contrast to our work, they outline some generic rank properties of such a decomposition rather than attacking the problem numerically. Also, in our formulation the rank is fixed and an approximate solution must be found. Exploring the connection between the theoretical aspects of joint decomposition and quality of word embeddings would be an interesting avenue for future work.
To the best of our knowledge this is the first study of Joint Symmetric Rank- INLINEFORM0 CP Decomposition.
Shifted PMI
In the same way BIBREF9 considers factorization of positive shifted PMI matrices, we consider factorization of positive shifted PMI tensors INLINEFORM0 , where INLINEFORM1 for some constant shift INLINEFORM2 . We empirically found that different levels of shifting resulted in different qualities of word embeddings – the best shift we found for CP-S was a shift of INLINEFORM3 , whereas any nonzero shift for JCP-S resulted in a worse embedding across the board. When we discuss evaluation we report the results given by factorization of the PPMI tensors shifted by the best value we found for each specific embedding.
Computational notes
When considering going from two dimensions to three, it is perhaps necessary to discuss the computational issues in such a problem size increase. However, it should be noted that the creation of pre-trained embeddings can be seen as a pre-processing step for many future NLP tasks, so if the training can be completed once, it can be used forever thereafter without having to take training time into account. Despite this, we found that the training of our embeddings was not considerably slower than the training of order-2 equivalents such as SGNS. Explicitly, our GPU trained CBOW vectors (using the experimental settings found below) in 3568 seconds, whereas training CP-S and JCP-S took 6786 and 8686 seconds respectively.
Evaluation
In this section we present a quantitative evaluation comparing our embeddings to an informationless embedding and two strong baselines. Our baselines are word2vec (CBOW) BIBREF9 and NNSE BIBREF8 .
For a fair comparison, we trained each model on the same corpus of 10 million sentences gathered from Wikipedia. We removed stopwords and words appearing fewer than 2,000 times (130 million tokens total) to reduce noise and uninformative words. Our word2vec and NNSE baselines were trained using the recommended hyperparameters from their original publications, and all optimizers used the default settings. Hyperparameters are always consistent across evaluations.
Because of the dataset size, the results shown should be considered a proof of concept rather than an objective comparison to state-of-the-art pre-trained embeddings. Due to the natural computational challenges arising from working with tensors, we leave creation of a full-scale production ready embedding based on tensor factorization to future work.
As is common in the literature BIBREF4 , BIBREF8 , we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation.
Quantitative tasks
Outlier Detection. The Outlier Detection task BIBREF0 is to determine which word in a list INLINEFORM0 of INLINEFORM1 words is unrelated to the other INLINEFORM2 which were chosen to be related. For each INLINEFORM3 , one can compute its compactness score INLINEFORM4 , which is the compactness of INLINEFORM5 . INLINEFORM6 is explicitly computed as the mean similarity of all word pairs INLINEFORM7 . The predicted outlier is INLINEFORM8 , as the INLINEFORM9 related words should form a compact cluster with high mean similarity.
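Following the description above, a small sketch of the prediction rule (word vectors are assumed to be unit-normalized, so cosine similarity is a dot product; variable names are ours):

```python
import itertools
import numpy as np

def predict_outlier(vectors):
    """vectors: list of unit-norm word vectors for W; returns index of the predicted outlier."""
    def compactness(group):
        sims = [float(np.dot(u, v)) for u, v in itertools.combinations(group, 2)]
        return float(np.mean(sims))
    # c(w) is the compactness of W \ {w}; the predicted outlier maximizes it,
    # since removing the true outlier leaves the most compact cluster.
    scores = [compactness(vectors[:m] + vectors[m + 1:]) for m in range(len(vectors))]
    return int(np.argmax(scores))
```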
We use the WikiSem500 dataset BIBREF30 which includes sets of INLINEFORM0 words per group gathered based on semantic similarity. Thus, performance on this task is correlated with the amount of semantic information encoded in a word embedding. Performance on this dataset was shown to be well-correlated with performance at the common NLP task of sentiment analysis BIBREF30 .
The two metrics associated with this task are accuracy and Outlier Position Percentage (OPP). Accuracy is the fraction of cases in which the true outlier correctly had the highest compactness score. OPP measures how close the true outlier was to having the highest compactness score, rewarding embeddings more for predicting the outlier to be in 2nd place rather than INLINEFORM0 when sorting the words by their compactness score INLINEFORM1 .
3-way Outlier Detection. As our tensor-based embeddings encode higher order relationships between words, we introduce a new way to compute INLINEFORM0 based on groups of 3 words rather than pairs of words. We define the compactness score for a word INLINEFORM1 to be: INLINEFORM2
where INLINEFORM0 denotes similarity between a group of 3 vectors. INLINEFORM1 is defined as: INLINEFORM2
We call this evaluation method OD3.
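The exact 3-way similarity is given above only symbolically, so the sketch below assumes the trilinear product of unit-norm vectors purely for illustration; everything else follows the pairwise rule shown earlier.

```python
import itertools
import numpy as np

def predict_outlier_od3(vectors):
    """OD3 variant: compactness over word triples instead of pairs. sim3 is assumed
    here to be the trilinear product of unit-norm vectors (an assumption on our part)."""
    def sim3(u, v, w):
        return float(np.sum(u * v * w))
    def compactness3(group):
        return float(np.mean([sim3(*t) for t in itertools.combinations(group, 3)]))
    scores = [compactness3(vectors[:m] + vectors[m + 1:]) for m in range(len(vectors))]
    return int(np.argmax(scores))
```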
The purpose of OD3 is to evaluate the extent to which an embedding captures 3rd order relationships between words. As we will see in the results of our quantitative experiments, our tensor methods outperform the baselines on OD3, which validates our approach.
This approach can easily be generalized to OD INLINEFORM0 INLINEFORM1 , but again we leave the study of higher order relationships to future work.
Simple supervised tasks. BIBREF5 points out that the primary application of word embeddings is transfer learning to NLP tasks. They argue that to evaluate an embedding's ability to transfer information to a relevant task, one must measure how accessible the embedding's information is to actual downstream tasks. To do so, one must report the performance of simple supervised tasks as training set size increases, which is commonly done in transfer learning evaluation BIBREF5 . If an algorithm using a word embedding performs well with just a small amount of training data, then the information encoded in the embedding is easily accessible.
The simple supervised downstream tasks we use to evaluate the embeddings are as follows:
Supervised Analogy Recovery. We consider the task of solving queries of the form a : b :: c : ? using a simple neural network as suggested in BIBREF5 . The analogy dataset we use is from the Google analogy testbed BIBREF4 .
Sentiment analysis. We also consider sentiment analysis as described by BIBREF31 . We use the suggested Large Movie Review dataset BIBREF32 , containing 50,000 movie reviews.
All code is implemented using scikit-learn or TensorFlow and uses the suggested train/test split.
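As a concrete (if simplified) stand-in for the sentiment pipeline, the sketch below classifies reviews from averaged word vectors with scikit-learn's logistic regression; the exact model of BIBREF31 may differ, and the tokenization here is deliberately naive.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def review_features(reviews, embedding, dim=300):
    """Average the vectors of known words in each review; embedding maps word -> np.array."""
    feats = []
    for text in reviews:
        vecs = [embedding[w] for w in text.lower().split() if w in embedding]
        feats.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(feats)

def sentiment_accuracy(train, test, embedding):
    """train/test: (list_of_review_texts, list_of_0_or_1_labels) tuples."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(review_features(train[0], embedding), train[1])
    return clf.score(review_features(test[0], embedding), test[1])
```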
Word similarity. To standardize our evaluation methodology, we evaluate the embeddings using word similarity on the common MEN and MTurk datasets BIBREF33 , BIBREF34 . For an overview of word similarity evaluation, see BIBREF31 .
Quantitative results
Outlier Detection results. The results are shown in Table TABREF20 . The first thing to note is that CP-S outperforms the other methods across each Outlier Detection metric. Since the WikiSem500 dataset is semantically focused, performance at this task demonstrates the quality of semantic information encoded in our embeddings.
On OD2, the baselines perform more competitively with our CP Decomposition based models, but when OD3 is considered our methods clearly excel. Since the tensor-based methods are trained directly on third order information and perform much better at OD3, we see that OD3 scores reflect the amount of third order information in a word embedding. This is a validation of OD3, as our 3rd order embeddings would naturally outperform 2nd order embeddings at a task that requires third order information. Still, the superiority of our tensor-based embeddings at OD2 demonstrates the quality of the semantic information they encode.
Supervised analogy results. The results are shown in Figure FIGREF18 . At the supervised semantic analogy task, CP-S vastly outperforms the baselines at all levels of training data, further signifying the amount of semantic information encoded by this embedding technique.
Also, when only 10% of the training data is presented, our tensor methods are the only ones that attain nonzero performance – even in such a limited data setting, use of CP-S's vectors results in nearly 40% accuracy. This phenomenon is also observed in the syntactic analogy tasks: our embeddings consistently outperform the others until 100% of the training data is presented. These two observations demonstrate the accessibility of the information encoded in our word embeddings. We can thus conclude that this relational information encoded in the tensor-based embeddings is more easily accessible than that of CBOW and NNSE. Thus, our methods would likely be better suited for transfer learning to actual NLP tasks, particularly those in data-sparse settings.
Sentiment analysis results. The results are shown in Figure FIGREF19 . In this task, JCP-S is the dominant method across all levels of training data, but the difference is more obvious when training data is limited. This again indicates that for this specific task the information encoded by our tensor-based methods is more readily accessible than that of the baselines. It is thus evident that exploiting both second and third order co-occurrence data leads to higher quality semantic information being encoded in the embedding. At this point it is not clear why JCP-S so vastly outperforms CP-S at this task, but its superiority to the other strong baselines demonstrates the quality of information encoded by JCP-S. This discrepancy is also illustrative of the fact that there is no single “best word embedding” BIBREF5 – different embeddings encode different types of information, and thus should be used where they shine rather than for every NLP task.
Word Similarity results.
We show the results in Table TABREF21 . As we can see, our embeddings very clearly outperform the random embedding at this task. They even outperform CBOW on both of these datasets. It is worth including these results as the word similarity task is a very common way of evaluating embedding quality in the literature. However, due to the many intrinsic problems with evaluating word embeddings using word similarity BIBREF35 , we do not discuss this further.
Multiplicative Compositionality
We find that even though they are not explicitly trained to do so, our tensor-based embeddings capture polysemy information naturally through multiplicative compositionality. We demonstrate this property qualitatively and provide proper motivation for it, leaving automated utilization to future work.
In our tensor-based embeddings, we found that one can create a vector that represents a word INLINEFORM0 in the context of another word INLINEFORM1 by taking the elementwise product INLINEFORM2 . We call INLINEFORM3 a “meaning vector” for the polysemous word INLINEFORM4 .
For example, consider the word star, which can denote a lead performer or a celestial body. We can create a vector for star in the “lead performer” sense by taking the elementwise product INLINEFORM0 . This produces a vector that lies near vectors for words related to lead performers and far from those related to star's other senses.
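A sketch of how such a meaning vector can be formed and inspected, assuming a dictionary of unit-normalized word vectors (the word choices mirror the star/actor example above):

```python
import numpy as np

def meaning_vector(embedding, word, context_word):
    """Elementwise product of the two (unit-norm) word vectors, renormalized."""
    v = embedding[word] * embedding[context_word]
    return v / np.linalg.norm(v)

def nearest_neighbors(embedding, query, k=5):
    """Top-k words by cosine similarity to a query vector."""
    sims = {w: float(np.dot(query, v)) for w, v in embedding.items()}
    return sorted(sims, key=sims.get, reverse=True)[:k]

# e.g. nearest_neighbors(embedding, meaning_vector(embedding, "star", "actor"))
# should surface performer-related words for the tensor-based embeddings.
```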
To motivate why this works, recall that the values in a third order PPMI tensor INLINEFORM0 are given by: INLINEFORM1
where INLINEFORM0 is the word vector for INLINEFORM1 . If words INLINEFORM2 have a high PPMI, then INLINEFORM3 will also be high, meaning INLINEFORM4 will be close to INLINEFORM5 in the vector space by cosine similarity.
For example, even though galaxy is likely to appear in the context of the word star in the “celestial body” sense, INLINEFORM0 PPMI(star, actor, galaxy) is low whereas INLINEFORM1 PPMI(star, actor, drama) is high. Thus, INLINEFORM2 represents the meaning of star in the “lead performer” sense.
In Table TABREF22 we present the nearest neighbors of multiplicative and additive composed vectors for a variety of polysemous words. As we can see, the words corresponding to the nearest neighbors of the composed vectors for our tensor methods are semantically related to the intended sense both for multiplicative and additive composition. In contrast, for CBOW, only additive composition yields vectors whose nearest neighbors are semantically related to the intended sense. Thus, our embeddings can produce complementary sets of polysemous word representations that are qualitatively valid whereas CBOW (seemingly) only guarantees meaningful additive compositionality. We leave automated usage of this property to future work.
Conclusion
Our key contributions are as follows: we introduce two word embeddings based on symmetric CP decomposition of PPMI tensors (CP-S and JCP-S), formulate the novel Joint Symmetric Rank- INLINEFORM0 CP Decomposition problem underlying JCP-S, propose the third-order outlier detection evaluation OD3, and demonstrate that our embeddings exhibit a multiplicative compositionality that captures the different senses of polysemous words.
Tensor factorization appears to be a highly applicable and effective tool for learning word embeddings, with many areas of potential future work. Leveraging higher order data in training word embeddings is useful for encoding new types of information and semantic relationships compared to models that are trained using only pairwise data. This indicates that such techniques will prove useful for training word embeddings to be used in downstream NLP tasks. | 300-dimensional vectors |
dcd22abfc9e7211925c0393adc30dbd4711a9f88 | dcd22abfc9e7211925c0393adc30dbd4711a9f88_0 | Q: On which dataset(s) do they compute their word embeddings? | 10 million sentences gathered from Wikipedia |
05238d1fad2128403577822aa4822ef8ca9570ac | 05238d1fad2128403577822aa4822ef8ca9570ac_0 | Q: Do they measure computation time of their factorizations compared to other word embeddings?
Text: Introduction
Word embeddings have been used to improve the performance of many NLP tasks including language modelling BIBREF1 , machine translation BIBREF2 , and sentiment analysis BIBREF3 . The broad applicability of word embeddings to NLP implies that improvements to their quality will likely have widespread benefits for the field.
The word embedding problem is to learn a mapping INLINEFORM0 ( INLINEFORM1 100-300 in most applications) that encodes meaningful semantic and/or syntactic information. For instance, in many word embeddings, INLINEFORM2 car INLINEFORM3 truck INLINEFORM4 , since the words are semantically similar.
More complex relationships than similarity can also be encoded in word embeddings. For example, we can answer analogy queries of the form INLINEFORM0 ? using simple arithmetic in many state-of-the-art embeddings BIBREF4 . The answer to bed INLINEFORM1 sleep INLINEFORM2 chair INLINEFORM3 INLINEFORM4 is given by the word whose vector representation is closest to INLINEFORM5 sleep INLINEFORM6 bed INLINEFORM7 chair INLINEFORM8 ( INLINEFORM9 sit INLINEFORM10 ). Other embeddings may encode such information in a nonlinear way BIBREF5 .
BIBREF4 demonstrates the additive compositionality of their word2vec vectors: one can sum vectors produced by their embedding to compute vectors for certain phrases rather than just vectors for words. Later in this paper, we will show that our embeddings naturally give rise to a form of multiplicative compositionality that has not yet been explored in the literature.
Almost all recent word embeddings rely on the distributional hypothesis BIBREF6 , which states that a word's meaning can be inferred from the words that tend to surround it. To utilize the distributional hypothesis, many embeddings are given by a low-rank factor of a matrix derived from co-occurrences in a large unsupervised corpus, see BIBREF7 , BIBREF8 , BIBREF9 and BIBREF10 .
Approaches that rely on matrix factorization only utilize pairwise co-occurrence information in the corpus. We aim to extend this approach by creating word embeddings given by factors of tensors containing higher order co-occurrence data.
Related work
Some common word embeddings related to co-occurrence based matrix factorization include GloVe BIBREF7 , word2vec BIBREF9 , LexVec BIBREF10 , and NNSE BIBREF8 . In contrast, our work studies word embeddings given by factorization of tensors. An overview of tensor factorization methods is given in BIBREF11 .
Our work uses factorization of symmetric nonnegative tensors, which has been studied in the past BIBREF12 , BIBREF13 . In general, factorization of tensors has been applied to NLP in BIBREF14 , as has factorization of nonnegative tensors BIBREF15 . Recently, factorization of symmetric tensors has been used to create a generic word embedding BIBREF16 , but the idea was not explored extensively. Our work studies this idea in much greater detail, fully demonstrating the viability of tensor factorization as a technique for training word embeddings.
Composition of word vectors to create novel representations has been studied in depth, including additive, multiplicative, and tensor-based methods BIBREF17 , BIBREF18 . Typically, composition is used to create vectors that represent phrases or sentences. Our work, instead, shows that pairs of word vectors can be composed multiplicatively to create different vector representations for the various meanings of a single polysemous word.
Notation
Throughout this paper we will write scalars in lowercase italics INLINEFORM0 , vectors in lowercase bold letters INLINEFORM1 , matrices with uppercase bold letters INLINEFORM2 , and tensors (of order INLINEFORM3 ) with Euler script notation INLINEFORM4 , as is standard in the literature.
Pointwise Mutual Information
Pointwise mutual information (PMI) is a useful property in NLP that quantifies the likelihood that two words co-occur BIBREF9 . It is defined as: INLINEFORM0
where INLINEFORM0 is the probability that INLINEFORM1 and INLINEFORM2 occur together in a given fixed-length context window in the corpus, irrespective of order.
It is often useful to consider the positive PMI (PPMI), defined as: INLINEFORM0
since negative PMI values have little grounded interpretation BIBREF19 , BIBREF9 , BIBREF15 .
Given an indexed vocabulary INLINEFORM0 , one can construct a INLINEFORM1 PPMI matrix INLINEFORM2 where INLINEFORM3 . Many existing word embedding techniques involve factorizing this PPMI matrix BIBREF9 , BIBREF8 , BIBREF10 .
PMI can be generalized to INLINEFORM0 variables. While there are many ways to do so BIBREF20 , in this paper we use the form defined by: INLINEFORM1
where INLINEFORM0 is the probability that all of INLINEFORM1 occur together in a given fixed-length context window in the corpus, irrespective of their order.
In this paper we study 3-way PPMI tensors INLINEFORM0 , where INLINEFORM1 , as this is the natural higher-order generalization of the PPMI matrix. We leave the study of creating word embeddings with INLINEFORM2 -dimensional PPMI tensors ( INLINEFORM3 ) to future work.
Tensor factorization
Just as the rank- INLINEFORM0 matrix decomposition is defined to be the product of two factor matrices ( INLINEFORM1 ), the canonical rank- INLINEFORM2 tensor decomposition for a third order tensor is defined to be the product of three factor matrices BIBREF11 : DISPLAYFORM0
where INLINEFORM0 is the outer product: INLINEFORM1 . This is also commonly referred to as the rank-R CP Decomposition. Elementwise, this is written as: INLINEFORM2
where INLINEFORM0 is elementwise vector multiplication and INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 . In our later section on multiplicative compositionality, we will see this formulation gives rise to a meaningful interpretation of the elementwise product between vectors in our word embeddings.
Symmetric CP Decomposition. In this paper, we will consider symmetric CP decomposition of nonnegative tensors BIBREF21 , BIBREF11 . Since our INLINEFORM0 -way PPMI is nonnegative and invariant under permutation, the PPMI tensor INLINEFORM1 is nonnegative and supersymmetric, i.e. INLINEFORM2 for any permutation INLINEFORM3 .
In the symmetric CP decomposition, instead of factorizing INLINEFORM0 , we factorize INLINEFORM1 as the triple product of a single factor matrix INLINEFORM2 such that INLINEFORM3
In this formulation, we use INLINEFORM0 to be the word embedding so the vector for INLINEFORM1 is the INLINEFORM2 row of INLINEFORM3 similar to the formulations in BIBREF9 , BIBREF8 , BIBREF7 .
It is known that the optimal rank- INLINEFORM0 CP decomposition exists for symmetric nonnegative tensors such as the PPMI tensor BIBREF21 . However, finding such a decomposition is NP hard in general BIBREF22 so we must consider approximate methods.
In this work, we only consider the symmetric CP decomposition, leaving the study of other tensor decompositions (such as the Tensor Train or HOSVD BIBREF23 , BIBREF11 ) to future work.
Computing the Symmetric CP Decomposition
The INLINEFORM0 size of the third order PPMI tensor presents a number of computational challenges. In practice, INLINEFORM1 can vary from INLINEFORM2 to INLINEFORM3 , resulting in a tensor whose naive representation requires at least INLINEFORM4 bytes = 4 TB of floats. Even the sparse representation of the tensor takes up such a large fraction of memory that standard algorithms such as successive rank-1 approximation BIBREF12 , BIBREF24 and alternating least-squares BIBREF11 are infeasible for our uses. Thus, in this paper we will consider a stochastic online formulation similar to that of BIBREF25 .
We optimize the CP decomposition in an online fashion, using small random subsets INLINEFORM0 of the nonzero tensor entries to update the decomposition at time INLINEFORM1 . In this minibatch setting, we optimize the decomposition based on the current minibatch and the previous decomposition at time INLINEFORM2 . To update INLINEFORM3 (and thus the symmetric decomposition), we first define a decomposition loss INLINEFORM4 and minimize this loss with respect to INLINEFORM5 using Adam BIBREF26 .
At each time INLINEFORM0 , we take INLINEFORM1 to be all co-occurrence triples (weighted by PPMI) in a fixed number of sentences (around 1,000) from the corpus. We continue training until we have depleted the entire corpus.
For INLINEFORM0 to accurately model INLINEFORM1 , we also include a certain proportion of elements with zero PPMI (or “negative samples”) in INLINEFORM2 , similar to that of BIBREF10 . We use an empirically found proportion of negative samples for training, and leave discovery of the optimal negative sample proportion to future work.
Word Embedding Proposals
CP-S. The first embedding we propose is based on symmetric CP decomposition of the PPMI tensor INLINEFORM0 as discussed in the mathematical preliminaries section. The optimal setting for the word embedding INLINEFORM1 is: INLINEFORM2
Since we cannot feasibly compute this exactly, we minimize the loss function defined as the squared error between the values in INLINEFORM0 and their predicted values: INLINEFORM1
using the techniques discussed in the previous section.
JCP-S. A potential problem with CP-S is that it is only trained on third order information. To rectify this issue, we propose a novel joint tensor factorization problem we call Joint Symmetric Rank- INLINEFORM0 CP Decomposition. In this problem, the input is the fixed rank INLINEFORM1 and a list of supersymmetric tensors INLINEFORM2 of different orders but whose axis lengths all equal INLINEFORM3 . Each tensor INLINEFORM4 is to be factorized via rank- INLINEFORM5 symmetric CP decomposition using a single INLINEFORM6 factor matrix INLINEFORM7 .
To produce a solution, we first define the loss at time INLINEFORM0 to be the sum of the reconstruction losses of each different tensor: INLINEFORM1
where INLINEFORM0 is an INLINEFORM1 -dimensional supersymmetric PPMI tensor. We then minimize the loss with respect to INLINEFORM2 . Since we are using at most third order tensors in this work, we assign our word embedding INLINEFORM3 to be: INLINEFORM4
This problem is a specific instance of Coupled Tensor Decomposition, which has been studied in the past BIBREF27 , BIBREF28 . In this problem, the goal is to factorize multiple tensors using at least one factor matrix in common. A similar formulation to our problem can be found in BIBREF29 , which studies blind source separation using the algebraic geometric aspects of jointly factorizing numerous supersymmetric tensors (to unknown rank). In contrast to our work, they outline some generic rank properties of such a decomposition rather than attacking the problem numerically. Also, in our formulation the rank is fixed and an approximate solution must be found. Exploring the connection between the theoretical aspects of joint decomposition and quality of word embeddings would be an interesting avenue for future work.
To the best of our knowledge this is the first study of Joint Symmetric Rank- INLINEFORM0 CP Decomposition.
Shifted PMI
In the same way BIBREF9 considers factorization of positive shifted PMI matrices, we consider factorization of positive shifted PMI tensors INLINEFORM0 , where INLINEFORM1 for some constant shift INLINEFORM2 . We empirically found that different levels of shifting resulted in different qualities of word embeddings – the best shift we found for CP-S was a shift of INLINEFORM3 , whereas any nonzero shift for JCP-S resulted in a worse embedding across the board. When we discuss evaluation we report the results given by factorization of the PPMI tensors shifted by the best value we found for each specific embedding.
Computational notes
When considering going from two dimensions to three, it is perhaps necessary to discuss the computational issues in such a problem size increase. However, it should be noted that the creation of pre-trained embeddings can be seen as a pre-processing step for many future NLP tasks, so if the training can be completed once, it can be used forever thereafter without having to take training time into account. Despite this, we found that the training of our embeddings was not considerably slower than the training of order-2 equivalents such as SGNS. Explicitly, our GPU trained CBOW vectors (using the experimental settings found below) in 3568 seconds, whereas training CP-S and JCP-S took 6786 and 8686 seconds respectively.
Evaluation
In this section we present a quantitative evaluation comparing our embeddings to an informationless embedding and two strong baselines. Our baselines are:
For a fair comparison, we trained each model on the same corpus of 10 million sentences gathered from Wikipedia. We removed stopwords and words appearing fewer than 2,000 times (130 million tokens total) to reduce noise and uninformative words. Our word2vec and NNSE baselines were trained using the recommended hyperparameters from their original publications, and all optimizers used the default settings. Hyperparameters are always consistent across evaluations.
Because of the dataset size, the results shown should be considered a proof of concept rather than an objective comparison to state-of-the-art pre-trained embeddings. Due to the natural computational challenges arising from working with tensors, we leave creation of a full-scale production ready embedding based on tensor factorization to future work.
As is common in the literature BIBREF4 , BIBREF8 , we use 300-dimensional vectors for our embeddings and all word vectors are normalized to unit length prior to evaluation.
Quantitative tasks
Outlier Detection. The Outlier Detection task BIBREF0 is to determine which word in a list INLINEFORM0 of INLINEFORM1 words is unrelated to the other INLINEFORM2 which were chosen to be related. For each INLINEFORM3 , one can compute its compactness score INLINEFORM4 , which is the compactness of INLINEFORM5 . INLINEFORM6 is explicitly computed as the mean similarity of all word pairs INLINEFORM7 . The predicted outlier is INLINEFORM8 , as the INLINEFORM9 related words should form a compact cluster with high mean similarity.
We use the WikiSem500 dataset BIBREF30 which includes sets of INLINEFORM0 words per group gathered based on semantic similarity. Thus, performance on this task is correlated with the amount of semantic information encoded in a word embedding. Performance on this dataset was shown to be well-correlated with performance at the common NLP task of sentiment analysis BIBREF30 .
The two metrics associated with this task are accuracy and Outlier Position Percentage (OPP). Accuracy is the fraction of cases in which the true outlier correctly had the highest compactness score. OPP measures how close the true outlier was to having the highest compactness score, rewarding embeddings more for predicting the outlier to be in 2nd place rather than INLINEFORM0 when sorting the words by their compactness score INLINEFORM1 .
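The pairwise procedure can be sketched as follows for a single group of unit-length word vectors; the array layout and helper name are assumptions, and the returned score vector can also be used to derive OPP from the rank of the true outlier.
import numpy as np

def outlier_scores(W):
    # W: (n+1, d) array of unit-length word vectors for one detection group.
    # The compactness score of word w is the mean pairwise similarity of the
    # remaining n words; the predicted outlier is the argmax, since removing the
    # true outlier should leave the most compact (most similar) cluster.
    m = W.shape[0]
    scores = np.empty(m)
    for w in range(m):
        rest = np.delete(np.arange(m), w)
        S = W[rest] @ W[rest].T
        iu = np.triu_indices(len(rest), k=1)
        scores[w] = S[iu].mean()
    return scores

# predicted outlier: int(np.argmax(outlier_scores(W)));
# OPP follows from the position of the true outlier when sorting by these scores.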
3-way Outlier Detection. As our tensor-based embeddings encode higher order relationships between words, we introduce a new way to compute INLINEFORM0 based on groups of 3 words rather than pairs of words. We define the compactness score for a word INLINEFORM1 to be: INLINEFORM2
where INLINEFORM0 denotes similarity between a group of 3 vectors. INLINEFORM1 is defined as: INLINEFORM2
We call this evaluation method OD3.
The purpose of OD3 is to evaluate the extent to which an embedding captures 3rd order relationships between words. As we will see in the results of our quantitative experiments, our tensor methods outperform the baselines on OD3, which validates our approach.
This approach can easily be generalized to OD INLINEFORM0 INLINEFORM1 , but again we leave the study of higher order relationships to future work.
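As a concrete but hypothetical reading of OD3, the sketch below assumes the 3-way similarity is the trilinear product of unit-length vectors and that the compactness score of a word is the mean 3-way similarity over all 3-word groups drawn from the remaining words, in direct analogy to the pairwise case; both choices are assumptions, since the exact formulas are elided in this text.
import numpy as np
from itertools import combinations

def sim3(u, v, w):
    # Assumed 3-way similarity: trilinear product of unit-length vectors.
    return float(np.sum(u * v * w))

def od3_scores(W):
    # Assumed aggregation: the OD3 score of word w is the mean sim3 over all
    # 3-word groups drawn from the remaining words; the predicted outlier is
    # again the word with the highest score.
    m = W.shape[0]
    scores = np.empty(m)
    for w in range(m):
        rest = [W[i] for i in range(m) if i != w]
        scores[w] = np.mean([sim3(a, b, c) for a, b, c in combinations(rest, 3)])
    return scores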
Simple supervised tasks. BIBREF5 points out that the primary application of word embeddings is transfer learning to NLP tasks. They argue that to evaluate an embedding's ability to transfer information to a relevant task, one must measure how accessible the embedding's information is to actual downstream tasks. To do so, one reports the performance of simple supervised tasks as the training set size increases, as is commonly done in transfer learning evaluation BIBREF5. If an algorithm using a word embedding performs well with just a small amount of training data, then the information encoded in the embedding is easily accessible.
The simple supervised downstream tasks we use to evaluate the embeddings are as follows:
Supervised Analogy Recovery. We consider the task of solving queries of the form a : b :: c : ? using a simple neural network as suggested in BIBREF5 . The analogy dataset we use is from the Google analogy testbed BIBREF4 .
Sentiment analysis. We also consider sentiment analysis as described by BIBREF31 . We use the suggested Large Movie Review dataset BIBREF32 , containing 50,000 movie reviews.
All code is implemented using scikit-learn or TensorFlow and uses the suggested train/test split.
Word similarity. To standardize our evaluation methodology, we evaluate the embeddings using word similarity on the common MEN and MTurk datasets BIBREF33 , BIBREF34 . For an overview of word similarity evaluation, see BIBREF31 .
Quantitative results
Outlier Detection results. The results are shown in Table TABREF20 . The first thing to note is that CP-S outperforms the other methods across each Outlier Detection metric. Since the WikiSem500 dataset is semantically focused, performance at this task demonstrates the quality of semantic information encoded in our embeddings.
On OD2, the baselines perform more competitively with our CP Decomposition based models, but when OD3 is considered our methods clearly excel. Since the tensor-based methods are trained directly on third order information and perform much better at OD3, we see that OD3 scores reflect the amount of third order information in a word embedding. This is a validation of OD3, as our 3rd order embeddings would naturally outperform 2nd order embeddings at a task that requires third order information. Still, the superiority of our tensor-based embeddings at OD2 demonstrates the quality of the semantic information they encode.
Supervised analogy results. The results are shown in Figure FIGREF18 . At the supervised semantic analogy task, CP-S vastly outperforms the baselines at all levels of training data, further signifying the amount of semantic information encoded by this embedding technique.
Also, when only 10% of the training data is presented, our tensor methods are the only ones that attain nonzero performance – even in such a limited data setting, use of CP-S's vectors results in nearly 40% accuracy. This phenomenon is also observed in the syntactic analogy tasks: our embeddings consistently outperform the others until 100% of the training data is presented. These two observations demonstrate the accessibility of the information encoded in our word embeddings. We can thus conclude that this relational information encoded in the tensor-based embeddings is more easily accessible than that of CBOW and NNSE. Thus, our methods would likely be better suited for transfer learning to actual NLP tasks, particularly those in data-sparse settings.
Sentiment analysis results. The results are shown in Figure FIGREF19. In this task, JCP-S is the dominant method across all levels of training data, but the difference is more obvious when training data is limited. This again indicates that for this specific task the information encoded by our tensor-based methods is more readily available than that of the baselines. It is thus evident that exploiting both second and third order co-occurrence data leads to higher quality semantic information being encoded in the embedding. At this point it is not clear why JCP-S so vastly outperforms CP-S at this task, but its superiority to the other strong baselines demonstrates the quality of information encoded by JCP-S. This discrepancy is also illustrative of the fact that there is no single “best word embedding” BIBREF5 – different embeddings encode different types of information, and thus should be used where they shine rather than for every NLP task.
Word Similarity results.
We show the results in Table TABREF21 . As we can see, our embeddings very clearly outperform the random embedding at this task. They even outperform CBOW on both of these datasets. It is worth including these results as the word similarity task is a very common way of evaluating embedding quality in the literature. However, due to the many intrinsic problems with evaluating word embeddings using word similarity BIBREF35 , we do not discuss this further.
Multiplicative Compositionality
We find that even though they are not explicitly trained to do so, our tensor-based embeddings capture polysemy information naturally through multiplicative compositionality. We demonstrate this property qualitatively and provide proper motivation for it, leaving automated utilization to future work.
In our tensor-based embeddings, we found that one can create a vector that represents a word INLINEFORM0 in the context of another word INLINEFORM1 by taking the elementwise product INLINEFORM2 . We call INLINEFORM3 a “meaning vector” for the polysemous word INLINEFORM4 .
For example, consider the word star, which can denote a lead performer or a celestial body. We can create a vector for star in the “lead performer” sense by taking the elementwise product INLINEFORM0 . This produces a vector that lies near vectors for words related to lead performers and far from those related to star's other senses.
To motivate why this works, recall that the values in a third order PPMI tensor INLINEFORM0 are given by: INLINEFORM1
where INLINEFORM0 is the word vector for INLINEFORM1 . If words INLINEFORM2 have a high PPMI, then INLINEFORM3 will also be high, meaning INLINEFORM4 will be close to INLINEFORM5 in the vector space by cosine similarity.
For example, even though galaxy is likely to appear in the context of the word star in the “celestial body” sense, INLINEFORM0 PPMI(star, actor, galaxy) is low whereas INLINEFORM1 PPMI(star, actor, drama) is high. Thus, INLINEFORM2 represents the meaning of star in the “lead performer” sense.
In Table TABREF22 we present the nearest neighbors of multiplicative and additive composed vectors for a variety of polysemous words. As we can see, the words corresponding to the nearest neighbors of the composed vectors for our tensor methods are semantically related to the intended sense both for multiplicative and additive composition. In contrast, for CBOW, only additive composition yields vectors whose nearest neighbors are semantically related to the intended sense. Thus, our embeddings can produce complementary sets of polysemous word representations that are qualitatively valid whereas CBOW (seemingly) only guarantees meaningful additive compositionality. We leave automated usage of this property to future work.
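A small sketch of how such meaning vectors can be formed and inspected, assuming an embedding matrix E of shape (vocabulary size, dimension) and a word-to-index dictionary vocab; both names are illustrative.
import numpy as np

def meaning_vector(E, vocab, word, context):
    # Elementwise product of two word vectors gives a sense-specific vector,
    # e.g. meaning_vector(E, vocab, "star", "actor") for the lead-performer sense.
    return E[vocab[word]] * E[vocab[context]]

def nearest_neighbors(E, vocab, q, k=5):
    ivocab = {i: w for w, i in vocab.items()}
    En = E / (np.linalg.norm(E, axis=1, keepdims=True) + 1e-8)
    qn = q / (np.linalg.norm(q) + 1e-8)
    return [ivocab[int(i)] for i in np.argsort(-(En @ qn))[:k]]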
Conclusion
Our key contributions are as follows:
Tensor factorization appears to be a highly applicable and effective tool for learning word embeddings, with many areas of potential future work. Leveraging higher order data in training word embeddings is useful for encoding new types of information and semantic relationships compared to models that are trained using only pairwise data. This indicates that such techniques will prove useful for training word embeddings to be used in downstream NLP tasks. | Yes |
6ee27ab55b1f64783a9e72e3f83b7c9ec5cc8073 | 6ee27ab55b1f64783a9e72e3f83b7c9ec5cc8073_0 | Q: What datasets are experimented with?
Text: Introduction
Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion —synthesis paradigm BIBREF1. First, a high quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 is utilized to extract different acoustic features, such as spectral features and fundamental frequency (F0). These features are converted separately, and a waveform synthesizer finally generates the converted waveform using the converted features. Past VC studies have focused on the conversion of spectral features while only applying a simple linear transformation to F0. In addition, the conversion is usually performed frame-by-frame, i.e, the converted speech and the source speech are always of the same length. To summarize, the conversion of prosody, including F0 and duration, is overly simplified in the current VC literature.
This is where sequence-to-sequence (seq2seq) models BIBREF4 can play a role. Modern seq2seq models, often equipped with an attention mechanism BIBREF5, BIBREF6 to implicitly learn the alignment between the source and output sequences, can generate outputs of various lengths. This ability makes the seq2seq model a natural choice to convert duration in VC. In addition, the F0 contour can also be converted by considering F0 explicitly (e.g, forming the input feature sequence by concatenating the spectral and F0 sequences) BIBREF7, BIBREF8, BIBREF9 or implicitly (e.g, using mel spectrograms as the input feature) BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Seq2seq VC can further be applied to accent conversion BIBREF13, where the conversion of prosody plays an important role.
Existing seq2seq VC models are based on either recurrent neural networks (RNNs) BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 or convolutional neural networks (CNNs) BIBREF9. In recent years, the Transformer architecture BIBREF16 has been shown to perform efficiently BIBREF17 in various speech processing tasks such as automatic speech recognition (ASR) BIBREF18, speech translation (ST) BIBREF19, BIBREF20, and text-to-speech (TTS) BIBREF21. On the basis of attention mechanism solely, the Transformer enables parallel training by avoiding the use of recurrent layers, and provides a receptive field that spans the entire input by using multi-head self-attention rather than convolutional layers. Nonetheless, the above-mentioned speech applications that have successfully utilized the Transformer architecture all attempted to find a mapping between text and acoustic feature sequences. VC, in contrast, attempts to map between acoustic frames, whose high time resolution introduces challenges regarding computational memory cost and accurate attention learning.
Despite the promising results, seq2seq VC models suffer from two major problems. First, seq2seq models usually require a large amount of training data, although a large-scale parallel corpus, i.e, pairs of speech samples with identical linguistic contents uttered by both source and target speakers, is impractical to collect. Second, as pointed out in BIBREF11, the converted speech often suffers from mispronunciations and other instability problems such as repeated phonemes and skipped phonemes. Several techniques have been proposed to address these issues. In BIBREF10 a pretrained ASR module was used to extract phonetic posteriorgrams (PPGs) as an extra clue, whereas PPGs were solely used as the input in BIBREF13. The use of context preservation loss and guided attention loss BIBREF22 to stabilize training has also been proposed BIBREF8, BIBREF9. Multitask learning and data augmentation were incorporated in BIBREF11 using additional text labels to improve data efficiency, and linguistic and speaker representations were disentangled in BIBREF12 to enable nonparallel training, thus removing the need for a parallel corpus. In BIBREF15 a large hand-transcribed corpus was used to generate artificial training data from a TTS model for a many-to-one (normalization) VC model, where multitask learning was also used.
One popular means of dealing with the problem of limited training data is transfer learning, where knowledge from massive, out-of-domain data is utilized to aid learning in the target domain. Recently, TTS systems, especially neural seq2seq models, have enjoyed great success owing to the vast large-scale corpus contributed by the community. We argue that lying at the core of these TTS models is the ability to generate effective intermediate representations, which facilitates correct attention learning that bridges the encoder and the decoder. Transfer learning from TTS has been successfully applied to tasks such as speaker adaptation BIBREF23, BIBREF24, BIBREF25, BIBREF26. In BIBREF27 the first attempt to apply this technique to VC was made by bootstrapping a nonparallel VC system from a pretrained speaker-adaptive TTS model.
In this work, we propose a novel yet simple pretraining technique to transfer knowledge from learned TTS models. To transfer the core ability, i.e, the generation and utilization of fine representations, knowledge from both the encoder and the decoder is needed. Thus, we pretrain them in separate steps: first, the decoder is pretrained by using a large-scale TTS corpus to train a conventional TTS model. The TTS training ensures a well-trained decoder that can generate high-quality speech with the correct hidden representations. As the encoder must be pretrained to encode input speech into hidden representations that can be recognized by the decoder, we train the encoder in an autoencoder style with the pretrained decoder fixed. This is carried out using a simple reconstruction loss. We demonstrate that the VC model initialized with the above pretrained model parameters can generate high-quality, highly intelligible speech even with very limited training data.
Our contributions in this work are as follows:
We apply the Transformer network to VC. To our knowledge, this is the first work to investigate this combination.
We propose a TTS pretraining technique for VC. The pretraining process provides a prior for fast, sample-efficient VC model learning, thus reducing the data size requirement and training time. In this work, we verify the effectiveness of this scheme by transferring knowledge from Transformer-based TTS models to a Transformer-based VC model.
Background ::: Sequence-to-sequence speech synthesis
Seq2seq models are used to find a mapping between a source feature sequence $\vec{x}_{1:n}=(\vec{x}_1, \cdots , \vec{x}_n)$ and a target feature sequence $\vec{y}_{1:m}=(\vec{y}_1, \cdots , \vec{y}_m)$ which do not necessarily have to be of the same length, i.e, $n \ne m$. Most seq2seq models have an encoder—decoder structure BIBREF4, where advanced ones are equipped with an attention mechanism BIBREF5, BIBREF6. First, an encoder ($\text{Enc}$) maps $\vec{x}_{1:n}$ into a sequence of hidden representations $\vec{h}_{1:n}=(\vec{h}_1, \cdots , \vec{h}_n)$. The decoding of the output sequence is autoregressive, which means that the previously generated symbols are considered an additional input at each decoding time step. To decode an output feature $\vec{y}_t$, a weighted sum of $\vec{h}_{1:n}$ first forms a context vector $\vec{c}_t$, where the weight vector is represented by a calculated attention probability vector $\vec{a}_t=(a^{(1)}_t, \cdots , a^{(n)}_t)$. Each attention probability $a^{(k)}_t$ can be thought of as the importance of the hidden representation $\vec{h}_k$ at the $t$th time step. Then the decoder ($\text{Dec}$) uses the context vector $\vec{c}_t$ and the previously generated features $\vec{y}_{1:t-1}=(\vec{y}_1, \cdots , \vec{y}_{t-1})$ to decode $\vec{y}_t$. Note that both the calculation of the attention vector and the decoding process take the previous hidden state of the decoder $\vec{q}_{t-1}$ as the input. The above-mentioned procedure can be formulated as follows:
$\vec{h}_{1:n} = \text{Enc}(\vec{x}_{1:n}),$
$\vec{a}_t = \text{attention}(\vec{q}_{t-1}, \vec{h}_{1:n}),$
$\vec{c}_t = \sum _{k=1}^{n} a^{(k)}_t \vec{h}_k,$
$\vec{y}_t, \vec{q}_t = \text{Dec}(\vec{y}_{1:t-1}, \vec{q}_{t-1}, \vec{c}_t).$
As pointed out in BIBREF27, BIBREF28, TTS and VC are similar since the output in both tasks is a sequence of acoustic features. In such seq2seq speech synthesis tasks, it is a common practice to employ a linear layer to further project the decoder output to the desired dimension. During training, the model is optimized via backpropagation using an L1 or L2 loss.
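As an illustration of the attention step above, the following NumPy sketch computes the attention probabilities and context vector for one decoding step with randomly generated encoder states; dot-product scoring is used here only for simplicity, whereas the models discussed below rely on learned (multi-head) attention.
import numpy as np

rng = np.random.default_rng(0)
n, d = 120, 256                      # number of source frames, hidden size (illustrative)
H = rng.standard_normal((n, d))      # encoder hidden representations h_{1:n}
q_prev = rng.standard_normal(d)      # previous decoder hidden state q_{t-1}

scores = H @ q_prev                  # attention energies (dot-product scoring, one simple choice)
a_t = np.exp(scores - scores.max())
a_t /= a_t.sum()                     # attention probabilities a_t^{(1)}, ..., a_t^{(n)}
c_t = a_t @ H                        # context vector c_t = sum_k a_t^{(k)} h_k
# c_t, together with the previously generated frames y_{1:t-1}, is then fed to the
# decoder to produce y_t and the next hidden state q_t.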
Background ::: Transformer-based text-to-speech synthesis
In this subsection we describe the Transformer-based TTS system proposed in BIBREF21, which we will refer to as Transformer-TTS. Transformer-TTS is a combination of the Transformer BIBREF16 architecture and the Tacotron 2 BIBREF29 TTS system.
We first briefly introduce the Transformer model BIBREF16. The Transformer relies solely on a so-called multi-head self-attention module that learns sequential dependences by jointly attending to information from different representation subspaces. The main body of Transformer-TTS resembles the original Transformer architecture, which, as in any conventional seq2seq model, consists of an encoder stack and a decoder stack that are composed of $L$ encoder layers and $L$ decoder layers, respectively. An encoder layer contains a multi-head self-attention sublayer followed by a positionwise fully connected feedforward network. A decoder layer, in addition to the two sub-layers in the encoder layer, contains a third sub-layer, which performs multi-head attention over the output of the encoder stack. Each layer is equipped with residual connections and layer normalization. Finally, since no recurrent relation is employed, sinusoidal positional encoding BIBREF30 is added to the inputs of the encoder and decoder so that the model can be aware of information about the relative or absolute position of each element.
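For reference, the standard sinusoidal positional encoding can be computed as follows (a sketch assuming an even model dimension); in Transformer-TTS these encodings are additionally scaled by trainable weights, as described below.
import numpy as np

def sinusoidal_positional_encoding(T, d):
    # PE[t, 2i] = sin(t / 10000^(2i/d)),  PE[t, 2i+1] = cos(t / 10000^(2i/d)); d assumed even.
    pos = np.arange(T)[:, None]
    i = np.arange(0, d, 2)[None, :]
    angles = pos / np.power(10000.0, i / d)
    pe = np.zeros((T, d))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe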
The model architecture of Transformer-TTS is depicted in Figure FIGREF2. Since the Transformer architecture was originally designed for machine translation, several changes have been made to the architecture in BIBREF21 to make it compatible in the TTS task. First, as in Tacotron 2, prenets are added to the encoder and decoder sides. Since the text space and the acoustic feature space are different, the positional embeddings are employed with corresponding trainable weights to adapt to the scale of each space. In addition to the linear projection to predict the output acoustic feature, an extra linear layer is added to predict the stop token BIBREF29. A weighted binary cross-entropy loss is used so that the model can learn when to stop decoding. As a common practice in recent TTS models, a five-layer CNN postnet predicts a residual to refine the final prediction.
In this work, our implementation is based on the open-source ESPnet-TTS BIBREF31, BIBREF26, where the encoder prenet is discarded and the guided attention loss is applied BIBREF22 to partial heads in partial decoder layers BIBREF17.
Voice Transformer Network
In this section we describe the combination of Transformer and seq2seq VC. Our proposed model, called the Voice Transformer Network (VTN), is largely based on Transformer-TTS introduced in Section SECREF6. Our model consumes the source log-mel spectrogram and outputs the converted log-mel spectrogram. As pointed out in Section SECREF5, TTS and VC respectively encode text and acoustic features to decode acoustic features. Therefore, we make a very simple modification to the TTS model, which is to replace the embedding lookup layer in the encoder with a linear projection layer, as shown in Figure FIGREF2. Although more complicated networks can be employed, we found that this simple design is sufficient to generate satisfying results. The rest of the model architecture as well as the training process remains the same as that for Transformer-TTS.
An important trick we found to be useful here is to use a reduction factor in both the encoder and the decoder for accurate attention learning. In seq2seq TTS, since the time resolution of acoustic features is usually much larger than that of the text input, a reduction factor $r_d$ is commonly used on the decoder side BIBREF32, where multiple stacked frames are decoded at each time step. On the other hand, although the input and output of VC are both acoustic features, the high time resolution (about 100 frames per second) not only makes attention learning difficult but also increases the training memory footprint. While pyramid RNNs were used to reduce the time resolution in BIBREF10, here we simply introduce an encoder reduction factor $r_e$, where adjacent frames are stacked to reduce the time axis. We found that this not only leads to better attention alignment but also reduces the training memory footprint by half and subsequently the number of required gradient accumulation steps BIBREF26.
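Frame stacking for the encoder reduction factor can be sketched as follows; how leftover frames are handled (simply dropped here) is an implementation choice.
import numpy as np

def stack_frames(X, r):
    # X: (T, d) acoustic frames; stack r adjacent frames so the sequence becomes
    # (T // r, r * d), shortening the time axis by the reduction factor r.
    T, d = X.shape
    T = (T // r) * r                 # drop any leftover frames
    return X[:T].reshape(T // r, r * d)

# e.g. with r_e = r_d = 2 as above: stack_frames(np.zeros((100, 80)), 2).shape == (50, 160)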
Proposed training strategy with text-to-speech pretraining
We present a text-to-speech pretraining technique that enables fast, sample-efficient training without introducing additional modification or loss to the original model structure or training loss. Assume that, in addition to a small, parallel VC dataset $\vec{D}_{\text{VC}}=\lbrace \vec{S}_{\text{src}}, \vec{S}_{\text{trg}}\rbrace $, access to a large single-speaker TTS corpus $\vec{D}_{\text{TTS}}=\lbrace \vec{T}_{\text{TTS}}, \vec{S}_{\text{TTS}}\rbrace $ is also available. $\vec{S}_{\text{src}}, \vec{S}_{\text{trg}}$ denote the source, target speech respectively, and $\vec{T}_{\text{TTS}}, \vec{S}_{\text{TTS}}$ denote the text and speech of the TTS speaker respectively. Our setup is highly flexible in that we do not require any of the speakers to be the same, nor any of the sentences between the VC and TTS corpus to be parallel. We employ a two-stage training procedure, where in the first stage we use $\vec{D}_{\text{TTS}}$ to learn the initial parameters as a prior, and then use $\vec{D}_{\text{VC}}$ to adapt to the VC model in the second stage. As argued in Section SECREF1, the ability to generate fine-grained hidden representations $\vec{H}$ is the key to a good VC model, so our goal is to find a set of prior model parameters to train the final encoder $\text{Enc}^{\text{S}}_{\text{VC}}$ and decoder $\text{Dec}^{\text{S}}_{\text{VC}}$. The overall procedure is depicted in Figure FIGREF7.
Proposed training strategy with text-to-speech pretraining ::: Decoder pretraining
The decoder pretraining is as simple as training a conventional TTS model using $\vec{D}_{\text{TTS}}$. Since text itself contains pure linguistic information, the text encoder $\text{Enc}^{\text{T}}_{\text{TTS}}$ here is ensured to learn to encode an effective hidden representation that can be consumed by the decoder $\text{Dec}^{\text{S}}_{\text{TTS}}$. Furthermore, by leveraging the large-scale corpus, the decoder is expected to be more robust by capturing various speech features, such as articulation and prosody.
Proposed training strategy with text-to-speech pretraining ::: Encoder pretraining
A well pretrained encoder should be capable of encoding acoustic features into hidden representations that are recognizable by the pretrained decoder. With this goal in mind, we train an autoencoder whose decoder is the one pretrained in Section SECREF9 and kept fixed during training. The desired pretrained encoder $\text{Enc}^{\text{S}}_{\text{TTS}}$ can be obtained by minimizing the reconstruction loss of $\vec{S}_{\text{TTS}}$. As the decoder pretraining process described in Section SECREF9 takes a hidden representation encoded from text as the input, fixing it in the encoder pretraining process guarantees the encoder to behave similarly to the text encoder $\text{Enc}^{\text{T}}_{\text{TTS}}$, which is to extract fine-grained, linguistic-information-rich representations.
Proposed training strategy with text-to-speech pretraining ::: VC model training
Finally, using $\vec{D}_{\text{VC}}$, we train the desired VC models, with the encoder and decoder initialized with $\text{Enc}^{\text{S}}_{\text{TTS}}$ and $\text{Dec}^{\text{S}}_{\text{TTS}}$ pretrained in Section SECREF10 and Section $\ref {ssec:dpt}$, respectively. The pretrained model parameters serve as a very good prior to adapt to the relatively scarce VC data, as we will show later. Also, compared with training from scratch, the model takes less than half the training time to converge with the pretraining scheme, enabling extremely efficient training.
Experimental evaluation ::: Experimental settings
We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long.
The entire implementation was carried out on the open-source ESPnet toolkit BIBREF26, BIBREF31, including feature extraction, training and benchmarking. We extracted 80-dimensional mel spectrograms with 1024 FFT points and a 256 point frame shift. The base settings for the TTS model and training follow the Transformer.v1 configuration in BIBREF26, and we made minimal modifications to it for VC. The reduction factors $r_e, r_d$ are both 2 in all VC models. For the waveform synthesis module, we used Parallel WaveGAN (PWG) BIBREF35, which is a non-autoregressive variant of the WaveNet vocoder BIBREF36, BIBREF37 and enables parallel, faster than real-time waveform generation. Since speaker-dependent neural vocoders outperform speaker-independent ones BIBREF38, we trained a speaker-dependent PWG by conditioning on natural mel spectrograms using the full training data of slt. Our goal here is to demonstrate the effectiveness of our proposed method, so we did not train separate PWGs for different training sizes of the TTS/VC model used, although target speaker adaptation with limited data in VC can be used BIBREF39.
We carried out two types of objective evaluations between the converted speech and the ground truth: the mel cepstrum distortion (MCD), a commonly used measure of spectral distortion in VC, and the character error rate (CER) as well as the word error rate (WER), which estimate the intelligibility of the converted speech. We used the WORLD vocoder BIBREF2 to extract 24-dimensional mel cepstrum coefficients with a 5 ms frame shift, and calculated the distortion of nonsilent, time-aligned frame pairs. The ASR engine is based on the Transformer architecture BIBREF18 and is trained using the LibriSpeech dataset BIBREF40. The CER and WER for the ground-truth evaluation set of slt were 0.9% and 3.8%, respectively. We also reported the ASR results of the TTS model adapted on different sizes of slt training data in Table TABREF8, which can be regarded as upper bounds.
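For clarity, the MCD between time-aligned mel cepstrum sequences follows the standard formula sketched below; silence removal and frame alignment are assumed to have been performed beforehand, as described above.
import numpy as np

def mel_cepstral_distortion(C_ref, C_conv):
    # C_ref, C_conv: (T, D) time-aligned mel cepstrum frames (0th/energy coefficient
    # excluded), e.g. the 24-dimensional coefficients described above.
    diff = C_ref - C_conv
    frame_dist = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(frame_dist))                 # average MCD in dB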
Experimental evaluation ::: Effectiveness of TTS pretraining
To evaluate the importance and the effectiveness of each pretraining scheme we proposed, we conducted a systematic comparison between different training processes and different sizes of training data. The objective results are in Table TABREF8. First, when the network was trained from scratch without any pretraining, the performance was not satisfactory even with the full training set. With decoder pretraining, a performance boost in MCD was obtained, whereas the ASR results were similar. Nonetheless, as we reduced the training size, the performance dropped dramatically, a similar trend to that reported in BIBREF12. Finally, by incorporating encoder pretraining, the model exhibited a significant improvement in all objective measures, where the effectiveness was robust against the reduction in the size of training data. Note that in the clb-slt conversion pair, our proposed method showed the potential to achieve extremely impressive ASR results comparable to the TTS upper bound.
Experimental evaluation ::: Comparison with baseline method
Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features.
The objective evaluation results of the baseline are reported in Table TABREF8. For the different sizes of training data, our system not only consistently outperformed the baseline method but also remained robust, whereas the performance of the baseline method dropped dramatically as the size of training data was reduced. This proves that our proposed method can improve data efficiency as well as pronunciation. We also observed that when trained from scratch, our VTN model had a similar MCD and inferior ASR performance compared with the baseline. As the ATTS2S employed an extra mechanism to stabilize training, this result may indicate the superiority of using the Transformer architecture over RNNs. We leave rigorous investigation for future work.
Systemwise subjective tests on naturalness and conversion similarity were also conducted to evaluate the perceptual performance. For naturalness, participants were asked to evaluate the naturalness of the speech by the mean opinion score (MOS) test on a five-point scale. For conversion similarity, each listener was presented with a natural speech sample of the target speaker and a converted sample, and asked to judge whether they were produced by the same speaker, along with the confidence of the decision, i.e., sure or not sure. Ten non-native English speakers were recruited.
Table TABREF14 shows the subjective results on the evaluation set. First, with the full training set, our proposed VTN model significantly outperformed the baseline ATTS2S by over one point for naturalness and 30% for similarity. Moreover, when trained with 80 utterances, our proposed method showed only a slight drop in performance, and was still superior to the baseline method. This result justifies the effectiveness of our method and also showed that the pretraining technique can greatly increase data efficiency without severe performance degradation.
Finally, one interesting finding is that the VTN trained with the full training set also outperformed the adapted TTS model, while the VTN with limited data exhibited comparable performance. Considering that the TTS models in fact obtained good ASR results, we suspect that the VC-generated speech could benefit from encoding the prosody information from the source speech. In contrast, the lack of prosodic clues in the linguistic input in TTS reduced the naturalness of the generated speech.
Conclusion
In this work, we successfully applied the Transformer structure to seq2seq VC. Also, to address the problems of data efficiency and mispronunciation in seq2seq VC, we proposed the transfer of knowledge from easily accessible, large-scale TTS corpora by initializing the VC models with pretrained TTS models. A two-stage training strategy that pretrains the decoder and the encoder subsequently ensures that fine-grained intermediate representations are generated and fully utilized. Objective and subjective evaluations showed that our pretraining scheme can greatly improve speech intelligibility, and it significantly outperformed an RNN-based seq2seq VC baseline. Even with limited training data, our system can be successfully trained without significant performance degradation. In the future, we plan to more systematically examine the effectiveness of the Transformer architecture compared with RNN-based models. Extension of our pretraining methods to more flexible training conditions, such as nonparallel training BIBREF12, BIBREF27, is also an important future task.
Acknowledgements
This work was supported in part by JST PRESTO Grant Number JPMJPR1657 and JST CREST Grant Number JPMJCR19A3, Japan. | the CMU ARCTIC database BIBREF33, the M-AILABS speech dataset BIBREF34 |
bb4de896c0fa4bf3c8c43137255a4895f52abeef | bb4de896c0fa4bf3c8c43137255a4895f52abeef_0 | Q: What is the baseline model?
Text: Introduction
Voice conversion (VC) aims to convert the speech from a source to that of a target without changing the linguistic content BIBREF0. Conventional VC systems follow an analysis—conversion —synthesis paradigm BIBREF1. First, a high quality vocoder such as WORLD BIBREF2 or STRAIGHT BIBREF3 is utilized to extract different acoustic features, such as spectral features and fundamental frequency (F0). These features are converted separately, and a waveform synthesizer finally generates the converted waveform using the converted features. Past VC studies have focused on the conversion of spectral features while only applying a simple linear transformation to F0. In addition, the conversion is usually performed frame-by-frame, i.e, the converted speech and the source speech are always of the same length. To summarize, the conversion of prosody, including F0 and duration, is overly simplified in the current VC literature.
This is where sequence-to-sequence (seq2seq) models BIBREF4 can play a role. Modern seq2seq models, often equipped with an attention mechanism BIBREF5, BIBREF6 to implicitly learn the alignment between the source and output sequences, can generate outputs of various lengths. This ability makes the seq2seq model a natural choice to convert duration in VC. In addition, the F0 contour can also be converted by considering F0 explicitly (e.g, forming the input feature sequence by concatenating the spectral and F0 sequences) BIBREF7, BIBREF8, BIBREF9 or implicitly (e.g, using mel spectrograms as the input feature) BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15. Seq2seq VC can further be applied to accent conversion BIBREF13, where the conversion of prosody plays an important role.
Existing seq2seq VC models are based on either recurrent neural networks (RNNs) BIBREF7, BIBREF8, BIBREF10, BIBREF11, BIBREF12, BIBREF13, BIBREF14, BIBREF15 or convolutional neural networks (CNNs) BIBREF9. In recent years, the Transformer architecture BIBREF16 has been shown to perform efficiently BIBREF17 in various speech processing tasks such as automatic speech recognition (ASR) BIBREF18, speech translation (ST) BIBREF19, BIBREF20, and text-to-speech (TTS) BIBREF21. On the basis of attention mechanism solely, the Transformer enables parallel training by avoiding the use of recurrent layers, and provides a receptive field that spans the entire input by using multi-head self-attention rather than convolutional layers. Nonetheless, the above-mentioned speech applications that have successfully utilized the Transformer architecture all attempted to find a mapping between text and acoustic feature sequences. VC, in contrast, attempts to map between acoustic frames, whose high time resolution introduces challenges regarding computational memory cost and accurate attention learning.
Despite the promising results, seq2seq VC models suffer from two major problems. First, seq2seq models usually require a large amount of training data, although a large-scale parallel corpus, i.e, pairs of speech samples with identical linguistic contents uttered by both source and target speakers, is impractical to collect. Second, as pointed out in BIBREF11, the converted speech often suffers from mispronunciations and other instability problems such as repeated phonemes and skipped phonemes. Several techniques have been proposed to address these issues. In BIBREF10 a pretrained ASR module was used to extract phonetic posteriorgrams (PPGs) as an extra clue, whereas PPGs were solely used as the input in BIBREF13. The use of context preservation loss and guided attention loss BIBREF22 to stabilize training has also been proposed BIBREF8, BIBREF9. Multitask learning and data augmentation were incorporated in BIBREF11 using additional text labels to improve data efficiency, and linguistic and speaker representations were disentangled in BIBREF12 to enable nonparallel training, thus removing the need for a parallel corpus. In BIBREF15 a large hand-transcribed corpus was used to generate artificial training data from a TTS model for a many-to-one (normalization) VC model, where multitask learning was also used.
One popular means of dealing with the problem of limited training data is transfer learning, where knowledge from massive, out-of-domain data is utilized to aid learning in the target domain. Recently, TTS systems, especially neural seq2seq models, have enjoyed great success owing to the vast large-scale corpus contributed by the community. We argue that lying at the core of these TTS models is the ability to generate effective intermediate representations, which facilitates correct attention learning that bridges the encoder and the decoder. Transfer learning from TTS has been successfully applied to tasks such as speaker adaptation BIBREF23, BIBREF24, BIBREF25, BIBREF26. In BIBREF27 the first attempt to apply this technique to VC was made by bootstrapping a nonparallel VC system from a pretrained speaker-adaptive TTS model.
In this work, we propose a novel yet simple pretraining technique to transfer knowledge from learned TTS models. To transfer the core ability, i.e, the generation and utilization of fine representations, knowledge from both the encoder and the decoder is needed. Thus, we pretrain them in separate steps: first, the decoder is pretrained by using a large-scale TTS corpus to train a conventional TTS model. The TTS training ensures a well-trained decoder that can generate high-quality speech with the correct hidden representations. As the encoder must be pretrained to encode input speech into hidden representations that can be recognized by the decoder, we train the encoder in an autoencoder style with the pretrained decoder fixed. This is carried out using a simple reconstruction loss. We demonstrate that the VC model initialized with the above pretrained model parameters can generate high-quality, highly intelligible speech even with very limited training data.
Our contributions in this work are as follows:
We apply the Transformer network to VC. To our knowledge, this is the first work to investigate this combination.
We propose a TTS pretraining technique for VC. The pretraining process provides a prior for fast, sample-efficient VC model learning, thus reducing the data size requirement and training time. In this work, we verify the effectiveness of this scheme by transferring knowledge from Transformer-based TTS models to a Transformer-based VC model.
Background ::: Sequence-to-sequence speech synthesis
Seq2seq models are used to find a mapping between a source feature sequence $\vec{x}_{1:n}=(\vec{x}_1, \cdots , \vec{x}_n)$ and a target feature sequence $\vec{y}_{1:m}=(\vec{y}_1, \cdots , \vec{y}_m)$ which do not necessarily have to be of the same length, i.e, $n \ne m$. Most seq2seq models have an encoder—decoder structure BIBREF4, where advanced ones are equipped with an attention mechanism BIBREF5, BIBREF6. First, an encoder ($\text{Enc}$) maps $\vec{x}_{1:n}$ into a sequence of hidden representations $\vec{h}_{1:n}=(\vec{h}_1, \cdots , \vec{h}_n)$. The decoding of the output sequence is autoregressive, which means that the previously generated symbols are considered an additional input at each decoding time step. To decode an output feature $\vec{y}_t$, a weighted sum of $\vec{h}_{1:n}$ first forms a context vector $\vec{c}_t$, where the weight vector is represented by a calculated attention probability vector $\vec{a}_t=(a^{(1)}_t, \cdots , a^{(n)}_t)$. Each attention probability $a^{(k)}_t$ can be thought of as the importance of the hidden representation $\vec{h}_k$ at the $t$th time step. Then the decoder ($\text{Dec}$) uses the context vector $\vec{c}_t$ and the previously generated features $\vec{y}_{1:t-1}=(\vec{y}_1, \cdots , \vec{y}_{t-1})$ to decode $\vec{y}_t$. Note that both the calculation of the attention vector and the decoding process take the previous hidden state of the decoder $\vec{q}_{t-1}$ as the input. The above-mentioned procedure can be formulated as follows:
$\vec{h}_{1:n} = \text{Enc}(\vec{x}_{1:n}),$
$\vec{a}_t = \text{attention}(\vec{q}_{t-1}, \vec{h}_{1:n}),$
$\vec{c}_t = \sum _{k=1}^{n} a^{(k)}_t \vec{h}_k,$
$\vec{y}_t, \vec{q}_t = \text{Dec}(\vec{y}_{1:t-1}, \vec{q}_{t-1}, \vec{c}_t).$
As pointed out in BIBREF27, BIBREF28, TTS and VC are similar since the output in both tasks is a sequence of acoustic features. In such seq2seq speech synthesis tasks, it is a common practice to employ a linear layer to further project the decoder output to the desired dimension. During training, the model is optimized via backpropagation using an L1 or L2 loss.
Background ::: Transformer-based text-to-speech synthesis
In this subsection we describe the Transformer-based TTS system proposed in BIBREF21, which we will refer to as Transformer-TTS. Transformer-TTS is a combination of the Transformer BIBREF16 architecture and the Tacotron 2 BIBREF29 TTS system.
We first briefly introduce the Transformer model BIBREF16. The Transformer relies solely on a so-called multi-head self-attention module that learns sequential dependences by jointly attending to information from different representation subspaces. The main body of Transformer-TTS resembles the original Transformer architecture, which, as in any conventional seq2seq model, consists of an encoder stack and a decoder stack that are composed of $L$ encoder layers and $L$ decoder layers, respectively. An encoder layer contains a multi-head self-attention sublayer followed by a positionwise fully connected feedforward network. A decoder layer, in addition to the two sub-layers in the encoder layer, contains a third sub-layer, which performs multi-head attention over the output of the encoder stack. Each layer is equipped with residual connections and layer normalization. Finally, since no recurrent relation is employed, sinusoidal positional encoding BIBREF30 is added to the inputs of the encoder and decoder so that the model can be aware of information about the relative or absolute position of each element.
The model architecture of Transformer-TTS is depicted in Figure FIGREF2. Since the Transformer architecture was originally designed for machine translation, several changes have been made to the architecture in BIBREF21 to make it compatible in the TTS task. First, as in Tacotron 2, prenets are added to the encoder and decoder sides. Since the text space and the acoustic feature space are different, the positional embeddings are employed with corresponding trainable weights to adapt to the scale of each space. In addition to the linear projection to predict the output acoustic feature, an extra linear layer is added to predict the stop token BIBREF29. A weighted binary cross-entropy loss is used so that the model can learn when to stop decoding. As a common practice in recent TTS models, a five-layer CNN postnet predicts a residual to refine the final prediction.
In this work, our implementation is based on the open-source ESPnet-TTS BIBREF31, BIBREF26, where the encoder prenet is discarded and the guided attention loss is applied BIBREF22 to partial heads in partial decoder layers BIBREF17.
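The guided attention penalty of BIBREF22 can be sketched as follows; the width parameter g and the heads and layers it is applied to are configuration choices rather than values taken from this text.
import numpy as np

def guided_attention_weights(N, T, g=0.2):
    # W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)); multiplying an N x T attention
    # matrix elementwise by W and taking the mean penalizes attention that strays
    # far from the roughly diagonal alignment expected in TTS and VC.
    n = np.arange(N)[:, None] / N
    t = np.arange(T)[None, :] / T
    return 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))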
Voice Transformer Network
In this section we describe the combination of Transformer and seq2seq VC. Our proposed model, called the Voice Transformer Network (VTN), is largely based on Transformer-TTS introduced in Section SECREF6. Our model consumes the source log-mel spectrogram and outputs the converted log-mel spectrogram. As pointed out in Section SECREF5, TTS and VC respectively encode text and acoustic features to decode acoustic features. Therefore, we make a very simple modification to the TTS model, which is to replace the embedding lookup layer in the encoder with a linear projection layer, as shown in Figure FIGREF2. Although more complicated networks can be employed, we found that this simple design is sufficient to generate satisfying results. The rest of the model architecture as well as the training process remains the same as that for Transformer-TTS.
An important trick we found to be useful here is to use a reduction factor in both the encoder and the decoder for accurate attention learning. In seq2seq TTS, since the time resolution of acoustic features is usually much larger than that of the text input, a reduction factor $r_d$ is commonly used on the decoder side BIBREF32, where multiple stacked frames are decoded at each time step. On the other hand, although the input and output of VC are both acoustic features, the high time resolution (about 100 frames per second) not only makes attention learning difficult but also increases the training memory footprint. While pyramid RNNs were used to reduce the time resolution in BIBREF10, here we simply introduce an encoder reduction factor $r_e$, where adjacent frames are stacked to reduce the time axis. We found that this not only leads to better attention alignment but also reduces the training memory footprint by half and subsequently the number of required gradient accumulation steps BIBREF26.
Proposed training strategy with text-to-speech pretraining
We present a text-to-speech pretraining technique that enables fast, sample-efficient training without introducing additional modification or loss to the original model structure or training loss. Assume that, in addition to a small, parallel VC dataset $\vec{D}_{\text{VC}}=\lbrace \vec{S}_{\text{src}}, \vec{S}_{\text{trg}}\rbrace $, access to a large single-speaker TTS corpus $\vec{D}_{\text{TTS}}=\lbrace \vec{T}_{\text{TTS}}, \vec{S}_{\text{TTS}}\rbrace $ is also available. $\vec{S}_{\text{src}}, \vec{S}_{\text{trg}}$ denote the source, target speech respectively, and $\vec{T}_{\text{TTS}}, \vec{S}_{\text{TTS}}$ denote the text and speech of the TTS speaker respectively. Our setup is highly flexible in that we do not require any of the speakers to be the same, nor any of the sentences between the VC and TTS corpus to be parallel. We employ a two-stage training procedure, where in the first stage we use $\vec{D}_{\text{TTS}}$ to learn the initial parameters as a prior, and then use $\vec{D}_{\text{VC}}$ to adapt to the VC model in the second stage. As argued in Section SECREF1, the ability to generate fine-grained hidden representations $\vec{H}$ is the key to a good VC model, so our goal is to find a set of prior model parameters to train the final encoder $\text{Enc}^{\text{S}}_{\text{VC}}$ and decoder $\text{Dec}^{\text{S}}_{\text{VC}}$. The overall procedure is depicted in Figure FIGREF7.
Proposed training strategy with text-to-speech pretraining ::: Decoder pretraining
The decoder pretraining is as simple as training a conventional TTS model using $\vec{D}_{\text{TTS}}$. Since text itself contains pure linguistic information, the text encoder $\text{Enc}^{\text{T}}_{\text{TTS}}$ here is ensured to learn to encode an effective hidden representation that can be consumed by the decoder $\text{Dec}^{\text{S}}_{\text{TTS}}$. Furthermore, by leveraging the large-scale corpus, the decoder is expected to be more robust by capturing various speech features, such as articulation and prosody.
Proposed training strategy with text-to-speech pretraining ::: Encoder pretraining
A well pretrained encoder should be capable of encoding acoustic features into hidden representations that are recognizable by the pretrained decoder. With this goal in mind, we train an autoencoder whose decoder is the one pretrained in Section SECREF9 and kept fixed during training. The desired pretrained encoder $\text{Enc}^{\text{S}}_{\text{TTS}}$ can be obtained by minimizing the reconstruction loss of $\vec{S}_{\text{TTS}}$. As the decoder pretraining process described in Section SECREF9 takes a hidden representation encoded from text as the input, fixing it in the encoder pretraining process guarantees the encoder to behave similarly to the text encoder $\text{Enc}^{\text{T}}_{\text{TTS}}$, which is to extract fine-grained, linguistic-information-rich representations.
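A toy PyTorch sketch of this training setup is given below; the two small feedforward networks merely stand in for the Transformer encoder and decoder, and the random tensors stand in for the TTS speaker's log-mel spectrograms, since the point here is only that the decoder is kept fixed while the encoder is updated with a reconstruction loss. The actual decoder is autoregressive and teacher-forced, which this simplification omits.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 256))
decoder = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 80))

for p in decoder.parameters():
    p.requires_grad = False                      # keep the TTS-pretrained decoder fixed

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
for step in range(100):
    mel = torch.randn(8, 100, 80)                # stand-in for log-mel spectrogram batches
    recon = decoder(encoder(mel))
    loss = torch.nn.functional.l1_loss(recon, mel)   # simple reconstruction loss on S_TTS
    opt.zero_grad()
    loss.backward()
    opt.step()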
Proposed training strategy with text-to-speech pretraining ::: VC model training
Finally, using $\vec{D}_{\text{VC}}$, we train the desired VC models, with the encoder and decoder initialized with $\text{Enc}^{\text{S}}_{\text{TTS}}$ and $\text{Dec}^{\text{S}}_{\text{TTS}}$ pretrained in Section SECREF10 and Section $\ref {ssec:dpt}$, respectively. The pretrained model parameters serve as a very good prior to adapt to the relatively scarce VC data, as we will show later. Also, compared with training from scratch, the model takes less than half the training time to converge with the pretraining scheme, enabling extremely efficient training.
Experimental evaluation ::: Experimental settings
We conducted our experiments on the CMU ARCTIC database BIBREF33, which contains parallel recordings of professional US English speakers sampled at 16 kHz. One female (slt) was chosen as the target speaker and one male (bdl) and one female (clb) were chosen as sources. We selected 100 utterances each for validation and evaluation, and the other 932 utterances were used as training data. For the TTS corpus, we chose a US female English speaker (judy bieber) from the M-AILABS speech dataset BIBREF34 to train a single-speaker Transformer-TTS model. With the sampling rate also at 16 kHz, the training set contained 15,200 utterances, which were roughly 32 hours long.
The entire implementation was carried out on the open-source ESPnet toolkit BIBREF26, BIBREF31, including feature extraction, training and benchmarking. We extracted 80-dimensional mel spectrograms with 1024 FFT points and a 256 point frame shift. The base settings for the TTS model and training follow the Transformer.v1 configuration in BIBREF26, and we made minimal modifications to it for VC. The reduction factors $r_e, r_d$ are both 2 in all VC models. For the waveform synthesis module, we used Parallel WaveGAN (PWG) BIBREF35, which is a non-autoregressive variant of the WaveNet vocoder BIBREF36, BIBREF37 and enables parallel, faster than real-time waveform generation. Since speaker-dependent neural vocoders outperform speaker-independent ones BIBREF38, we trained a speaker-dependent PWG by conditioning on natural mel spectrograms using the full training data of slt. Our goal here is to demonstrate the effectiveness of our proposed method, so we did not train separate PWGs for different training sizes of the TTS/VC model used, although target speaker adaptation with limited data in VC can be used BIBREF39.
We carried out two types of objective evaluations between the converted speech and the ground truth: the mel cepstrum distortion (MCD), a commonly used measure of spectral distortion in VC, and the character error rate (CER) as well as the word error rate (WER), which estimate the intelligibility of the converted speech. We used the WORLD vocoder BIBREF2 to extract 24-dimensional mel cepstrum coefficients with a 5 ms frame shift, and calculated the distortion of nonsilent, time-aligned frame pairs. The ASR engine is based on the Transformer architecture BIBREF18 and is trained using the LibriSpeech dataset BIBREF40. The CER and WER for the ground-truth evaluation set of slt were 0.9% and 3.8%, respectively. We also reported the ASR results of the TTS model adapted on different sizes of slt training data in Table TABREF8, which can be regarded as upper bounds.
Experimental evaluation ::: Effectiveness of TTS pretraining
To evaluate the importance and the effectiveness of each pretraining scheme we proposed, we conducted a systematic comparison between different training processes and different sizes of training data. The objective results are in Table TABREF8. First, when the network was trained from scratch without any pretraining, the performance was not satisfactory even with the full training set. With decoder pretraining, a performance boost in MCD was obtained, whereas the ASR results were similar. Nonetheless, as we reduced the training size, the performance dropped dramatically, a similar trend to that reported in BIBREF12. Finally, by incorporating encoder pretraining, the model exhibited a significant improvement in all objective measures, where the effectiveness was robust against the reduction in the size of training data. Note that in the clb-slt conversion pair, our proposed method showed the potential to achieve extremely impressive ASR results comparable to the TTS upper bound.
Experimental evaluation ::: Comparison with baseline method
Next, we compared our VTN model with an RNN-based seq2seq VC model called ATTS2S BIBREF8. This model is based on the Tacotron model BIBREF32 with the help of context preservation loss and guided attention loss to stabilize training and maintain linguistic consistency after conversion. We followed the configurations in BIBREF8 but used mel spectrograms instead of WORLD features.
The objective evaluation results of the baseline are reported in Table TABREF8. For the different sizes of training data, our system not only consistently outperformed the baseline method but also remained robust, whereas the performance of the baseline method dropped dramatically as the size of training data was reduced. This proves that our proposed method can improve data efficiency as well as pronunciation. We also observed that when trained from scratch, our VTN model had a similar MCD and inferior ASR performance compared with the baseline. As the ATTS2S employed an extra mechanism to stabilize training, this result may indicate the superiority of using the Transformer architecture over RNNs. We leave rigorous investigation for future work.
Systemwise subjective tests on naturalness and conversion similarity were also conducted to evaluate the perceptual performance. For naturalness, participants rated the speech in a mean opinion score (MOS) test on a five-point scale. For conversion similarity, each listener was presented with a natural speech sample of the target speaker and a converted sample, and asked to judge whether they were produced by the same speaker, indicating the confidence of the decision, i.e., sure or not sure. Ten non-native English speakers were recruited.
Table TABREF14 shows the subjective results on the evaluation set. First, with the full training set, our proposed VTN model significantly outperformed the baseline ATTS2S by over one point for naturalness and 30% for similarity. Moreover, when trained with 80 utterances, our proposed method showed only a slight drop in performance, and was still superior to the baseline method. This result justifies the effectiveness of our method and also showed that the pretraining technique can greatly increase data efficiency without severe performance degradation.
Finally, one interesting finding is that the VTN trained with the full training set also outperformed the adapted TTS model, while the VTN with limited data exhibited comparable performance. Considering that the TTS models in fact obtained good ASR results, we suspect that the VC-generated speech could benefit from encoding the prosody information from the source speech. In contrast, the lack of prosodic clues in the linguistic input in TTS reduced the naturalness of the generated speech.
Conclusion
In this work, we successfully applied the Transformer structure to seq2seq VC. Also, to address the problems of data efficiency and mispronunciation in seq2seq VC, we proposed the transfer of knowledge from easily accessible, large-scale TTS corpora by initializing the VC models with pretrained TTS models. A two-stage training strategy that pretrains the decoder and the encoder subsequently ensures that fine-grained intermediate representations are generated and fully utilized. Objective and subjective evaluations showed that our pretraining scheme can greatly improve speech intelligibility, and it significantly outperformed an RNN-based seq2seq VC baseline. Even with limited training data, our system can be successfully trained without significant performance degradation. In the future, we plan to more systematically examine the effectiveness of the Transformer architecture compared with RNN-based models. Extension of our pretraining methods to more flexible training conditions, such as nonparallel training BIBREF12, BIBREF27, is also an important future task.
Acknowledgements
This work was supported in part by JST PRESTO Grant Number JPMJPR1657 and JST CREST Grant Number JPMJCR19A3, Japan. | a RNN-based seq2seq VC model called ATTS2S based on the Tacotron model |
d9eacd965bbdc468da522e5e6fe7491adc34b93b | d9eacd965bbdc468da522e5e6fe7491adc34b93b_0 | Q: What model do they train?
Text: Introduction
Social media are increasingly being used in the scientific community as a key source of data to help understand diverse natural and social phenomena, and this has prompted the development of a wide range of computational data mining tools that can extract knowledge from social media for both post-hoc and real time analysis. Thanks to the availability of a public API that enables the cost-free collection of a significant amount of data, Twitter has become a leading data source for such studies BIBREF0 . Having Twitter as a new kind of data source, researchers have looked into the development of tools for real-time trend analytics BIBREF1 , BIBREF2 or early detection of newsworthy events BIBREF3 , as well as into analytical approaches for understanding the sentiment expressed by users towards a target BIBREF4 , BIBREF5 , BIBREF6 , or public opinion on a specific topic BIBREF7 . However, Twitter data lacks reliable demographic details that would enable a representative sample of users to be collected and/or a focus on a specific user subgroup BIBREF8 , or other specific applications such as helping establish the trustworthiness of information posted BIBREF9 . Automated inference of social media demographics would be useful, among others, to broaden demographically aware social media analyses that are conducted through surveys BIBREF10 . One of the missing demographic details is a user's country of origin, which we study here. The only option then for the researcher is to try to infer such demographic characteristics before attempting the intended analysis.
This has motivated a growing body of research in recent years looking at different ways of determining automatically the user's country of origin and/or – as a proxy for the former – the location from which tweets have been posted BIBREF11 . Most of the previous research in inferring tweet geolocation has classified tweets by location within a limited geographical area or country; these cannot be applied directly to an unfiltered stream where tweets from any location or country will be observed. The few cases that have dealt with a global collection of tweets have used an extensive set of features that cannot realistically be extracted in a real-time, streaming context (e.g., user tweeting history or social networks) BIBREF12 , and have been limited to a selected set of global cities as well as to English tweets. This means they use ground truth labels to pre-filter tweets originating from other regions and/or written in languages other than English. The classifier built on this pre-filtered dataset may not be applicable to a Twitter stream where every tweet needs to be geolocated. An ability to classify tweets by location in real-time is crucial for applications exploiting social media updates as social sensors that enable tracking topics and learning about location-specific trending topics, emerging events and breaking news. Specific applications of a real-time, country-level tweet geolocation system include country-specific trending topic detection or tracking sentiment towards a topic broken down by country. To the best of our knowledge, our work is the first to deal with global tweets in any language, using only those features present within the content of a tweet and its associated metadata. We also complement previous work by investigating the extent to which a classifier trained on historical tweets can be used effectively on newly harvested tweets.
Motivated by the need to develop an application to identify the trending topics within a specific country, here we document the development of a classifier that can geolocate tweets by country of origin in real-time. Given that within this scenario it is not feasible to collect additional data to that readily available from the Twitter stream BIBREF12 , we explore the usefulness of eight tweet-inherent features, all of which are readily available from a tweet object as retrieved from the Twitter API, for determining its geolocation. We perform classification using each of the features alone, but also in feature combinations. We explore the ability to perform the classification on as many as 217 countries, or in a reduced subset of the top 25 countries, as judged by tweet volume. The use of two datasets, collected in October 2014 and October 2015, gives additional insight into whether historical Twitter data can be used to classify new instances of tweets. These two datasets with over 5 million country-coded tweets are publicly available.
Our methodology enables us to perform a thorough analysis of tweet geolocation, revealing insights into the best approaches for an accurate country-level location classifier for tweets. We find that the use of a single feature like content, which is the most commonly used feature in previous work, does not suffice for an accurate classification of users by country and that the combination of multiple features leads to substantial improvement, outperforming the state-of-the-art real-time tweet geolocation classifier; this improvement is particularly manifest when using metadata like the user's self-reported location as well as the user's real name. We also perform a per-country analysis for the top 25 countries in terms of tweet volume, exploring how different features lead to optimal classification for different countries, as well as discussing limitations when dealing with some of the most challenging countries. We show that country-level classification of an unfiltered Twitter stream is challenging. It requires careful design of a classifier that uses an appropriate combination of features. Our results at the country level are promising enough in the case of numerous countries, encouraging further research into finer-grained geolocation of global tweets. Cases where country-level geolocation is more challenging include English and Spanish speaking countries, which are harder to distinguish due to their numerous commonalities. Still, our experiments show that we can achieve F1 scores above 80% in many of these cases given the choice of an appropriate combination of features, as well as an overall performance above 80% in terms of both micro-accuracy and macro-accuracy for the top 25 countries.
Related Work
A growing body of research deals with the automated inference of demographic details of Twitter users BIBREF8 . Researchers have attempted to infer attributes of Twitter users such as age BIBREF13 , BIBREF14 , gender BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 , political orientation BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 or a range of social identities BIBREF22 . Digging more deeply into the demographics of Twitter users, other researchers have attempted to infer socioeconomic demographics such as occupational class BIBREF23 , income BIBREF24 and socioeconomic status BIBREF25 . Work by Huang et al. BIBREF26 has also tried to infer the nationality of users; this work is different from that which we report here in that the country where the tweets were posted from, was already known.
What motivates the present study is the increasing interest in inferring the geographical location of either tweets or Twitter users BIBREF11 . The automated inference of tweet location has been studied for different purposes, ranging from data journalism BIBREF27 , BIBREF9 to public health BIBREF28 . As well as numerous different techniques, researchers have relied on different settings and pursued different objectives when conducting experiments. Table TABREF2 shows a summary of previous work reported in the scientific literature, outlining the features that each study used to classify tweets by location, the geographic scope of the study, the languages they dealt with, the classification granularity they tried to achieve and used for evaluation, and whether single tweets, aggregated multiple tweets and/or user history were used to train the classifier.
Most of the previous studies on automated geolocation of tweets have assumed that the tweet stream includes only tweets from a specific country. The majority of these studies have focused on the United States, classifying tweets either at a city or state level. One of the earliest studies is that by Cheng et al. BIBREF30 , who introduced a probabilistic, content-based approach that identifies the most representative words of each of the major cities in the USA; these words are then used to classify new tweets. They incorporate different techniques to filter words, such as local and state-level filtering, classifying up to 51% of Twitter users accurately within a 100 mile radius. Their approach, however, relies on making use of the complete history of a user, and was tested only for users with at least 1,000 tweets in their timeline.
Most of the other studies documented in the literature have also relied on tweet content, using different techniques such as topic modelling to find locally relevant keywords that reveal a user's likely location BIBREF34 , BIBREF35 , BIBREF30 , BIBREF44 , BIBREF41 , BIBREF45 , BIBREF47 , BIBREF43 , BIBREF37 . Another widely used technique relies on the social network that a user is connected to, in order to infer a user's location from that of their followers and followees BIBREF36 , BIBREF37 , BIBREF38 . While the approaches summarised will work well for certain applications, retrieving the tweet history for each user or the profile information of all of a user's followers and followees is not feasible in a real-time scenario. Hence, in this context, a classifier needs to deal with the additional challenge of having to rely only on the information that can be extracted from a single tweet.
Only a handful of studies have relied solely on the content of a single tweet to infer its location BIBREF33 , BIBREF39 , BIBREF29 , BIBREF40 , BIBREF46 , BIBREF32 , BIBREF31 . Again, most of these have actually worked on very restricted geographical areas, with tweets being limited to different regions, such as the United States BIBREF29 , BIBREF31 , four different cities BIBREF40 , and New York only BIBREF39 . Bo et al. BIBREF33 did focus on a broader geographical area, including 3.7k cities all over the world. Nevertheless, their study focused on a limited number of cities, disregarding other locations, and only classified tweets written in English.
When it comes to geolocation classification granularity, the majority of studies have aimed at city-level classification. While this provides fine-grained classification of tweets, it also means that a limited number of cities can be considered, ignoring other cities and towns. Only Han et al. BIBREF41 and Dredze et al. BIBREF12 perform country-level classification, although they also restricted themselves to English language tweets posted from a limited number of cities. This means that tweets posted from cities other than the ones under consideration are removed from the stream, as are tweets written in other languages. In our study, we take as input the stream of tweets with content originating from any country and in any language, i.e. the entire tweet stream, to classify, at the country-level, each tweet according to its origin.
To date, the work by Han et al. BIBREF41 is the most relevant to our new study. They conducted a comprehensive study on how Twitter users can be geolocated by using different features of tweets. They analysed how location indicative words from a user's aggregated tweets can be used to geolocate the user. However, this requires collecting a user's history of tweets, which is not realistic in our real-time scenario. They also looked at how some metadata from tweets can be leveraged for classification, achieving slight improvements in performance, but again this is for a user's aggregated history. Finally, they looked at the temporality of tweets, using an old model to classify new tweets, finding that new tweets are more difficult to classify. This is an insightful study, which also motivates some of the settings and selection of classifiers in our own study; however, while an approach based on location indicative words may be very useful when looking at a user's aggregated tweets, it is rather limited when – as in our case – relying on a single tweet per user. Instead, our analysis of different tweet features for geolocating a tweet is based solely on its attributes as retrieved from the Twitter API. Dredze et al. BIBREF12 followed an approach similar to ours when they looked at the utility of a model trained from past tweets, finding that the classification performance degrades for new tweets and that the trained model needs to be continually updated. Their study did not look into further details, such as whether some features are still useful for new tweets, however, and which our study analyses in more detail.
In summary, as far as we are aware, no previous work has dealt with the multiple features available within a tweet, as retrieved from the Twitter streaming API, to determine the location of a tweet posted from anywhere in the world. We look at the suitability of eight tweet features for this purpose, both singly and combined, and experiment on two datasets collected within different time frames to measure the usefulness of an old model on new tweets.
Datasets
For training our classifier, we rely on the most widely adopted approach for the collection of a Twitter dataset with tweets categorised by location. This involves using the Twitter API endpoint that returns a stream of geolocated tweets posted from within one or more specified geographic bounding boxes. In our study, we set this bounding box to be the whole world (i.e., [-180,-90,180,90]) in order to retrieve tweets worldwide. This way, we collected streams of global geolocated tweets for two different week long periods: 4-11 October, 2014 (TC2014) and 22-28 October, 2015 (TC2015). This led to the collection of 31.7 million tweets in 2014 and 28.8 million tweets in 2015, which we adapt for our purposes as explained below.
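The sketch below illustrates this collection setup with the older tweepy v3-style streaming interface; the underlying v1.1 statuses/filter endpoint has since been retired and the credentials are placeholders, so this is purely illustrative of the whole-world bounding-box filter used here.

```python
# Illustrative sketch of the collection setup, assuming the older tweepy
# v3-style streaming interface (the v1.1 statuses/filter endpoint has since
# been retired, so treat this purely as an illustration).
import json
import tweepy

class GeoListener(tweepy.StreamListener):
    def on_status(self, status):
        # Keep only tweets that carry exact coordinates.
        if status.coordinates is not None:
            with open("geotweets.jsonl", "a") as f:
                f.write(json.dumps(status._json) + "\n")

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")    # placeholders
stream = tweepy.Stream(auth, GeoListener())
# Bounding box covering the whole world, as described above.
stream.filter(locations=[-180, -90, 180, 90])
```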
Our raw datasets reflect the well-known fact that some Twitter users are far more prolific than others, which would introduce a bias in the evaluation if not dealt with. If our classifier has seen a user before, it is very likely that the user will tweet from the same country again. Hence, in order to ensure an unbiased evaluation of the tweet level classification, we de-duplicated users from our datasets, by randomly picking only one tweet from each user for TC2014. For TC2015, we also picked one tweet per user at random, but also removed users that were included in TC2014. This led to a collection of 4,155,763 geolocated tweets in TC2014 and 897,341 geolocated tweets in TC2015. 462,536 tweets were removed from the TC2015 dataset for belonging to users that also appeared in TC2014.
Having these tweets geolocated with the specific coordinates of the user's location, we then inferred the name of that location. For this, we used Nominatim, whose reverse geocoding feature enabled us to retrieve detailed information about the location pointed to by the coordinates given as input. From Nominatim's output, we made use of the country code in our experiments that aimed at country-level classification of tweets. As a result, we had all the tweets in TC2014 and TC2015 categorised by country, which we then used as the ground truth for our classification experiments. It is worthwhile noting that the distributions of countries in TC2014 and TC2015 correlate highly ( INLINEFORM0 ). This suggests that the distribution is stable and therefore we can focus our study on how useful models trained on different features remain for new tweets.
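A minimal sketch of this reverse-geocoding step, using geopy's Nominatim wrapper, is shown below; the paper does not specify its exact client, so the user_agent string and rate limiting are assumptions, and a self-hosted Nominatim instance would be more appropriate for millions of queries.

```python
# Minimal sketch of the reverse-geocoding step via geopy's Nominatim wrapper.
# user_agent and the rate limit are assumptions; a self-hosted Nominatim
# instance is preferable for large volumes of queries.
from geopy.geocoders import Nominatim
from geopy.extra.rate_limiter import RateLimiter

geolocator = Nominatim(user_agent="tweet-country-labelling")
reverse = RateLimiter(geolocator.reverse, min_delay_seconds=1)

def country_code(lat, lon):
    location = reverse((lat, lon), language="en")
    if location is None:
        return None
    return location.raw.get("address", {}).get("country_code")  # e.g. "us"
```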
The more than 5 million tweets in these two datasets are categorised into 217 different countries. It is worthwhile mentioning that, as one would expect, the resulting datasets are clearly imbalanced, where only a few countries account for most of the tweets. The first country by number of tweets is the United States (20.99%), followed by Indonesia (14.01%) and Turkey (8.50%). The 10 most prominent countries on Twitter in our datasets account for 72.98% of the tweets, while the 25 most prominent countries account for 90.22%. Figure FIGREF5 shows a heat map of popularity by country in our datasets.
The resulting datasets, both TC2014 and TC2015, are publicly available.
Country-Level Location Classification for Tweets
In this study, we define the country-level location classification task as one in which, given a single tweet as input, a classifier has to determine the country of origin of the tweet. We argue for the sole use of the content and metadata provided in a single tweet, which are accessible in a scenario where one wants to classify tweets by country in the tweet stream and in real-time. Most existing approaches have looked at the history of a Twitter user or the social network derivable from a user's followers and followees, which would not be feasible in our real-time scenario.
Classification Techniques
We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier. They were tested in two different settings, one without balancing the weights of the different classes and the other weighting each class as the inverse of its frequency in the training set; the latter was tested as a means of dealing with the highly imbalanced data. The selection of these classifiers is in line with those used in the literature, especially with those tested by Han et al. BIBREF41 . This experimentation led to the selection of the weighted Maximum Entropy (MaxEnt) classifier as the most accurate. In the interest of space and focus, we only present results for this classifier.
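A minimal sketch of the class-weighted MaxEnt setup is given below using scikit-learn's multinomial logistic regression; the paper does not name its implementation or hyperparameters, so these values are assumptions. The class_weight='balanced' option weights each class inversely to its training-set frequency, matching the weighting described above.

```python
# Sketch of a class-weighted MaxEnt classifier (multinomial logistic
# regression); implementation and hyperparameters are assumptions.
from sklearn.linear_model import LogisticRegression

maxent = LogisticRegression(
    multi_class="multinomial",
    solver="lbfgs",
    class_weight="balanced",   # inverse-frequency class weights
    max_iter=1000,
)
# X_train is a (sparse) feature matrix, y_train the country labels.
# maxent.fit(X_train, y_train)
# predictions = maxent.predict(X_test)
```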
Additionally, we compare our results with two baseline approaches. On the one hand, we used the Vowpal Wabbit classifier described by BIBREF12 , a state-of-the-art real-time tweet geolocation classifier. On the other hand, we made use of the GeoNames geographical database, a commonly used approach in the literature. The user location, a string optionally specified by users in their profile settings, can be used here as input to the GeoNames database, which will return a likely location translated from that string. GeoNames provides a list of the most likely locations for a given string, based on either relevance or population, from which we took the first element. While GeoNames can be very effective for certain location names that are easy to map, the use of this feature is limited to users who opt to specify a non-empty location string in their settings (67.1% in our datasets), and will fail with users whose location is not a valid country or city name (e.g., somewhere in the world). The location specified in the user's profile has been used before to infer a user's location, although it is known to lead to low recall BIBREF48 . Here, we used this approach, using a database to translate user locations as a baseline, and explored whether, how, and to what extent a classifier can outperform it. For this baseline approach, we query GeoNames with the location string specified by the user and pick the first option output by the service. To make a fairer comparison with our classifiers, since GeoNames will not be able to determine the location for users with an empty location field, we default GeoNames' prediction for those tweets to be the majority country, i.e., the United States. This decision favours the baseline by assigning the most likely country and is also in line with the baseline approaches used in previous work BIBREF41 .
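The GeoNames baseline can be sketched as follows with the public search web service; the exact client used in the paper is not specified, so the endpoint parameters shown (including the relevance ordering and the registered username) should be read as assumptions.

```python
# Minimal sketch of the GeoNames baseline: query the search web service with
# the user's self-reported location and take the first hit, defaulting to the
# majority country (US) when the field is empty or nothing is found.
# "demo" must be replaced by a registered GeoNames username.
import requests

def geonames_country(user_location, username="demo", orderby="relevance"):
    if not user_location or not user_location.strip():
        return "US"  # majority-country fallback, as in the baseline
    resp = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": user_location, "maxRows": 1,
                "orderby": orderby, "username": username},
        timeout=10,
    )
    hits = resp.json().get("geonames", [])
    return hits[0]["countryCode"] if hits else "US"
```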
Experiment Settings
Within the TC2014 dataset, we created 10 different random distributions of the tweets for cross-validation, each having 50% of the tweets for training, 25% for development and 25% for testing. The performance of the 10 runs on the test set was ultimately averaged to get the final performance value. The development set was used to determine the optimal parameters in each case, which were then used for the classification applied to the test set. In separate experiments, TC2015 was used as the test set, keeping the same subsets of TC2014 as training sets, to make the experiments comparable by using the same trained models and to assess the usefulness of year-old tweets for classifying new tweets.
We created eight different classifiers, each of which used one of the following eight features available from a tweet as retrieved from a stream of the Twitter API:
User location (uloc): This is the location the user specifies in their profile. While this feature might seem a priori useful, it is somewhat limited as this is a free text field that users can leave empty, input a location name that is ambiguous or has typos, or a string that does not match with any specific locations (e.g., “at home”). Looking at users' self-reported locations, Hecht et al. BIBREF49 found that 66% report information that can be translated, accurately or inaccurately, to a geographic location, with the other 34% being either empty or not geolocalisable.
User language (ulang): This is the user's self-declared user interface language. The interface language might be indicative of the user's country of origin; however, they might also have set up the interface in a different language, such as English, because it was the default language when they signed up or because the language of their choice is not available.
Timezone (tz): This indicates the time zone that the user has specified in their settings, e.g., “Pacific Time (US & Canada)”. When the user has specified an accurate time zone in their settings, it can be indicative of their country of origin; however, some users may have the default time zone in their settings, or they may use an equivalent time zone belonging to a different location (e.g., “Europe/London” for a user in Portugal). Also, Twitter's list of time zones does not include all countries.
Tweet language (tlang): The language in which a tweet is believed to be written is automatically detected by Twitter. It has been found to be accurate for major languages, but it leaves much to be desired for less widely used languages. Twitter's language identifier has also been found to struggle with multilingual tweets, where parts of a tweet are written in different languages BIBREF50 .
Offset (offset): This is the offset, with respect to UTC/GMT, that the user has specified in their settings. It is similar to the time zone, albeit more limited as it is shared with a number of countries.
User name (name): This is the name that the user specifies in their settings, which can be their real name, or an alternative name they choose to use. The name of a user can reveal, in some cases, their country of origin.
User description (description): This is a free text where a user can describe themselves, their interests, etc.
Tweet content (content): The text that forms the actual content of the tweet. The use of content has a number of caveats. One is that content might change over time, and therefore new tweets might discuss new topics that the classifiers have not seen before. Another caveat is that the content of the tweet might not be location-specific; in a previous study, Rakesh et al. BIBREF51 found that the content of only 289 out of 10,000 tweets was location-specific.
Figure FIGREF19 shows an example of a tweet and the eight features listed above. The features were treated in two different ways: the user location, name of the user, description and tweet content were represented using a bag of words approach, where each token represented a feature in the vector space model. The rest of the features, namely the user language, time zone, tweet language and offset, were represented by a single categorical value in the vector space model, given the limited number of values that the features can take. We used these eight features separately, as well as in different combinations with one another, in our experiments testing the ability to infer the country of origin of tweets. In separate experiments, we also append these features into single vectors to test different combinations of these features.
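A sketch of this feature representation is given below: bag-of-words vectors for the four free-text fields and one-hot vectors for the four categorical fields, concatenated into a single sparse matrix. The exact tokenisation used in the paper is not specified, so scikit-learn defaults are assumed.

```python
# Sketch of the feature representation: bag-of-words for the free-text
# fields, one-hot encoding for the categorical fields, sparse concatenation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder
from scipy.sparse import hstack

TEXT_FIELDS = ["uloc", "name", "description", "content"]
CATEGORICAL_FIELDS = ["ulang", "tz", "tlang", "offset"]

def build_feature_matrix(tweets):
    """tweets: list of dicts with one entry per feature name."""
    blocks, vectorizers = [], {}
    for field in TEXT_FIELDS:
        vec = CountVectorizer()
        blocks.append(vec.fit_transform((t.get(field) or "") for t in tweets))
        vectorizers[field] = vec
    enc = OneHotEncoder(handle_unknown="ignore")
    cats = [[str(t.get(f, "")) for f in CATEGORICAL_FIELDS] for t in tweets]
    blocks.append(enc.fit_transform(cats))
    return hstack(blocks).tocsr(), vectorizers, enc
```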
Evaluation
We report three different performance values for each of the experiments: micro-accuracy, macro-accuracy and mean squared error (MSE). The accuracy values are computed as the result of dividing all the correctly classified instances by all the instances in the test set. The micro-accuracy is computed for the test set as a whole. For macro-accuracy, we compute the accuracy for each specific country in the test set, which are then averaged to compute the overall macro-accuracy. While the micro-accuracy measures the actual accuracy in the whole dataset, the macro-accuracy penalises the classifier that performs well only for the majority classes and rewards, instead, classifiers that perform well across multiple categories. This is especially crucial in a case like ours where the categories are highly imbalanced.
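The two accuracy measures can be computed as in the following sketch, where macro-accuracy is the unweighted mean of the per-country accuracies (equivalently, macro-averaged recall).

```python
# Micro-accuracy: plain accuracy over the whole test set.
# Macro-accuracy: average of per-country accuracies, so small countries
# count as much as large ones.
import numpy as np

def micro_macro_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    micro = (y_true == y_pred).mean()
    per_country = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    macro = float(np.mean(per_country))
    return micro, macro
```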
The MSE is the average of the squared distance in kilometres between the predicted country and the actual, ground truth country, computed as

$$\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} d\big(\hat{c}_i, c_i\big)^2$$

where $N$ is the number of tweets in the test set, $\hat{c}_i$ and $c_i$ are the predicted and ground truth countries of the $i$-th tweet, and $d(\cdot,\cdot)$ is the distance in kilometres between their centroids.
In this computation, the distances between pairs of countries were calculated based on their centroids. We used the Countries of the World (COW) dataset produced by OpenGeonames.org to obtain the centroids of all countries. Having the latitude and longitude values of the centroids of all these countries, we then used the Haversine formula BIBREF52 , which accounts for the spherical shape of the Earth when computing the distance between two points and is often used as an acceptable approximation for distances on the Earth. The Haversine distance between two points on a sphere, each defined by its latitude and longitude, is computed as

$$d = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{\phi_2 - \phi_1}{2}\right) + \cos\phi_1 \cos\phi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right)}\right)$$

where $\phi_1$ and $\phi_2$ are the latitudes of point 1 and point 2, $\lambda_1$ and $\lambda_2$ are the longitudes of point 1 and point 2, and $R$ is the radius of the Earth, which is estimated to be 6,371 km.
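The sketch below implements the Haversine distance between two centroids and the resulting MSE over a set of predictions, with R = 6,371 km as above; the mapping from country codes to centroid coordinates is assumed to be given.

```python
# Haversine distance (km) between two centroids and the resulting MSE.
# `centroids` maps a country code to its (lat, lon) centroid.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine(lat1, lon1, lat2, lon2):
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def mse_km(predicted, actual, centroids):
    """predicted/actual: lists of country codes; centroids: code -> (lat, lon)."""
    sq = [haversine(*centroids[p], *centroids[a]) ** 2
          for p, a in zip(predicted, actual)]
    return sum(sq) / len(sq)
```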
Classification Results
In this section, we present results for different location classification experiments. First, we look at the performance of classifiers that use a single feature. Then, we present the results for classifiers combining multiple features. To conclude, we examine the results in more depth by looking at the performance by country, as well as error analysis.
Single Feature
Table TABREF24 shows the results for the classification on the TC2014 dataset with two different approaches using GeoNames, one based on population (the most populous city is chosen when there are different options for a name) and one based on relevance (the city name that most resembles the input string). In this dataset, 65.82% of the tweets have a non-empty string in the location field; for the rest of tweets, we pick the most popular country in the dataset as the output of the approach based on GeoNames. The table shows values of micro- and macro-accuracy.
There is no big difference between the two approaches based on GeoNames when we look at micro-accuracy. However, this accuracy is slightly better distributed across countries when we use the approach based on relevance, as can be seen from the macro-accuracy values. In what follows, we consider the relevance-based GeoNames approach as the baseline that solely relies on a database matching the user's profile location and compare with the use of classifiers that exploit additional features available in a tweet.
Table TABREF26 shows the classification results, each case making use of only one of the eight features under study. This table includes performance values when we applied the classifier on both datasets, TC2014 and TC2015. The additional column, “Diff.”, shows the relative difference in performance for each of these datasets, i.e., measuring the extent to which a model learned from the TC2014 dataset can still be applied to the TC2015 test set. Note that while higher values are desired for micro-accuracy and macro-accuracy, lower values are optimal for MSE.
If we look at the micro-accuracy scores, the results suggest that three approaches stand out over the rest. These are tweet content, tweet language and user language, which are the only three approaches to get a micro-accuracy score above 0.5. However, these three approaches leave much to be desired when we evaluate them based on macro-accuracy scores, and therefore they fail to balance the classification well. Instead, the users' self-reported location (user location) achieves the highest macro-accuracy scores, while micro-accuracy scores are only slightly lower. This is due to the fact that the classifier that only uses the user's profile location will be able to guess correctly a few cases for each country where users specify a correctly spelled, unambiguous location, but will fail to classify correctly the rest; hence the higher macro-accuracy is sensible according to these expectations. The MSE error rates suggest that tweet content and tweet language are the best in getting the most proximate classifications. We believe that this is due to the proximity of many countries that speak the same language (e.g., Germany and Austria, or Argentina and Chile), in which case the classifier that relies on tweet language or content will often choose a neighbouring country given the similarities they share in terms of topics and language. While most of these classifiers outperform the GeoNames baseline in terms of micro-accuracy, user location is the only feature to beat the baseline in terms of macro-accuracy. However, the small improvement over the baseline suggests that alternative approaches are needed for a better balanced classification performance.
Figure FIGREF25 shows a heat map with accuracy values of each of the features broken down by country. We observe the best distributed accuracy across countries is with the use of user location as a feature. However, other features are doing significantly better classifying tweets that belong to some of the major countries such as the USA (better classified by tweet language or user language), Russia (better classified by tweet language) or Brazil (better classified by tweet language, user name or tweet content). This emphasises the necessity to explore further the differences between each country's characteristics.
As we noted above, a remarkable characteristic of our datasets (and the reality of Twitter itself) is the high imbalance in the distribution of tweets across countries, where a few countries account for a large majority of the tweets and many countries in the tail account for very few tweets. The fact that the classifier has to determine which of the 217 countries a tweet belongs to substantially complicates the task. To quantify this, and to explore the ability to boost performance on the countries with highest presence, we also performed classification experiments on the top 25 countries. These top 25 countries account for as many as 90.22% of the tweets; consequently, being able to boost performance on these 25 countries, while assuming that the system will miss the rest, can make it a more achievable task where the overall performance gets improved.
To perform the classification on the top countries, we removed the tweets from countries that do not belong to the top 25 list from the training set. Including tweets from the remaining countries would add a noisy category to the training set, given the diversity of that new category. However, for obvious reasons, we cannot do the same for the test set. For the purposes of experimentation, we assign the rest of the tweets in the test set a different, 26th label, meaning that they belong to other countries. Our experiments on the top 25 countries will then have a training set with 25 categories to learn from and test sets with 26 categories, where the classifier will never predict the 26th category.
Table TABREF27 shows the results for the experiments on the top 25 countries. The overall tendency is very similar to that of the classifiers applied to all the countries in the world, with an expected overall boost in macro-accuracy values. However, we see a substantial improvement with the use of content as a feature, which now outperforms tweet language in micro-accuracy scores as well as user location in macro-accuracy scores. Tweet content actually becomes the best performing feature with the reduced set of 25 countries. Classification on a reduced subset of countries can substantially boost performance, even assuming that part of the dataset will be misclassified. In fact, classification on this optimised setting outperforms by far the baseline using GeoNames. Not only does the top performing feature, tweet content, improve its performance. Other features that performed poorly before, such as tweet language, time zone or user language, perform significantly better, also outperforming the GeoNames baseline. This further motivates our subsequent goal of studying combinations of features to further boost the performance of the classifier applied to the top 25 countries.
Feature Combinations
Having seen that different features give rise to gains in different ways, testing the performance of combinations of multiple features seemed like a wise option. We performed these combinations of features by appending the vectors for each of the features into a single vector. We tested all 255 possible combinations using the eight features under study. We only report the best performing combinations here in the interest of space and clarity.
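The exhaustive search over the 255 non-empty feature subsets can be sketched as follows; evaluate() is a placeholder standing in for training the weighted MaxEnt classifier on the stacked blocks and scoring it on the development set.

```python
# Sketch of the exhaustive search over the 255 non-empty feature combinations
# (2^8 - 1): each combination's blocks are stacked into one matrix and scored.
from itertools import combinations
from scipy.sparse import hstack

FEATURES = ["uloc", "ulang", "tz", "tlang", "offset", "name", "description", "content"]

def search_combinations(blocks, evaluate):
    """blocks: dict feature name -> sparse matrix; evaluate: matrix -> score."""
    results = {}
    for k in range(1, len(FEATURES) + 1):
        for combo in combinations(FEATURES, k):
            X = hstack([blocks[f] for f in combo])
            results[combo] = evaluate(X)
    return results
```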
Table TABREF29 shows the best combination in each case for the TC2014 and TC2015 datasets, as well as for the classifiers that consider all the countries in the datasets and only the top 25 countries. The table also shows the performance of the best single feature as well as the baseline classifier by BIBREF12 to facilitate comparison, as well as the improvement in performance when using a combination of features over that of a single feature. We observe that the selection of an appropriate combination of features can actually lead to substantial improvements in all three measures: micro-accuracy, macro-accuracy and MSE. These improvements are especially remarkable when we look at the MSE scores, where the improvement is always above 50%. Improvements in terms of micro-accuracy and macro-accuracy scores are also always above 20%, but are especially high for micro-accuracy (50%+) when we classify for all the countries, and for macro-accuracy (40%+) when we classify for the top 25 countries. These results suggest that the use of a single feature, as is the case with most previous work using, e.g., only tweet content, can be substantially improved by using more features. In fact, our results suggest that the combination of many features is usually best; we need to combine seven of the eight features (all but offset) in three of the cases, and six features in the other case (all but description and offset). As a result, we get performance values above 85% in terms of macro-accuracy for the top 25 countries. These performance scores are also remarkably higher than those of the classifier by BIBREF12 , both in terms of micro- and macro-accuracy.
Interestingly, the combination of features has led to a significant improvement in performance, with a better balance across countries. To complement this analysis, we believe it is important to understand the differences among countries. Will different sets of features be useful for an accurate classification for each country? Are we perhaps doing very well for some countries with certain combinations, but that combination, is in turn, bad for other countries? To explore this further, we now take a closer look at the performance broken down by country.
Breakdown of Countries
Given the remarkable differences among countries we observed (Figure FIGREF25 ) when exploring how different features are useful for different countries, we take a closer look at the performance of different classifiers for each of the top 25 countries. As we are now looking at each country separately, we use precision, recall and F1 scores as more appropriate evaluation measures that better capture the extent to which a country's tweets are being correctly categorised. We look at the best combination of features for each country in terms of F1 score and analyse the set of features that lead to the best performance in each case. We show the results of this analysis in Table TABREF31 .
The results show that very different approaches lead to optimal results for each country, revealing the different features that characterise each country. One striking observation we make from the ranking of country accuracies is that seven of the top eight ranking countries have unique characteristics, especially when it comes to language; except for the USA, these countries have a language that is not shared with any other country in the list. Interestingly, the best approach for most of these countries include either or both of tweet language or user language. When it comes to user language, this means that users in these countries have a strong inclination towards setting the user interface in their own language instead of the default language. In the case of tweet language, this mainly reflects a combination of two things, one being that users in these countries tend to tweet mostly in their own language, while the other is that Twitter's language identifier is very accurate in these cases. Further down in the list, we see the Spanish and English speaking countries, which seem to be harder to classify because of the numerous commonalities with one another, both in terms of language as well as in terms of content, given their cultural and geographical proximity.
All of the top 25 countries actually benefit from a combination of features, as there is no single case in which the use of only one feature performs best. Most of the countries in fact benefit from combining four or more features, with the only exceptions being Saudi Arabia –two features– and Japan –three features. Looking at the utility of features (see last row of the table showing totals), the features that are useful for TC2014 in most of the cases include user location, tweet content and user name, while offset and tweet language are the least useful. When we look at the combinations that perform best for new tweets –i.e. TC2015–, we see that in the majority of the cases the optimal combination is a reduced subset of that for TC2014 (green rows). This suggests that there are some features that perform well when classifying tweets from the same time frame as the training data, but whose performance drops when applied to new collections of tweets. However, one can get comparable performance when the right combination of features is chosen. As our results suggest, the features whose utility tends to fade include especially user description, with a remarkable drop from 19 to 1 case where it is useful, but also to a lesser extent tweet language, offset, time zone and user language. On the other hand, tweet content, user name and user location are the features that are as useful when applied to new tweets.
Finally, looking at the performance difference between countries in TC2014 and TC2015, there is no big gap in most of the cases and the differences are mostly within ±5%. However, there are a few cases where the performance drops drastically when we apply the classifier to the new dataset. This is the case for Saudi Arabia, the Netherlands and France, whose performance in TC2015 drops between 9% and 21% from that in TC2014. The largest improvements occur for Germany, India and South Africa, whose performance in TC2015 increases between 4% and 11% over that in TC2014.
Error Analysis
To shed some light on the reasons why some countries are not classified as accurately, we looked at the errors that the classifiers are making. Overall, if we put together all correct classifications by any of the classifiers, we would be able to get a micro-accuracy of up to 99.1% as an upper bound estimation for the tweets that belong to one of the top 25 countries. This raises expectations in that nearly all users can be accurately classified in some way by using the right classifier. However, many countries share similar (or common) characteristics, which often leads to mistakes between those countries. To better understand this, we look at the confusion matrix for the top 25 countries.
The confusion matrix in Table SECREF32 shows the aggregated misclassifications for all the 255 classifiers applied to the top 25 countries. The values highlighted in grey refer to correct guesses (diagonal). In red, we highlight misclassifications exceeding 10% of a country's tweets, in orange those exceeding 5% and in yellow those exceeding 2%.
Aggregated confusion matrix for all classifiers on the top 25 countries. (ar: Argentina, au: Australia, br: Brazil, ca: Canada, cl: Chile, co: Colombia, de: Germany, es: Spain, fr: France, gb: United Kingdom, id: Indonesia, in: India, it: Italy, jp: Japan, mx: Mexico, my: Malaysia, nl: The Netherlands, ph: Philippines, ru: Russia, sa: Saudi Arabia, th: Thailand, tr: Turkey, us: United States, ve: Venezuela, za: South Africa)
On the positive side, some of the countries have very small misclassifications. Brazil and Turkey have misclassifications of less than 2% (no yellow, orange or red cells). Other countries, including France, Indonesia, Italy, Japan and the USA, have misclassifications of less than 5% (no red or orange cells). These are mostly countries with unique characteristics with respect to the rest of the top 25 countries; they predominantly use a language that is not used by any other in the list, except the USA, which has the advantage of having the majority of tweets. However, a striking observation is the large percentage of misclassifications involving Spanish speaking countries, which include Argentina, Chile, Colombia, Spain, Mexico and Venezuela. In most of these cases the high number of misclassifications occurs in both directions for each pair of countries. This is an additional difficulty that one might have expected, given that all of them share cultural and linguistic commonalities, especially for using the same language and hence overlapping content. Moreover, the Latin American countries often share the time zone and, while the time zone is different for Spain, many of the cities in the Latin American countries are named after Spanish cities (e.g., Córdoba in Argentina, León in Mexico, Valencia in Venezuela, Cartagena in Colombia or Santiago in Chile, all of which are also Spanish cities), which makes the distinction from Spain more challenging if only user location is used. Similarly, we also observe a large amount of misclassifications involving English speaking countries, e.g. Australia, the UK, Canada and the USA. The majority of the orange misclassifications (5%-10%) are between Spanish and English speaking countries, with the exception of Chile and Argentina, which are even higher (10%+) and which we surmise is due to their proximity and cultural similarities. Finally, many misclassifications involve the United States, which account for the majority of red misclassifications (10%+), and which is not surprising since it is the predominant country with about 20% of tweets.
Discussion
Our experiments and analysis on over 5 million geolocated tweets from unique users reveal insights into country-level geolocation of tweets in real time. Our experiments only make use of features inherent in the tweets to enable real-time classification. This can be invaluable when curation of the tweet stream is needed for applications such as country-specific trending topic detection BIBREF53 , or for more specific applications where only tweets coming from a specific country are sought, e.g. sentiment analysis or reputation management BIBREF54 . The identification of the country of origin will also help mitigate problems caused by the limited availability of demographic details for Twitter users BIBREF55 .
We found that one of the most commonly used approaches, which is the use of gazeteers such as GeoNames to match the user's self-reported location with a place in the world, performs reasonably well in terms of macro-accuracy, but fails in terms of micro-accuracy, i.e. without high accuracy for most countries. The use of a classifier that makes use of a single feature, such as the self-reported location of a user, outperforms the GeoNames baseline in terms of micro-accuracy, as well as slightly in terms of macro-accuracy. The main challenge is that it has to deal with as many as 217 countries, making the task especially difficult. To overcome this, we have tested our classifier on a reduced subset of the top 25 countries, which still account for more than 90% of the whole Twitter stream. In this case, we found that this classifier can substantially outperform both the GeoNames baseline and the state-of-the-art real-time tweet geolocation classifier by BIBREF12 . The use of the tweet content alone becomes then the most useful feature.
Further testing with combinations of multiple features, we found that performance can be substantially improved, although one needs to be careful when picking the features to be used. What is interesting is that the classifier trained on data from the same time frame as the test set can be effectively applied to new tweets, which we verified on tweets posted a year later. The combination of features that works well for the test set in the same time frame can be applied to the new tweets in most cases, achieving similar performance values. However, it is important to consider that the utility of some features drops over time, which is especially the case of user description, but also to a lesser extent other features like offset and tweet language. On the positive side, features like tweet content, user location and user name are among the most useful features for classifying new tweets. One may also choose to regularly update the classifier by training with new tweets, as BIBREF12 suggested, however, in the interest of keeping a model for longer and reducing the cost of updating models, we show that the choice of the appropriate features can be as effective (i.e. achieving macro-accuracy scores of 0.858 and 0.853 for tweets within the same time frame and new tweets, respectively). The scenario is quite different when one wants to identify tweets from a specific country, given that different sets of features lead to more accurate classifications for different countries, which do not necessarily match with the overall best approach. By picking the right combination of features one can achieve classification performances for a country higher than 0.8 and even above 0.9 in terms of F1 score in cases where a country has unique characteristics such as a language that is not spoken in other countries or a unique time zone. However, these performance values tend to drop when one aims to identify tweets for a country that has common characteristics with other countries; this is especially true for English and Spanish speaking countries, among which many are large countries that speak the same language, share similar contents and have the same time zone (e.g., Chile and Argentina, or Canada and the USA).
The use of geolocated tweets to build a collection of tweets with a location assigned is a widely accepted practice, although the applicability of a model trained on geolocated tweets to then classify non-geolocated tweets has not been studied in depth. In previous work, BIBREF41 suggested that a model trained on geotagged data is expected to generalise well to non-geotagged data when one wants to classify users. For our case study with tweets rather than users, we performed a comparative analysis of geolocated and non-geolocated tweets in the time frame of our TC2014 dataset. Looking at the ranked frequencies for each feature, we found high correlations ranging from INLINEFORM0 to INLINEFORM1 for seven of the features under study across the subsets of geolocated and non-geolocated tweets, except for content leading to lower correlation ( INLINEFORM2 ). This indicates that non-geolocated tweets have similar characteristics and that a model trained on geolocated tweets could be effectively applied, reinforcing our findings that the use of content alone, as in most previous work, does not suffice, and combination of features is recommended. Empirical experimentation on non-geolocated tweets would help quantify this further; however an alternative data collection and annotation methodology should be defined for this purpose, which is beyond the scope of this work.
In summary, the results suggest that an appropriate selection of tweet features can lead to accurate, real-time classification of the most populous countries in terms of volume. Interestingly, a model trained from historical tweets can also be applied to tweets collected later in time when the topics that users talk about may be completely different. Having this classifier in place, one may then want to perform finer-grained geolocation of tweets within a country. For instance, during breaking news, one may want to identify reports from eyewitnesses on the ground and therefore fine-grained geolocation would be crucial to identify tweets in the area.
Conclusion
To the best of our knowledge, this is the first study performing a comprehensive analysis of the usefulness of tweet-inherent features to automatically infer the country of origin of tweets in a real-time scenario from a global stream of tweets written in any language. Most previous work focused on classifying tweets coming from a single country and hence assumed that tweets from that country were already identified. Where previous work had considered tweets from all over the world, the set of features employed for the classification included features, such as a user's social network, that are not readily available within a tweet and so is not feasible in a scenario where tweets need to be classified in real-time as they are collected from the streaming API. Moreover, previous attempts to geolocate global tweets tended to restrict their collection to tweets from a list of cities, as well as to tweets in English; this means that they did not consider the entire stream, but only a set of cities, which assumes prior preprocessing. Finally, our study uses two datasets collected a year apart from each other, to test the ability to classify new tweets with a classifier trained on older tweets. Our experiments and analysis reveal insights that can be used effectively to build an application that classifies tweets by country in real time, either when the goal is to organise content by country or when one wants to identify all the content posted from a specific country.
In the future we plan to test alternative cost-sensitive learning approaches to the one used here, focusing especially on collection of more data for under-represented countries, so that the classifier can be further improved for all the countries. Furthermore, we plan to explore more sophisticated approaches for content analysis, e.g. detection of topics in content (e.g. do some countries talk more about football than others?), as well as semantic treatment of the content. We also aim to develop finer-grained classifiers that take the output of the country-level classifier as input.
Acknowledgments
This work has been supported by the PHEME FP7 project (grant No. 611233), the Warwick University Higher Education Impact Fund, an ESRC Impact Acceleration Award, EPSRC Impact Acceleration Account (grant no. EP/K503940/1) and EPSRC grant EP/L016400/1. We used the MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1. | Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier |
ebae0cd1fe0e7ba877d4b3055190e8b1dfcaeb53 | ebae0cd1fe0e7ba877d4b3055190e8b1dfcaeb53_0 | Q: What are the eight features mentioned?
Text: Introduction
Social media are increasingly being used in the scientific community as a key source of data to help understand diverse natural and social phenomena, and this has prompted the development of a wide range of computational data mining tools that can extract knowledge from social media for both post-hoc and real time analysis. Thanks to the availability of a public API that enables the cost-free collection of a significant amount of data, Twitter has become a leading data source for such studies BIBREF0 . Having Twitter as a new kind of data source, researchers have looked into the development of tools for real-time trend analytics BIBREF1 , BIBREF2 or early detection of newsworthy events BIBREF3 , as well as into analytical approaches for understanding the sentiment expressed by users towards a target BIBREF4 , BIBREF5 , BIBREF6 , or public opinion on a specific topic BIBREF7 . However, Twitter data lacks reliable demographic details that would enable a representative sample of users to be collected and/or a focus on a specific user subgroup BIBREF8 , or other specific applications such as helping establish the trustworthiness of information posted BIBREF9 . Automated inference of social media demographics would be useful, among others, to broaden demographically aware social media analyses that are conducted through surveys BIBREF10 . One of the missing demographic details is a user's country of origin, which we study here. The only option then for the researcher is to try to infer such demographic characteristics before attempting the intended analysis.
This has motivated a growing body of research in recent years looking at different ways of determining automatically the user's country of origin and/or – as a proxy for the former – the location from which tweets have been posted BIBREF11 . Most of the previous research in inferring tweet geolocation has classified tweets by location within a limited geographical area or country; these cannot be applied directly to an unfiltered stream where tweets from any location or country will be observed. The few cases that have dealt with a global collection of tweets have used an extensive set of features that cannot realistically be extracted in a real-time, streaming context (e.g., user tweeting history or social networks) BIBREF12 , and have been limited to a selected set of global cities as well as to English tweets. This means they use ground truth labels to pre-filter tweets originating from other regions and/or written in languages other than English. The classifier built on this pre-filtered dataset may not be applicable to a Twitter stream where every tweet needs to be geolocated. An ability to classify tweets by location in real-time is crucial for applications exploiting social media updates as social sensors that enable tracking topics and learning about location-specific trending topics, emerging events and breaking news. Specific applications of a real-time, country-level tweet geolocation system include country-specific trending topic detection or tracking sentiment towards a topic broken down by country. To the best of our knowledge, our work is the first to deal with global tweets in any language, using only those features present within the content of a tweet and its associated metadata. We also complement previous work by investigating the extent to which a classifier trained on historical tweets can be used effectively on newly harvested tweets.
Motivated by the need to develop an application to identify the trending topics within a specific country, here we document the development of a classifier that can geolocate tweets by country of origin in real-time. Given that within this scenario it is not feasible to collect additional data to that readily available from the Twitter stream BIBREF12 , we explore the usefulness of eight tweet-inherent features, all of which are readily available from a tweet object as retrieved from the Twitter API, for determining its geolocation. We perform classification using each of the features alone, but also in feature combinations. We explore the ability to perform the classification on as many as 217 countries, or in a reduced subset of the top 25 countries, as judged by tweet volume. The use of two datasets, collected in October 2014 and October 2015, gives additional insight into whether historical Twitter data can be used to classify new instances of tweets. These two datasets with over 5 million country-coded tweets are publicly available.
Our methodology enables us to perform a thorough analysis of tweet geolocation, revealing insights into the best approaches for an accurate country-level location classifier for tweets. We find that the use of a single feature like content, which is the most commonly used feature in previous work, does not suffice for an accurate classification of users by country and that the combination of multiple features leads to substantial improvement, outperforming the state-of-the-art real-time tweet geolocation classifier; this improvement is particularly manifest when using metadata like the user's self-reported location as well as the user's real name. We also perform a per-country analysis for the top 25 countries in terms of tweet volume, exploring how different features lead to optimal classification for different countries, as well as discussing limitations when dealing with some of the most challenging countries. We show that country-level classification of an unfiltered Twitter stream is challenging. It requires careful design of a classifier that uses an appropriate combination of features. Our results at the country level are promising enough in the case of numerous countries, encouraging further research into finer-grained geolocation of global tweets. Cases where country-level geolocation is more challenging include English and Spanish speaking countries, which are harder to distinguish due to their numerous commonalities. Still, our experiments show that we can achieve F1 scores above 80% in many of these cases given the choice of an appropriate combination of features, as well as an overall performance above 80% in terms of both micro-accuracy and macro-accuracy for the top 25 countries.
Related Work
A growing body of research deals with the automated inference of demographic details of Twitter users BIBREF8 . Researchers have attempted to infer attributes of Twitter users such as age BIBREF13 , BIBREF14 , gender BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 , political orientation BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 or a range of social identities BIBREF22 . Digging more deeply into the demographics of Twitter users, other researchers have attempted to infer socioeconomic demographics such as occupational class BIBREF23 , income BIBREF24 and socioeconomic status BIBREF25 . Work by Huang et al. BIBREF26 has also tried to infer the nationality of users; that work differs from ours in that the country from which the tweets were posted was already known.
What motivates the present study is the increasing interest in inferring the geographical location of either tweets or Twitter users BIBREF11 . The automated inference of tweet location has been studied for different purposes, ranging from data journalism BIBREF27 , BIBREF9 to public health BIBREF28 . As well as numerous different techniques, researchers have relied on different settings and pursued different objectives when conducting experiments. Table TABREF2 shows a summary of previous work reported in the scientific literature, outlining the features that each study used to classify tweets by location, the geographic scope of the study, the languages they dealt with, the classification granularity they tried to achieve and used for evaluation, and whether single tweets, aggregated multiple tweets and/or user history were used to train the classifier.
Most of the previous studies on automated geolocation of tweets have assumed that the tweet stream includes only tweets from a specific country. The majority of these studies have focused on the United States, classifying tweets either at a city or state level. One of the earliest studies is that by Cheng et al. BIBREF30 , who introduced a probabilistic, content-based approach that identifies the most representative words of each of the major cities in the USA; these words are then used to classify new tweets. They incorporate different techniques to filter words, such as local and state-level filtering, classifying up to 51% of Twitter users accurately within a 100 mile radius. Their approach, however, relies on making use of the complete history of a user, and was tested only for users with at least 1,000 tweets in their timeline.
Most of the other studies documented in the literature have also relied on tweet content, using different techniques such as topic modelling to find locally relevant keywords that reveal a user's likely location BIBREF34 , BIBREF35 , BIBREF30 , BIBREF44 , BIBREF41 , BIBREF45 , BIBREF47 , BIBREF43 , BIBREF37 . Another widely used technique relies on the social network that a user is connected to, in order to infer a user's location from that of their followers and followees BIBREF36 , BIBREF37 , BIBREF38 . While the approaches summarised will work well for certain applications, retrieving the tweet history for each user or the profile information of all of a user's followers and followees is not feasible in a real-time scenario. Hence, in this context, a classifier needs to deal with the additional challenge of having to rely only on the information that can be extracted from a single tweet.
Only a handful of studies have relied solely on the content of a single tweet to infer its location BIBREF33 , BIBREF39 , BIBREF29 , BIBREF40 , BIBREF46 , BIBREF32 , BIBREF31 . Again, most of these have actually worked on very restricted geographical areas, with tweets being limited to different regions, such as the United States BIBREF29 , BIBREF31 , four different cities BIBREF40 , and New York only BIBREF39 . Bo et al. BIBREF33 did focus on a broader geographical area, including 3.7k cities all over the world. Nevertheless, their study focused on a limited number of cities, disregarding other locations, and only classified tweets written in English.
When it comes to geolocation classification granularity, the majority of studies have aimed at city-level classification. While this provides fine-grained classification of tweets, it also means that a limited number of cities can be considered, ignoring other cities and towns. Only Han et al. BIBREF41 and Dredze et al. BIBREF12 perform country-level classification, although they also restricted themselves to English language tweets posted from a limited number of cities. This means that tweets posted from cities other than the ones under consideration are removed from the stream, as are tweets written in other languages. In our study, we take as input the stream of tweets with content originating from any country and in any language, i.e. the entire tweet stream, to classify, at the country-level, each tweet according to its origin.
To date, the work by Han et al. BIBREF41 is the most relevant to our new study. They conducted a comprehensive study on how Twitter users can be geolocated by using different features of tweets. They analysed how location indicative words from a user's aggregated tweets can be used to geolocate the user. However, this requires collecting a user's history of tweets, which is not realistic in our real-time scenario. They also looked at how some metadata from tweets can be leveraged for classification, achieving slight improvements in performance, but again this is for a user's aggregated history. Finally, they looked at the temporality of tweets, using an old model to classify new tweets, finding that new tweets are more difficult to classify. This is an insightful study, which also motivates some of the settings and selection of classifiers in our own study; however, while an approach based on location indicative words may be very useful when looking at a user's aggregated tweets, it is rather limited when – as in our case – relying on a single tweet per user. Instead, our analysis of different tweet features for geolocating a tweet is based solely on its attributes as retrieved from the Twitter API. Dredze et al. BIBREF12 followed an approach similar to ours when they looked at the utility of a model trained from past tweets, finding that the classification performance degrades for new tweets and that the trained model needs to be continually updated. Their study did not look into further details, such as whether some features are still useful for new tweets, however, and which our study analyses in more detail.
In summary, as far as we are aware, no previous work has dealt with the multiple features available within a tweet, as retrieved from the Twitter streaming API, to determine the location of a tweet posted from anywhere in the world. We look at the suitability of eight tweet features for this purpose, both singly and combined, and experiment on two datasets collected within different time frames to measure the usefulness of an old model on new tweets.
Datasets
For training our classifier, we rely on the most widely adopted approach for the collection of a Twitter dataset with tweets categorised by location. This involves using the Twitter API endpoint that returns a stream of geolocated tweets posted from within one or more specified geographic bounding boxes. In our study, we set this bounding box to be the whole world (i.e., [-180,-90,180,90]) in order to retrieve tweets worldwide. This way, we collected streams of global geolocated tweets for two different week long periods: 4-11 October, 2014 (TC2014) and 22-28 October, 2015 (TC2015). This led to the collection of 31.7 million tweets in 2014 and 28.8 million tweets in 2015, which we adapt for our purposes as explained below.
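To make the collection step concrete, the sketch below shows how such a worldwide geolocated stream could be consumed. This is not the authors' code; it assumes the tweepy library (3.x-style streaming interface), placeholder credentials and access to the statuses/filter endpoint.

    import json
    import tweepy

    # Placeholder credentials; substitute real ones.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

    class GeoCollector(tweepy.StreamListener):
        def on_status(self, status):
            # Keep only tweets that carry exact coordinates.
            if status.coordinates is not None:
                with open("geotweets.jsonl", "a") as out:
                    out.write(json.dumps(status._json) + "\n")

    stream = tweepy.Stream(auth=auth, listener=GeoCollector())
    # Bounding box covering the whole world, as used for TC2014 and TC2015.
    stream.filter(locations=[-180, -90, 180, 90])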
Our raw datasets reflect the well-known fact that some Twitter users are far more prolific than others, which would introduce a bias in the evaluation if not dealt with. If our classifier has seen a user before, it is very likely that the user will tweet from the same country again. Hence, in order to ensure an unbiased evaluation of the tweet level classification, we de-duplicated users from our datasets, by randomly picking only one tweet from each user for TC2014. For TC2015, we also picked one tweet per user at random, but also removed users that were included in TC2014. This led to a collection of 4,155,763 geolocated tweets in TC2014 and 897,341 geolocated tweets in TC2015. 462,536 tweets were removed from the TC2015 dataset for belonging to users that also appeared in TC2014.
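A minimal sketch of this de-duplication, assuming the collected tweets have been flattened into a pandas DataFrame with user_id and year columns (both column names are ours, not Twitter field names):

    import pandas as pd

    tweets = pd.read_json("geotweets.jsonl", lines=True)  # hypothetical flattened file

    # Keep one randomly chosen tweet per user within each collection period.
    tc2014 = (tweets[tweets.year == 2014]
              .sample(frac=1, random_state=42)
              .drop_duplicates(subset="user_id"))
    tc2015 = (tweets[tweets.year == 2015]
              .sample(frac=1, random_state=42)
              .drop_duplicates(subset="user_id"))

    # Drop from TC2015 any user already present in TC2014.
    tc2015 = tc2015[~tc2015.user_id.isin(tc2014.user_id)]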
Having these tweets geolocated with the specific coordinates of the user's location, we then inferred the name of that location. For this, we used Nominatim, whose reverse geocoding feature enabled us to retrieve detailed information of the location pointed to by the coordinates given as input. From Nominatim's output, we made use of the country code in our experiments that aimed at country level classification of tweets. As a result, we had all the tweets in TC2014 and TC2015 categorised by country, which we then used as the ground truth for our classification experiments. It is worthwhile noting that the distribution of countries in TC2014 and TC2015 correlate highly with INLINEFORM0 . This suggests that the distribution is stable and therefore we can focus our study on the usefulness of the model trained for different features for new tweets.
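The reverse geocoding step can be approximated with the geopy wrapper around Nominatim, as sketched below; the field names follow Nominatim's JSON response as we understand it, and heavy use is subject to the service's usage policy.

    from geopy.geocoders import Nominatim

    geocoder = Nominatim(user_agent="tweet-country-study")  # identify your application

    def country_code(lat, lon):
        location = geocoder.reverse((lat, lon), language="en")
        if location is None:
            return None
        return location.raw.get("address", {}).get("country_code")

    print(country_code(51.5074, -0.1278))  # expected: 'gb'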
The more than 5 million tweets in these two datasets are categorised into 217 different countries. It is worthwhile mentioning that, as one would expect, the resulting datasets are clearly imbalanced, where only a few countries account for most of the tweets. The first country by number of tweets is the United States (20.99%), followed by Indonesia (14.01%) and Turkey (8.50%). The 10 most prominent countries on Twitter in our datasets account for 72.98% of the tweets, while the 25 most prominent countries account for 90.22%. Figure FIGREF5 shows a heat map of popularity by country in our datasets.
The resulting datasets, both TC2014 and TC2015, are publicly available.
Country-Level Location Classification for Tweets
In this study, we define the country-level location classification task as one in which, given a single tweet as input, a classifier has to determine the country of origin of the tweet. We argue for the sole use of the content and metadata provided in a single tweet, which are accessible in a scenario where one wants to classify tweets by country in the tweet stream and in real-time. Most existing approaches have looked at the history of a Twitter user or the social network derivable from a user's followers and followees, which would not be feasible in our real-time scenario.
Classification Techniques
We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier. They were tested in two different settings, one without balancing the weights of the different classes and the other by weighing the classes as the inverse of their frequency in the training set; the latter was tested as a means for dealing with the highly imbalanced data. The selection of these classifiers is in line with those used in the literature, especially with those tested by Han et al. BIBREF41 . This experimentation led to the selection of the weighed Maximum Entropy (MaxEnt) classifier as the most accurate. In the interest of space and focus, we only present results for this classifier.
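For reference, the inverse-frequency weighting described above corresponds to scikit-learn's class_weight='balanced' option. A sketch of the weighed maximum entropy setup is given below, with hyperparameters of our own choosing and feature matrices assumed to come from the vectorisation described under Experiment Settings.

    from sklearn.linear_model import LogisticRegression

    # A maximum entropy classifier is multinomial logistic regression;
    # class_weight='balanced' weighs each class by the inverse of its
    # frequency in the training set, mitigating the country imbalance.
    maxent = LogisticRegression(class_weight="balanced", max_iter=1000)
    maxent.fit(X_train, y_train)      # assumed sparse feature matrix and country labels
    predictions = maxent.predict(X_test)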
Additionally, we compare our results with two baseline approaches. On the one hand, we used the Vowpal Wabbit classifier described by BIBREF12 , a state-of-the-art real-time tweet geolocation classifier. On the other hand, we made use of the GeoNames geographical database, a commonly used approach in the literature. The user location, a string optionally specified by users in their profile settings, can be used here as input to the GeoNames database, which will return a likely location translated from that string. GeoNames provides a list of the most likely locations for a given string, based on either relevance or population, from which we took the first element. While GeoNames can be very effective for certain location names that are easy to map, the use of this feature is limited to users who opt to specify a non-empty location string in their settings (67.1% in our datasets), and will fail with users whose location is not a valid country or city name (e.g., somewhere in the world). The location specified in the user's profile has been used before to infer a user's location, although it is known to lead to low recall BIBREF48 . Here, we used this approach, using a database to translate user locations as a baseline, and explored whether, how, and to what extent a classifier can outperform it. For this baseline approach, we query GeoNames with the location string specified by the user and pick the first option output by the service. To make a fairer comparison with our classifiers, since GeoNames will not be able to determine the location for users with an empty location field, we default GeoNames' prediction for those tweets to be the majority country, i.e., the United States. This decision favours the baseline by assigning the most likely country and is also in line with the baseline approaches used in previous work BIBREF41 .
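A sketch of this baseline is shown below: the user's profile location is sent to the public GeoNames search web service, the first result is kept, and empty strings fall back to the majority country. The endpoint and parameters follow GeoNames' documented search API; the username is a placeholder.

    import requests

    GEONAMES_USER = "demo"  # placeholder; register a personal username

    def geonames_country(user_location, order_by="relevance"):
        if not user_location or not user_location.strip():
            return "US"  # default to the majority country for empty locations
        resp = requests.get(
            "http://api.geonames.org/searchJSON",
            params={"q": user_location, "maxRows": 1,
                    "orderby": order_by, "username": GEONAMES_USER},
            timeout=10,
        )
        results = resp.json().get("geonames", [])
        return results[0]["countryCode"] if results else "US"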
Experiment Settings
Within the TC2014 dataset, we created 10 different random distributions of the tweets for cross-validation, each having 50% of the tweets for training, 25% for development and 25% for testing. The performance values of the 10 runs on the test set were ultimately averaged to obtain the final figure. The development set was used to determine the optimal parameters in each case, which were then used for the classification applied to the test set. In separate experiments, TC2015 was used as the test set, keeping the same subsets of TC2014 as training sets, to make the experiments comparable by using the same trained models and to assess the usefulness of year-old tweets to classify new tweets.
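The splitting procedure can be reproduced roughly as in the sketch below, using scikit-learn's train_test_split twice per run; the seeds are arbitrary.

    from sklearn.model_selection import train_test_split

    splits = []
    for seed in range(10):
        train, rest = train_test_split(tc2014, train_size=0.5, random_state=seed)
        dev, test = train_test_split(rest, train_size=0.5, random_state=seed)
        splits.append((train, dev, test))  # 50% train / 25% dev / 25% test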
We created eight different classifiers, each of which used one of the following eight features available from a tweet as retrieved from a stream of the Twitter API:
User location (uloc): This is the location the user specifies in their profile. While this feature might seem a priori useful, it is somewhat limited as this is a free text field that users can leave empty, input a location name that is ambiguous or has typos, or a string that does not match with any specific locations (e.g., “at home”). Looking at users' self-reported locations, Hecht et al. BIBREF49 found that 66% report information that can be translated, accurately or inaccurately, to a geographic location, with the other 34% being either empty or not geolocalisable.
User language (ulang): This is the user's self-declared user interface language. The interface language might be indicative of the user's country of origin; however, they might also have set up the interface in a different language, such as English, because it was the default language when they signed up or because the language of their choice is not available.
Timezone (tz): This indicates the time zone that the user has specified in their settings, e.g., “Pacific Time (US & Canada)”. When the user has specified an accurate time zone in their settings, it can be indicative of their country of origin; however, some users may have the default time zone in their settings, or they may use an equivalent time zone belonging to a different location (e.g., “Europe/London” for a user in Portugal). Also, Twitter's list of time zones does not include all countries.
Tweet language (tlang): The language in which a tweet is believed to be written is automatically detected by Twitter. It has been found to be accurate for major languages, but it leaves much to be desired for less widely used languages. Twitter's language identifier has also been found to struggle with multilingual tweets, where parts of a tweet are written in different languages BIBREF50 .
Offset (offset): This is the offset, with respect to UTC/GMT, that the user has specified in their settings. It is similar to the time zone, albeit more limited as it is shared with a number of countries.
User name (name): This is the name that the user specifies in their settings, which can be their real name, or an alternative name they choose to use. The name of a user can reveal, in some cases, their country of origin.
User description (description): This is a free text where a user can describe themselves, their interests, etc.
Tweet content (content): The text that forms the actual content of the tweet. The use of content has a number of caveats. One is that content might change over time, and therefore new tweets might discuss new topics that the classifiers have not seen before. Another caveat is that the content of the tweet might not be location-specific; in a previous study, Rakesh et al. BIBREF51 found that the content of only 289 out of 10,000 tweets was location-specific.
Figure FIGREF19 shows an example of a tweet and the eight features listed above. The features were treated in two different ways: the user location, name of the user, description and tweet content were represented using a bag of words approach, where each token represented a feature in the vector space model. The rest of the features, namely the user language, time zone, tweet language and offset, were represented by a single categorical value in the vector space model, given the limited number of values that the features can take. We used these eight features separately, as well as in different combinations with one another, in our experiments testing the ability to infer the country of origin of tweets. In separate experiments, we also append these features into single vectors to test different combinations of these features.
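One way to realise this representation is scikit-learn's ColumnTransformer, which combines bag-of-words vectorisers for the free-text fields with one-hot encodings for the categorical ones; the column names below are our own labels for the eight features, not Twitter field names.

    from sklearn.compose import ColumnTransformer
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.preprocessing import OneHotEncoder

    vectoriser = ColumnTransformer([
        # Bag-of-words representation for the free-text fields.
        ("uloc", CountVectorizer(), "user_location"),
        ("name", CountVectorizer(), "user_name"),
        ("desc", CountVectorizer(), "user_description"),
        ("content", CountVectorizer(), "tweet_content"),
        # Single categorical value per feature for the remaining fields.
        ("cats", OneHotEncoder(handle_unknown="ignore"),
         ["user_language", "time_zone", "tweet_language", "offset"]),
    ])

    X_train = vectoriser.fit_transform(train)
    X_test = vectoriser.transform(test)

Fitting the transformer on the training portion only keeps the vocabulary fixed when classifying unseen tweets.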
Evaluation
We report three different performance values for each of the experiments: micro-accuracy, macro-accuracy and mean squared error (MSE). The accuracy values are computed as the result of dividing all the correctly classified instances by all the instances in the test set. The micro-accuracy is computed for the test set as a whole. For macro-accuracy, we compute the accuracy for each specific country in the test set, which are then averaged to compute the overall macro-accuracy. While the micro-accuracy measures the actual accuracy in the whole dataset, the macro-accuracy penalises the classifier that performs well only for the majority classes and rewards, instead, classifiers that perform well across multiple categories. This is especially crucial in a case like ours where the categories are highly imbalanced.
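Both accuracy measures can be computed as in the sketch below; macro-accuracy as defined here is the unweighted mean of per-country accuracies, which coincides with macro-averaged recall in scikit-learn.

    from sklearn.metrics import accuracy_score, recall_score

    def micro_accuracy(y_true, y_pred):
        # Fraction of all test tweets that are classified correctly.
        return accuracy_score(y_true, y_pred)

    def macro_accuracy(y_true, y_pred):
        # Accuracy computed within each country, then averaged with equal weight.
        return recall_score(y_true, y_pred, average="macro")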
The MSE is the average of the squared distance in kilometres between the predicted country and the actual, ground truth country, as shown in Equation EQREF21:

    \mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} d(c_i, \hat{c}_i)^2

where N is the number of tweets in the test set, c_i and \hat{c}_i are the ground truth and predicted countries for the i-th tweet, and d(c_i, \hat{c}_i) is the distance in kilometres between them.
In this computation, the distances between pairs of countries were calculated based on their centroids. We used the Countries of the World (COW) dataset produced by OpenGeonames.org to obtain the centroids of all countries. Having the latitude and longitude values of the centroids of all these countries, we then used the Haversine formula BIBREF52 , which accounts for the spheric shape when computing the distance between two points and is often used as an acceptable approximation to compute distances on the Earth. The Haversine distance between two points of a sphere, each defined by its longitude and latitude, is computed as shown in Equation EQREF22:

    d = 2R \arcsin\left( \sqrt{ \sin^2\left(\frac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \cos\varphi_2 \sin^2\left(\frac{\lambda_2 - \lambda_1}{2}\right) } \right)

where \varphi_1 and \varphi_2 are the latitudes of point 1 and point 2, \lambda_1 and \lambda_2 are the longitudes of point 1 and point 2, and R is the radius of the Earth, which is estimated to be 6,371 km.
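As a concrete illustration of Equations EQREF21 and EQREF22, the sketch below computes the Haversine distance and the resulting MSE, assuming a dictionary mapping each country code to the (latitude, longitude) of its centroid; the helper names are ours, not from the original implementation.

    import math

    EARTH_RADIUS_KM = 6371.0

    def haversine_km(lat1, lon1, lat2, lon2):
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlambda = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(phi1) * math.cos(phi2) * math.sin(dlambda / 2) ** 2)
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    def mse_km(y_true, y_pred, centroids):
        # centroids: country code -> (lat, lon) of the country's centroid
        errors = [haversine_km(*centroids[t], *centroids[p]) ** 2
                  for t, p in zip(y_true, y_pred)]
        return sum(errors) / len(errors)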
Classification Results
In this section, we present results for different location classification experiments. First, we look at the performance of classifiers that use a single feature. Then, we present the results for classifiers combining multiple features. To conclude, we examine the results in more depth by looking at the performance by country, as well as error analysis.
Single Feature
Table TABREF24 shows the results for the classification on the TC2014 dataset with two different approaches using GeoNames, one based on population (the most populous city is chosen when there are different options for a name) and one based on relevance (the city name that most resembles the input string). In this dataset, 65.82% of the tweets have a non-empty string in the location field; for the rest of tweets, we pick the most popular country in the dataset as the output of the approach based on GeoNames. The table shows values of micro- and macro-accuracy.
There is no big difference between the two approaches based on GeoNames when we look at micro-accuracy. However, this accuracy is slightly better distributed across countries when we use the approach based on relevance, as can be seen from the macro-accuracy values. In what follows, we consider the relevance-based GeoNames approach as the baseline that solely relies on a database matching the user's profile location and compare with the use of classifiers that exploit additional features available in a tweet.
Table TABREF26 shows the classification results, each case making use of only one of the eight features under study. This table includes performance values when we applied the classifier on both datasets, TC2014 and TC2015. The additional column, “Diff.”, shows the relative difference in performance for each of these datasets, i.e., measuring the extent to which a model learned from the TC2014 dataset can still be applied to the TC2015 test set. Note that while higher values are desired for micro-accuracy and macro-accuracy, lower values are optimal for MSE.
If we look at the micro-accuracy scores, the results suggest that three approaches stand out over the rest. These are tweet content, tweet language and user language, which are the only three approaches to get a micro-accuracy score above 0.5. However, these three approaches leave much to be desired when we evaluate them based on macro-accuracy scores, and therefore they fail to balance the classification well. Instead, the users' self-reported location (user location) achieves the highest macro-accuracy scores, while micro-accuracy scores are only slightly lower. This is due to the fact that the classifier that only uses the user's profile location will be able to guess correctly a few cases for each country where users specify a correctly spelled, unambiguous location, but will fail to classify correctly the rest; hence the higher macro-accuracy is sensible according to these expectations. The MSE error rates suggest that tweet content and tweet language are the best in getting the most proximate classifications. We believe that this is due to the proximity of many countries that speak the same language (e.g., Germany and Austria, or Argentina and Chile), in which case the classifier that relies on tweet language or content will often choose a neighbouring country given the similarities they share in terms of topics and language. While most of these classifiers outperform the GeoNames baseline in terms of micro-accuracy, user location is the only feature to beat the baseline in terms of macro-accuracy. However, the small improvement over the baseline suggests that alternative approaches are needed for a better balanced classification performance.
Figure FIGREF25 shows a heat map with accuracy values of each of the features broken down by country. We observe the best distributed accuracy across countries is with the use of user location as a feature. However, other features are doing significantly better classifying tweets that belong to some of the major countries such as the USA (better classified by tweet language or user language), Russia (better classified by tweet language) or Brazil (better classified by tweet language, user name or tweet content). This emphasises the necessity to explore further the differences between each country's characteristics.
As we noted above, a remarkable characteristic of our datasets (and the reality of Twitter itself) is the high imbalance in the distribution of tweets across countries, where a few countries account for a large majority of the tweets and many countries in the tail account for very few tweets. The fact that the classifier has to determine which of the 217 countries a tweet belongs to substantially complicates the task. To quantify this, and to explore the ability to boost performance on the countries with highest presence, we also performed classification experiments on the top 25 countries. These top 25 countries account for as many as 90.22% of the tweets; consequently, being able to boost performance on these 25 countries, while assuming that the system will miss the rest, can make it a more achievable task where the overall performance gets improved.
To perform the classification on the top countries, we removed the tweets from countries that do not belong to the top 25 list from the training set. Including tweets from the remaining countries would add a noisy category to the training set, given the diversity of that new category. However, for obvious reasons, we cannot do the same for the test set. For the purposes of experimentation, we assign the rest of the tweets in the test set a different, 26th label, meaning that they belong to other countries. Our experiments on the top 25 countries will then have a training set with 25 categories to learn from and test sets with 26 categories, where the classifier will never predict the 26th category.
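In code, this amounts to filtering the training portion to the 25 most frequent countries and relabelling the tail of the test portion; 'other' below is our placeholder for the 26th category, which the classifier never predicts.

    top25 = train["country"].value_counts().nlargest(25).index

    train_top = train[train["country"].isin(top25)]   # 25 classes to learn from
    test_top = test.copy()
    test_top.loc[~test_top["country"].isin(top25), "country"] = "other"  # 26th label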
Table TABREF27 shows the results for the experiments on the top 25 countries. The overall tendency is very similar to that of the classifiers applied to all the countries in the world, with an expected overall boost in macro-accuracy values. However, we see a substantial improvement with the use of content as a feature, which now outperforms tweet language in micro-accuracy scores as well as user location in macro-accuracy scores. Tweet content actually becomes the best performing feature with the reduced set of 25 countries. Classification on a reduced subset of countries can substantially boost performance, even assuming that part of the dataset will be misclassified. In fact, classification on this optimised setting outperforms by far the baseline using GeoNames. Not only does the top performing feature, tweet content, improve its performance. Other features that performed poorly before, such as tweet language, time zone or user language, perform significantly better, also outperforming the GeoNames baseline. This further motivates our subsequent goal of studying combinations of features to further boost the performance of the classifier applied to the top 25 countries.
Feature Combinations
Having seen that different features give rise to gains in different ways, testing the performance of combinations of multiple features seemed like a wise option. We performed these combinations of features by appending the vectors for each of the features into a single vector. We tested all 255 possible combinations using the eight features under study. We only report the best performing combinations here in the interest of space and clarity.
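The exhaustive search can be expressed as a loop over all 255 non-empty subsets of the eight features, horizontally stacking the corresponding vector blocks. In this sketch, blocks_train and blocks_test are assumed dictionaries mapping each feature name to its pre-computed sparse matrix, and the macro_accuracy helper sketched earlier is reused.

    from itertools import combinations

    from scipy.sparse import hstack
    from sklearn.linear_model import LogisticRegression

    features = ["uloc", "ulang", "tz", "tlang", "offset", "name", "description", "content"]

    results = {}
    for k in range(1, len(features) + 1):
        for combo in combinations(features, k):   # 255 non-empty subsets in total
            X_tr = hstack([blocks_train[f] for f in combo])
            X_te = hstack([blocks_test[f] for f in combo])
            clf = LogisticRegression(class_weight="balanced", max_iter=1000)
            clf.fit(X_tr, y_train)
            results[combo] = macro_accuracy(y_test, clf.predict(X_te))

    best = max(results, key=results.get)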
Table TABREF29 shows the best combination in each case for the TC2014 and TC2015 datasets, as well as for the classifiers that consider all the countries in the datasets and only the top 25 countries. The table also shows the performance of the best single feature as well as the baseline classifier by BIBREF12 to facilitate comparison, as well as the improvement in performance when using a combination of features over that of a single feature. We observe that the selection of an appropriate combination of features can actually lead to a substantial increase in terms of all micro-accuracy, macro-accuracy and MSE. These improvements are especially remarkable when we look at the MSE scores, where the improvement is always above 50%. Improvements in terms of micro-accuracy and macro-accuracy scores are also always above 20%, but are especially high for micro-accuracy (50%+) when we classify for all the countries, and for macro-accuracy (40%+) when we classify for the top 25 countries. These results suggest that the use of a single feature, as it is the case with most previous work using e.g. only tweet content, can be substantially improved by using more features. In fact, our results suggest that the combination of many features is usually best; we need to combine seven of the eight features (all but offset) in three of the cases, and six features in the other case (all but description and offset). As a result, we get performance values above 85% in terms of macro-accuracy for the top 25 countries. These performance scores are also remarkably higher than those of the classifier by BIBREF12 , both in terms of micro- and macro-accuracy.
Interestingly, the combination of features has led to a significant improvement in performance, with a better balance across countries. To complement this analysis, we believe it is important to understand the differences among countries. Will different sets of features be useful for an accurate classification for each country? Are we perhaps doing very well for some countries with certain combinations, but that combination, is in turn, bad for other countries? To explore this further, we now take a closer look at the performance broken down by country.
Breakdown of Countries
Given the remarkable differences among countries we observed (Figure FIGREF25 ) when exploring how different features are useful for different countries, we take a closer look at the performance of different classifiers for each of the top 25 countries. As we are now looking at each country separately, we use precision, recall and F1 scores as more appropriate evaluation measures that better capture the extent to which a country's tweets are being correctly categorised. We look at the best combination of features for each country in terms of F1 score and analyse the set of features that lead to the best performance in each case. We show the results of this analysis in Table TABREF31 .
The results show that very different approaches lead to optimal results for each country, revealing the different features that characterise each country. One striking observation we make from the ranking of country accuracies is that seven of the top eight ranking countries have unique characteristics, especially when it comes to language; except for the USA, these countries have a language that is not shared with any other country in the list. Interestingly, the best approach for most of these countries include either or both of tweet language or user language. When it comes to user language, this means that users in these countries have a strong inclination towards setting the user interface in their own language instead of the default language. In the case of tweet language, this mainly reflects a combination of two things, one being that users in these countries tend to tweet mostly in their own language, while the other is that Twitter's language identifier is very accurate in these cases. Further down in the list, we see the Spanish and English speaking countries, which seem to be harder to classify because of the numerous commonalities with one another, both in terms of language as well as in terms of content, given their cultural and geographical proximity.
All of the top 25 countries actually benefit from a combination of features, as there is no single case in which the use of only one feature performs best. Most of the countries in fact benefit from combining four or more features, with the only exceptions being Saudi Arabia –two features– and Japan –three features. Looking at the utility of features (see last row of the table showing totals), the features that are useful for TC2014 in most of the cases include user location, tweet content and user name, while offset and tweet language are the least useful. When we look at the combinations that perform best for new tweets –i.e. TC2015–, we see that in the majority of the cases the optimal combination is a reduced subset of that for TC2014 (green rows). This suggests that there are some features that perform well when classifying tweets from the same time frame as the training data, but whose performance drops when applied to new collections of tweets. However, one can get comparable performance when the right combination of features is chosen. As our results suggest, the features whose utility tends to fade include especially user description, with a remarkable drop from 19 to 1 case where it is useful, but also to a lesser extent tweet language, offset, time zone and user language. On the other hand, tweet content, user name and user location are the features that are as useful when applied to new tweets.
Finally, looking at the performance difference of countries in TC2014 and that in TC2015, there is no big gap in most of the cases and the differences are mostly within ±5%. However, there are a few cases where the performance drops drastically when we apply the classifier on the new dataset. This is the case of Saudi Arabia, Netherlands and France, whose performance in TC2015 drops between 9% and 21% from that in TC2014. The highest improvement occurs for Germany, India and South Africa, with increases in performance in TC2014 that range between 4% and 11%.
Error Analysis
To shed some light on the reasons why some countries are not classified as accurately, we looked at the errors that the classifiers are making. Overall, if we put together all correct classifications by any of the classifiers, we would be able to get a micro-accuracy of up to 99.1% as an upper bound estimation for the tweets that belong to one of the top 25 countries. This raises expectations in that nearly all users can be accurately classified in some way by using the right classifier. However, many countries share similar (or common) characteristics, which often leads to mistakes between those countries. To better understand this, we look at the confusion matrix for the top 25 countries.
The confusion matrix in Table SECREF32 shows the aggregated misclassifications for all the 255 classifiers applied to the top 25 countries. The values highlighted in grey refer to correct guesses (diagonal). In red, we highlight misclassifications exceeding 10% of a country's tweets, in orange those exceeding 5% and in yellow those exceeding 2%.
Aggregated confusion matrix for all classifiers on the top 25 countries. (ar: Argentina, au: Australia, br: Brazil, ca: Canada, cl: Chile, co: Colombia, de: Germany, es: Spain, fr: France, gb: United Kingdom, id: Indonesia, in: India, it: Italy, jp: Japan, mx: Mexico, my: Malaysia, nl: The Netherlands, ph: Philippines, ru: Russia, sa: Saudi Arabia, th: Thailand, tr: Turkey, us: United States, ve: Venezuela, za: South Africa)
On the positive side, some of the countries have very small misclassifications. Brazil and Turkey have misclassifications of less than 2% (no yellow, orange or red cells). Other countries, including France, Indonesia, Italy, Japan and the USA, have misclassifications of less than 5% (no red or orange cells). These are mostly countries with unique characteristics with respect to the rest of the top 25 countries; they predominantly use a language that is not used by any other in the list, except the USA, which has the advantage of having the majority of tweets. However, a striking observation is the large percentage of misclassifications involving Spanish speaking countries, which include Argentina, Chile, Colombia, Spain, Mexico and Venezuela. In most of these cases the high number of misclassifications occurs in both directions for each pair of countries. This is an additional difficulty that one might have expected, given that all of them share cultural and linguistic commonalities, especially for using the same language and hence overlapping content. Moreover, the Latin American countries often share the time zone and, while the time zone is different for Spain, many of the cities in the Latin American countries are named after Spanish cities (e.g., Córdoba in Argentina, León in Mexico, Valencia in Venezuela, Cartagena in Colombia or Santiago in Chile, all of which are also Spanish cities), which makes the distinction from Spain more challenging if only user location is used. Similarly, we also observe a large amount of misclassifications involving English speaking countries, e.g. Australia, the UK, Canada and the USA. The majority of the orange misclassifications (5%-10%) are between Spanish and English speaking countries, with the exception of Chile and Argentina, which are even higher (10%+) and which we surmise is due to their proximity and cultural similarities. Finally, many misclassifications involve the United States, which account for the majority of red misclassifications (10%+), and which is not surprising since it is the predominant country with about 20% of tweets.
Discussion
Our experiments and analysis on over 5 million geolocated tweets from unique users reveal insights into country-level geolocation of tweets in real time. Our experiments only make use of features inherent in the tweets to enable real-time classification. This can be invaluable when curation of the tweet stream is needed for applications such as country-specific trending topic detection BIBREF53 , or for more specific applications where only tweets coming from a specific country are sought, e.g. sentiment analysis or reputation management BIBREF54 . The identification of the country of origin will also help mitigate problems caused by the limited availability of demographic details for Twitter users BIBREF55 .
We found that one of the most commonly used approaches, which is the use of gazeteers such as GeoNames to match the user's self-reported location with a place in the world, performs reasonably well in terms of macro-accuracy, but fails in terms of micro-accuracy, i.e. without high accuracy for most countries. The use of a classifier that makes use of a single feature, such as the self-reported location of a user, outperforms the GeoNames baseline in terms of micro-accuracy, as well as slightly in terms of macro-accuracy. The main challenge is that it has to deal with as many as 217 countries, making the task especially difficult. To overcome this, we have tested our classifier on a reduced subset of the top 25 countries, which still account for more than 90% of the whole Twitter stream. In this case, we found that this classifier can substantially outperform both the GeoNames baseline and the state-of-the-art real-time tweet geolocation classifier by BIBREF12 . The use of the tweet content alone becomes then the most useful feature.
Further testing with combinations of multiple features, we found that performance can be substantially improved, although one needs to be careful when picking the features to be used. What is interesting is that the classifier trained on data from the same time frame as the test set can be effectively applied to new tweets, which we verified on tweets posted a year later. The combination of features that works well for the test set in the same time frame can be applied to the new tweets in most cases, achieving similar performance values. However, it is important to consider that the utility of some features drops over time, which is especially the case of user description, but also to a lesser extent other features like offset and tweet language. On the positive side, features like tweet content, user location and user name are among the most useful features for classifying new tweets. One may also choose to regularly update the classifier by training with new tweets, as BIBREF12 suggested, however, in the interest of keeping a model for longer and reducing the cost of updating models, we show that the choice of the appropriate features can be as effective (i.e. achieving macro-accuracy scores of 0.858 and 0.853 for tweets within the same time frame and new tweets, respectively). The scenario is quite different when one wants to identify tweets from a specific country, given that different sets of features lead to more accurate classifications for different countries, which do not necessarily match with the overall best approach. By picking the right combination of features one can achieve classification performances for a country higher than 0.8 and even above 0.9 in terms of F1 score in cases where a country has unique characteristics such as a language that is not spoken in other countries or a unique time zone. However, these performance values tend to drop when one aims to identify tweets for a country that has common characteristics with other countries; this is especially true for English and Spanish speaking countries, among which many are large countries that speak the same language, share similar contents and have the same time zone (e.g., Chile and Argentina, or Canada and the USA).
The use of geolocated tweets to build a collection of tweets with a location assigned is a widely accepted practice, although the applicability of a model trained on geolocated tweets to then classify non-geolocated tweets has not been studied in depth. In previous work, BIBREF41 suggested that a model trained on geotagged data is expected to generalise well to non-geotagged data when one wants to classify users. For our case study with tweets rather than users, we performed a comparative analysis of geolocated and non-geolocated tweets in the time frame of our TC2014 dataset. Looking at the ranked frequencies for each feature, we found high correlations ranging from INLINEFORM0 to INLINEFORM1 for seven of the features under study across the subsets of geolocated and non-geolocated tweets, except for content leading to lower correlation ( INLINEFORM2 ). This indicates that non-geolocated tweets have similar characteristics and that a model trained on geolocated tweets could be effectively applied, reinforcing our findings that the use of content alone, as in most previous work, does not suffice, and combination of features is recommended. Empirical experimentation on non-geolocated tweets would help quantify this further; however an alternative data collection and annotation methodology should be defined for this purpose, which is beyond the scope of this work.
In summary, the results suggest that an appropriate selection of tweet features can lead to accurate, real-time classification of the most populous countries in terms of volume. Interestingly, a model trained from historical tweets can also be applied to tweets collected later in time when the topics that users talk about may be completely different. Having this classifier in place, one may then want to perform finer-grained geolocation of tweets within a country. For instance, during breaking news, one may want to identify reports from eyewitnesses on the ground and therefore fine-grained geolocation would be crucial to identify tweets in the area.
Conclusion
To the best of our knowledge, this is the first study performing a comprehensive analysis of the usefulness of tweet-inherent features to automatically infer the country of origin of tweets in a real-time scenario from a global stream of tweets written in any language. Most previous work focused on classifying tweets coming from a single country and hence assumed that tweets from that country were already identified. Where previous work had considered tweets from all over the world, the set of features employed for the classification included information, such as a user's social network, that is not readily available within a tweet, and so those approaches are not feasible in a scenario where tweets need to be classified in real time as they are collected from the streaming API. Moreover, previous attempts to geolocate global tweets tended to restrict their collection to tweets from a list of cities, as well as to tweets in English; this means that they did not consider the entire stream, but only a set of cities, which assumes prior preprocessing. Finally, our study uses two datasets collected a year apart from each other, to test the ability to classify new tweets with a classifier trained on older tweets. Our experiments and analysis reveal insights that can be used effectively to build an application that classifies tweets by country in real time, either when the goal is to organise content by country or when one wants to identify all the content posted from a specific country.
In the future we plan to test alternative cost-sensitive learning approaches to the one used here, focusing especially on collection of more data for under-represented countries, so that the classifier can be further improved for all the countries. Furthermore, we plan to explore more sophisticated approaches for content analysis, e.g. detection of topics in content (e.g. do some countries talk more about football than others?), as well as semantic treatment of the content. We also aim to develop finer-grained classifiers that take the output of the country-level classifier as input.
Acknowledgments
This work has been supported by the PHEME FP7 project (grant No. 611233), the Warwick University Higher Education Impact Fund, an ESRC Impact Acceleration Award, EPSRC Impact Acceleration Account (grant no. EP/K503940/1) and EPSRC grant EP/L016400/1. We used the MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1. | User location (uloc), User language (ulang), Timezone (tz), Tweet language (tlang), Offset (offset), User name (name), User description (description), Tweet content (content) |
8e630c5a4a8ba0a4f5d8c483a2bf09c4ac8020ce | 8e630c5a4a8ba0a4f5d8c483a2bf09c4ac8020ce_0 | Q: How many languages are considered in the experiments?
Text: Introduction
Social media are increasingly being used in the scientific community as a key source of data to help understand diverse natural and social phenomena, and this has prompted the development of a wide range of computational data mining tools that can extract knowledge from social media for both post-hoc and real time analysis. Thanks to the availability of a public API that enables the cost-free collection of a significant amount of data, Twitter has become a leading data source for such studies BIBREF0 . Having Twitter as a new kind of data source, researchers have looked into the development of tools for real-time trend analytics BIBREF1 , BIBREF2 or early detection of newsworthy events BIBREF3 , as well as into analytical approaches for understanding the sentiment expressed by users towards a target BIBREF4 , BIBREF5 , BIBREF6 , or public opinion on a specific topic BIBREF7 . However, Twitter data lacks reliable demographic details that would enable a representative sample of users to be collected and/or a focus on a specific user subgroup BIBREF8 , or other specific applications such as helping establish the trustworthiness of information posted BIBREF9 . Automated inference of social media demographics would be useful, among others, to broaden demographically aware social media analyses that are conducted through surveys BIBREF10 . One of the missing demographic details is a user's country of origin, which we study here. The only option then for the researcher is to try to infer such demographic characteristics before attempting the intended analysis.
This has motivated a growing body of research in recent years looking at different ways of determining automatically the user's country of origin and/or – as a proxy for the former – the location from which tweets have been posted BIBREF11 . Most of the previous research in inferring tweet geolocation has classified tweets by location within a limited geographical area or country; these cannot be applied directly to an unfiltered stream where tweets from any location or country will be observed. The few cases that have dealt with a global collection of tweets have used an extensive set of features that cannot realistically be extracted in a real-time, streaming context (e.g., user tweeting history or social networks) BIBREF12 , and have been limited to a selected set of global cities as well as to English tweets. This means they use ground truth labels to pre-filter tweets originating from other regions and/or written in languages other than English. The classifier built on this pre-filtered dataset may not be applicable to a Twitter stream where every tweet needs to be geolocated. An ability to classify tweets by location in real-time is crucial for applications exploiting social media updates as social sensors that enable tracking topics and learning about location-specific trending topics, emerging events and breaking news. Specific applications of a real-time, country-level tweet geolocation system include country-specific trending topic detection or tracking sentiment towards a topic broken down by country. To the best of our knowledge, our work is the first to deal with global tweets in any language, using only those features present within the content of a tweet and its associated metadata. We also complement previous work by investigating the extent to which a classifier trained on historical tweets can be used effectively on newly harvested tweets.
Motivated by the need to develop an application to identify the trending topics within a specific country, here we document the development of a classifier that can geolocate tweets by country of origin in real-time. Given that within this scenario it is not feasible to collect additional data to that readily available from the Twitter stream BIBREF12 , we explore the usefulness of eight tweet-inherent features, all of which are readily available from a tweet object as retrieved from the Twitter API, for determining its geolocation. We perform classification using each of the features alone, but also in feature combinations. We explore the ability to perform the classification on as many as 217 countries, or in a reduced subset of the top 25 countries, as judged by tweet volume. The use of two datasets, collected in October 2014 and October 2015, gives additional insight into whether historical Twitter data can be used to classify new instances of tweets. These two datasets with over 5 million country-coded tweets are publicly available.
Our methodology enables us to perform a thorough analysis of tweet geolocation, revealing insights into the best approaches for an accurate country-level location classifier for tweets. We find that the use of a single feature like content, which is the most commonly used feature in previous work, does not suffice for an accurate classification of users by country and that the combination of multiple features leads to substantial improvement, outperforming the state-of-the-art real-time tweet geolocation classifier; this improvement is particularly manifest when using metadata like the user's self-reported location as well as the user's real name. We also perform a per-country analysis for the top 25 countries in terms of tweet volume, exploring how different features lead to optimal classification for different countries, as well as discussing limitations when dealing with some of the most challenging countries. We show that country-level classification of an unfiltered Twitter stream is challenging. It requires careful design of a classifier that uses an appropriate combination of features. Our results at the country level are promising enough in the case of numerous countries, encouraging further research into finer-grained geolocation of global tweets. Cases where country-level geolocation is more challenging include English and Spanish speaking countries, which are harder to distinguish due to their numerous commonalities. Still, our experiments show that we can achieve F1 scores above 80% in many of these cases given the choice of an appropriate combination of features, as well as an overall performance above 80% in terms of both micro-accuracy and macro-accuracy for the top 25 countries.
Related Work
A growing body of research deals with the automated inference of demographic details of Twitter users BIBREF8 . Researchers have attempted to infer attributes of Twitter users such as age BIBREF13 , BIBREF14 , gender BIBREF15 , BIBREF16 , BIBREF17 , BIBREF14 , political orientation BIBREF18 , BIBREF19 , BIBREF20 , BIBREF21 or a range of social identities BIBREF22 . Digging more deeply into the demographics of Twitter users, other researchers have attempted to infer socioeconomic demographics such as occupational class BIBREF23 , income BIBREF24 and socioeconomic status BIBREF25 . Work by Huang et al. BIBREF26 has also tried to infer the nationality of users; this work differs from the work we report here in that the country from which the tweets were posted was already known.
What motivates the present study is the increasing interest in inferring the geographical location of either tweets or Twitter users BIBREF11 . The automated inference of tweet location has been studied for different purposes, ranging from data journalism BIBREF27 , BIBREF9 to public health BIBREF28 . As well as numerous different techniques, researchers have relied on different settings and pursued different objectives when conducting experiments. Table TABREF2 shows a summary of previous work reported in the scientific literature, outlining the features that each study used to classify tweets by location, the geographic scope of the study, the languages they dealt with, the classification granularity they tried to achieve and used for evaluation, and whether single tweets, aggregated multiple tweets and/or user history were used to train the classifier.
Most of the previous studies on automated geolocation of tweets have assumed that the tweet stream includes only tweets from a specific country. The majority of these studies have focused on the United States, classifying tweets either at a city or state level. One of the earliest studies is that by Cheng et al. BIBREF30 , who introduced a probabilistic, content-based approach that identifies the most representative words of each of the major cities in the USA; these words are then used to classify new tweets. They incorporate different techniques to filter words, such as local and state-level filtering, classifying up to 51% of Twitter users accurately within a 100 mile radius. Their approach, however, relies on making use of the complete history of a user, and was tested only for users with at least 1,000 tweets in their timeline.
Most of the other studies documented in the literature have also relied on tweet content, using different techniques such as topic modelling to find locally relevant keywords that reveal a user's likely location BIBREF34 , BIBREF35 , BIBREF30 , BIBREF44 , BIBREF41 , BIBREF45 , BIBREF47 , BIBREF43 , BIBREF37 . Another widely used technique relies on the social network that a user is connected to, in order to infer a user's location from that of their followers and followees BIBREF36 , BIBREF37 , BIBREF38 . While the approaches summarised will work well for certain applications, retrieving the tweet history for each user or the profile information of all of a user's followers and followees is not feasible in a real-time scenario. Hence, in this context, a classifier needs to deal with the additional challenge of having to rely only on the information that can be extracted from a single tweet.
Only a handful of studies have relied solely on the content of a single tweet to infer its location BIBREF33 , BIBREF39 , BIBREF29 , BIBREF40 , BIBREF46 , BIBREF32 , BIBREF31 . Again, most of these have actually worked on very restricted geographical areas, with tweets being limited to different regions, such as the United States BIBREF29 , BIBREF31 , four different cities BIBREF40 , and New York only BIBREF39 . Bo et al. BIBREF33 did focus on a broader geographical area, including 3.7k cities all over the world. Nevertheless, their study focused on a limited number of cities, disregarding other locations, and only classified tweets written in English.
When it comes to geolocation classification granularity, the majority of studies have aimed at city-level classification. While this provides fine-grained classification of tweets, it also means that a limited number of cities can be considered, ignoring other cities and towns. Only Han et al. BIBREF41 and Dredze et al. BIBREF12 perform country-level classification, although they also restricted themselves to English language tweets posted from a limited number of cities. This means that tweets posted from cities other than the ones under consideration are removed from the stream, as are tweets written in other languages. In our study, we take as input the stream of tweets with content originating from any country and in any language, i.e. the entire tweet stream, to classify, at the country-level, each tweet according to its origin.
To date, the work by Han et al. BIBREF41 is the most relevant to our new study. They conducted a comprehensive study on how Twitter users can be geolocated by using different features of tweets. They analysed how location indicative words from a user's aggregated tweets can be used to geolocate the user. However, this requires collecting a user's history of tweets, which is not realistic in our real-time scenario. They also looked at how some metadata from tweets can be leveraged for classification, achieving slight improvements in performance, but again this is for a user's aggregated history. Finally, they looked at the temporality of tweets, using an old model to classify new tweets, finding that new tweets are more difficult to classify. This is an insightful study, which also motivates some of the settings and selection of classifiers in our own study; however, while an approach based on location indicative words may be very useful when looking at a user's aggregated tweets, it is rather limited when – as in our case – relying on a single tweet per user. Instead, our analysis of different tweet features for geolocating a tweet is based solely on its attributes as retrieved from the Twitter API. Dredze et al. BIBREF12 followed an approach similar to ours when they looked at the utility of a model trained from past tweets, finding that the classification performance degrades for new tweets and that the trained model needs to be continually updated. Their study, however, did not look into further details, such as whether some features remain useful for new tweets; our study analyses this in more detail.
In summary, as far as we are aware, no previous work has dealt with the multiple features available within a tweet, as retrieved from the Twitter streaming API, to determine the location of a tweet posted from anywhere in the world. We look at the suitability of eight tweet features for this purpose, both singly and combined, and experiment on two datasets collected within different time frames to measure the usefulness of an old model on new tweets.
Datasets
For training our classifier, we rely on the most widely adopted approach for the collection of a Twitter dataset with tweets categorised by location. This involves using the Twitter API endpoint that returns a stream of geolocated tweets posted from within one or more specified geographic bounding boxes. In our study, we set this bounding box to be the whole world (i.e., [-180,-90,180,90]) in order to retrieve tweets worldwide. This way, we collected streams of global geolocated tweets for two different week long periods: 4-11 October, 2014 (TC2014) and 22-28 October, 2015 (TC2015). This led to the collection of 31.7 million tweets in 2014 and 28.8 million tweets in 2015, which we adapt for our purposes as explained below.
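As an illustration, the following is a minimal sketch of this kind of worldwide geolocated collection, assuming the tweepy client (v3.x) is used; the GeoListener class, the credential placeholders and the save_tweet helper are hypothetical and not part of the original setup.

import tweepy

class GeoListener(tweepy.StreamListener):
    def on_status(self, status):
        # Keep only tweets carrying exact GPS coordinates.
        if status.coordinates:
            save_tweet(status._json)  # hypothetical storage helper

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)  # placeholder credentials
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

stream = tweepy.Stream(auth=auth, listener=GeoListener())
# A worldwide bounding box returns only geolocated tweets, from any country.
stream.filter(locations=[-180, -90, 180, 90])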
Our raw datasets reflect the well-known fact that some Twitter users are far more prolific than others, which would introduce a bias in the evaluation if not dealt with. If our classifier has seen a user before, it is very likely that the user will tweet from the same country again. Hence, in order to ensure an unbiased evaluation of the tweet-level classification, we de-duplicated users from our datasets by randomly picking only one tweet from each user for TC2014. For TC2015, we likewise picked one tweet per user at random and additionally removed users that were already included in TC2014. This led to a collection of 4,155,763 geolocated tweets in TC2014 and 897,341 geolocated tweets in TC2015. In total, 462,536 tweets were removed from the TC2015 dataset for belonging to users that also appeared in TC2014.
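The de-duplication step can be sketched with pandas as follows; the DataFrame and column names (tc2014_raw, tc2015_raw, user_id) are hypothetical stand-ins for however the raw collections are stored, not the original pipeline.

import pandas as pd

# One randomly chosen tweet per user: shuffle, then keep the first tweet seen for each user.
tc2014 = tc2014_raw.sample(frac=1, random_state=42).drop_duplicates("user_id")
tc2015 = tc2015_raw.sample(frac=1, random_state=42).drop_duplicates("user_id")

# Remove from TC2015 any user already present in TC2014.
tc2015 = tc2015[~tc2015["user_id"].isin(tc2014["user_id"])]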
Having these tweets geolocated with the specific coordinates of the user's location, we then inferred the name of that location. For this, we used Nominatim, whose reverse geocoding feature enabled us to retrieve detailed information about the location pointed to by the coordinates given as input. From Nominatim's output, we made use of the country code in our experiments aimed at country-level classification of tweets. As a result, we had all the tweets in TC2014 and TC2015 categorised by country, which we then used as the ground truth for our classification experiments. It is worthwhile noting that the distributions of countries in TC2014 and TC2015 correlate highly with INLINEFORM0 . This suggests that the distribution is stable and that we can therefore focus our study on the usefulness of models trained on different features when applied to new tweets.
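A minimal sketch of this reverse geocoding step, assuming the geopy wrapper around Nominatim is used (the study may have queried Nominatim directly; the user_agent string is a placeholder):

from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="tweet-country-labelling")  # placeholder user agent

def country_code(latitude, longitude):
    # Reverse geocode the tweet's coordinates and keep only the ISO country code.
    location = geocoder.reverse((latitude, longitude), language="en")
    if location is None:
        return None
    return location.raw.get("address", {}).get("country_code")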
The more than 5 million tweets in these two datasets are categorised into 217 different countries. It is worthwhile mentioning that, as one would expect, the resulting datasets are clearly imbalanced, where only a few countries account for most of the tweets. The first country by number of tweets is the United States (20.99%), followed by Indonesia (14.01%) and Turkey (8.50%). The 10 most prominent countries on Twitter in our datasets account for 72.98% of the tweets, while the 25 most prominent countries account for 90.22%. Figure FIGREF5 shows a heat map of popularity by country in our datasets.
The resulting datasets, both TC2014 and TC2015, are publicly available.
Country-Level Location Classification for Tweets
In this study, we define the country-level location classification task as one in which, given a single tweet as input, a classifier has to determine the country of origin of the tweet. We argue for the sole use of the content and metadata provided in a single tweet, which are accessible in a scenario where one wants to classify tweets by country in the tweet stream and in real-time. Most existing approaches have looked at the history of a Twitter user or the social network derivable from a user's followers and followees, which would not be feasible in our real-time scenario.
Classification Techniques
We carried out the experimentation with a range of classifiers of different types: Support Vector Machines (SVM), Gaussian Naive Bayes, Multinomial Naive Bayes, Decision Trees, Random Forests and a Maximum Entropy classifier. They were tested in two different settings, one without balancing the weights of the different classes and the other weighting each class by the inverse of its frequency in the training set; the latter was tested as a means of dealing with the highly imbalanced data. The selection of these classifiers is in line with those used in the literature, especially with those tested by Han et al. BIBREF41 . This experimentation led to the selection of the weighted Maximum Entropy (MaxEnt) classifier as the most accurate. In the interest of space and focus, we only present results for this classifier.
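As an illustration of the selected setting, the following sketch uses scikit-learn's logistic regression (a maximum entropy classifier) with class weights set to the inverse of class frequency; the library choice and variable names are assumptions, since the exact implementation is not specified in the text.

from sklearn.linear_model import LogisticRegression

# class_weight="balanced" weighs each country by the inverse of its frequency
# in the training set, mitigating the heavy class imbalance.
maxent = LogisticRegression(class_weight="balanced", max_iter=1000)
maxent.fit(X_train, y_train)   # X_train: tweet feature vectors, y_train: country codes
y_pred = maxent.predict(X_test)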
Additionally, we compare our results with two baseline approaches. On the one hand, we used the Vowpal Wabbit classifier described by BIBREF12 , a state-of-the-art real-time tweet geolocation classifier. On the other hand, we made use of the GeoNames geographical database, a commonly used approach in the literature. The user location, a string optionally specified by users in their profile settings, can be used here as input to the GeoNames database, which will return a likely location translated from that string. GeoNames provides a list of the most likely locations for a given string, based on either relevance or population, from which we took the first element. While GeoNames can be very effective for certain location names that are easy to map, the use of this feature is limited to users who opt to specify a non-empty location string in their settings (67.1% in our datasets), and will fail with users whose location is not a valid country or city name (e.g., somewhere in the world). The location specified in the user's profile has been used before to infer a user's location, although it is known to lead to low recall BIBREF48 . Here, we used this approach, using a database to translate user locations as a baseline, and explored whether, how, and to what extent a classifier can outperform it. For this baseline approach, we query GeoNames with the location string specified by the user and pick the first option output by the service. To make a fairer comparison with our classifiers, since GeoNames will not be able to determine the location for users with an empty location field, we default GeoNames' prediction for those tweets to be the majority country, i.e., the United States. This decision favours the baseline by assigning the most likely country and is also in line with the baseline approaches used in previous work BIBREF41 .
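The GeoNames baseline can be sketched as a call to the public search endpoint, ordering results by relevance and falling back to the majority country for empty locations; the username is a placeholder and the helper is an illustrative reconstruction rather than the exact code used.

import requests

def geonames_country(user_location, username="demo"):  # placeholder GeoNames account
    # Default to the majority country (US) when the location field is empty.
    if not user_location or not user_location.strip():
        return "US"
    response = requests.get(
        "http://api.geonames.org/searchJSON",
        params={"q": user_location, "maxRows": 1,
                "orderby": "relevance", "username": username},
        timeout=10,
    )
    results = response.json().get("geonames", [])
    # Take the first candidate returned by GeoNames, or fall back to the majority country.
    return results[0].get("countryCode", "US") if results else "US"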
Experiment Settings
Within the TC2014 dataset, we created 10 different random distributions of the tweets for cross-validation, each having 50% of the tweets for training, 25% for development and 25% for testing. The performance of the 10 runs on the test set were ultimately averaged to get the final performance value. The development set was used to determine the optimal parameters in each case, which are then used for the classification applied to the test set. In separate experiments, TC2015 was used as the test set, keeping the same subsets of TC2014 as training sets, to make the experiments comparable by using the same trained models and to assess the usefulness of year-old tweets to classify new tweets.
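A sketch of this splitting procedure, assuming the country-labelled TC2014 tweets are held in a pandas DataFrame (variable names are illustrative):

from sklearn.model_selection import train_test_split

splits = []
for seed in range(10):
    # 50% training; the remaining half is split evenly into development and test (25% each).
    train, rest = train_test_split(tc2014, train_size=0.5, random_state=seed)
    dev, test = train_test_split(rest, test_size=0.5, random_state=seed)
    splits.append((train, dev, test))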
We created eight different classifiers, each of which used one of the following eight features available from a tweet as retrieved from a stream of the Twitter API:
User location (uloc): This is the location the user specifies in their profile. While this feature might seem a priori useful, it is somewhat limited as this is a free text field that users can leave empty, input a location name that is ambiguous or has typos, or a string that does not match with any specific locations (e.g., “at home”). Looking at users' self-reported locations, Hecht et al. BIBREF49 found that 66% report information that can be translated, accurately or inaccurately, to a geographic location, with the other 34% being either empty or not geolocalisable.
User language (ulang): This is the user's self-declared user interface language. The interface language might be indicative of the user's country of origin; however, they might also have set up the interface in a different language, such as English, because it was the default language when they signed up or because the language of their choice is not available.
Timezone (tz): This indicates the time zone that the user has specified in their settings, e.g., “Pacific Time (US & Canada)”. When the user has specified an accurate time zone in their settings, it can be indicative of their country of origin; however, some users may have the default time zone in their settings, or they may use an equivalent time zone belonging to a different location (e.g., “Europe/London” for a user in Portugal). Also, Twitter's list of time zones does not include all countries.
Tweet language (tlang): The language in which a tweet is believed to be written is automatically detected by Twitter. It has been found to be accurate for major languages, but it leaves much to be desired for less widely used languages. Twitter's language identifier has also been found to struggle with multilingual tweets, where parts of a tweet are written in different languages BIBREF50 .
Offset (offset): This is the offset, with respect to UTC/GMT, that the user has specified in their settings. It is similar to the time zone, albeit more limited as it is shared with a number of countries.
User name (name): This is the name that the user specifies in their settings, which can be their real name, or an alternative name they choose to use. The name of a user can reveal, in some cases, their country of origin.
User description (description): This is a free text where a user can describe themselves, their interests, etc.
Tweet content (content): The text that forms the actual content of the tweet. The use of content has a number of caveats. One is that content might change over time, and therefore new tweets might discuss new topics that the classifiers have not seen before. Another caveat is that the content of the tweet might not be location-specific; in a previous study, Rakesh et al. BIBREF51 found that the content of only 289 out of 10,000 tweets was location-specific.
Figure FIGREF19 shows an example of a tweet and the eight features listed above. The features were treated in two different ways: the user location, name of the user, description and tweet content were represented using a bag of words approach, where each token represented a feature in the vector space model. The rest of the features, namely the user language, time zone, tweet language and offset, were represented by a single categorical value in the vector space model, given the limited number of values that the features can take. We used these eight features separately, as well as in different combinations with one another, in our experiments testing the ability to infer the country of origin of tweets. In separate experiments, we also append these features into single vectors to test different combinations of these features.
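One possible way to realise this representation with scikit-learn is sketched below; the column names are hypothetical, and the original study may have assembled the vectors differently.

from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder

# Free-text fields become bag-of-words vectors; the remaining fields become
# one-hot-encoded categorical values. All blocks are concatenated into one vector.
features = ColumnTransformer([
    ("uloc", CountVectorizer(), "user_location"),
    ("name", CountVectorizer(), "user_name"),
    ("desc", CountVectorizer(), "user_description"),
    ("content", CountVectorizer(), "tweet_text"),
    ("categorical", OneHotEncoder(handle_unknown="ignore"),
     ["user_lang", "time_zone", "tweet_lang", "utc_offset"]),
])

X_train = features.fit_transform(train_df)
X_test = features.transform(test_df)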
Evaluation
We report three different performance values for each of the experiments: micro-accuracy, macro-accuracy and mean squared error (MSE). The accuracy values are computed as the result of dividing all the correctly classified instances by all the instances in the test set. The micro-accuracy is computed for the test set as a whole. For macro-accuracy, we compute the accuracy for each specific country in the test set, which are then averaged to compute the overall macro-accuracy. While the micro-accuracy measures the actual accuracy in the whole dataset, the macro-accuracy penalises the classifier that performs well only for the majority classes and rewards, instead, classifiers that perform well across multiple categories. This is especially crucial in a case like ours where the categories are highly imbalanced.
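For concreteness, a small helper computing both accuracy scores could look like this (a straightforward sketch, not code from the study):

from collections import defaultdict

def micro_macro_accuracy(y_true, y_pred):
    correct, total = defaultdict(int), defaultdict(int)
    for truth, prediction in zip(y_true, y_pred):
        total[truth] += 1
        correct[truth] += int(truth == prediction)
    micro = sum(correct.values()) / sum(total.values())
    # Macro-accuracy: per-country accuracy, averaged over the countries in the test set.
    macro = sum(correct[c] / total[c] for c in total) / len(total)
    return micro, macro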
The MSE is the average of the squared distance in kilometres between the predicted country and the actual, ground truth country, as shown in Equation EQREF21: $\mathrm{MSE} = \frac{1}{N} \sum_{i=1}^{N} d(c_i, \hat{c}_i)^2$, where $N$ is the number of tweets in the test set, $c_i$ and $\hat{c}_i$ are the actual and predicted countries for tweet $i$, and $d(\cdot, \cdot)$ is the distance in kilometres between their centroids.
In this computation, the distances between pairs of countries were calculated based on their centroids. We used the Countries of the World (COW) dataset produced by OpenGeonames.org to obtain the centroids of all countries. Having the latitude and longitude values of the centroids of all these countries, we then used the Haversine formula BIBREF52 , which accounts for the spherical shape of the Earth when computing the distance between two points and is often used as an acceptable approximation for distances on the Earth's surface. The Haversine distance between two points on a sphere, each defined by its longitude and latitude, is computed as shown in Equation EQREF22: $d = 2 r \arcsin\left( \sqrt{ \sin^2\left( \frac{\phi_2 - \phi_1}{2} \right) + \cos\phi_1 \cos\phi_2 \sin^2\left( \frac{\lambda_2 - \lambda_1}{2} \right) } \right)$
where $\phi_1$ and $\phi_2$ are the latitudes of point 1 and point 2, $\lambda_1$ and $\lambda_2$ are the longitudes of point 1 and point 2, and $r$ is the radius of the Earth, which is estimated to be 6,371 km.
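The distance computation and the resulting error measure can be sketched as follows; country_centroid is a hypothetical lookup from country code to the (latitude, longitude) of its centroid in the COW data.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371

def haversine(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometres between two points on the Earth.
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def mse_km(y_true, y_pred, country_centroid):
    # Mean of squared centroid-to-centroid distances between actual and predicted countries.
    squared = [haversine(*country_centroid[t], *country_centroid[p]) ** 2
               for t, p in zip(y_true, y_pred)]
    return sum(squared) / len(squared)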
Classification Results
In this section, we present results for different location classification experiments. First, we look at the performance of classifiers that use a single feature. Then, we present the results for classifiers combining multiple features. To conclude, we examine the results in more depth by looking at the performance by country, as well as error analysis.
Single Feature
Table TABREF24 shows the results for the classification on the TC2014 dataset with two different approaches using GeoNames, one based on population (the most populous city is chosen when there are different options for a name) and one based on relevance (the city name that most resembles the input string). In this dataset, 65.82% of the tweets have a non-empty string in the location field; for the rest of tweets, we pick the most popular country in the dataset as the output of the approach based on GeoNames. The table shows values of micro- and macro-accuracy.
There is no big difference between the two approaches based on GeoNames when we look at micro-accuracy. However, this accuracy is slightly better distributed across countries when we use the approach based on relevance, as can be seen from the macro-accuracy values. In what follows, we consider the relevance-based GeoNames approach as the baseline that solely relies on a database matching the user's profile location and compare with the use of classifiers that exploit additional features available in a tweet.
Table TABREF26 shows the classification results, each case making use of only one of the eight features under study. This table includes performance values when we applied the classifier on both datasets, TC2014 and TC2015. The additional column, “Diff.”, shows the relative difference in performance for each of these datasets, i.e., measuring the extent to which a model learned from the TC2014 dataset can still be applied to the TC2015 test set. Note that while higher values are desired for micro-accuracy and macro-accuracy, lower values are optimal for MSE.
If we look at the micro-accuracy scores, the results suggest that three approaches stand out over the rest. These are tweet content, tweet language and user language, which are the only three approaches to get a micro-accuracy score above 0.5. However, these three approaches leave much to be desired when we evaluate them based on macro-accuracy scores, and therefore they fail to balance the classification well. Instead, the users' self-reported location (user location) achieves the highest macro-accuracy scores, while micro-accuracy scores are only slightly lower. This is due to the fact that the classifier that only uses the user's profile location will be able to guess correctly a few cases for each country where users specify a correctly spelled, unambiguous location, but will fail to classify correctly the rest; hence the higher macro-accuracy is sensible according to these expectations. The MSE error rates suggest that tweet content and tweet language are the best in getting the most proximate classifications. We believe that this is due to the proximity of many countries that speak the same language (e.g., Germany and Austria, or Argentina and Chile), in which case the classifier that relies on tweet language or content will often choose a neighbouring country given the similarities they share in terms of topics and language. While most of these classifiers outperform the GeoNames baseline in terms of micro-accuracy, user location is the only feature to beat the baseline in terms of macro-accuracy. However, the small improvement over the baseline suggests that alternative approaches are needed for a better balanced classification performance.
Figure FIGREF25 shows a heat map with accuracy values of each of the features broken down by country. We observe the best distributed accuracy across countries is with the use of user location as a feature. However, other features are doing significantly better classifying tweets that belong to some of the major countries such as the USA (better classified by tweet language or user language), Russia (better classified by tweet language) or Brazil (better classified by tweet language, user name or tweet content). This emphasises the necessity to explore further the differences between each country's characteristics.
As we noted above, a remarkable characteristic of our datasets (and the reality of Twitter itself) is the high imbalance in the distribution of tweets across countries, where a few countries account for a large majority of the tweets and many countries in the tail account for very few tweets. The fact that the classifier has to determine which of the 217 countries a tweet belongs to substantially complicates the task. To quantify this, and to explore the ability to boost performance on the countries with highest presence, we also performed classification experiments on the top 25 countries. These top 25 countries account for as many as 90.22% of the tweets; consequently, being able to boost performance on these 25 countries, while assuming that the system will miss the rest, can make it a more achievable task where the overall performance gets improved.
To perform the classification on the top countries, we removed the tweets from countries that do not belong to the top 25 list from the training set. Including tweets from the remaining countries would add a noisy category to the training set, given the diversity of that new category. However, for obvious reasons, we cannot do the same for the test set. For the purposes of experimentation, we assign the rest of the tweets in the test set a different, 26th label, meaning that they belong to other countries. Our experiments on the top 25 countries will then have a training set with 25 categories to learn from and test sets with 26 categories, where the classifier will never predict the 26th category.
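This filtering and relabelling step can be sketched with pandas as follows (column and label names are illustrative):

# The 25 most frequent countries in the training portion of TC2014.
top25 = train["country"].value_counts().head(25).index

# Training set: keep only tweets from the top 25 countries.
train_top25 = train[train["country"].isin(top25)]

# Test set: keep every tweet, but collapse all remaining countries into a 26th
# "other" label that the classifier never predicts.
test_top26 = test.copy()
test_top26.loc[~test_top26["country"].isin(top25), "country"] = "other"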
Table TABREF27 shows the results for the experiments on the top 25 countries. The overall tendency is very similar to that of the classifiers applied to all the countries in the world, with an expected overall boost in macro-accuracy values. However, we see a substantial improvement with the use of content as a feature, which now outperforms tweet language in micro-accuracy scores as well as user location in macro-accuracy scores. Tweet content actually becomes the best performing feature with the reduced set of 25 countries. Classification on a reduced subset of countries can substantially boost performance, even assuming that part of the dataset will be misclassified. In fact, classification on this optimised setting outperforms by far the baseline using GeoNames. Not only does the top performing feature, tweet content, improve its performance; other features that performed poorly before, such as tweet language, time zone or user language, also perform significantly better, likewise outperforming the GeoNames baseline. This further motivates our subsequent goal of studying combinations of features to further boost the performance of the classifier applied to the top 25 countries.
Feature Combinations
Having seen that different features give rise to gains in different ways, testing the performance of combinations of multiple features seemed like a wise option. We performed these combinations of features by appending the vectors for each of the features into a single vector. We tested all 255 possible combinations using the eight features under study. We only report the best performing combinations here in the interest of space and clarity.
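The exhaustive sweep over feature combinations can be sketched as follows; feature_matrices is a hypothetical dictionary mapping each feature name to its vectorised (sparse) matrix.

from itertools import combinations
from scipy.sparse import hstack

FEATURES = ["user_location", "user_lang", "time_zone", "tweet_lang",
            "offset", "user_name", "description", "content"]

# All 2^8 - 1 = 255 non-empty subsets of the eight features.
all_combinations = [subset
                    for size in range(1, len(FEATURES) + 1)
                    for subset in combinations(FEATURES, size)]

for subset in all_combinations:
    # Append the per-feature vectors into a single design matrix for this combination.
    X = hstack([feature_matrices[name] for name in subset])
    # ... train and evaluate the weighted MaxEnt classifier on X ...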
Table TABREF29 shows the best combination in each case for the TC2014 and TC2015 datasets, as well as for the classifiers that consider all the countries in the datasets and only the top 25 countries. The table also shows the performance of the best single feature as well as the baseline classifier by BIBREF12 to facilitate comparison, as well as the improvement in performance when using a combination of features over that of a single feature. We observe that the selection of an appropriate combination of features can actually lead to a substantial increase in terms of all micro-accuracy, macro-accuracy and MSE. These improvements are especially remarkable when we look at the MSE scores, where the improvement is always above 50%. Improvements in terms of micro-accuracy and macro-accuracy scores are also always above 20%, but are especially high for micro-accuracy (50%+) when we classify for all the countries, and for macro-accuracy (40%+) when we classify for the top 25 countries. These results suggest that the use of a single feature, as it is the case with most previous work using e.g. only tweet content, can be substantially improved by using more features. In fact, our results suggest that the combination of many features is usually best; we need to combine seven of the eight features (all but offset) in three of the cases, and six features in the other case (all but description and offset). As a result, we get performance values above 85% in terms of macro-accuracy for the top 25 countries. These performance scores are also remarkably higher than those of the classifier by BIBREF12 , both in terms of micro- and macro-accuracy.
Interestingly, the combination of features has led to a significant improvement in performance, with a better balance across countries. To complement this analysis, we believe it is important to understand the differences among countries. Will different sets of features be useful for accurate classification for each country? Are we perhaps doing very well for some countries with certain combinations that are, in turn, bad for other countries? To explore this further, we now take a closer look at the performance broken down by country.
Breakdown of Countries
Given the remarkable differences among countries we observed (Figure FIGREF25 ) when exploring how different features are useful for different countries, we take a closer look at the performance of different classifiers for each of the top 25 countries. As we are now looking at each country separately, we use precision, recall and F1 scores as more appropriate evaluation measures that better capture the extent to which a country's tweets are being correctly categorised. We look at the best combination of features for each country in terms of F1 score and analyse the set of features that lead to the best performance in each case. We show the results of this analysis in Table TABREF31 .
The results show that very different approaches lead to optimal results for each country, revealing the different features that characterise each country. One striking observation we make from the ranking of country accuracies is that seven of the top eight ranking countries have unique characteristics, especially when it comes to language; except for the USA, these countries have a language that is not shared with any other country in the list. Interestingly, the best approach for most of these countries include either or both of tweet language or user language. When it comes to user language, this means that users in these countries have a strong inclination towards setting the user interface in their own language instead of the default language. In the case of tweet language, this mainly reflects a combination of two things, one being that users in these countries tend to tweet mostly in their own language, while the other is that Twitter's language identifier is very accurate in these cases. Further down in the list, we see the Spanish and English speaking countries, which seem to be harder to classify because of the numerous commonalities with one another, both in terms of language as well as in terms of content, given their cultural and geographical proximity.
All of the top 25 countries actually benefit from a combination of features, as there is no single case in which the use of only one feature performs best. Most of the countries in fact benefit from combining four or more features, with the only exceptions being Saudi Arabia –two features– and Japan –three features. Looking at the utility of features (see last row of the table showing totals), the features that are useful for TC2014 in most of the cases include user location, tweet content and user name, while offset and tweet language are the least useful. When we look at the combinations that perform best for new tweets –i.e. TC2015–, we see that in the majority of the cases the optimal combination is a reduced subset of that for TC2014 (green rows). This suggests that there are some features that perform well when classifying tweets from the same time frame as the training data, but whose performance drops when applied to new collections of tweets. However, one can get comparable performance when the right combination of features is chosen. As our results suggest, the features whose utility tends to fade include especially user description, with a remarkable drop from 19 to 1 case where it is useful, but also to a lesser extent tweet language, offset, time zone and user language. On the other hand, tweet content, user name and user location are the features that are as useful when applied to new tweets.
Finally, looking at the performance difference of countries between TC2014 and TC2015, there is no big gap in most of the cases and the differences are mostly within ±5%. However, there are a few cases where the performance drops drastically when we apply the classifier to the new dataset. This is the case for Saudi Arabia, the Netherlands and France, whose performance in TC2015 drops between 9% and 21% from that in TC2014. The highest improvement occurs for Germany, India and South Africa, whose performance in TC2015 increases by between 4% and 11% over that in TC2014.
Error Analysis
To shed some light on the reasons why some countries are not classified as accurately, we looked at the errors that the classifiers are making. Overall, if we put together all correct classifications by any of the classifiers, we would be able to get a micro-accuracy of up to 99.1% as an upper bound estimation for the tweets that belong to one of the top 25 countries. This raises expectations in that nearly all users can be accurately classified in some way by using the right classifier. However, many countries share similar (or common) characteristics, which often leads to mistakes between those countries. To better understand this, we look at the confusion matrix for the top 25 countries.
The confusion matrix in Table SECREF32 shows the aggregated misclassifications for all the 255 classifiers applied to the top 25 countries. The values highlighted in grey refer to correct guesses (diagonal). In red, we highlight misclassifications exceeding 10% of a country's tweets, in orange those exceeding 5% and in yellow those exceeding 2%.
Aggregated confusion matrix for all classifiers on the top 25 countries. (ar: Argentina, au: Australia, br: Brazil, ca: Canada, cl: Chile, co: Colombia, de: Germany, es: Spain, fr: France, gb: United Kingdom, id: Indonesia, in: India, it: Italy, jp: Japan, mx: Mexico, my: Malaysia, nl: The Netherlands, ph: Philippines, ru: Russia, sa: Saudi Arabia, th: Thailand, tr: Turkey, us: United States, ve: Venezuela, za: South Africa)
On the positive side, some of the countries have very small misclassifications. Brazil and Turkey have misclassifications of less than 2% (no yellow, orange or red cells). Other countries, including France, Indonesia, Italy, Japan and the USA, have misclassifications of less than 5% (no red or orange cells). These are mostly countries with unique characteristics with respect to the rest of the top 25 countries; they predominantly use a language that is not used by any other in the list, except the USA, which has the advantage of having the majority of tweets. However, a striking observation is the large percentage of misclassifications involving Spanish speaking countries, which include Argentina, Chile, Colombia, Spain, Mexico and Venezuela. In most of these cases the high number of misclassifications occurs in both directions for each pair of countries. This is an additional difficulty that one might have expected, given that all of them share cultural and linguistic commonalities, especially for using the same language and hence overlapping content. Moreover, the Latin American countries often share the time zone and, while the time zone is different for Spain, many of the cities in the Latin American countries are named after Spanish cities (e.g., Córdoba in Argentina, León in Mexico, Valencia in Venezuela, Cartagena in Colombia or Santiago in Chile, all of which are also Spanish cities), which makes the distinction from Spain more challenging if only user location is used. Similarly, we also observe a large amount of misclassifications involving English speaking countries, e.g. Australia, the UK, Canada and the USA. The majority of the orange misclassifications (5%-10%) are between Spanish and English speaking countries, with the exception of Chile and Argentina, which are even higher (10%+) and which we surmise is due to their proximity and cultural similarities. Finally, many misclassifications involve the United States, which account for the majority of red misclassifications (10%+), and which is not surprising since it is the predominant country with about 20% of tweets.
Discussion
Our experiments and analysis on over 5 million geolocated tweets from unique users reveal insights into country-level geolocation of tweets in real time. Our experiments only make use of features inherent in the tweets to enable real-time classification. This can be invaluable when curation of the tweet stream is needed for applications such as country-specific trending topic detection BIBREF53 , or for more specific applications where only tweets coming from a specific country are sought, e.g. sentiment analysis or reputation management BIBREF54 . The identification of the country of origin will also help mitigate problems caused by the limited availability of demographic details for Twitter users BIBREF55 .
We found that one of the most commonly used approaches, which is the use of gazetteers such as GeoNames to match the user's self-reported location with a place in the world, performs reasonably well in terms of macro-accuracy, but fails in terms of micro-accuracy, i.e. without high accuracy for most countries. A classifier that uses a single feature, such as the self-reported location of a user, outperforms the GeoNames baseline in terms of micro-accuracy, as well as slightly in terms of macro-accuracy. The main challenge is that it has to deal with as many as 217 countries, making the task especially difficult. To overcome this, we have tested our classifier on a reduced subset of the top 25 countries, which still account for more than 90% of the whole Twitter stream. In this case, we found that this classifier can substantially outperform both the GeoNames baseline and the state-of-the-art real-time tweet geolocation classifier by BIBREF12 . Tweet content alone then becomes the most useful feature.
Further testing with combinations of multiple features, we found that performance can be substantially improved, although one needs to be careful when picking the features to be used. What is interesting is that the classifier trained on data from the same time frame as the test set can be effectively applied to new tweets, which we verified on tweets posted a year later. The combination of features that works well for the test set in the same time frame can be applied to the new tweets in most cases, achieving similar performance values. However, it is important to consider that the utility of some features drops over time, which is especially the case of user description, but also to a lesser extent other features like offset and tweet language. On the positive side, features like tweet content, user location and user name are among the most useful features for classifying new tweets. One may also choose to regularly update the classifier by training with new tweets, as BIBREF12 suggested, however, in the interest of keeping a model for longer and reducing the cost of updating models, we show that the choice of the appropriate features can be as effective (i.e. achieving macro-accuracy scores of 0.858 and 0.853 for tweets within the same time frame and new tweets, respectively). The scenario is quite different when one wants to identify tweets from a specific country, given that different sets of features lead to more accurate classifications for different countries, which do not necessarily match with the overall best approach. By picking the right combination of features one can achieve classification performances for a country higher than 0.8 and even above 0.9 in terms of F1 score in cases where a country has unique characteristics such as a language that is not spoken in other countries or a unique time zone. However, these performance values tend to drop when one aims to identify tweets for a country that has common characteristics with other countries; this is especially true for English and Spanish speaking countries, among which many are large countries that speak the same language, share similar contents and have the same time zone (e.g., Chile and Argentina, or Canada and the USA).
The use of geolocated tweets to build a collection of tweets with a location assigned is a widely accepted practice, although the applicability of a model trained on geolocated tweets to then classify non-geolocated tweets has not been studied in depth. In previous work, BIBREF41 suggested that a model trained on geotagged data is expected to generalise well to non-geotagged data when one wants to classify users. For our case study with tweets rather than users, we performed a comparative analysis of geolocated and non-geolocated tweets in the time frame of our TC2014 dataset. Looking at the ranked frequencies for each feature, we found high correlations ranging from INLINEFORM0 to INLINEFORM1 for seven of the features under study across the subsets of geolocated and non-geolocated tweets, except for content leading to lower correlation ( INLINEFORM2 ). This indicates that non-geolocated tweets have similar characteristics and that a model trained on geolocated tweets could be effectively applied, reinforcing our findings that the use of content alone, as in most previous work, does not suffice, and combination of features is recommended. Empirical experimentation on non-geolocated tweets would help quantify this further; however an alternative data collection and annotation methodology should be defined for this purpose, which is beyond the scope of this work.
In summary, the results suggest that an appropriate selection of tweet features can lead to accurate, real-time classification of the most populous countries in terms of volume. Interestingly, a model trained from historical tweets can also be applied to tweets collected later in time when the topics that users talk about may be completely different. Having this classifier in place, one may then want to perform finer-grained geolocation of tweets within a country. For instance, during breaking news, one may want to identify reports from eyewitnesses on the ground and therefore fine-grained geolocation would be crucial to identify tweets in the area.
Conclusion
To the best of our knowledge, this is the first study performing a comprehensive analysis of the usefulness of tweet-inherent features to automatically infer the country of origin of tweets in a real-time scenario from a global stream of tweets written in any language. Most previous work focused on classifying tweets coming from a single country and hence assumed that tweets from that country were already identified. Where previous work had considered tweets from all over the world, the set of features employed for the classification included features, such as a user's social network, that are not readily available within a tweet and so cannot feasibly be used in a scenario where tweets need to be classified in real-time as they are collected from the streaming API. Moreover, previous attempts to geolocate global tweets tended to restrict their collection to tweets from a list of cities, as well as to tweets in English; this means that they did not consider the entire stream, but only a set of cities, which assumes prior preprocessing. Finally, our study uses two datasets collected a year apart from each other, to test the ability to classify new tweets with a classifier trained on older tweets. Our experiments and analysis reveal insights that can be used effectively to build an application that classifies tweets by country in real time, either when the goal is to organise content by country or when one wants to identify all the content posted from a specific country.
In the future we plan to test alternative cost-sensitive learning approaches to the one used here, focusing especially on collection of more data for under-represented countries, so that the classifier can be further improved for all the countries. Furthermore, we plan to explore more sophisticated approaches for content analysis, e.g. detection of topics in content (e.g. do some countries talk more about football than others?), as well as semantic treatment of the content. We also aim to develop finer-grained classifiers that take the output of the country-level classifier as input.
Acknowledgments
This work has been supported by the PHEME FP7 project (grant No. 611233), the Warwick University Higher Education Impact Fund, an ESRC Impact Acceleration Award, EPSRC Impact Acceleration Account (grant no. EP/K503940/1) and EPSRC grant EP/L016400/1. We used the MidPlus computational facilities, supported by QMUL Research-IT and funded by EPSRC grant EP/K000128/1. | Unanswerable |
0b24b5a652d674d4694668d889643bc1accf18ef | 0b24b5a652d674d4694668d889643bc1accf18ef_0 | Q: How did they evaluate the system?
Text: Credits
This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Mitchell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.
Introduction
The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper.
General Instructions
Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.
By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top.
The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out.
The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline.
The Ruler
The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera-ready copy should not contain a ruler. (LaTeX users may uncomment the \aclfinalcopy command in the document preamble.)
Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ).
Electronically-available resources
NAACL-HLT provides this description in LaTeX2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the LaTeX2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings.
Format of Electronic Manuscript
For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from LaTeX using the pdflatex command. If your version of LaTeX produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.
It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Alternatively, use the command \special{papersize=210mm,297mm} in the LaTeX preamble (directly below the \usepackage commands) and then compile with dvipdf and/or pdflatex.
Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible.
Layout
Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:
Left and right margins: 2.5 cm
Top margin: 2.5 cm
Bottom margin: 2.5 cm
Column width: 7.7 cm
Column height: 24.7 cm
Gap between columns: 0.6 cm
Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible.
Fonts
For reasons of uniformity, Adobe's Times Roman font should be used. In LaTeX2e this is accomplished by putting
\usepackage{times}
\usepackage{latexsym}
in the preamble. If Times Roman is unavailable, use Computer Modern Roman (LaTeX2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font.
The First Page
Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.
Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.
The title, author names and addresses should be completely identical to those entered into the electronic paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting you; so it is in your own interest to double-check that the information is consistent.
Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.
Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.
Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title.
Sections
Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.
Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided LaTeX style, the former is accomplished using \cite and the latter with \shortcite or \newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \cite command, e.g., \cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. Also refrain from using full citations as sentence constituents.
We suggest that instead of
“ BIBREF0 showed that ...”
you use
“Gusfield Gusfield:97 showed that ...”
If you are using the provided LaTeX and BibTeX style files, you can use the command \citet (cite in text) to get “author (year)” citations.
If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option:
\usepackage[nohyperref]{naaclhlt2019}
Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use BibTeX records that contain DOIs or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/.
As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography.
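A hedged sketch of what a BibTeX record with a DOI (or, failing that, an ACL Anthology URL) might look like; every field value below is a placeholder to be replaced with the real metadata from the ACL Anthology.
@inproceedings{placeholder:key,
  title     = {Paper Title},
  author    = {Lastname, Firstname},
  booktitle = {Proceedings of ...},
  year      = {2019},
  doi       = {10.xxxx/xxxxx},
  url       = {https://aclanthology.info/papers/...},
}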
As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g.,
“We previously showed BIBREF0 ...”
should be avoided. Instead, use citations such as
“Gusfield (1997) previously showed ... ”
Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form.
Please do not use anonymous citations and do not include acknowledgements when submitting your papers. Papers that do not conform to these requirements may be rejected without review.
References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \bibliography commands near the end for more.
Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 .
The LaTeX and BibTeX style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above.
Example citing an arxiv paper: BIBREF7 .
Example article in journal citation: BIBREF8 .
Example article in proceedings, with location: BIBREF9 .
Example article in proceedings, without location: BIBREF10 .
See corresponding .bib file for further details.
Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.
Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix.
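For instance, using the standard \appendix switch (also mentioned in the appendix notes at the end of this document), a lettered appendix is simply a section declared after it; the title below is the document's own placeholder.
\appendix
\section{Title of Appendix}
\label{app:example}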
Footnotes
Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line.
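For example, a footnote attached to the running text is produced with the standard command:
... as reported in the original study.\footnote{This is an example footnote; it appears at the bottom of the page in 9 point type, separated from the text by a line.}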
Graphics
Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink.
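As an illustrative sketch of a figure spanning both columns (assuming the graphicx package and a placeholder file name):
\usepackage{graphicx}  % in the preamble
\begin{figure*}[t]
  \centering
  \includegraphics[width=\textwidth]{overview-figure}  % placeholder graphics file
  \caption{Overview of the system.}
  \label{fig:overview}
\end{figure*}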
Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments.
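A minimal single-column table following these caption rules (caption placed below the body); the sizes are taken from the font instructions above.
\begin{table}[t]
  \centering
  \begin{tabular}{lr}
    \hline
    Element & Size \\
    \hline
    Body text & 11 pt \\
    Captions  & 10 pt \\
    \hline
  \end{tabular}
  \caption{Caption of the Table.}
  \label{tab:example}
\end{table}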
Accessibility
In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color.
Translation of non-English Terms
It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”.
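As a simple illustration of this ordering (the particular word is only an example):
日本語 nihongo “Japanese language”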
Length of Submission
The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review.
NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix "Appendices" and Appendix "Supplemental Material" for further information.
Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source.
Acknowledgments
The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review.
Preparing References:
Include your own bib file like this: \bibliographystyle{acl_natbib} \bibliography{naaclhlt2019}
where naaclhlt2019 corresponds to a naaclhlt2019.bib file.
Appendices
Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \appendix before any appendix section to switch the section numbering over to letters.
Supplemental Material
Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper.) The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material.
A: Unanswerable
Q: Where did they get training data?
A: AmazonQA and ConciergeQA datasets
Q: What extraction model did they use?
This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.
Introduction
The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper.
General Instructions
Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.
By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top.
The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out.
The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline.
The Ruler
The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. ( users may uncomment the \aclfinalcopy command in the document preamble.)
Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ).
Electronically-available resources
NAACL-HLT provides this description in 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings.
Format of Electronic Manuscript
For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from using the pdflatex command. If your version of produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.
It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \special{papersize=210mm,297mm} in the latex preamble (directly below the \usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some.
Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible.
Layout
Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:
Left and right margins: 2.5 cm
Top margin: 2.5 cm
Bottom margin: 2.5 cm
Column width: 7.7 cm
Column height: 24.7 cm
Gap between columns: 0.6 cm
Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible.
Fonts
For reasons of uniformity, Adobe's Times Roman font should be used. In 2e this is accomplished by putting
\usepackage{times}
\usepackage{latexsym}
in the preamble. If Times Roman is unavailable, use Computer Modern Roman (2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font.
The First Page
Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.
Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.
The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent.
Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.
Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.
Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title.
Sections
Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.
Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \cite and the latter with \shortcite or \newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \cite command, e.g., \cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents.
We suggest that instead of
“ BIBREF0 showed that ...”
you use
“Gusfield Gusfield:97 showed that ...”
If you are using the provided and Bib style files, you can use the command \citet (cite in text) to get “author (year)” citations.
If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option:
\usepackage[nohyperref]{naaclhlt2019}
Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/.
As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography.
As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g.,
“We previously showed BIBREF0 ...”
should be avoided. Instead, use citations such as
“ BIBREF0 Gusfield:97 previously showed ... ”
Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form.
Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review.
References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \bibliography commands near the end for more.
Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 .
The LaTeX and BibTeX style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above.
Example citing an arxiv paper: BIBREF7 .
Example article in journal citation: BIBREF8 .
Example article in proceedings, with location: BIBREF9 .
Example article in proceedings, without location: BIBREF10 .
See corresponding .bib file for further details.
Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.
Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix.
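One minimal way to lay this out (the title is a placeholder):
\appendix                     % switches section numbering to letters
\section{Title of Appendix}   % appears as Appendix A. Title of Appendix
\label{sec:appendix-a}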
Footnotes
Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line.
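For instance (the footnote text is illustrative):
This sentence carries a footnote.\footnote{A 9 point footnote, separated from the text by a line and placed at the bottom of the page.}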
Graphics
Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink.
Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments.
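A minimal sketch of a captioned, labelled figure (the graphic file name is a placeholder):
\begin{figure}[t]
  \centering
  \includegraphics[width=\columnwidth]{example-graphic.pdf}
  \caption{Caption of the Figure.}
  \label{fig:example}
\end{figure}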
Accessibility
In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color.
Translation of non-English Terms
It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”.
Length of Submission
The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review.
NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix "Appendices" and Appendix "Supplemental Material" for further information.
Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source.
Acknowledgments
The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review.
Preparing References:
Include your own bib file like this: \bibliographystyle{acl_natbib} \bibliography{naaclhlt2019}
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proc. ACL '15/IJCNLP '15, pages 344–354.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR '15.
Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proc. IJCAI '07, pages 2670–2676.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proc. EMNLP '13, pages 1533–1544.
Nikita Bhutani, HV Jagadish, and Dragomir Radev. 2016. Nested propositions in open information extraction. In Proc. EMNLP '16, pages 55–64.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proc. NIPS '13, pages 2787–2795.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. EMNLP '14, pages 1724–1734.
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proc. ACL '18, pages 407–413.
Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proc. EMNLP '11, pages 1535–1545.
Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. In Proc. NAACL-HLT '15, pages 851–861.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proc. ACL '17, pages 963–973.
Prachi Jain, Shikhar Murty, Mausam, and Soumen Chakrabarti. 2018. Mitigating the effect of out-of-vocabulary entity pairs in matrix factorization for KB inference. In Proc. IJCAI '18, pages 4122–4129.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL '17 (System Demonstrations), pages 67–72.
Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. In Proc. ICLR '16.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attention-based neural machine translation. In Proc. EMNLP '15, pages 1412–1421.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using $t$-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, et al. 2012. Open language learning for information extraction. In Proc. EMNLP '12, pages 523–534.
Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proc. WWW '16, pages 625–635.
Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proc. AAAI '16, pages 1955–1961.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP '14, pages 1532–1543.
Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proc. EMNLP '17, pages 338–348.
Subhashree S and P Sreenivasa Kumar. 2018. Enriching domain ontologies using question-answer datasets. In Proc. CoDS-COMAD '18, pages 329–332.
Swarnadeep Saha, Harinder Pal, et al. 2017. Bootstrapping for numerical open IE. In Proc. ACL '17, pages 317–323.
Denis Savenkov, Wei-Lwun Lu, Jeff Dalton, and Eugene Agichtein. 2015. Relation extraction from community generated question-answer pairs. In Proc. NAACL-HLT '15, pages 96–102.
Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proc. EMNLP '16.
Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proc. ACL '18, pages 885–895.
Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proc. EACL '17, pages 1063–1073.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. NIPS '15, pages 2692–2700.
Mengting Wan and Julian McAuley. 2016. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In Proc. ICDM '16, pages 489–498.
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743.
Zeqiu Wu, Xiang Ren, Frank F. Xu, Ji Li, and Jiawei Han. 2018. Indirect supervision for relation extraction using question-answer pairs. In Proc. WSDM '18, pages 646–654.
Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proc. ACL '16, pages 1341–1350.
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In Proc. ICLR '17.
Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proc. NAACL-HLT '07 (Demonstrations), pages 25–26.
Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proc. ACL '17, pages 440–450.
Biao Zhang, Deyi Xiong, and Jinsong Su. 2016. Cseq2seq: Cyclic sequence-to-sequence learning. arXiv preprint arXiv:1607.08725.
Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. A constrained sequence-to-sequence neural model for sentence simplification. arXiv preprint arXiv:1704.02312.
Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proc. NAACL-HLT '16, pages 30–34.
where naaclhlt2019 corresponds to a naaclhlt2019.bib file.
Appendices
Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \appendix before any appendix section to switch the section numbering over to letters.
Supplemental Material
Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material.
| Multi-Encoder, Constrained-Decoder model |
d70ba6053e245ee4179c26a5dabcad37561c6af0 | d70ba6053e245ee4179c26a5dabcad37561c6af0_0 | Q: Which datasets did they experiment on?
Text: Credits
This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Mitchell and Stephanie Lukin, 2017/2018 (NA)ACL BibTeX suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.
Introduction
The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper.
General Instructions
Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.
By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top.
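Concretely, the relevant lines at the top of the file look roughly like this (1234 is a placeholder paper ID):
\def\aclpaperid{1234}   % the ID assigned by softconf
%\aclfinalcopy          % uncomment only for the camera-ready version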
The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out.
The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline.
The Ruler
The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. (LaTeX users may uncomment the \aclfinalcopy command in the document preamble.)
Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ).
Electronically-available resources
NAACL-HLT provides this description in LaTeX2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the LaTeX2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings.
Format of Electronic Manuscript
For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from LaTeX using the pdflatex command. If your version of LaTeX produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.
It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Alternatively, use the command \special{papersize=210mm,297mm} in the LaTeX preamble (directly below the \usepackage commands) and then build with dvipdf and/or pdflatex.
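Putting these pieces together, a preamble along the following lines (shown only as a sketch; the provided style file already sets most of this up) keeps the output on A4:
\documentclass[11pt,a4paper]{article}
\usepackage{naaclhlt2019}
\usepackage{times}
\usepackage{latexsym}
\special{papersize=210mm,297mm}   % A4 page size for the dvips/dvipdf route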
Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible.
Layout
Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:
Left and right margins: 2.5 cm
Top margin: 2.5 cm
Bottom margin: 2.5 cm
Column width: 7.7 cm
Column height: 24.7 cm
Gap between columns: 0.6 cm
Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible.
Fonts
For reasons of uniformity, Adobe's Times Roman font should be used. In LaTeX2e this is accomplished by putting
\usepackage{times}
\usepackage{latexsym}
in the preamble. If Times Roman is unavailable, use Computer Modern Roman (LaTeX2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font.
The First Page
Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.
Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.
The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent.
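A sketch of the corresponding title block (names, affiliations, and addresses are placeholders; \And separates author columns in the ACL style files):
\title{Title of the Paper}
\author{First Author \\
  Affiliation \\
  Address line \\
  {\tt first@example.org} \\\And
  Second Author \\
  Affiliation \\
  Address line \\
  {\tt second@example.org} \\}
\maketitle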
Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.
Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.
Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title.
Sections
Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.
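For example (the titles are placeholders; the starred form is one way to leave subsubsections unnumbered):
\section{Introduction}            % numbered 1
\subsection{Data Analysis}        % numbered 1.1
\subsubsection*{Implementation Notes}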
| ConciergeQA and AmazonQA |
802687121a98ba4d7df1f8040ea0dc1cc9565b69 | 802687121a98ba4d7df1f8040ea0dc1cc9565b69_0 | Q: What types of facts can be extracted from QA pairs that can't be extracted from general text?
Text: Credits
This document has been adapted from the instructions for earlier ACL and NAACL proceedings, including those for ACL 2018 by Shay Cohen, Kevin Gimpel, and Wei Lu, NAACL 2018 by Margaret Michell and Stephanie Lukin, 2017/2018 (NA)ACL bibtex suggestions from Jason Eisner, ACL 2017 by Dan Gildea and Min-Yen Kan, NAACL 2017 by Margaret Mitchell, ACL 2012 by Maggie Li and Michael White, those from ACL 2010 by Jing-Shing Chang and Philipp Koehn, those for ACL 2008 by JohannaD. Moore, Simone Teufel, James Allan, and Sadaoki Furui, those for ACL 2005 by Hwee Tou Ng and Kemal Oflazer, those for ACL 2002 by Eugene Charniak and Dekang Lin, and earlier ACL and EACL formats. Those versions were written by several people, including John Chen, Henry S. Thompson and Donald Walker. Additional elements were taken from the formatting instructions of the International Joint Conference on Artificial Intelligence and the Conference on Computer Vision and Pattern Recognition.
Introduction
The following instructions are directed to authors of papers submitted to NAACL-HLT 2019 or accepted for publication in its proceedings. All authors are required to adhere to these specifications. Authors are required to provide a Portable Document Format (PDF) version of their papers. The proceedings are designed for printing on A4 paper.
General Instructions
Manuscripts must be in two-column format. Exceptions to the two-column format include the title, authors' names and complete addresses, which must be centered at the top of the first page, and any full-width figures or tables (see the guidelines in Subsection "The First Page" ). Type single-spaced. Start all pages directly under the top margin. See the guidelines later regarding formatting the first page. The manuscript should be printed single-sided and its length should not exceed the maximum page limit described in Section "Length of Submission" . Pages are numbered for initial submission. However, do not number the pages in the camera-ready version.
By uncommenting \aclfinalcopy at the top of this document, it will compile to produce an example of the camera-ready formatting; by leaving it commented out, the document will be anonymized for initial submission. When you first create your submission on softconf, please fill in your submitted paper ID where *** appears in the \def\aclpaperid{***} definition at the top.
The review process is double-blind, so do not include any author information (names, addresses) when submitting a paper for review. However, you should maintain space for names and addresses so that they will fit in the final (accepted) version. The NAACL-HLT 2019 style will create a titlebox space of 2.5in for you when \aclfinalcopy is commented out.
The author list for submissions should include all (and only) individuals who made substantial contributions to the work presented. Each author listed on a submission to NAACL-HLT 2019 will be notified of submissions, revisions and the final decision. No authors may be added to or removed from submissions to NAACL-HLT 2019 after the submission deadline.
The Ruler
The NAACL-HLT 2019 style defines a printed ruler which should be presented in the version submitted for review. The ruler is provided in order that reviewers may comment on particular lines in the paper without circumlocution. If you are preparing a document without the provided style files, please arrange for an equivalent ruler to appear on the final output pages. The presence or absence of the ruler should not change the appearance of any other content on the page. The camera ready copy should not contain a ruler. ( users may uncomment the \aclfinalcopy command in the document preamble.)
Reviewers: note that the ruler measurements do not align well with lines in the paper – this turns out to be very difficult to do well when the paper contains many figures and equations, and, when done, looks ugly. In most cases one would expect that the approximate location will be adequate, although you can also use fractional references (e.g., the first paragraph on this page ends at mark $108.5$ ).
Electronically-available resources
NAACL-HLT provides this description in 2e (naaclhlt2019.tex) and PDF format (naaclhlt2019.pdf), along with the 2e style file used to format it (naaclhlt2019.sty) and an ACL bibliography style (acl_natbib.bst) and example bibliography (naaclhlt2019.bib). These files are all available at http://naacl2019.org/downloads/ naaclhlt2019-latex.zip. We strongly recommend the use of these style files, which have been appropriately tailored for the NAACL-HLT 2019 proceedings.
Format of Electronic Manuscript
For the production of the electronic manuscript you must use Adobe's Portable Document Format (PDF). PDF files are usually produced from using the pdflatex command. If your version of produces Postscript files, you can convert these into PDF using ps2pdf or dvipdf. On Windows, you can also use Adobe Distiller to generate PDF.
Please make sure that your PDF file includes all the necessary fonts (especially tree diagrams, symbols, and fonts with Asian characters). When you print or create the PDF file, there is usually an option in your printer setup to include none, all or just non-standard fonts. Please make sure that you select the option of including ALL the fonts. Before sending it, test your PDF by printing it from a computer different from the one where it was created. Moreover, some word processors may generate very large PDF files, where each page is rendered as an image. Such images may reproduce poorly. In this case, try alternative ways to obtain the PDF. One way on some systems is to install a driver for a postscript printer, send your document to the printer specifying “Output to a file”, then convert the file to PDF.
It is of utmost importance to specify the A4 format (21 cm x 29.7 cm) when formatting the paper. When working with dvips, for instance, one should specify -t a4. Or using the command \special{papersize=210mm,297mm} in the latex preamble (directly below the \usepackage commands). Then using dvipdf and/or pdflatex which would make it easier for some.
Print-outs of the PDF file on A4 paper should be identical to the hardcopy version. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs as soon as possible.
Layout
Format manuscripts two columns to a page, in the manner these instructions are formatted. The exact dimensions for a page on A4 paper are:
Left and right margins: 2.5 cm
Top margin: 2.5 cm
Bottom margin: 2.5 cm
Column width: 7.7 cm
Column height: 24.7 cm
Gap between columns: 0.6 cm
Papers should not be submitted on any other paper size. If you cannot meet the above requirements about the production of your electronic submission, please contact the publication chairs above as soon as possible.
Fonts
For reasons of uniformity, Adobe's Times Roman font should be used. In 2e this is accomplished by putting
\usepackage{times}
\usepackage{latexsym}
in the preamble. If Times Roman is unavailable, use Computer Modern Roman (2e's default). Note that the latter is about 10% less dense than Adobe's Times Roman font.
The First Page
Center the title, author's name(s) and affiliation(s) across both columns. Do not use footnotes for affiliations. Do not include the paper ID number assigned during the submission process. Use the two-column format only when you begin the abstract.
Title: Place the title centered at the top of the first page, in a 15-point bold font. (For a complete guide to font sizes and styles, see Table 1 ) Long titles should be typed on two lines without a blank line intervening. Approximately, put the title at 2.5 cm from the top of the page, followed by a blank line, then the author's names(s), and the affiliation on the following line. Do not use only initials for given names (middle initials are allowed). Do not format surnames in all capitals (e.g., use “Mitchell” not “MITCHELL”). Do not format title and section headings in all capitals as well except for proper names (such as “BLEU”) that are conventionally in all capitals. The affiliation should contain the author's complete address, and if possible, an electronic mail address. Start the body of the first page 7.5 cm from the top of the page.
The title, author names and addresses should be completely identical to those entered to the electronical paper submission website in order to maintain the consistency of author information among all publications of the conference. If they are different, the publication chairs may resolve the difference without consulting with you; so it is in your own interest to double-check that the information is consistent.
Abstract: Type the abstract at the beginning of the first column. The width of the abstract text should be smaller than the width of the columns for the text in the body of the paper by about 0.6 cm on each side. Center the word Abstract in a 12 point bold font above the body of the abstract. The abstract should be a concise summary of the general thesis and conclusions of the paper. It should be no longer than 200 words. The abstract text should be in 10 point font.
Text: Begin typing the main body of the text immediately after the abstract, observing the two-column format as shown in the present document. Do not include page numbers.
Indent: Indent when starting a new paragraph, about 0.4 cm. Use 11 points for text and subsection headings, 12 points for section headings and 15 points for the title.
Sections
Headings: Type and label section and subsection headings in the style shown on the present document. Use numbered sections (Arabic numerals) in order to facilitate cross references. Number subsections with the section number and the subsection number separated by a dot, in Arabic numerals. Do not number subsubsections.
Citations: Citations within the text appear in parentheses as BIBREF0 or, if the author's name appears in the text itself, as Gusfield Gusfield:97. Using the provided style, the former is accomplished using \cite and the latter with \shortcite or \newcite. Collapse multiple citations as in BIBREF0 , BIBREF1 ; this is accomplished with the provided style using commas within the \cite command, e.g., \cite{Gusfield:97,Aho:72}. Append lowercase letters to the year in cases of ambiguities. Treat double authors as in BIBREF1 , but write as in BIBREF2 when more than two authors are involved. Collapse multiple citations as in BIBREF0 , BIBREF1 . Also refrain from using full citations as sentence constituents.
We suggest that instead of
“ BIBREF0 showed that ...”
you use
“Gusfield Gusfield:97 showed that ...”
If you are using the provided and Bib style files, you can use the command \citet (cite in text) to get “author (year)” citations.
If the Bib file contains DOI fields, the paper title in the references section will appear as a hyperlink to the DOI, using the hyperref package. To disable the hyperref package, load the style file with the nohyperref option:
\usepackage[nohyperref]{naaclhlt2019}
Digital Object Identifiers: As part of our work to make ACL materials more widely used and cited outside of our discipline, ACL has registered as a CrossRef member, as a registrant of Digital Object Identifiers (DOIs), the standard for registering permanent URNs for referencing scholarly materials. As of 2017, we are requiring all camera-ready references to contain the appropriate DOIs (or as a second resort, the hyperlinked ACL Anthology Identifier) to all cited works. Thus, please ensure that you use Bib records that contain DOI or URLs for any of the ACL materials that you reference. Appropriate records should be found for most materials in the current ACL Anthology at http://aclanthology.info/.
As examples, we cite BIBREF3 to show you how papers with a DOI will appear in the bibliography. We cite BIBREF4 to show how papers without a DOI but with an ACL Anthology Identifier will appear in the bibliography.
As reviewing will be double-blind, the submitted version of the papers should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g.,
“We previously showed BIBREF0 ...”
should be avoided. Instead, use citations such as
“ BIBREF0 Gusfield:97 previously showed ... ”
Any preliminary non-archival versions of submitted papers should be listed in the submission form but not in the review version of the paper. NAACL-HLT 2019 reviewers are generally aware that authors may present preliminary versions of their work in other venues, but will not be provided the list of previous presentations from the submission form.
Please do not use anonymous citations and do not include when submitting your papers. Papers that do not conform to these requirements may be rejected without review.
References: Gather the full set of references together under the heading References; place the section before any Appendices. Arrange the references alphabetically by first author, rather than by order of occurrence in the text. By using a .bib file, as in this template, this will be automatically handled for you. See the \bibliography commands near the end for more.
Provide as complete a citation as possible, using a consistent format, such as the one for Computational Linguistics or the one in the Publication Manual of the American Psychological Association BIBREF5 . Use of full names for authors rather than initials is preferred. A list of abbreviations for common computer science journals can be found in the ACM Computing Reviews BIBREF6 .
The and Bib style files provided roughly fit the American Psychological Association format, allowing regular citations, short citations and multiple citations as described above.
Example citing an arxiv paper: BIBREF7 .
Example article in journal citation: BIBREF8 .
Example article in proceedings, with location: BIBREF9 .
Example article in proceedings, without location: BIBREF10 .
See corresponding .bib file for further details.
Submissions should accurately reference prior and related work, including code and data. If a piece of prior work appeared in multiple venues, the version that appeared in a refereed, archival venue should be referenced. If multiple versions of a piece of prior work exist, the one used by the authors should be referenced. Authors should not rely on automated citation indices to provide accurate references for prior and related work.
Appendices: Appendices, if any, directly follow the text and the references (but see above). Letter them in sequence and provide an informative title: Appendix A. Title of Appendix.
Footnotes
Footnotes: Put footnotes at the bottom of the page and use 9 point font. They may be numbered or referred to by asterisks or other symbols. Footnotes should be separated from the text by a line.
Graphics
Illustrations: Place figures, tables, and photographs in the paper near where they are first discussed, rather than at the end, if possible. Wide illustrations may run across both columns. Color illustrations are discouraged, unless you have verified that they will be understandable when printed in black ink.
Captions: Provide a caption for every illustration; number each one sequentially in the form: “Figure 1. Caption of the Figure.” “Table 1. Caption of the Table.” Type the captions of the figures and tables below the body, using 10 point text. Captions should be placed below illustrations. Captions that are one line are centered (see Table 1 ). Captions longer than one line are left-aligned (see Table 2 ). Do not overwrite the default caption sizes. The naaclhlt2019.sty file is compatible with the caption and subcaption packages; do not add optional arguments.
Accessibility
In an effort to accommodate people who are color-blind (as well as those printing to paper), grayscale readability for all accepted papers will be encouraged. Color is not forbidden, but authors should ensure that tables and figures do not rely solely on color to convey critical distinctions. A simple criterion: All curves and points in your figures should be clearly distinguishable without color.
Translation of non-English Terms
It is also advised to supplement non-English characters and terms with appropriate transliterations and/or translations since not all readers understand all such characters and terms. Inline transliteration or translation can be represented in the order of: original-form transliteration “translation”.
Length of Submission
The NAACL-HLT 2019 main conference accepts submissions of long papers and short papers. Long papers may consist of up to eight (8) pages of content plus unlimited pages for references. Upon acceptance, final versions of long papers will be given one additional page – up to nine (9) pages of content plus unlimited pages for references – so that reviewers' comments can be taken into account. Short papers may consist of up to four (4) pages of content, plus unlimited pages for references. Upon acceptance, short papers will be given five (5) pages in the proceedings and unlimited pages for references. For both long and short papers, all illustrations and tables that are part of the main text must be accommodated within these page limits, observing the formatting instructions given in the present document. Papers that do not conform to the specified length and formatting requirements are subject to be rejected without review.
NAACL-HLT 2019 does encourage the submission of additional material that is relevant to the reviewers but not an integral part of the paper. There are two such types of material: appendices, which can be read, and non-readable supplementary materials, often data or code. Do not include this additional material in the same document as your main paper. Additional material must be submitted as one or more separate files, and must adhere to the same anonymity guidelines as the main paper. The paper must be self-contained: it is optional for reviewers to look at the supplementary material. Papers should not refer, for further detail, to documents, code or data resources that are not available to the reviewers. Refer to Appendix "Appendices" and Appendix "Supplemental Material" for further information.
Workshop chairs may have different rules for allowed length and whether supplemental material is welcome. As always, the respective call for papers is the authoritative source.
Acknowledgments
The acknowledgments should go immediately before the references. Do not number the acknowledgments section. Do not include this section when submitting your paper for review.
Preparing References:
Include your own bib file like this: \bibliographystyle{acl_natbib} \bibliography{naaclhlt2019}
Gabor Angeli, Melvin Jose Johnson Premkumar, and Christopher D Manning. 2015. Leveraging linguistic structure for open domain information extraction. In Proc. ACL '15/IJCNLP '15, pages 344–354.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR '15.
Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Proc. IJCAI '07, pages 2670–2676.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proc. EMNLP '13, pages 1533–1544.
Nikita Bhutani, HV Jagadish, and Dragomir Radev. 2016. Nested propositions in open information extraction. In Proc. EMNLP '16, pages 55–64.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Proc. NIPS '13, pages 2787–2795.
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In Proc. EMNLP '14, pages 1724–1734.
Lei Cui, Furu Wei, and Ming Zhou. 2018. Neural open information extraction. In Proc. ACL '18, pages 407–413.
Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. arXiv preprint arXiv:1809.02922.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proc. EMNLP '11, pages 1535–1545.
Ben Hixon, Peter Clark, and Hannaneh Hajishirzi. 2015. Learning knowledge graphs for question answering through conversational dialog. In Proc. NAACL-HLT '15, pages 851–861.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, and Luke Zettlemoyer. 2017. Learning a neural semantic parser from user feedback. In Proc. ACL '17, pages 963–973.
Prachi Jain, Shikhar Murty, Mausam, and Soumen Chakrabarti. 2018. Mitigating the effect of out-of-vocabulary entity pairs in matrix factorization for KB inference. In Proc. IJCAI '18, pages 4122–4129.
Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL '17 (System Demonstrations), pages 67–72.
Minh-Thang Luong, Quoc V Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2015a. Multi-task sequence to sequence learning. In Proc. ICLR '16.
Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015b. Effective approaches to attention-based neural machine translation. In Proc. EMNLP '15, pages 1412–1421.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using $t$ -SNE. Journal of Machine Learning Research, 9(Nov):2579–2605.
Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, Oren Etzioni, et al. 2012. Open language learning for information extraction. In Proc. EMNLP '12, pages 523–534.
Julian McAuley and Alex Yang. 2016. Addressing complex and subjective product-related queries with customer reviews. In Proc. WWW '16, pages 625–635.
Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In Proc. AAAI '16, pages 1955–1961.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proc. EMNLP '14, pages 1532–1543.
Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proc. EMNLP '17, pages 338–348.
Subhashree S and P Sreenivasa Kumar. 2018. Enriching domain ontologies using question-answer datasets. In Proc. CoDS-COMAD '18, pages 329–332.
Swarnadeep Saha, Harinder Pal, et al. 2017. Bootstrapping for numerical open ie. In Proc. ACL '17, pages 317–323.
Denis Savenkov, Wei-Lwun Lu, Jeff Dalton, and Eugene Agichtein. 2015. Relation extraction from community generated question-answer pairs. In Proc. NAACL-HLT '15, pages 96–102.
Gabriel Stanovsky and Ido Dagan. 2016. Creating a large benchmark for open information extraction. In Proc. EMNLP '16.
Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, and Ido Dagan. 2018. Supervised open information extraction. In Proc. ACL '18, pages 885–895.
Antonio Toral and Víctor M. Sánchez-Cartagena. 2017. A multifaceted evaluation of neural versus phrase-based machine translation for 9 language directions. In Proc. EACL '17, pages 1063–1073.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. NIPS '15, pages 2692–2700.
Mengting Wan and Julian McAuley. 2016. Modeling ambiguity, subjectivity, and diverging viewpoints in opinion question answering systems. In Proc. ICDM '16, pages 489–498.
Quan Wang, Zhendong Mao, Bin Wang, and Li Guo. 2017. Knowledge graph embedding: A survey of approaches and applications. IEEE Transactions on Knowledge and Data Engineering, 29(12):2724–2743.
Zeqiu Wu, Xiang Ren, Frank F. Xu, Ji Li, and Jiawei Han. 2018. Indirect supervision for relation extraction using question-answer pairs. In Proc. WSDM '18, pages 646–654.
Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proc. ACL '16, pages 1341–1350.
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dynamic coattention networks for question answering. In Proc. ICLR '17.
Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In Proc. NAACL-HLT '07 (Demonstrations), pages 25–26.
Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proc. ACL '17, pages 440–450.
Biao Zhang, Deyi Xiong, and Jinsong Su. 2016. Cseq2seq: Cyclic sequence-to-sequence learning. arXiv preprint arXiv:1607.08725.
Yaoyuan Zhang, Zhenxu Ye, Yansong Feng, Dongyan Zhao, and Rui Yan. 2017. A constrained sequence-to-sequence neural model for sentence simplification. arXiv preprint arXiv:1704.02312.
Barret Zoph and Kevin Knight. 2016. Multi-source neural translation. In Proc. NAACL-HLT '16, pages 30–34.
where naaclhlt2019 corresponds to a naaclhlt2019.bib file. Appendices Appendices are material that can be read, and include lemmas, formulas, proofs, and tables that are not critical to the reading and understanding of the paper. Appendices should be uploaded as supplementary material when submitting the paper for review. Upon acceptance, the appendices come after the references, as shown here. Use \appendix before any appendix section to switch the section numbering over to letters. Supplemental Material Submissions may include non-readable supplementary material used in the work and described in the paper. Any accompanying software and/or data should include licenses and documentation of research review as appropriate. Supplementary material may report preprocessing decisions, model parameters, and other details necessary for the replication of the experiments reported in the paper. Seemingly small preprocessing decisions can sometimes make a large difference in performance, so it is crucial to record such decisions to precisely characterize state-of-the-art methods. Nonetheless, supplementary material should be supplementary (rather than central) to the paper. Submissions that misuse the supplementary material may be rejected without review. Supplementary material may include explanations or details of proofs or derivations that do not fit into the paper, lists of features or feature templates, sample inputs and outputs for a system, pseudo-code or source code, and data. (Source code and data should be separate uploads, rather than part of the paper). The paper should not rely on the supplementary material: while the paper may refer to and cite the supplementary material and the supplementary material will be available to the reviewers, they will not be asked to review the supplementary material. | Unanswerable |
f1bd66bb354e3dabf5dc4a71e6f08b17d472ecc9 | f1bd66bb354e3dabf5dc4a71e6f08b17d472ecc9_0 | Q: How do slot binary classifiers improve performance?
Text: Introduction
A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined semantic frame. State tracking is a critical component that models explicitly the input semantic frame and the dialogue history for producing KB queries. The semantic frame and the corresponding belief state are defined in terms of informable slots values and requestable slots. Informable slot values capture information provided by the user so far, e.g., {price=cheap, food=italian} indicating the user wants a cheap Italian restaurant at this stage. Requestable slots capture the information requested by the user, e.g., {address, phone} means the user wants to know the address and phone number of a restaurant. Dialogue policy model decides on the system action which is then realized by a language generation component.
To mitigate the problems with such a classic modularized dialogue system, such as the error propagation between modules, the cascade effect that the updates of the modules have and the expensiveness of annotation, end-to-end training of dialogue systems was recently proposed BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . These systems train one whole model to read the current user's utterance, the past state (that may contain all previous interactions) and generate the current state and response.
There are two main approaches for modeling the belief state in end-to-end task-oriented dialogue systems in the literature: the fully structured approach based on classification BIBREF7 , BIBREF9 , and the free-form approach based on text generation BIBREF10 . The fully structured approaches BIBREF11 , BIBREF12 use the full structure of the KB, both its schema and the values available in it, and assumes that the sets of informable slot values and requestable slots are fixed. In real-world scenarios, this assumption is too restrictive as the content of the KB may change and users' utterances may contain information outside the pre-defined sets. An ideal end-to-end architecture for state tracking should be able to identify the values of the informable slots and the requestable slots, easily adapt to new domains, to the changes in the content of the KB, and to the occurrence of words in users' utterances that are not present in the KB at training time, while at the same time providing the right amount of inductive bias to allow generalization. Recently, a free-form approach called TSCP (Two Stage Copy Net) BIBREF10 was proposed. This approach does not integrate any information about the KB in the model architecture. It has the advantage of being readily adaptable to new domains and changes in the content of the KB as well as solving the out-of-vocabulary word problem by generating or copying the relevant piece of text from the user's utterances in its response generation. However, TSCP can produce invalid states (see Section "Experiments" ). Furthermore, by putting all slots together into a sequence, it introduces an unwanted (artificial) order between different slots since they are encoded and decoded sequentially. It could be even worse if two slots have overlapping values, like departure and arrival airport in a travel booking system. As such, the unnecessary order of the slots makes getting rid of the invalid states a great challenge for the sequential decoder. As a summary, both approaches to state tracking have their weaknesses when applied to real-world applications.
This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section "Methodology" for details).
The main contributions of this work are
Related Work
Our work is related to end-to-end task-oriented dialogue systems in general BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF14 , BIBREF7 , BIBREF8 and those that extend the Seq2Seq BIBREF15 architecture in particular BIBREF13 , BIBREF16 , BIBREF17 . Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, BIBREF13 , BIBREF18 , BIBREF17 adopt a copy mechanism that allows copying information retrieved from the KB to the generated response. BIBREF16 adopt Memory Networks BIBREF19 to memorize the retrieved KB entities and words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.
Our work is also akin to modularly connected end-to-end trainable networks BIBREF7 , BIBREF9 , BIBREF0 , BIBREF4 , BIBREF3 , BIBREF20 . BIBREF7 includes belief state tracking and has two phases in training: the first phase uses belief state supervision, and then the second phase uses response generation supervision. BIBREF9 improves BIBREF7 by adding a policy network using latent representations so that the dialogue system can be continuously improved through reinforcement learning. These methods utilize classification as a way to decode the belief state.
BIBREF10 decode the belief state as well as the response in a free-form fashion, but it tracks the informable slot values without an explicit assignment to an informable slot. Moreover, the arbitrary order in which informable slot values and requestable slots are encoded and decoded suggests that the sequential inductive bias the architecture provides may not be the right one.
Other works BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 focus on the scalability of DST to large or changing vocabularies. BIBREF26 score a dynamically defined set of candidates as informable slot values. BIBREF27 addresses the problem of large vocabularies with a mix of rules and machine-learned classifiers.
Methodology
We propose a fully-fledged task-oriented dialogue system called Flexibly-Structured Dialogue Model (FSDM), which operates at the turn level. Its overall architecture is shown in Figure 1 , which illustrates one dialogue turn. Without loss of generality, let us assume that we are on the $t$ -th turn of a dialogue. FSDM has three (3) inputs: agent response and belief state of the $t-1$ -th turn, and user utterance of the $t$ -th turn. It has two (2) outputs: the belief state for the $t$ -th turn that is used to query the KB, and the agent response of the $t$ -th turn based on the query result. As we can see, belief tracking is the key component that turns unstructured user utterance and the dialogue history into a KB-friendly belief state. The success of retrieving the correct KB result and further generating the correct response to complete a task relies on the quality of the produced belief state.
FSDM contains five (5) components that work together in an end-to-end manner as follows: (1) The input is encoded and the last hidden state of the encoder serves as the initial hidden state of the belief state tracker and the response decoder; (2) Then, the belief state tracker generates a belief state $B_t = \lbrace I_t, R_t\rbrace $ , where $I_{t}$ is the set of constraints used for the KB query generated by the informable slots value decoder and $R_{t}$ is the user requested slots identified by the requestable slots multi-label classifier; (3) Given $I_t$ , the KB query component queries the KB and encodes the number of records returned in a one-hot vector $d_t$ ; (4) The response slot binary classifier predicts which slots should appear in the agent response $S_t$ ; (5) Finally, the agent response decoder takes in the KB output $d_t$ , a word copy probability vector $\mathcal {P}^{c}$ computed from $I_t$ , $R_t$ , and $S_t$ , together with an attention on hidden states of the input encoder and the belief decoders, and generates a response $A_t$ .
Input Encoder
The input contains three parts: (1) the agent response $A_{t-1}$ , (2) the belief state $B_{t-1}$ from the $(t-1)$ -th turn and (3) the current user utterance $U_t$ . These parts are all text-based and concatenated, and then consumed by the input encoder. Specifically, the belief state $B_{t-1}$ is represented as a sequence of informable slot names with their respective values and requestable slot names. As an example, the sequence $\langle $ cheap, end_price, italian, end_food, address, phone, end_belief $\rangle $ indicates a state where the user informed cheap and Italian as KB query constraints and requested the address and phone number.
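To make this serialization concrete, the short sketch below turns a belief state into such a token sequence; the helper function and the "end_*" delimiter handling are illustrative assumptions that simply reproduce the example sequence above.

```python
# Sketch of serializing the previous belief state B_{t-1} into the flat token
# sequence consumed by the input encoder (illustrative, not the exact code).
def serialize_belief_state(informable, requestable):
    # informable: dict slot -> value; requestable: list of requested slot names
    tokens = []
    for slot, value in informable.items():
        tokens.extend(value.split() + ["end_" + slot])
    tokens.extend(requestable)
    tokens.append("end_belief")
    return tokens

print(serialize_belief_state({"price": "cheap", "food": "italian"}, ["address", "phone"]))
# ['cheap', 'end_price', 'italian', 'end_food', 'address', 'phone', 'end_belief']
```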
The input encoder consists of an embedding layer followed by a recurrent layer with Gated Recurrent Units (GRU) BIBREF28 . It maps the input $A_{t-1} \circ B_{t-1} \circ U_{t}$ (where $\circ $ denotes concatenation) to a sequence of hidden vectors $\lbrace h^{E}_i| i = 1, \dots , |A_{t-1} \circ B_{t-1} \circ U_{t}| \rbrace $ so that $h^{E}_i = \text{GRU}_H(e^{A_{t-1} \circ B_{t-1} \circ U_{t}})$ where $e$ is the embedding function that maps from words to vectors. The output of the input encoder is its last hidden state $h^{E}_{l}$ , which serves as the initial state for the belief state and response decoders, as discussed next.
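A minimal PyTorch sketch of this encoder is given below; it is not the authors' implementation, and the vocabulary size is an illustrative assumption (the embedding and hidden sizes loosely follow the 300-dimensional embeddings and 128-dimensional hidden states reported later for CamRest).

```python
import torch
import torch.nn as nn

class InputEncoder(nn.Module):
    # Embedding layer followed by a GRU over the concatenated A_{t-1}, B_{t-1}, U_t.
    def __init__(self, vocab_size=800, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of the concatenated input
        embedded = self.embedding(token_ids)        # (batch, seq_len, emb_dim)
        outputs, last_hidden = self.gru(embedded)   # outputs are the h^E_i
        return outputs, last_hidden                 # last_hidden corresponds to h^E_l

# Toy usage: two sequences of length 10 drawn from a vocabulary of 800 tokens.
encoder = InputEncoder()
h_all, h_last = encoder(torch.randint(0, 800, (2, 10)))
```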
Informable Slot Value Decoder
The belief state is composed of informable slot values $I_{t}$ and the requestable slots $R_{t}$ . We describe the generation of the former in this subsection and the latter in the next subsection.
The informable slot values track information provided by the user and are used to query the KB. We allow each informable slot to have its own decoder to resolve the unwanted artificial dependencies among slot values introduced by TSCP BIBREF10 . As an example of artificial dependency, `italian; expensive' appears a lot in the training data. During testing, even when the gold informable value is `italian; moderate', the decoder may still generate `italian; expensive'. Modeling one decoder for each slot exactly associates the values with the corresponding informable slot.
The informable slot value decoder consists of GRU recurrent layers with a copy mechanism as shown in the yellow section of Figure 1 . It is composed of weight-tied GRU generators that take the same initial hidden state $h^{E}_{l}$ , but have different start-of-sentence symbols for each unique informable slot. This way, each informable slot value decoder is dependent on the encoder's output, but it is also independent of the values generated for the other slots. Let $\lbrace k^{I}\rbrace $ denote the set of informable slots. The probability of the $j$ th word $P(y^{k^I}_j)$ being generated for the slot $k^I$ is calculated as follows: (1) calculate the attention with respect to the input encoded vectors to obtain the context vector $c^{k^I}_j$ , (2) calculate the generation score $\phi _g(y^{k^I}_j)$ and the copy score $\phi _c(y^{k^I}_j)$ based on the current step's hidden state $h^{k^I}_j$ , (3) calculate the probability using the copy mechanism:
$$\small \begin{split} &c^{k^I}_j = \text{Attn}(h^{k^I}_{j-1}, \lbrace h_{i}^E\rbrace ),\\ &h^{k^I}_j = \text{GRU}_I\Big ((c^{k^I}_j \circ e^{y^{k^I}_{j}}), h^{k^I}_{j-1}\Big ),\\ &\phi _g(y^{k^I}_j) = W_{g}^{K^I}\cdot h^{k^I}_j,\\ &\phi _c(y^{k^I}_j) = \text{tanh}(W_c^{K^I} \cdot h^{y_j^{k^I}}) \cdot h_j^{k^I} ,\\ & y_j^{k^I} \in A_{t-1} \cup B_{t-1} \cup U_t,\\ &P(y^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}) = \text{Copy} \Big ( \phi _c(y^{k^I}_j), \phi _g(y^{k^I}_j)\Big ), \end{split}$$ (Eq. 9)
where for each informable slot $k^I$ , $y_0^{k^I} = k^I$ and $h_0^{k^I} = h^{E}_{l}$ , $e^{y^{k^I}_{j}}$ is the embedding of the current input word (the one generated at the previous step), and $W_{g}^{K^I}$ and $W_{c}^{K^I}$ are learned weight matrices. We follow BIBREF29 and BIBREF30 for the copy $\text{Copy}(\cdot , \cdot )$ and attention $\text{Attn}(\cdot , \cdot )$ mechanisms implementation respectively.
The loss for the informable slot values decoder is calculated as follows:
$$\small \begin{split} \mathcal {L}^I =& - \frac{1}{|\lbrace k^I\rbrace |} \frac{1}{|Y^{k^I}|} \sum _{k^I} \sum _j \\ &\log P(y^{k^I}_j = z^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}), \end{split}$$ (Eq. 10)
where $Y^{K^I}$ is the sequence of informable slot value decoder predictions and $z$ is the ground truth label.
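The sketch below illustrates the per-slot decoding scheme: a single weight-tied GRU cell is shared across informable slots and each slot uses its own slot-name token as the start symbol. The attention context and the copy mechanism of Eq. 9 are deliberately omitted and decoding is greedy, so this is a simplified illustration rather than the full model.

```python
import torch
import torch.nn as nn

class InformableValueDecoder(nn.Module):
    # Weight-tied GRU cell shared by all informable slots (copy/attention omitted).
    def __init__(self, vocab_size=800, emb_dim=300, hidden_dim=128, max_len=5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru_cell = nn.GRUCell(emb_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)   # generation scores phi_g
        self.max_len = max_len

    def forward(self, slot_token_ids, h_enc_last):
        # slot_token_ids: (num_slots,) index of each informable slot name (e.g. "food")
        # h_enc_last: (1, hidden_dim) last encoder state h^E_l, shared by all slots
        decoded = {}
        for slot_id in slot_token_ids.tolist():
            h = h_enc_last.clone()
            y = torch.tensor([slot_id])                # slot name as start-of-sequence symbol
            words = []
            for _ in range(self.max_len):
                h = self.gru_cell(self.embedding(y), h)
                y = self.out(h).argmax(dim=-1)         # greedy choice (simplification)
                words.append(y.item())
            decoded[slot_id] = words
        return decoded
```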
Requestable Slot Binary Classifier
As the other part of the belief state, requestable slots are the attributes of KB entries that are explicitly requested by the user. We introduce a separate multi-label requestable slot classifier that performs binary classification for each slot. This resolves a key issue of TSCP, which uses a single decoder whose every step chooses from the full vocabulary and may therefore generate non-slot words. Similar to the informable slot decoders, such a separate classifier also eliminates the undesired dependencies among slots.
Let $\lbrace k^R\rbrace $ denote the set of requestable slots. A single GRU cell is used to perform the classification. The initial state $h^{E}_{l}$ is used to pay attention to the input encoder hidden vectors to compute a context vector $c^{k^R}$ . The concatenation of $c^{k^R}$ and $e^{k^R}$ , the embedding vector of one requestable slot $k^R$ , is passed as input and $h^{E}_{l}$ as the initial state to the GRU. Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{R}$ and the output of the GRU $h^{k^R}$ to obtain $y^{k^R}$ , which is the probability of the slot being requested by the user.
$$\small \begin{split} &c^{k^R} = \text{Attn}(h^{E}_{l}, \lbrace h_{i}^E\rbrace ),\\ &h^{k^R} = \text{GRU}_R\Big ( (c^{k^R}\circ e^{k^R}), h^{E}_{l} \Big ),\\ &y^{k^R} = \sigma (W_{y}^{R} \cdot h^{k^R}). \end{split}$$ (Eq. 12)
The loss function for all requestable slot binary classifiers is:
$$\small \begin{split} \mathcal {L}^R =& - \frac{1}{|\lbrace k^R\rbrace |} \sum _{k^R} \\ &z^{k^R} \log (y^{k^R}) + (1-z^{k^R}) \log (1-y^{k^R}). \end{split}$$ (Eq. 13)
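A simplified sketch of one such classifier follows; the attention context $c^{k^R}$ is omitted and only the slot-name embedding is fed to the GRU cell, so treat this as an illustration of the structure rather than the exact model. Layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RequestableSlotClassifier(nn.Module):
    # One GRU cell plus a sigmoid scorer, applied independently to each requestable slot.
    def __init__(self, vocab_size=800, emb_dim=300, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru_cell = nn.GRUCell(emb_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1)            # plays the role of W_y^R

    def forward(self, slot_token_ids, h_enc_last):
        # slot_token_ids: (num_slots,); h_enc_last: (1, hidden_dim) encoder state h^E_l
        probs = {}
        for slot_id in slot_token_ids.tolist():
            e = self.embedding(torch.tensor([slot_id]))  # e^{k^R}
            h = self.gru_cell(e, h_enc_last)             # h^{k^R}
            probs[slot_id] = torch.sigmoid(self.score(h)).item()  # y^{k^R}
        return probs

# The loss of Eq. 13 is the average binary cross-entropy over slots,
# e.g. nn.BCELoss() applied to the stacked probabilities and 0/1 targets.
```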
Knowledge Base Query
The generated informable slot values $I_t = \lbrace Y^{k^I}\rbrace $ are used as constraints of the KB query. The KB is composed of one or more relational tables and each entity is a record in one table. The query is performed to select a subset of the entities that satisfy those constraints. For instance, if the informable slots are {price=cheap, area=north}, all the restaurants that have attributes of those fields equal to those values will be returned. The output of this component, the one-hot vector $d_t$ , indicates the number of records satisfying the constraints. $d_t$ is a five-dimensional one-hot vector, where the first four dimensions represent integers from 0 to 3 and the last dimension represents 4 or more matched records. It is later used to inform the response slot binary classifier and the agent response decoder.
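The querying and counting step can be sketched in plain Python as below; representing KB records as dictionaries is an assumption of this sketch, while the five-way one-hot encoding follows the description above.

```python
def query_kb(kb_records, constraints):
    # kb_records: list of dicts (one per entity); constraints: informable slot -> value (I_t)
    return [r for r in kb_records
            if all(r.get(slot) == value for slot, value in constraints.items())]

def kb_result_one_hot(num_matched_records):
    # d_t: dimensions 0-3 encode exact counts, the last dimension means "4 or more"
    d_t = [0, 0, 0, 0, 0]
    d_t[min(num_matched_records, 4)] = 1
    return d_t

matches = query_kb([{"food": "italian", "price": "cheap", "name": "da vinci"}],
                   {"food": "italian", "price": "cheap"})
assert kb_result_one_hot(len(matches)) == [0, 1, 0, 0, 0]
```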
Response Slot Binary Classifier
In order to incorporate all the relevant information about the retrieved entities into the response, FSDM introduces a new response slot binary classifier. Its inputs are requestable slots and KB queried result $d_t$ and the outputs are the response slots to appear in the agent response. Response slots are the slot names that are expected to appear in a de-lexicalized response (discussed in the next subsection). For instance, assume the requestable slot in the belief state is “address” and the KB query returned one candidate record. The response slot binary classifier may predict name_slot, address_slot and area_slot, which are expected to appear in an agent response as “name_slot is located in address_slot in the area_slot part of town”.
The response slots $\lbrace k^S\rbrace $ map one-to-one to the requestable slots $\lbrace k^R\rbrace $ . The initial state of each response slot decoder is the last hidden state of the corresponding requestable slot decoder. In this case, the context vector $c^{k^S}$ is obtained by paying attention to all hidden vectors in the informable slot value decoders and requestable slots classifiers. Then, the concatenation of the context vector $c^{k^S}$ , the embedding vector of the response slot $e^{k^S}$ and the KB query vector $d_t$ are used as input to a single GRU cell. Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{S}$ and the output of the GRU $h^{k^S}$ to obtain a probability $y^{k^S}$ for each slot that is going to appear in the answer.
$$\small \begin{split} &c^{k^S} = \text{Attn}(h^{k^R}, \\ &\lbrace h_{i}^{k^I}|k^I \in K^I, i \le |Y^{k^I}|\rbrace \cup \lbrace h^{k^R}| k^R \in K^R\rbrace ), \\ &h^{k^S} = \text{GRU}_S\Big ((c^{k^S} \circ e^{k^S} \circ d_t), h^{k^R}\Big ),\\ &y^{k^S} = \sigma (W_{y}^{S} \cdot h^{k^S}). \end{split}$$ (Eq. 17)
The loss function for all response slot binary classifiers is:
$$\small \begin{split} \mathcal {L}^S =& - \frac{1}{|\lbrace k^S\rbrace |} \sum _{k^S} \\ &z^{k^S} \log (y^{k^S}) + (1-z^{k^S}) \log (1-y^{k^S}). \end{split}$$ (Eq. 18)
Word Copy Probability and Agent Response Decoder
Lastly, we introduce the agent response decoder. It takes in the generated informable slot values, requestable slots, response slots, and KB query result and generates a (de-lexicalized) response. We adopt a copy-augmented decoder BIBREF29 as architecture. The canonical copy mechanism only takes a sequence of word indexes as inputs but does not accept the multiple Bernoulli distributions we obtain from sigmoid functions. For this reason, we introduce a vector of independent word copy probabilities $\mathcal {P}^{C}$ , which is constructed as follows:
$$\small \mathcal {P^C}(w) = {\left\lbrace \begin{array}{ll} y^{k^R}, & \text{if } w = k^R,\\ y^{k^S}, & \text{if } w = k^S,\\ 1, & \text{if } w \in I_t,\\ 0, & \text{otherwise}, \end{array}\right.}$$ (Eq. 20)
where if a word $w$ is a requestable slot or a response slot, the probability is equal to their binary classifier output; if a word appears in the generated informable slot values, its probability is equal to 1; for the other words in the vocabulary the probability is equal to 0. This vector is used in conjunction with the agent response decoder prediction probability to generate the response.
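Eq. 20 can be sketched directly as the small function below; slot names are assumed not to collide with slot values, which keeps the branches mutually exclusive.

```python
# Sketch of the word-copy probability vector P^C of Eq. 20: requestable and response
# slots take their classifier probabilities, generated informable slot values get 1,
# and every other vocabulary word gets 0.
def word_copy_probability(vocab, requestable_probs, response_probs, informable_values):
    # vocab: list of words; requestable_probs / response_probs: dict word -> probability;
    # informable_values: set of words appearing in the generated slot values I_t.
    p_c = {}
    for w in vocab:
        if w in requestable_probs:
            p_c[w] = requestable_probs[w]
        elif w in response_probs:
            p_c[w] = response_probs[w]
        elif w in informable_values:
            p_c[w] = 1.0
        else:
            p_c[w] = 0.0
    return p_c
```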
The agent response decoder is responsible for generating a de-lexicalized agent response. The response slots are substituted with the values of the results obtained by querying the KB before the response is returned to the user.
Like the informable slot value decoder, the agent response decoder also uses a copy mechanism, so it has a copy probability and generation probability. Consider the generation of the $j$ th word. Its generation score $\phi _g$ is calculated as:
$$\small \begin{split} &c^{A^E}_j = \text{Attn}(h_{j-1}^A, \lbrace h_i^E\rbrace ), \\ &c^{A^B}_j = \text{Attn}(h_{j-1}^A, \lbrace h_{i}^{k^I}|k^I \in K^I, i \le |Y^{k^I}|\rbrace \\ &\cup \lbrace h^{k^R}| k^R \in K^R\rbrace \cup \lbrace h^{k^S}| k^S \in K^S\rbrace ),\\ &h^{A}_j = \text{GRU}_A\Big ( (c^{A^E}_j \circ c^{A^B}_j \circ e^{A}_j \circ d_t), h_{j-1}^A \Big ),\\ &\phi _g(y^A_j) = W_{g}^{A} \cdot h^{A}_j, \end{split}$$ (Eq. 21)
where $c^{A^E}_j$ is a context vector obtained by attending to the hidden vectors of the input encoder, $c^{A^B}_j$ is a context vector obtained by attending to all hidden vectors of the informable slot value decoder, requestable slot classifier and response slot classifier, and $W_{g}^{A}$ is a learned weight matrix. The concatenation of the two context vectors $c^{A^E}_j$ and $c^{A^B}_j$ , the embedding vector $e^{A}_j$ of the previously generated word and the KB query output vector $d_t$ is used as input to a GRU. Note that the initial hidden state is $h_0^A = h^{E}_{l}$ . The copy score $\phi _c$ is calculated as:
$$\small \phi _c(y_j^A) = {\left\lbrace \begin{array}{ll} \mathcal {P}^C(y_j^A) \cdot \text{tanh}(W_c^A \cdot h^{y_j^A}) \cdot h_j^A, &\\ \text{if } y_j^A \in I_t \cup K^R \cup K^S,&\\ \mathcal {P}^C(y_j^A), \text{otherwise},& \end{array}\right.}$$ (Eq. 22)
where $W_c^A$ is a learned weight matrix. The final probability is:
$$\small P(y^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}) = \text{Copy}(\phi _g(y^A_j), \phi _c(y_j^A)).$$ (Eq. 23)
Let $z$ denote the ground truth de-lexicalized agent response. The loss for the agent response decoder is calculated as follows where $Y^A$ is the sequence of agent response decoder prediction:
$$\small \mathcal {L}^A = - \frac{1}{|Y^{A}|} \sum _j \log P(y^{A}_j = z^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}).$$ (Eq. 24)
Loss Function
The loss function of the whole network is the sum of the four losses described so far for the informable slot values $\mathcal {L}^I$ , requestable slot $\mathcal {L}^R$ , response slot $\mathcal {L}^S$ and agent response decoders $\mathcal {L}^A$ , weighted by $\alpha $ hyperparameters:
$$\small \mathcal {L} = \alpha ^{I}\mathcal {L}^I + \alpha ^{R}\mathcal {L}^R + \alpha ^{S}\mathcal {L}^S + \alpha ^{A}\mathcal {L}^A.$$ (Eq. 26)
The loss is optimized in an end-to-end fashion, with all modules trained simultaneously with loss gradients back-propagated to their weights. In order to do so, ground truth results from database queries are also provided to the model to compute the $d_t$ , while at prediction time results obtained by using the generated informable slot values $I_t$ are used.
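The combined objective of Eq. 26 reduces to a weighted sum, sketched below; the default weights shown are the CamRest values reported in the hyper-parameter section and are only one example configuration.

```python
# Minimal sketch of the total training objective of Eq. 26.
def total_loss(loss_I, loss_R, loss_S, loss_A,
               alpha_I=1.5, alpha_R=9.0, alpha_S=8.0, alpha_A=0.5):
    return alpha_I * loss_I + alpha_R * loss_R + alpha_S * loss_S + alpha_A * loss_A
```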
Experiments
We tested the FSDM on the Cambridge Restaurant dataset (CamRest) BIBREF7 and the Stanford in-car assistant dataset (KVRET) BIBREF13 described in Table 1 .
Preprocessing and Hyper-parameters
We use NLTK BIBREF31 to tokenize each sentence. The user utterances are precisely the original texts, while all agent responses are de-lexicalized as described in BIBREF10 . We obtain the labels for the response slot decoder from the de-lexicalized response texts. We use 300-dimensional GloVe embeddings BIBREF32 trained on 840B words. Tokens not present in GloVe are initialized to be the average of all other embeddings plus a small amount of random noise to make them different from each other. We optimize both training and model hyperparameters by running Bayesian optimization over the product of validation set BLEU, EMR, and SuccF $_1$ using skopt. The model that performed the best on the validation set uses Adam optimizer BIBREF33 with a learning rate of 0.00025 for minimizing the loss in Equation 26 for both datasets. We apply dropout with a rate of 0.5 after the embedding layer, the GRU layer and any linear layer for CamRest and 0.2 for KVRET. The dimension of all hidden states is 128 for CamRest and 256 for KVRET. Loss weights $\alpha ^I$ , $\alpha ^R$ , $\alpha ^S$ , $\alpha ^A$ are 1.5, 9, 8, 0.5 respectively for CamRest and 1, 3, 2, 0.5 for KVRET.
Evaluation Metrics
We evaluate the performance concerning belief state tracking, response language quality, and task completion. For belief state tracking, we report precision, recall, and F $_1$ score of informable slot values and requestable slots. BLEU BIBREF34 is applied to the generated agent responses for evaluating language quality. Although it is a poor choice for evaluating dialogue systems BIBREF35 , we still report it in order to compare with previous work that has adopted it. For task completion evaluation, the Entity Match Rate (EMR) BIBREF7 and Success F $_1$ score (SuccF $_1$ ) BIBREF10 are reported. EMR evaluates whether a system can correctly retrieve the user's indicated entity (record) from the KB based on the generated constraints so it can have only a score of 0 or 1 for each dialogue. The SuccF $_1$ score evaluates how a system responds to the user's requests at dialogue level: it is F $_1$ score of the response slots in the agent responses.
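As a concrete reference, the slot-level precision, recall, and F $_1$ used for belief-state evaluation can be sketched as below; the exact aggregation over turns and dialogues is an assumption of this sketch.

```python
# Sketch of precision / recall / F1 over predicted vs. gold slot sets.
def slot_prf(predicted_slots, gold_slots):
    predicted, gold = set(predicted_slots), set(gold_slots)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

print(slot_prf({"address", "phone"}, {"address", "area"}))  # (0.5, 0.5, 0.5)
```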
Benchmarks
We compare FSDM with four baseline methods and two ablations.
NDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.
LIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. This allows the system to be refined by reinforcement learning.
KVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.
TSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation. TSCP includes further parameter tuning with reinforcement learning to increase the appearance of response slots in the generated response. We were unable to replicate the reported results using the provided code, hyperparameters, and random seed, so we report both the results from the paper and the average of 5 runs on the code with different random seeds (marked with $^\dagger $ ).
FSDM is the proposed method and we report two ablations: in FSDM/St the whole state tracking is removed (informable, requestable and response slots) and the answer is generated from the encoding of the input, while in FSDM/Res, only the response slot decoder is removed.
Result Analysis
At the turn level, FSDM and FSDM/Res perform better than TSCP and TSCP/RL on belief state tracking, especially on requestable slots, as shown in Table 2 . FSDM and FSDM/Res use independent binary classifiers for the requestable slots and are capable of predicting the correct slots in all those cases. FSDM/Res and TSCP/RL do not have any additional mechanism for generating response slots, so FSDM/Res performing better than TSCP/RL shows the effectiveness of the flexibly-structured belief state tracker. Moreover, FSDM performs better than FSDM/Res, but TSCP performs worse than TSCP/RL. This suggests that using RL to increase the appearance of response slots in the response decoder does not help belief state tracking, but our response slot decoder does.
FSDM performs better than all benchmarks on the dialogue-level measures too, as shown in Table 3 , with the exception of the BLEU score on KVRET, where it is still competitive. Comparing TSCP/RL and FSDM/Res, the flexibly-structured belief state tracker achieves better task completion than the free-form belief state tracker. Furthermore, FSDM performing better than FSDM/Res shows the effectiveness of the response slot decoder for task completion. The most significant performance improvement is obtained on CamRest by FSDM, confirming that the additional inductive bias helps to generalize from smaller datasets. More importantly, the experiment confirms that FSDM, despite making weaker assumptions that are reasonable for real-world applications, performs at least as well as models whose stronger, limiting assumptions make them unusable in such applications.
Error Analysis
We investigated the errors that both TSCP and FSDM make and discovered that the sequential nature of the TSCP state tracker leads to the memorization of common patterns that FSDM is not subject to. As an example (Table 4 ), TSCP often generates “date; party” as requestable slots even if only “party” and “time” are requested like in “what time is my next activity and who will be attending?” or if “party”, “time” and “date” are requested like in “what is the date and time of my next meeting and who will be attending it?”. FSDM produces correct belief states in these examples.
FSDM misses some requestable slots in some conditions. For example, consider the user's utterance: “I would like their address and what part of town they are located in”. The ground-truth requestable slots are `address' and `area'. FSDM only predicts `address' and misses `area', which suggests that the model did not recognize `what part of town' as being a phrasing for requesting `area'. Another example is when the agent proposes “the name_SLOT is moderately priced and in the area_SLOT part of town . would you like their location ?” and the user replies “i would like the location and the phone number, please”. FSDM predicts `phone' as a requestable slot, but misses `address', suggesting it doesn't recognize the connection between `location' and `address'. The missing requestable slot issue may propagate to the agent response decoder. These issues may arise due to the use of fixed pre-trained embeddings and the single encoder. Using separate encoders for user utterance, agent response and dialogue history or fine-tuning the embeddings may solve the issue.
Conclusion
We propose the flexibly-structured dialogue model, a novel end-to-end architecture for task-oriented dialogue. It uses the structure in the schema of the KB to make architectural choices that introduce inductive bias and address the limitations of fully structured and free-form methods. The experiment suggests that this architecture is competitive with state-of-the-art models, while at the same time providing a more practical solution for real-world applications.
Acknowledgments
We would like to thank Alexandros Papangelis, Janice Lam, Stefan Douglas Webb and SIGDIAL reviewers for their valuable comments. | by adding extra supervision to generate the slots that will be present in the response |
25fd61bb20f71051fe2bd866d221f87367e81027 | 25fd61bb20f71051fe2bd866d221f87367e81027_0 | Q: What baselines have been used in this work?
Text: Introduction
A traditional task-oriented dialogue system is often composed of a few modules, such as natural language understanding, dialogue state tracking, knowledge base (KB) query, dialogue policy engine and response generation. Language understanding aims to convert the input to some predefined semantic frame. State tracking is a critical component that models explicitly the input semantic frame and the dialogue history for producing KB queries. The semantic frame and the corresponding belief state are defined in terms of informable slots values and requestable slots. Informable slot values capture information provided by the user so far, e.g., {price=cheap, food=italian} indicating the user wants a cheap Italian restaurant at this stage. Requestable slots capture the information requested by the user, e.g., {address, phone} means the user wants to know the address and phone number of a restaurant. Dialogue policy model decides on the system action which is then realized by a language generation component.
To mitigate the problems with such a classic modularized dialogue system, such as the error propagation between modules, the cascade effect that the updates of the modules have and the expensiveness of annotation, end-to-end training of dialogue systems was recently proposed BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 . These systems train one whole model to read the current user's utterance, the past state (that may contain all previous interactions) and generate the current state and response.
There are two main approaches for modeling the belief state in end-to-end task-oriented dialogue systems in the literature: the fully structured approach based on classification BIBREF7 , BIBREF9 , and the free-form approach based on text generation BIBREF10 . The fully structured approaches BIBREF11 , BIBREF12 use the full structure of the KB, both its schema and the values available in it, and assumes that the sets of informable slot values and requestable slots are fixed. In real-world scenarios, this assumption is too restrictive as the content of the KB may change and users' utterances may contain information outside the pre-defined sets. An ideal end-to-end architecture for state tracking should be able to identify the values of the informable slots and the requestable slots, easily adapt to new domains, to the changes in the content of the KB, and to the occurrence of words in users' utterances that are not present in the KB at training time, while at the same time providing the right amount of inductive bias to allow generalization. Recently, a free-form approach called TSCP (Two Stage Copy Net) BIBREF10 was proposed. This approach does not integrate any information about the KB in the model architecture. It has the advantage of being readily adaptable to new domains and changes in the content of the KB as well as solving the out-of-vocabulary word problem by generating or copying the relevant piece of text from the user's utterances in its response generation. However, TSCP can produce invalid states (see Section "Experiments" ). Furthermore, by putting all slots together into a sequence, it introduces an unwanted (artificial) order between different slots since they are encoded and decoded sequentially. It could be even worse if two slots have overlapping values, like departure and arrival airport in a travel booking system. As such, the unnecessary order of the slots makes getting rid of the invalid states a great challenge for the sequential decoder. As a summary, both approaches to state tracking have their weaknesses when applied to real-world applications.
This paper proposes the Flexibly-Structured Dialogue Model (FSDM) as a new end-to-end task-oriented dialogue system. The state tracking component of FSDM has the advantages of both fully structured and free-form approaches while addressing their shortcomings. On one hand, it is still structured, as it incorporates information about slots in KB schema; on the other hand, it is flexible, as it does not use information about the values contained in the KB records. This makes it easily adaptable to new values. These desirable properties are achieved by a separate decoder for each informable slot and a multi-label classifier for the requestable slots. Those components explicitly assign values to slots like the fully structured approach, while also preserving the capability of dealing with out-of-vocabulary words like the free-form approach. By using these two types of decoders, FSDM produces only valid belief states, overcoming the limitations of the free-form approach. Further, FSDM has a new module called response slot binary classifier that adds extra supervision to generate the slots that will be present in the response more precisely before generating the final textual agent response (see Section "Methodology" for details).
The main contributions of this work are
Related Work
Our work is related to end-to-end task-oriented dialogue systems in general BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF14 , BIBREF7 , BIBREF8 and those that extend the Seq2Seq BIBREF15 architecture in particular BIBREF13 , BIBREF16 , BIBREF17 . Belief tracking, which is necessary to form KB queries, is not explicitly performed in the latter works. To compensate, BIBREF13 , BIBREF18 , BIBREF17 adopt a copy mechanism that allows copying information retrieved from the KB to the generated response. BIBREF16 adopt Memory Networks BIBREF19 to memorize the retrieved KB entities and words appearing in the dialogue history. These models scale linearly with the size of the KB and need to be retrained at each update of the KB. Both issues make these approaches less practical in real-world applications.
Our work is also akin to modularly connected end-to-end trainable networks BIBREF7 , BIBREF9 , BIBREF0 , BIBREF4 , BIBREF3 , BIBREF20 . BIBREF7 includes belief state tracking and has two phases in training: the first phase uses belief state supervision, and then the second phase uses response generation supervision. BIBREF9 improves BIBREF7 by adding a policy network using latent representations so that the dialogue system can be continuously improved through reinforcement learning. These methods utilize classification as a way to decode the belief state.
BIBREF10 decode the belief state as well as the response in a free-form fashion, but it tracks the informable slot values without an explicit assignment to an informable slot. Moreover, the arbitrary order in which informable slot values and requestable slots are encoded and decoded suggests that the sequential inductive bias the architecture provides may not be the right one.
Other works BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 , BIBREF25 focus on the scalability of DST to large or changing vocabularies. BIBREF26 score a dynamically defined set of candidates as informable slot values. BIBREF27 addresses the problem of large vocabularies with a mix of rules and machine-learned classifiers.
Methodology
We propose a fully-fledged task-oriented dialogue system called Flexibly-Structured Dialogue Model (FSDM), which operates at the turn level. Its overall architecture is shown in Figure 1 , which illustrates one dialogue turn. Without loss of generality, let us assume that we are on the $t$ -th turn of a dialogue. FSDM has three (3) inputs: agent response and belief state of the $t-1$ -th turn, and user utterance of the $t$ -th turn. It has two (2) outputs: the belief state for the $t$ -th turn that is used to query the KB, and the agent response of the $t$ -th turn based on the query result. As we can see, belief tracking is the key component that turns unstructured user utterance and the dialogue history into a KB-friendly belief state. The success of retrieving the correct KB result and further generating the correct response to complete a task relies on the quality of the produced belief state.
FSDM contains five (5) components that work together in an end-to-end manner as follows: (1) The input is encoded and the last hidden state of the encoder serves as the initial hidden state of the belief state tracker and the response decoder; (2) Then, the belief state tracker generates a belief state $B_t = \lbrace I_t, R_t\rbrace $ , where $I_{t}$ is the set of constraints used for the KB query generated by the informable slots value decoder and $R_{t}$ is the user requested slots identified by the requestable slots multi-label classifier; (3) Given $I_t$ , the KB query component queries the KB and encodes the number of records returned in a one-hot vector $d_t$ ; (4) The response slot binary classifier predicts which slots should appear in the agent response $S_t$ ; (5) Finally, the agent response decoder takes in the KB output $d_t$ , a word copy probability vector $\mathcal {P}^{c}$ computed from $I_t$ , $R_t$ , and $S_t$ , together with an attention on hidden states of the input encoder and the belief decoders, and generates a response $A_t$ .
Input Encoder
The input contains three parts: (1) the agent response $A_{t-1}$ , (2) the belief state $B_{t-1}$ from the $(t-1)$ -th turn and (3) the current user utterance $U_t$ . These parts are all text-based and concatenated, and then consumed by the input encoder. Specifically, the belief state $B_{t-1}$ is represented as a sequence of informable slot names with their respective values and requestable slot names. As an example, the sequence $\langle $ cheap, end_price, italian, end_food, address, phone, end_belief $\rangle $ indicates a state where the user informed cheap and Italian as KB query constraints and requested the address and phone number.
The input encoder consists of an embedding layer followed by a recurrent layer with Gated Recurrent Units (GRU) BIBREF28 . It maps the input $A_{t-1} \circ B_{t-1} \circ U_{t}$ (where $\circ $ denotes concatenation) to a sequence of hidden vectors $\lbrace h^{E}_i| i = 1, \dots , |A_{t-1} \circ B_{t-1} \circ U_{t}| \rbrace $ so that $h^{E}_i = \text{GRU}_H(e^{A_{t-1} \circ B_{t-1} \circ U_{t}})$ where $e$ is the embedding function that maps from words to vectors. The output of the input encoder is its last hidden state $h^{E}_{l}$ , which serves as the initial state for the belief state and response decoders, as discussed next.
Informable Slot Value Decoder
The belief state is composed of informable slot values $I_{t}$ and the requestable slots $R_{t}$ . We describe the generation of the former in this subsection and the latter in the next subsection.
The informable slot values track information provided by the user and are used to query the KB. We allow each informable slot to have its own decoder to resolve the unwanted artificial dependencies among slot values introduced by TSCP BIBREF10 . As an example of artificial dependency, `italian; expensive' appears a lot in the training data. During testing, even when the gold informable value is `italian; moderate', the decoder may still generate `italian; expensive'. Modeling one decoder for each slot exactly associates the values with the corresponding informable slot.
The informable slot value decoder consists of GRU recurrent layers with a copy mechanism as shown in the yellow section of Figure 1 . It is composed of weight-tied GRU generators that take the same initial hidden state $h^{E}_{l}$ , but have different start-of-sentence symbols for each unique informable slot. This way, each informable slot value decoder is dependent on the encoder's output, but it is also independent of the values generated for the other slots. Let $\lbrace k^{I}\rbrace $ denote the set of informable slots. The probability of the $j$ th word $P(y^{k^I}_j)$ being generated for the slot $k^I$ is calculated as follows: (1) calculate the attention with respect to the input encoded vectors to obtain the context vector $c^{k^I}_j$ , (2) calculate the generation score $\phi _g(y^{k^I}_j)$ and the copy score $\phi _c(y^{k^I}_j)$ based on the current step's hidden state $h^{k^I}_j$ , (3) calculate the probability using the copy mechanism:
$$\small \begin{split} &c^{k^I}_j = \text{Attn}(h^{k^I}_{j-1}, \lbrace h_{i}^E\rbrace ),\\ &h^{k^I}_j = \text{GRU}_I\Big ((c^{k^I}_j \circ e^{y^{k^I}_{j}}), h^{k^I}_{j-1}\Big ),\\ &\phi _g(y^{k^I}_j) = W_{g}^{K^I}\cdot h^{k^I}_j,\\ &\phi _c(y^{k^I}_j) = \text{tanh}(W_c^{K^I} \cdot h^{y_j^{k^I}}) \cdot h_j^{k^I} ,\\ & y_j^{k^I} \in A_{t-1} \cup B_{t-1} \cup U_t,\\ &P(y^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}) = \text{Copy} \Big ( \phi _c(y^{k^I}_j), \phi _g(y^{k^I}_j)\Big ), \end{split}$$ (Eq. 9)
where for each informable slot $k^I$ , $y_0^{k^I} = k^I$ and $h_0^{k^I} = h^{E}_{l}$ , $e^{y^{k^I}_{j}}$ is the embedding of the current input word (the one generated at the previous step), and $W_{g}^{K^I}$ and $W_{c}^{K^I}$ are learned weight matrices. We follow BIBREF29 and BIBREF30 for the copy $\text{Copy}(\cdot , \cdot )$ and attention $\text{Attn}(\cdot , \cdot )$ mechanisms implementation respectively.
The loss for the informable slot values decoder is calculated as follows:
$$\small \begin{split} \mathcal {L}^I =& - \frac{1}{|\lbrace k^I\rbrace |} \frac{1}{|Y^{k^I}|} \sum _{k^I} \sum _j \\ &\log P(y^{k^I}_j = z^{k^I}_j|y^{k^I}_{j-1}, h^{k^I}_{j-1}), \end{split}$$ (Eq. 10)
where $Y^{K^I}$ is the sequence of informable slot value decoder predictions and $z$ is the ground truth label.
Requestable Slot Binary Classifier
As the other part of the belief state, requestable slots are the attributes of KB entries that are explicitly requested by the user. We introduce a separate multi-label requestable slot classifier that performs binary classification for each slot. This resolves a key issue of TSCP, which uses a single decoder whose every step chooses from the full vocabulary and may therefore generate non-slot words. Similar to the informable slot decoders, such a separate classifier also eliminates the undesired dependencies among slots.
Let $\lbrace k^R\rbrace $ denote the set of requestable slots. A single GRU cell is used to perform the classification. The initial state $h^{E}_{l}$ is used to pay attention to the input encoder hidden vectors to compute a context vector $c^{k^R}$ . The concatenation of $c^{k^R}$ and $e^{k^R}$ , the embedding vector of one requestable slot $k^R$ , is passed as input and $h^{E}_{l}$ as the initial state to the GRU. Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{R}$ and the output of the GRU $h^{k^R}$ to obtain $y^{k^R}$ , which is the probability of the slot being requested by the user.
$$\small \begin{split} &c^{k^R} = \text{Attn}(h^{E}_{l}, \lbrace h_{i}^E\rbrace ),\\ &h^{k^R} = \text{GRU}_R\Big ( (c^{k^R}\circ e^{k^R}), h^{E}_{l} \Big ),\\ &y^{k^R} = \sigma (W_{y}^{R} \cdot h^{k^R}). \end{split}$$ (Eq. 12)
The loss function for all requestable slot binary classifiers is:
$$\small \begin{split} \mathcal {L}^R =& - \frac{1}{|\lbrace k^R\rbrace |} \sum _{k^R} \\ &z^{k^R} \log (y^{k^R}) + (1-z^{k^R}) \log (1-y^{k^R}). \end{split}$$ (Eq. 13)
Knowledge Base Query
The generated informable slot values $I_t = \lbrace Y^{k^I}\rbrace $ are used as constraints of the KB query. The KB is composed of one or more relational tables and each entity is a record in one table. The query is performed to select a subset of the entities that satisfy those constraints. For instance, if the informable slots are {price=cheap, area=north}, all the restaurants that have attributes of those fields equal to those values will be returned. The output of this component, the one-hot vector $d_t$ , indicates the number of records satisfying the constraints. $d_t$ is a five-dimensional one-hot vector, where the first four dimensions represent integers from 0 to 3 and the last dimension represents 4 or more matched records. It is later used to inform the response slot binary classifier and the agent response decoder.
Response Slot Binary Classifier
In order to incorporate all the relevant information about the retrieved entities into the response, FSDM introduces a new response slot binary classifier. Its inputs are requestable slots and KB queried result $d_t$ and the outputs are the response slots to appear in the agent response. Response slots are the slot names that are expected to appear in a de-lexicalized response (discussed in the next subsection). For instance, assume the requestable slot in the belief state is “address” and the KB query returned one candidate record. The response slot binary classifier may predict name_slot, address_slot and area_slot, which are expected to appear in an agent response as “name_slot is located in address_slot in the area_slot part of town”.
The response slots $\lbrace k^S\rbrace $ map one-to-one to the requestable slots $\lbrace k^R\rbrace $ . The initial state of each response slot decoder is the last hidden state of the corresponding requestable slot decoder. In this case, the context vector $c^{k^S}$ is obtained by paying attention to all hidden vectors in the informable slot value decoders and requestable slots classifiers. Then, the concatenation of the context vector $c^{k^S}$ , the embedding vector of the response slot $e^{k^S}$ and the KB query vector $d_t$ are used as input to a single GRU cell. Finally, a sigmoid non-linearity is applied to the product of a weight vector $W_{y}^{S}$ and the output of the GRU $h^{k^S}$ to obtain a probability $y^{k^S}$ for each slot that is going to appear in the answer.
$$\small \begin{split} &c^{k^S} = \text{Attn}(h^{k^R}, \\ &\lbrace h_{i}^{k^I}|k^I \in K^I, i \le |Y^{k^I}|\rbrace \cup \lbrace h^{k^R}| k^R \in K^R\rbrace ), \\ &h^{k^S} = \text{GRU}_S\Big ((c^{k^S} \circ e^{k^S} \circ d_t), h^{k^R}\Big ),\\ &y^{k^S} = \sigma (W_{y}^{S} \cdot h^{k^S}). \end{split}$$ (Eq. 17)
The loss function for all response slot binary classifiers is:
$$\small \begin{split} \mathcal {L}^S =& - \frac{1}{|\lbrace k^S\rbrace |} \sum _{k^S} \\ &z^{k^S} \log (y^{k^S}) + (1-z^{k^S}) \log (1-y^{k^S}). \end{split}$$ (Eq. 18)
Word Copy Probability and Agent Response Decoder
Lastly, we introduce the agent response decoder. It takes in the generated informable slot values, requestable slots, response slots, and KB query result and generates a (de-lexicalized) response. We adopt a copy-augmented decoder BIBREF29 as architecture. The canonical copy mechanism only takes a sequence of word indexes as inputs but does not accept the multiple Bernoulli distributions we obtain from sigmoid functions. For this reason, we introduce a vector of independent word copy probabilities $\mathcal {P}^{C}$ , which is constructed as follows:
$$\small \mathcal {P^C}(w) = {\left\lbrace \begin{array}{ll} y^{k^R}, & \text{if } w = k^R,\\ y^{k^S}, & \text{if } w = k^S,\\ 1, & \text{if } w \in I_t,\\ 0, & \text{otherwise}, \end{array}\right.}$$ (Eq. 20)
where if a word $w$ is a requestable slot or a response slot, the probability is equal to their binary classifier output; if a word appears in the generated informable slot values, its probability is equal to 1; for the other words in the vocabulary the probability is equal to 0. This vector is used in conjunction with the agent response decoder prediction probability to generate the response.
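A sketch of how this probability vector could be assembled in plain Python; the dictionaries stand in for the classifier outputs and all names are ours.

```python
def word_copy_probabilities(vocab, requestable_probs, response_probs, informable_values):
    """Build the per-word copy probability vector P^C of Eq. (20) (illustrative sketch).

    requestable_probs / response_probs: dicts mapping slot tokens to their classifier outputs.
    informable_values: set of words generated by the informable slot value decoders (I_t).
    """
    p_copy = {}
    for w in vocab:
        if w in requestable_probs:
            p_copy[w] = requestable_probs[w]
        elif w in response_probs:
            p_copy[w] = response_probs[w]
        elif w in informable_values:
            p_copy[w] = 1.0
        else:
            p_copy[w] = 0.0
    return p_copy
```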
The agent response decoder is responsible for generating a de-lexicalized agent response. The response slots are substituted with the values of the results obtained by querying the KB before the response is returned to the user.
Like the informable slot value decoder, the agent response decoder also uses a copy mechanism, so it has a copy probability and generation probability. Consider the generation of the $j$ th word. Its generation score $\phi _g$ is calculated as:
$$\small \begin{split} &c^{A^E}_j = \text{Attn}(h_{j-1}^A, \lbrace h_i^E\rbrace ), \\ &c^{A^B}_j = \text{Attn}(h_{j-1}^A, \lbrace h_{i}^{k^I}|k^I \in K^I, i \le |Y^{k^I}|\rbrace \\ &\cup \lbrace h^{k^R}| k^R \in K^R\rbrace \cup \lbrace h^{k^S}| k^S \in K^S\rbrace ),\\ &h^{A}_j = \text{GRU}_A\Big ( (c^{A^E}_j \circ c^{A^B}_j \circ e^{A}_j \circ d_t), h_{j-1}^A \Big ),\\ &\phi _g(y^A_j) = W_{g}^{A} \cdot h^{A}_j, \end{split}$$ (Eq. 21)
where $c^{A^E}_j$ is a context vector obtained by attending to the hidden vectors of the input encoder, $c^{A^B}_j$ is a context vector obtained by attending to all hidden vectors of the informable slot value decoder, requestable slot classifier and response slot classifier, and $W_{g}^{A}$ is a learned weight matrix. The concatenation of the two context vectors $c^{A^E}_j$ and $c^{A^B}_j$ , the embedding vector $e^{A}_j$ of the previously generated word and the KB query output vector $d_t$ is used as input to a GRU. Note that the initial hidden state is $h_0^A = h^{E}_{l}$ . The copy score $\phi _c$ is calculated as:
$$\small \phi _c(y_j^A) = {\left\lbrace \begin{array}{ll} \mathcal {P}^C(y_j^A) \cdot \text{tanh}(W_c^A \cdot h^{y_j^A}) \cdot h_j^A, &\\ \text{if } y_j^A \in I_t \cup K^R \cup K^S,&\\ \mathcal {P}^C(y_j^A), \text{otherwise},& \end{array}\right.}$$ (Eq. 22)
where $W_c^A$ is a learned weight matrix. The final probability is:
$$\small P(y^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}) = \text{Copy}(\phi _g(y^A_j), \phi _c(y_j^A)).$$ (Eq. 23)
Let $z$ denote the ground truth de-lexicalized agent response. The loss for the agent response decoder is calculated as follows where $Y^A$ is the sequence of agent response decoder prediction:
$$\small \mathcal {L}^A = - \frac{1}{|Y^{A}|} \sum _j \log P(y^{A}_j = z^{A}_j|y^{A}_{j-1}, h^{A}_{j-1}).$$ (Eq. 24)
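The Copy(·,·) operator is not spelled out here; one common realization in copy-augmented decoders, in the spirit of BIBREF29, is to normalize generation and copy scores jointly with a single softmax, as in the sketch below. Probability mass for a word that occurs both in the vocabulary and among the copy candidates would still have to be summed by the caller; the function name is ours.

```python
import torch
import torch.nn.functional as F

def combine_copy_and_generation(phi_g, phi_c):
    """Jointly normalize generation scores (over the vocabulary) and copy scores
    (over the copy candidates) with one softmax; a sketch of one possible Copy(.,.) choice.

    phi_g: (vocab_size,) generation scores, phi_c: (num_candidates,) copy scores.
    Returns a distribution of length vocab_size + num_candidates.
    """
    return F.softmax(torch.cat([phi_g, phi_c]), dim=0)
```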
Loss Function
The loss function of the whole network is the sum of the four losses described so far for the informable slot values $\mathcal {L}^I$ , requestable slot $\mathcal {L}^R$ , response slot $\mathcal {L}^S$ and agent response decoders $\mathcal {L}^A$ , weighted by $\alpha $ hyperparameters:
$$\small \mathcal {L} = \alpha ^{I}\mathcal {L}^I + \alpha ^{R}\mathcal {L}^R + \alpha ^{S}\mathcal {L}^S + \alpha ^{A}\mathcal {L}^A.$$ (Eq. 26)
The loss is optimized in an end-to-end fashion, with all modules trained simultaneously with loss gradients back-propagated to their weights. In order to do so, ground truth results from database queries are also provided to the model to compute the $d_t$ , while at prediction time results obtained by using the generated informable slot values $I_t$ are used.
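Written out as a small sketch (the default weights here are placeholders; the values actually used are reported in the experiments section):

```python
def total_loss(loss_I, loss_R, loss_S, loss_A, alpha_I=1.0, alpha_R=1.0, alpha_S=1.0, alpha_A=1.0):
    """Weighted sum of the informable, requestable, response-slot and agent response losses."""
    return alpha_I * loss_I + alpha_R * loss_R + alpha_S * loss_S + alpha_A * loss_A
```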
Experiments
We tested the FSDM on the Cambridge Restaurant dataset (CamRest) BIBREF7 and the Stanford in-car assistant dataset (KVRET) BIBREF13 described in Table 1 .
Preprocessing and Hyper-parameters
We use NLTK BIBREF31 to tokenize each sentence. The user utterances are precisely the original texts, while all agent responses are de-lexicalized as described in BIBREF10 . We obtain the labels for the response slot decoder from the de-lexicalized response texts. We use 300-dimensional GloVe embeddings BIBREF32 trained on 840B words. Tokens not present in GloVe are initialized to be the average of all other embeddings plus a small amount of random noise to make them different from each other. We optimize both training and model hyperparameters by running Bayesian optimization over the product of validation set BLEU, EMR, and SuccF $_1$ using skopt. The model that performed the best on the validation set uses Adam optimizer BIBREF33 with a learning rate of 0.00025 for minimizing the loss in Equation 26 for both datasets. We apply dropout with a rate of 0.5 after the embedding layer, the GRU layer and any linear layer for CamRest and 0.2 for KVRET. The dimension of all hidden states is 128 for CamRest and 256 for KVRET. Loss weights $\alpha ^I$ , $\alpha ^R$ , $\alpha ^S$ , $\alpha ^A$ are 1.5, 9, 8, 0.5 respectively for CamRest and 1, 3, 2, 0.5 for KVRET.
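A minimal sketch of how such a search could be wired up with skopt is shown below; the search space, the dummy train_and_validate function and all names are illustrative assumptions, not the configuration that was actually used.

```python
from skopt import gp_minimize
from skopt.space import Real, Integer

def train_and_validate(learning_rate, dropout, hidden_size):
    # Placeholder standing in for actual model training; returns dummy validation metrics.
    score = 1.0 / (1.0 + abs(learning_rate - 2.5e-4) * 1e3 + abs(dropout - 0.5) + abs(hidden_size - 128) / 128)
    return score, score, score   # BLEU, EMR, SuccF1

space = [Real(1e-4, 1e-3, prior="log-uniform", name="learning_rate"),
         Real(0.1, 0.6, name="dropout"),
         Integer(64, 256, name="hidden_size")]

def objective(params):
    learning_rate, dropout, hidden_size = params
    bleu, emr, succ_f1 = train_and_validate(learning_rate, dropout, hidden_size)
    return -(bleu * emr * succ_f1)   # skopt minimizes, so negate the product

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print(result.x, -result.fun)
```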
Evaluation Metrics
We evaluate the performance concerning belief state tracking, response language quality, and task completion. For belief state tracking, we report precision, recall, and F $_1$ score of informable slot values and requestable slots. BLEU BIBREF34 is applied to the generated agent responses for evaluating language quality. Although it is a poor choice for evaluating dialogue systems BIBREF35 , we still report it in order to compare with previous work that has adopted it. For task completion evaluation, the Entity Match Rate (EMR) BIBREF7 and Success F $_1$ score (SuccF $_1$ ) BIBREF10 are reported. EMR evaluates whether a system can correctly retrieve the user's indicated entity (record) from the KB based on the generated constraints so it can have only a score of 0 or 1 for each dialogue. The SuccF $_1$ score evaluates how a system responds to the user's requests at dialogue level: it is F $_1$ score of the response slots in the agent responses.
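As an illustration, a dialogue-level SuccF $_1$ over response slots could be computed along the following lines; this is a sketch, and the matching rules of the original evaluation scripts may differ in detail.

```python
def success_f1(requested_slots, provided_slots):
    """F1 between the slots requested by the user in a dialogue and the slots
    provided in the agent responses (both given as sets of slot names)."""
    if not requested_slots or not provided_slots:
        return 0.0
    true_positives = len(requested_slots & provided_slots)
    precision = true_positives / len(provided_slots)
    recall = true_positives / len(requested_slots)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(success_f1({"address", "phone"}, {"address", "phone", "area"}))   # 0.8
```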
Benchmarks
We compare FSDM with four baseline methods and two ablations.
NDM BIBREF7 proposes a modular end-to-end trainable network. It applies de-lexicalization on user utterances and responses.
LIDM BIBREF9 improves over NDM by employing a discrete latent variable to learn underlying dialogue acts. This allows the system to be refined by reinforcement learning.
KVRN BIBREF13 adopts a copy-augmented Seq2Seq model for agent response generation and uses an attention mechanism on the KB. It does not perform belief state tracking.
TSCP/RL BIBREF10 is a two-stage CopyNet which consists of one encoder and two copy-mechanism-augmented decoders for belief state and response generation. TSCP includes further parameter tuning with reinforcement learning to increase the appearance of response slots in the generated response. We were unable to replicate the reported results using the provided code, hyperparameters, and random seed, so we report both the results from the paper and the average of 5 runs on the code with different random seeds (marked with $^\dagger $ ).
FSDM is the proposed method and we report two ablations: in FSDM/St the whole state tracking is removed (informable, requestable and response slots) and the answer is generated from the encoding of the input, while in FSDM/Res, only the response slot decoder is removed.
Result Analysis
At the turn level, FSDM and FSDM/Res perform better than TSCP and TSCP/RL on belief state tracking, especially on requestable slots, as shown in Table 2 . FSDM and FSDM/Res use independent binary classifiers for the requestable slots and are capable of predicting the correct slots in all those cases. FSDM/Res and TSCP/RL do not have any additional mechanism for generating response slots, so FSDM/Res performing better than TSCP/RL shows the effectiveness of the flexibly-structured belief state tracker. Moreover, FSDM performs better than FSDM/Res, but TSCP performs worse than TSCP/RL. This suggests that using RL to increase the appearance of response slots in the response decoder does not help belief state tracking, but our response slot decoder does.
FSDM performs better than all benchmarks on the dialogue level measures too, as shown in Table 3 , with the exception of BLEU score on KVRET, where it is still competitive. Comparing TSCP/RL and FSDM/Res, the flexibly-structured belief state tracker achieves better task completion than the free-form belief state tracker. Furthermore, FSDM performing better than FSDM/Res shows the effectiveness of the response slot decoder for task completion. The most significant performance improvement is obtained on CamRest by FSDM, confirming that the additional inductive bias helps to generalize from smaller datasets. More importantly, the experiment confirms that, although making weaker assumptions that are reasonable for real-world applications, FSDM is capable of performing at least as well as models that make stronger limiting assumptions which make them unusable in real-world applications.
Error Analysis
We investigated the errors that both TSCP and FSDM make and discovered that the sequential nature of the TSCP state tracker leads to the memorization of common patterns that FSDM is not subject to. As an example (Table 4 ), TSCP often generates “date; party” as requestable slots even if only “party” and “time” are requested like in “what time is my next activity and who will be attending?” or if “party”, “time” and “date” are requested like in “what is the date and time of my next meeting and who will be attending it?”. FSDM produces correct belief states in these examples.
FSDM misses some requestable slots in some conditions. For example, consider the user's utterance: “I would like their address and what part of town they are located in”. The ground-truth requestable slots are `address' and `area'. FSDM only predicts `address' and misses `area', which suggests that the model did not recognize `what part of town' as being a phrasing for requesting `area'. Another example is when the agent proposes “the name_SLOT is moderately priced and in the area_SLOT part of town . would you like their location ?” and the user replies “i would like the location and the phone number, please”. FSDM predicts `phone' as a requestable slot, but misses `address', suggesting it doesn't recognize the connection between `location' and `address'. The missing requestable slot issue may propagate to the agent response decoder. These issues may arise due to the use of fixed pre-trained embeddings and the single encoder. Using separate encoders for user utterance, agent response and dialogue history or fine-tuning the embeddings may solve the issue.
Conclusion
We propose the flexibly-structured dialogue model, a novel end-to-end architecture for task-oriented dialogue. It uses the structure in the schema of the KB to make architectural choices that introduce inductive bias and address the limitations of fully structured and free-form methods. The experiment suggests that this architecture is competitive with state-of-the-art models, while at the same time providing a more practical solution for real-world applications.
Acknowledgments
We would like to thank Alexandros Papangelis, Janice Lam, Stefan Douglas Webb and SIGDIAL reviewers for their valuable comments. | NDM, LIDM, KVRN, and TSCP/RL |
70e596dd4334a94844454fa7b565889556e2358d | 70e596dd4334a94844454fa7b565889556e2358d_0 | Q: How successful are they at matching names of authors in Japanese and English?
Text: List of Acronyms
ACM: Association for Computing Machinery
ASCII: American Standard Code for Information Interchange
API: Application Programming Interface
BHT: Bibliography HyperText
DBLP: Digital Bibliography & Library Project (former meaning: DataBase systems and Logic Programming)
FAQ: Frequently Asked Questions
GB: GigaByte
HTML: HyperText Markup Language
HTTP: HyperText Transfer Protocol
ID: Identifier
IEEE: Institute of Electrical and Electronics Engineers
IFIP: International Federation for Information Processing
IPSJ: Information Processing Society of Japan
IPSJ DL: Digital Library of the Information Processing Society of Japan
ISO: International Organization for Standardization
JAR: Java ARchive
JDBC: Java DataBase Connectivity
JDK: Java Development Kit
OAI: Open Archives Initiative
OAI-PMH: Open Archives Initiative - Protocol for Metadata Harvesting
PDF: Portable Document Format
RAM: Random Access Memory
SAX: Simple API for XML
SQL: Structured Query Language
SPF: Single Publication Format
TOC: Tables Of Contents
URL: Uniform Resource Locator
XML: eXtensible Markup Language
About This Diploma Thesis
The idea for this work was born when the author was searching for a possibility to combine computer science with his minor subject Japan studies in his diploma thesis. After dismissing some ideas leaning towards Named Entity Recognition and computational linguistics, the author chose “Integration of Japanese Papers Into the DBLP Data Set” as his subject. The DBLP is a well-known and useful tool for finding papers published in the context of computer science. The challenge of dealing with such a huge database and the problems that occur when processing Japanese input data were the reasons why this subject was chosen. The hope is that, in the future, many Japanese papers can be added by the responsible people of the DBLP project.
Motivation
Computer scientists are likely to use the DBLP to find information about certain papers or authors. Therefore, the DBLP is supposed to provide information about as many papers as possible. For example, one could be interested in the paper “Analysis of an Entry Term Set of a Civil Engineering Dictionary and Its Application to Information Retrieval Systems” by Akiko Aizawa et al. (2005) but DBLP does not include it yet. Japanese scientists might look for the original (Japanese) title “土木関連用語辞典の見出し語の分析と検索システムにおける活用に関する考察” or use Aizawa's name in Japanese characters (相澤彰子) for a search in DBLP. The DBLP contains the author “Akiko Aizawa” but does not contain this specific paper or the author's original name in Japanese characters. Our task is to implement a tool which addresses these questions, supports the DBLP team in the integration of Japanese papers and reveals the difficulties of realizing the integration.
Composition of the Diploma Thesis
Dates are displayed in the ISO 8601 standard format YYYY-MM-DD, e.g. 2012-10-19.
Although scientific works about the Japanese language often display the Sino-Japanese reading of kanji (a Japanese character set) with uppercase letters to distinguish them from the other “pure” Japanese reading, we will not use uppercase letters to distinguish them in this work.
When a Japanese word is used in its plural form in this work, the word always stays unmodified. The reason is that in the Japanese language there is no differentiation between a singular and plural form.
We use a macron instead of a circumflex to display a long vowel of a Japanese word in Latin transcription (see section SECREF14 ).
Acknowledgement
First I would like to thank Prof. Dr. Bernd Walter and Prof. Dr. Peter Sturm for making this diploma thesis possible. Special thanks go to Florian Reitz for the great support and the useful answers for the questions I had while I have been working on this diploma thesis. I also want to acknowledge the help of Peter Sommerhoff, Daniel Fett, David Christ and Kana Matsumoto for proofreading my work. I thank Dr. Michael Ley, Oliver Hoffmann, Peter Birke and the other members of the Chair of Database and Information Systems of the University of Trier. Last but not least I want to tell some personal words to my family in my and their native language German:
Ich möchte nun noch meinen Eltern und meinem Bruder Peter dafür danken, dass sie mich in meiner Diplomarbeitsphase, meinem Studium und auch schon davor immer unterstützt haben und immer für mich da waren, wenn ich sie brauchte. Ich weiß es zu schätzen. (Translation: I would now like to thank my parents and my brother Peter for always supporting me during my diploma thesis phase, my studies and even before that, and for always being there for me when I needed them. I appreciate it.)
Writing in Japanese
“My view is that if your philosophy is not unsettled daily
then you are blind to all the universe has to offer.”
(Neil deGrasse Tyson)
First we need to understand some aspects of the Japanese language and especially the different ways of writing Japanese because the peculiarities of the Japanese writing system are a crucial point of our work. It lays the foundation for all Japanese-related subjects such as the structure of Japanese names (discussed in section SECREF19 ), a dictionary for Japanese names (discussed in section SECREF36 ) or the publication metadata source for Japanese publications (discussed in section SECREF39 ).
Hadamitzky ( BIBREF0 , p. 8-57) gives an overview about the basics of Japanese writing. The Japanese writing system includes kanji, hiragana, katakana and the possibility to use Latin characters.
Kanji
Kanji is the Japanese script which consists of traditional Chinese characters. It came to Japan around the 4th century. Since the Japanese had not yet developed a writing system of their own, they began to use the Chinese characters. At the beginning, the characters were linked phonetically with a certain sound, so that they could write down all existing words by their sound. Applying this principle the man'yōgana were created. In addition to this, a second principle was introduced to write Japanese. This time the people orientated themselves on the meaning of the Chinese characters to choose a writing for a word. Applying the second principle, the kanji were created. While the man'yōgana were simplified to hiragana and katakana (see following sections SECREF7 and SECREF11 ) the general usage of kanji did not change.
Due to an increase in number and possible readings of characters, the government began to try to simplify the Japanese writing system after the Meiji Restoration at the end of the 19th century. The last important reform took place after World War II. Along with some other changes and regulations, the permitted characters in official documents (tōyō kanji) were limited to 1850 in 1946 and increased to 1900 in a draft from 1977. In 1981 they were replaced by the “List of Characters for General Use” (jōyō kanji) containing 1945 characters. In 1951 the government published a list of additional 92 kanji permitted for personal names. The number of kanji permitted for personal names increased with time passing by. Eschbach-Szabo ( BIBREF2 , p. 175) says the last change permitted 983 kanji for personal names in 2004. The press tries to abide by the jōyō kanji. Japanese literature (science, fiction, etc.) uses about 4000 characters (comprehensive Sino-Japanese kanji dictionaries contain ca. 10000 characters). Japanese people know approximately 3000 kanji on average.
Due to their capability to give a word a meaning, kanji are used in substantives, verbs, adjectives and Japanese personal names.
An important aspect is reading a kanji because there are several possibilities to read one. Saitō and Silberstein ( BIBREF3 , p. 31-34) describe how to read a kanji. There is a Japanese reading kun and a Sino-Japanese reading on. Depending on the text and grammar context either the kun or on reading is required. For example the kanji 生 is read sei in 学生 (gakusei, meaning: student, on reading) but is read u in 生まれる (umareru, meaning: being born, kun reading). A single kanji can have several kun and several on readings.
For our work it is important to know that one character can have several readings in names too.
Hiragana
The syllabary hiragana evolved from the man'yōgana by simplifying the characters. Every syllable is phonetically assigned to one sound of the spoken language (with two exceptions which can have two sounds each). The gojūon table shown in figure FIGREF9 lists the 46 syllables used today in a certain way (it can be compared with the ABC for letters). Another but obsolete way to order the syllables is iroha which is a poem containing all syllables. Although the name implies 50 sounds (gojū means “50”, on means “sound”) there are only 46 syllables left in modern Japanese. Actually, only 45 syllables belong to the gojūon table. The ん (n) counts as an extra symbol (see gojūon tables in figures FIGREF9 and FIGREF12 ).
Other additional syllables are dakuon (e.g. だ/da, recognizable by two little strokes), handakuon (e.g. ぱ/pa, recognizable by a little circle) and yōon (e.g. しゃ/sha, recognizable by a normally sized character that is followed by a smaller character).
You can write every Japanese word in hiragana but if possible, kanji are usually preferred to avoid problems with homonyms (we take a look at homonyms in chapter SECREF5 ). Hiragana is mainly used to write words not covered by kanji and as inflected endings. Kanji and hiragana are often combined within one word. For example 読む (yomu) is the basic form of the verb “to read”. The kanji 読 means reading by itself and in combination with the hiragana syllable む it becomes the verb “to read” in a special grammatical form specifying tense, politeness level and other properties.
Katakana
The syllabary katakana also evolved from the man'yōgana by simplifying the characters, consists of 46 characters nowadays (representing the same syllables as hiragana) and is usually ordered by the gojūon table. Figure FIGREF12 presents the katakana in a gojūon table. Besides optical differences with hiragana, katakana are used in other contexts. Japanese mostly use them to write foreign words including foreign personal names.
So foreigners often apply katakana for their names. For example, the author's name can be transcribed as パウル·ソマホフ. The dot · in the middle separates family and given name. Foreign names are often written with the given name preceding the family name.
Latin Characters/Transcription
Transcription systems which convert kanji, hiragana and katakana to Latin characters are usually called rōmaji. Japanese can be easily transcribed by 22 letters and two additional signs. Due to many words having the same pronunciation, the meaning of words is sometimes ambiguous if they are transcribed into Latin characters. In 1954 the government released recommendations for transcribing Japanese. It recommended following two transcription systems:
The kunreishiki rōmaji assigns transcriptions according to the order in the gojūon table without regard to phonetic divergences of some consonants (we will discuss these divergences later). It has been introduced for official usage by the government only slightly different in 1937. It became the preferred transcription system in the standard ISO 3602 “Documentation - Romanization of Japanese (kana script)” BIBREF6 .
The hebonshiki rōmaji was developed by a council of Japanese and foreign erudites in 1885 and spread by the American missionary James C. Hepburn (Hebon in Japanese), especially thanks to his Japanese-English dictionary published one year later. This work also employs hebonshiki. Kunreishiki would lead to transcriptions like kunreisiki, hebonsiki and kanzi.
Although the kunreishiki became the preferred system of the government, the international community often prefers the Hepburn system because the written words suggest a more intuitive pronunciation than kunreishiki. There are also language-related transcription systems that are rarely used. Kaneko and Stickel ( BIBREF7 , p. 53-55) mention them:
The important aspect is where the systems differ because we need to know where these differences occur when we deal with Personal Name Matching problems later. Figure FIGREF165 in the appendix reveals the differences between the transcription systems. It summarizes 18 differences among the affected syllables. Unfortunately, there can be even more transcription differences. ISO 3602 highlights some more special cases when it comes to transcribing Japanese. One is the question whether to put an apostrophe after an n. To avoid misunderstandings, one should put an apostrophe behind an n in certain cases. Otherwise, people could misinterpret the syllable n followed by a syllable composed of a vowel or “y” and a vowel as the syllables na, ni, nu, ne, no, nya, nyu or nyo. We will outline a practical example of this case in section UID99 .
A second irregularity occurs when the same vowel appears right after another. If there is a morpheme boundary between the vowels, they should be transcribed as “aa”, “ii”, etc. but should be transcribed by an additional circumflex otherwise.
Koop and Inada BIBREF4 write about another difficulty called nigori.
“The nigori (濁, literally, `turbidity', `impurity') ... [means] modifying the pronunciation of the consonant in certain of the kana sounds. It may be either (1) inherent, as in suge (`sedge'), suzu (`grelot'), go (`five'), or (2) applied incidentally to the initial consonant of a word or name-element following another in composition, e.g., Shimabara from shima and hara, nenjū from nen and chū, Harada from hara and ta.” ( BIBREF4 , p. 34)
So, if we want to derive a transcription from the family name 中田, we cannot tell whether to take Nakata or Nakada as the rightful transcription.
Japanese Personal Names
七転び、八起き。 Nana korobi, ya oki.
(Fall seven times, get up eight times.)
Japanese saying
One of the central problems in this work is to deal with Japanese personal names. We need to get a picture of Japanese personal names in general to deal with multiple data sources (like the introduced publication metadata sources in chapter SECREF4 ) which may represent the same name with different scripts or transcription methods. The dictionary ENAMDICT will be very helpful when it comes to extracting and verifying name information.
Structure of Japanese Names
Having the urge to name things is part of the human nature. Names make it easy to refer to things, people or any other object in this world. When it comes to name giving, history shows a development in the Japanese society.
Japanese names are divided into family and given name, similar to the system in the Western culture. When Japanese write their name in kanji they put the family name first, followed by the given name (usually without leaving spaces between them), for example 中村武志 (Takeshi Nakamura). While introducing themselves, they often tell their family name and skip the given name. When Japanese refer to others, they have many name particles they put after a name to express the relationship to the other person. There is the neutral san, chan for children, kun particular for boys or sensei for teachers and doctors. ( BIBREF5 , p. 18-19)
Kagami ( BIBREF8 , p. 913) writes about Japanese personal names. Only the samurai and nobility were allowed to carry family names before the Meiji Restoration in 1868. Merchants carried shop names instead (recognizable by the suffix -ya), for example Kinokuniya (shop name) Bunzaemon (given name). Then everybody had to pick a family name after the Meiji Restoration. Approximately 135000 family names are recognized now. The most common family names are Suzuki, Satō, Tanaka, Yamamoto, Watanabe, Takahashi, Kobayashi, Nakamura, Itō, Saitō and others.
“In the feudal age, first and second given names were used as male names. The first name was Kemyoo which was the order of brothers, and the second name was the formal name given at the coming of age ceremony (genpuku), e.g. the name of a famous general in 12c.: Minamoto (family name) no (of) Kuroo (kemyoo) Yoshitune (formal given name), and before the genpuku ceremony, he was called by Yoomyoo (child name) Ushiwakamaru.” ( BIBREF8 , p. 913)
While there were no restrictions to the number of personal names visible until the Meiji Restoration, due to modernization, Japanese people got the restriction to carry only one given and one family name. ( BIBREF2 , p. 167-169)
Some indicators for assigning the gender to a name also exist. The suffixes -ko (e.g. Hanako), -mi (Natsumi) and -yo (Yachiyo) indicate a female name. Male names are harder to identify because they have no fixed pattern. The suffix -o (Kazuo) mostly belongs to a male name though.
Family names often consist of two kanji characters, rarely of one or three characters. ( BIBREF8 , p. 913)
Eschbach-Szabo ( BIBREF2 , p. 157-309) dedicates an elaborate chapter to Japanese personal names. Compared to the Chinese system, the Japanese naming system shows more tolerance. Several readings are left besides each other, formal rules are not always applied in practice. Japanese apprehend names mainly visually by the characters, secondarily by the reading and sound. This is why several readings for a written name are still acceptable in the modern Japanese world. In the feudal system, names were needed to determine the position and roles of a person in the family and the society rather than characterizing him or her as an individual. Japan has an open naming system which allows adding new names. This is a difference to the exclusive name lists in Germany or France. ( BIBREF2 , p. 157-166)
Even the apparently simple kanji 正 has a lot of possible readings: Akira, Kami, Sada, Taka, Tadashi, Tsura, Nao, Nobu, Masa. We can see the same phenomenon in recently approved kanji too. When we see 昴 we cannot be sure whether it is read Kō or Subaru. ( BIBREF9 )
“Conversely, it often happens that one does not know to write a name of given pronunciation. For example, Ogawa can be written 尾川 or 小川. In Japan, when two people meet for the first time, they exchange business cards. This custom often baffles foreigners, but for Japanese it is a ritual with practical purpose: Japanese do not feel at ease until they see how a name is spelled out in kanji.” ( BIBREF9 )
Figure FIGREF22 illustrates the problem. The cashier tries to read the customer's name and cannot determine the right name. According to the customer's reaction, his first two trials Hiroko and Yūko seem to be wrong. Ogawa considers the name polygraphy as a reason why the creation of new name characters is still allowed.
Some characteristics of the Japanese naming system are:
only little renaming of people
semantic variance (names indicate different meanings/attributes)
admission of foreign elements (foreign names get assimilated)
possibility of polygraphic writing
diversity of writing (many scripts usable, weak orthographic normalization)
number of personal names for one person
In academic circles a Sino-Japanese reading led to a more reputable name. So the famous linguist 上田万年 from the Meiji era became known as Kazutoshi Ueda AND Mannen Ueda (Mannen is the Sino-Japanese on reading, Kazutoshi is the Japanese kun reading). Modern guidebooks underline that maybe one has to take a loan word from another language to find the corresponding reading for a name in kanji. For example, 宇宙 could be read as Kosumo (from the Greek word for cosmos) instead of Uchū. Also ノイ (Noi), derived from the German word “neu” (new), became a Japanese given name. Another imaginable name is “Sky” written as 空海 (meanings: 空 Sky, 海 sea) and transcribed as Sukai (actually kūkai). This would finally show the impact of globalization also on the Japanese naming system. If one has lived in Japan for a while and wants to adapt or register his or her Western name, one can choose corresponding kanji either by meaning or reading of the original name. Another possibility is transcribing the name with katakana. ( BIBREF2 , p. 170-171, 305-309)
The name Anna exists in many cultures. The girls in figure FIGREF29 are both called Anna. Both turn around when they hear their name and respond in their mother tongue (“Yes!” and “Hai!”, respectively).
One principle of Japanese name giving is ateji. Ateji (当て字) means “appropriate characters”. It says Japanese try to find characters with good, positive meanings for their children's name. Examples are 愛子 (愛: ai, love; 子: ko, child), 夏美 (夏: natsu, summer; 美: mi, beauty) or 正 (Tadashi, correct, honest). There is also a list with characters that are allowed but should be avoided because of bad associations. Characters like 蟻 (ari, ant), 苺 (ichigo, strawberry), 陰 (kage, shadow), 悪 (aku, bad/evil) belong to this list. ( BIBREF2 , p. 172-176)
A particular case drew public attention from June 1993 to February 1994 when Shigeru Satō wanted to call his son Akuma, written as 悪魔 (devil/demon). The civil registry office declined the registration after some discussion because they were worried about other children teasing him. The father went to court but the judges also declined the wish. Although the father wanted to give his son a unique, rememberable name, the judges saw a possible problem in his individual identification process and also getting teased (ijime) by other children in school someday. Then Satō tried to choose other characters while keeping the reading Akuma. But also changing the name partly into man'yōgana (亜久魔) did not change anything about the declination because of the phonological equality implying the same negative associations. Thereupon the father picked the character 神 (god) and its unusual reading Jin. Even though Shintoistic gods can be good or evil, the civil registry office accepted the name. Satō announced his intention to keep calling his son Akuma anyway. So a new (yet unofficial) reading for a character might be established. ( BIBREF2 , p. 271-278)
An article of “Japan Today” from December 2012 shows that there is still a debate about this subject.
“[...]Shinzo Abe, the leader of the Liberal Democratic Party made a stand against kirakira names last week when he stated that giving a child a name like Pikachu, which could be written something like 光宙 (`light' and `space'), is tantamount to child abuse, saying: `Children are not pets; we have to provide guidance for parents who would name their child in such a way.' ”( BIBREF11 )
Despite regulations, the discussion about the culture of name giving does not seem to have ended yet. Japanese comics like the one in figure FIGREF34 suggest a happy-go-lucky life if one has a common everyday name like Keiko.
Today's registration of names allows 2983 kanji for given names, 4000 kanji for family names, 700 man'yōgana, 46 hiragana and 46 katakana. There are still people whose names are written with the obsolete kana syllabary hentaigana, which was prohibited in 1948 ( BIBREF2 , p. 176-177; BIBREF12 ). Regarding this variety of characters (and readings), it is not surprising that even well-educated Japanese have problems reading certain names too, or rather cannot be sure that the chosen reading is the correct one in the current situation. The usage of geometrical and punctuation signs is forbidden. The sign ◯ (maru) is an example of such a forbidden one. Also forbidden is the usage of Latin characters (rōmaji) at the registration of a name. Rōmaji can be used privately, though. ( BIBREF2 , p. 176-177)
Names can be changed by marriage, adoption or getting a pseudonym or special posthumous name. Titles can be acquired too. ( BIBREF2 , p. 251)
After disestablishing the patriarchal ie system in which a man (for example the husband) is the dominating householder of a family, the family name has not been focused on the affiliation to a family anymore but has been focused on the couple living together in joint lives. ( BIBREF2 , p. 253-255)
Writing a Japanese name can be ambiguous. While the name written in kanji is definite, displaying it in Latin characters leads to several possibilities. Japanese themselves usually write their name using kanji. To find matching authors in the DBLP, it will be crucial for us to have names in Latin characters later on (in chapter SECREF6 ) because the standard encoding format of the file containing the main data of the DBLP project is ISO 8859-1 (Latin-1).
We sometimes talk about “kanji names” or “names in kanji representation” in this work. Although the expression does not suggest it, they shall include all names in Japanese characters, ergo names in kanji, hiragana and katakana.
ENAMDICT
To automatically detect where a Japanese family name in kanji notation ends and the given name begins, we should factor a name dictionary into our work. It is important that this dictionary includes the names written in kanji and a clear transcription for them in Latin characters. A useful dictionary for our purposes is ENAMDICT.
ENAMDICT BIBREF13 is a free dictionary for Japanese proper names, maintained by the Monash University in Victoria (Australia). The Electronic Dictionary Research and Development Group owns the copyright. In 1995, ENAMDICT became an independent project by dividing the universal dictionary EDICT into two projects. ENAMDICT contains person names and non-person names like places and companies as well. Table TABREF38 shows the online statistics about the content of the ENAMDICT file. We will call the categories “name types” in subsequent chapters.
“A proper name is a word or group of words which is recognized as having identification as its specific purpose, and which achieves, or tends to achieve that purpose by means of its distinctive sound alone, without regard to any meaning possessed by that sound from the start, or aquired by it through association with the object thereby identified.” ( BIBREF14 , p. 73)
These internal abbreviations occur again when we construct a database for Japanese names in chapter SECREF74 .
Publication Metadata Sources
百語より一笑 Hyaku go yori isshō
(A smile is more worth than a hundred words.)
Japanese saying
This chapter gives an overview of the publication metadata sources that we will need later. We take a look at these sources because we will discuss a way to extract metadata information from one source containing Japanese papers and import them into another source in chapter SECREF6 .
Digital Library of the IPSJ
The IPSJ is a Japanese society in the area of information processing and computer science. It was founded in April 1960 and, by its own account, helps evolving computer science and technology and contributes new ideas in the digital age. It regularly publishes the magazine “Information Processing” (jōhō shori) and a journal, holds symposiums and seminars, Special Interest Groups issue technical reports and hold conferences. It is also the Japan representative member of the IFIP and established partnerships with the IEEE, ACM and other organizations. IPSJ develops drafts of international standards and Japanese industrial standards as well. Eight regional research sections are widespread over Japan. IPSJ had over 17000 members in March 2011. ( BIBREF15 ; BIBREF16 )
The IPSJ provides a Digital Library (referenced as IPSJ DL in this work) where everybody can search Japanese papers in the field of computer science. The search page can be displayed in Japanese and English, most papers are written in Japanese. Free papers are accessible in PDF format, non-free ones can be bought. A tree view provides the order structure of the papers and there is a keyword search available. We are especially interested in the metadata export functions, though. The online application offers the following export formats:
OAI-PMH
BibTeX
OWL SWRC
WEKO Export
For our purposes the OAI-PMH is the most suitable solution because we can send simple HTTP requests to the server and get publication metadata as a result. It “provides an application-independent interoperability framework based on metadata harvesting” ( BIBREF17 ) and consists of two groups of participants. Data Providers can be servers hosting and supplying the metadata. Service Providers take the harvester role and process the recieved metadata from the Data Provider. The application-independent interoperability is achieved by using XML as basic exchange format. Arbitrary programs can parse XML input data very easily, so can we.
While accessing the server, the data can be extracted in several ways. We can either access an OAI-PMH repository by the repository name, the metadata format prefix of the record and a unique identifier or get a list of records with only one request.
A request for a list of records looks like this:
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListRecords&metadataPrefix=oai_dc
It may also contain a start date and an end date or a resumption token. The headers of records include a corresponding time stamp. The server's response to a request offers only 100 publications. We need this resumption token because it determines the point where we resume the harvest.
In the beginning and for debugging, it was more comfortable to increment a counter that acts as the unique identifier and send requests for single entries with the respective ID multiple times. Fortunately, the entries can be addressed by such an integer ID (plus some constant name):
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:ipsj.ixsq.nii.ac.jp:27130
The last entry containing real publication metadata has the suffix integer 87045 in its ID. After that, some entries with the status deleted follow. If we continue requesting even higher IDs, we soon only get replies with the error code idDoesNotExist, implying there are no publications with higher IDs. We will discuss the implementation of an OAI-PMH harvester for the IPSJ DL in section UID99 .
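A minimal harvester sketch, using the requests library and ElementTree and following resumption tokens as described above; error handling and metadata parsing are omitted, and the small record limit in the example loop only keeps the output short.

```python
import requests
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
BASE_URL = "http://ipsj.ixsq.nii.ac.jp/ej/"

def harvest_records():
    """Yield all record elements of the repository, following resumption tokens."""
    params = {"action": "repository_oaipmh", "verb": "ListRecords", "metadataPrefix": "oai_dc"}
    while True:
        root = ET.fromstring(requests.get(BASE_URL, params=params).content)
        list_records = root.find(OAI + "ListRecords")
        if list_records is None:
            break
        for record in list_records.findall(OAI + "record"):
            yield record
        token = list_records.find(OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        # A follow-up request carries only the resumption token, no metadataPrefix.
        params = {"action": "repository_oaipmh", "verb": "ListRecords",
                  "resumptionToken": token.text.strip()}

for i, record in enumerate(harvest_records()):
    if i >= 3:
        break
    print(ET.tostring(record, encoding="unicode")[:200])
```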
DBLP Project
The DBLP is a worldwide known database for publication metadata in the field of computer science. Ley BIBREF19 gives a brief explanation of the DBLP, additional information is extracted from the online DBLP FAQ BIBREF20 . It was started in 1993 as a test server for web technologies and named “Database systems and Logic Programming” in the beginning. But it grew and became a popular web application for computer scientists. The Computer Science department of the University of Trier founded the project, since summer 2011 it is a joint project of Schloss Dagstuhl - Leibniz Center for Informatics and the University of Trier.
“For computer science researchers the DBLP web site is a popular tool to trace the work of colleagues and to retrieve bibliographic details when composing the lists of references for new papers. Ranking and profiling of persons, institutions, journals, or conferences is another sometimes controversial usage of DBLP.” ( BIBREF19 )
The publication metadata is stored in the XML file dblp.xml containing more than 2 million publications and exceeding a size of 1 GB (state of October 2012). An excerpt of the beginning of dblp.xml can be found in the appendix section SECREF171 . The header dictates ISO-8859-1 (Latin-1) as encoding of the file. Considering that we want to import Japanese names in kanji (which are not included in Latin-1) we must handle that issue somehow. We will discuss the solution in section UID121 .
The web front end of the DBLP provides an overview of coauthor relationships by a Coauthor Index (see figure FIGREF53 ). The Coauthor Index can be found at the author's page after the list of the author's publications itself. It shows all coauthors, common papers and categorizes the coauthors into groups that worked together by giving the author names corresponding background colors.
In his diploma thesis Vollmer BIBREF23 gives useful hints in terms of converting the dblp.xml file to a relational database. He also compares the performance of several relational database management systems for this conversion.
The DBLP team developed a special format for the integration of new publications. It is called Bibliography Hypertext (BHT), is based on HTML and similar to the HTML code of the tables of contents (TOCs) at the DBLP website. An example of a publication list in BHT format can be found in the appendix in section SECREF168 . A BHT file has the following structure. The header (text between h2 tags) contains the volume, the number/issue and the date of issue. A list of corresponding publications follows next. The list is surrounded by a beginning and a closing ul tag, single publication entries start with a li tag. A comma is used for the separation of authors while there should be a colon after the last author name. Then comes the title which has to end with a period, question mark or exclamation point. The next line provides the start and end page in the volume/issue. At last, an optional URL can be added by an ee element to specify an “electronic edition” for a paper. Some guidelines need to be considered, too:
there is no closing li tag
initials should be avoided (full name is preferred)
titles with only upper case letters should be avoided
“0-” is the default page number value if the page information is missing
The BHT file may contain additional information. For example, conference proceedings may have more headers to achieve a better clarity. But it should be as close to the proposed format as possible to guarantee an easy import without unnecessary burdens. ( BIBREF24 ; BIBREF20 , “What is the preferred format to enter publications into DBLP?”)
We will extend the original format in section UID121 to satisfy our needs in the context of Japanese papers.
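To make the BHT structure concrete, the following sketch renders a small publication list in the shape described above. The example metadata is invented and the exact header wording is an assumption, so this is an illustration of the format rather than an official DBLP tool.

```python
def to_bht(volume, number, date, publications):
    """publications: list of dicts with keys 'authors' (list), 'title', 'pages' and optionally 'ee'."""
    lines = ["<h2>Volume {}, Number {}, {}</h2>".format(volume, number, date), "<ul>"]
    for pub in publications:
        lines.append("<li>" + ", ".join(pub["authors"]) + ":")        # no closing li tag
        title = pub["title"]
        lines.append(title if title.endswith((".", "?", "!")) else title + ".")
        lines.append(pub.get("pages", "0-"))                           # "0-" if pages are missing
        if "ee" in pub:
            lines.append("<ee>{}</ee>".format(pub["ee"]))
    lines.append("</ul>")
    return "\n".join(lines)

example = [{"authors": ["Taro Yamada", "Hanako Suzuki"],
            "title": "An Example Title", "pages": "1-10"}]
print(to_bht("53", "2", "February 2012", example))
```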
Personal Name Matching
“The important thing is not to stop questioning;
curiosity has its own reason for existing.”
(Albert Einstein)
After looking at transcription systems, Japanese personal names and publication metadata sources, we will now have to look at Personal Name Matching to enable us to deal with the Japanese names extracted from the metadata sources. First we will discuss Personal Name Matching in general and then problems of Personal Name Matching for Japanese names in particular.
The expression Personal Name Matching comes from the work by Borgman and Siegfried BIBREF25 and is used here as in the extended definition from Reuther's work ( BIBREF26 , p. 48-51). Borgman and Siegfried only talk about synonyms. Synonyms are possible names for the same person. Reuther extended the definition by also including homonyms. A name is a homonym if it can belong to several persons. Personal Name Matching is known by other titles in literature, too. Niu et al. BIBREF27 discuss Cross Document Name Disambiguation:
“Cross document name disambiguation is required for various tasks of knowledge discovery from textual documents, such as entity tracking, link discovery, information fusion and event tracking. This task is part of the co-reference task: if two mentions of the same name refer to same (different) entities, by definition, they should (should not) be co-referenced. As far as names are concerned, co-reference consists of two sub-tasks:
On et al. BIBREF28 formally express their Name Disambiguation problem as follows:
“Given two long lists of author names, X and Y, for each author name x ∈ X, find a set of author names, y ⊆ Y, such that both x and y are name variants of the same author.” ( BIBREF28 )
In contrast to the previous definitions Han et al. BIBREF29 define Name Disambiguation like this:
“Name disambiguation can have several causes. Because of name variations, identical names, name misspellings or pseudonyms, two types of name ambiguities in research papers and bibliographies (citations) can be observed. The first type is that an author has multiple name labels. For example, the author `David S. Johnson' may appear in multiple publications under different name abbreviations such as `David Johnson', `D. Johnson', or `D. S. Johnson', or a misspelled name such as `Davad Johnson'. The second type is that multiple authors may share the same name label. For example, 'D. Johnson' may refer to `David B. Johnson' from Rice University, `David S. Johnson' from AT&T research lab, or `David E. Johnson' from Utah University (assuming the authors still have these affiliations).”( BIBREF29 )
The citations above show that there are many expressions for Personal Name Matching (or sub-categories) which are not equally used by different authors. Niu et al. and On et al. restrict Name Disambiguation to finding synonyms, Han et al. include homonyms in their definition. Even more related expressions can be found in literature. As mentioned, we will use Personal Name Matching in this work as Reuther uses it.
The main aspect of Personal Name Matching is handling synonyms and homonyms. Trying to express the problems formally leads to the following description: Let $P$ be a set of persons, especially characterized by their names, in a certain data set and $P^{\prime}$ a set of all existing persons. We are also given a function $name(\cdot)$ that returns the name label of a person and a relation $\equiv$ that holds between two entries exactly if they denote the same real person. The actual problems can be described as

$$\text{(1)}\quad \exists p_2 \in P: name(p_1) \ne name(p_2) \wedge p_1 \equiv p_2$$

$$\text{(2)}\quad \exists p_2 \in P: name(p_1) = name(p_2) \wedge p_1 \not\equiv p_2$$

with $p_1, p_2 \in P$ and $p_1 \ne p_2$.

Case (1) checks for each person $p_1$ from the person set $P$ whether another person $p_2$ from $P$ exists, so that their name labels are different ($name(p_1) \ne name(p_2)$) but the person is the same ($p_1 \equiv p_2$). So this case covers the synonym problem because the same person has several names here.

Case (2) checks for each person $p_1$ from the person set $P$ whether another person $p_2$ exists in $P$, so that their name labels are equal ($name(p_1) = name(p_2)$) but the persons behind the names differ ($p_1 \not\equiv p_2$). So this case covers the homonym problem because the same name is taken by several people.

The problem of Personal Name Matching arises because such a relation $\equiv$ usually does not exist and needs to be approximated as well as possible, for example by $$p_1 \approx p_2 \;:\Leftrightarrow\; sim(name(p_1), name(p_2)) \ge \theta .$$

Thanks to an appropriate similarity measure $sim$ and a matching threshold $\theta$, we can find such a relation $\approx$ which is approximately equivalent to the original relation $\equiv$. The main task in Personal Name Matching is finding a good similarity measure for the described problem. ( BIBREF26 , p. 52)
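The two cases can be illustrated with a toy data set in which the true identity of every entry is known; in practice this ground truth is exactly what is missing and has to be approximated. The names and identifiers below are invented.

```python
from itertools import combinations

# Toy data: (name label, true person id); the ids play the role of the relation "same person".
persons = [("Takeshi Nakamura", 1), ("T. Nakamura", 1), ("Takeshi Nakamura", 2)]

synonyms = [(a, b) for (a, id_a), (b, id_b) in combinations(persons, 2)
            if a != b and id_a == id_b]    # different labels, same person
homonyms = [(a, b) for (a, id_a), (b, id_b) in combinations(persons, 2)
            if a == b and id_a != id_b]    # same label, different persons

print(synonyms)   # [('Takeshi Nakamura', 'T. Nakamura')]
print(homonyms)   # [('Takeshi Nakamura', 'Takeshi Nakamura')]
```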
Let us have a look at a vivid example.
The birth name of the famous actor Michael Keaton is Michael John Douglas. Keaton took a pseudonym because he could have been confused with the more famous actor Michael Douglas. Synonyms for Keaton are “Michael Keaton”, “Michael Douglas”, “Michael John Douglas”, “Michael J. Douglas”, “M. Keaton” or “M. J. Douglas”.
On the other hand, when we hear the name “Michael Douglas” we cannot be sure which famous actor is referred to, because Michael Douglas is a valid name for both of them. Figure FIGREF62 illustrates this Personal Name Matching problem with Michael Keaton.
The process of Personal Name Matching can be divided into the following steps ( BIBREF26 , p. 56-87):
Criteria for the evaluation of such a process are Precision and Recall ( BIBREF35 , p. 75-81; BIBREF26 , p. 83-85). Let $I$ be a set of items, $R$ be the set of relevant items (e.g. synonyms) with $R \subseteq I$ and $A \subseteq I$ be the answer of a request. In our scenario, the request is usually the question “Is the item $i \in I$ a synonym, or accordingly $i \in R$?”. Then we can define: $$Precision = \frac{|R \cap A|}{|A|}$$ $$Recall = \frac{|R \cap A|}{|R|}$$
Precision testifies whether the reported synonyms during the Name Matching process are really synonyms, Recall allows us to say whether there are synonyms which have not been found.
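Expressed as a small sketch (the example sets are invented):

```python
def precision(relevant, answer):
    return len(relevant & answer) / len(answer) if answer else 0.0

def recall(relevant, answer):
    return len(relevant & answer) / len(relevant) if relevant else 0.0

reported = {"T. Nakamura", "Takeshi Nakamura", "Takashi Nakamura"}   # synonyms reported by the matcher
actual = {"T. Nakamura", "Takeshi Nakamura"}                         # actually correct synonyms
print(precision(actual, reported))   # 0.66...
print(recall(actual, reported))      # 1.0
```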
We use a combination of the Jaccard Similarity Coefficient and Levenshtein Distance in our tool. Bilenko et al. BIBREF36 explain these string matching methods in isolation. Given two word sets $S$ and $T$, the simple Jaccard Similarity Coefficient is: $$Jaccard(S, T) = \frac{|S \cap T|}{|S \cup T|}$$
The Levenshtein Distance uses the operations replacement, insertion and deletion of a character and is defined by a matrix. Let $s$ and $t$ be words, $m$ and $n$ their lengths. Then we can define: $$lev_{s,t}(i, j) = \begin{cases} \max (i, j) & \text{if } \min (i, j) = 0,\\ \min \lbrace lev_{s,t}(i-1, j) + 1,\; lev_{s,t}(i, j-1) + 1,\; lev_{s,t}(i-1, j-1) + [s_i \ne t_j] \rbrace & \text{otherwise,} \end{cases}$$ where the distance of $s$ and $t$ is $lev_{s,t}(m, n)$.
We modify the Jaccard Similarity Coefficient in a way that it classifies two set items as intersected if their Levenshtein Distance is lower than a certain threshold.
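A minimal sketch of this combination follows, treating two tokens as intersecting when their Levenshtein Distance stays below a threshold; the threshold and the example names are illustrative, not the values used in the actual tool.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance with a rolling row."""
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(previous[j] + 1,                 # deletion
                               current[j - 1] + 1,              # insertion
                               previous[j - 1] + (ca != cb)))   # substitution
        previous = current
    return previous[-1]

def soft_jaccard(tokens_a, tokens_b, max_distance=1):
    """Jaccard coefficient where two tokens count as equal if their edit distance is small."""
    matched = sum(1 for a in tokens_a
                  if any(levenshtein(a, b) <= max_distance for b in tokens_b))
    union = len(tokens_a) + len(tokens_b) - matched
    return matched / union if union else 1.0

print(soft_jaccard(["akiko", "aizawa"], ["akiko", "aizava"]))        # 1.0
print(soft_jaccard(["takeshi", "nakamura"], ["takeshi", "kojima"]))  # 0.33...
```

This tolerates small transcription divergences (for example the Hepburn/kunreishiki differences or nigori variants discussed earlier) without treating two completely different names as equal.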
In addition to the general Personal Name Matching, we must take the characteristics of Japanese names into account. Particularly the usage of kanji and several possibilities to transcribe a name make it hard to compare Japanese names. For example, we cannot compare kanji names from the IPSJ DL with the author names in DBLP. Even though kanji are suited best for name comparison it does not work here because the standard encoding of names in DBLP is “Latin-1” which does not support kanji natively.
A big problem for our work is revealed by looking at the given name Akiko with its kanji representation 章子. As we can see in table TABREF71 , 章子 has several possible readings besides Akiko (left column) and Akiko written in Latin characters does not determine an unambiguous match in kanji (right column).
The same problem applies to Japanese family names. Table TABREF72 presents the problem with Kojima as a family name example.
Preparation of Japanese Papers for the Import Into the DBLP Data Set
大事の前の小事 Daiji no mae no shōji
(Who wants to achieve big things must do the little things first.)
Japanese saying
This chapter explains the approach to process and combine the various data sources so that we can import Japanese publications in the end. We will proceed step by step to make the ideas behind the solution as comprehensible as possible.
General Approach
First we will construct a table in a relational database containing information about Japanese names and their transcriptions by converting the ENAMDICT name dictionary. Then we set up a data structure for Japanese names that handles the problem of assigning a given and a family name to a newly instantiated author during parsing the publications of IPSJ DL. At last, we will discuss the actual and titular integration of Japanese papers into the DBLP data set including an explanation that shows how to create a harvester for the OAI-PMH protocol.
Converting an ENAMDICT File to a Relational Database
The first step towards being able to handle Japanese names is distinguishing given and family name in the input text. A relational database containing information about Japanese names and their transcriptions is useful for this task. The database should contain names in kanji, their transcriptions in hiragana and Latin characters and the name type to have a good match with the data source ENAMDICT and to provide all necessary name information we need.
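One possible table layout for this purpose, sketched with sqlite3; the column names and types are our own choice, not a prescribed schema.

```python
import sqlite3

connection = sqlite3.connect("enamdict_names.db")
connection.execute("""
    CREATE TABLE IF NOT EXISTS japanese_names (
        kanji         TEXT,   -- name in kanji, hiragana or katakana
        transcription TEXT,   -- reading in hiragana (may be empty)
        latin         TEXT,   -- transcription in Latin characters
        name_type     TEXT    -- s, g, f, m or u (ENAMDICT name types)
    )
""")
connection.execute("INSERT INTO japanese_names VALUES (?, ?, ?, ?)",
                   ("森田", "もりだ", "Morida", "s"))
connection.commit()
connection.close()
```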
To fill the empty database, the ENAMDICT file needs to be analyzed and its data needs to be extracted. The entries usually have the form
KANJI [TRANSCRIPTION] /LATIN (TYPE)/.
We can take the following line as an example of an existing entry:
森田 [もりだ] /Morida (s)/
A parser should extract the individual entries. First it saves the text between the slashes and searches for the type of the entry. It must be ensured that all person name types and no undesired or alleged types will be stored. Types can consist of the characters “s” (surname), “g” (given name), “f” (female name), “m” (male name), “u” (unclassified name), “p” (place name), “h” (full name of a particular person), “pr” (product name), “co” (company name) or “st” (station name). But only the types “s”, “g”, “f” and “m” are important in this case because the parser should only store person names in the database. One exception is the unclassified names; they need to be stored too because they can also contain person names. Using unclassified names carelessly leads to problems, though. On the one hand it is useful if you find a match for the given name but not for the assumed family name. Then it helps to find an unclassified name matching the assumed family name. On the other hand some unclassified names in the ENAMDICT file decrease the data quality of the database. The entry
スターウォーズ /(u) Star Wars (film)/
shows that there are undesired names like film titles in the category “unclassified”. The example also reveals that there is no overall standard for an entry format. Analyzing the file leads to the following observations:
text in round brackets might be type or additional commentary (see entry example above)
when only hiragana or katakana are used instead of kanji to display the Japanese name the transcription part is missing because it is not required (see entry example above)
the type information in brackets might actually consist of several type declarations, separated by commas
the type information might be placed before or after the transcription in Latin characters
one entry line might contain several possibilities to interpret the name, the example
イブ /(f) Eve/(u) Ib/Ibu (f)/(m) Yves/
clarifies this aspect
We must consider these observations when we implement the parser.
To handle the problems in UID76 and UID78 we can filter the contents in round brackets. One possibility is using a regular expression like (,|s|u|g|f|m|p|h|pr|co|st) INLINEFORM0 to filter all valid types. Regular expressions are powerful and popular tools for pattern matching. In our case we are looking for valid type expressions including commas to get rid of commentaries. After eliminating commentaries we also want to get rid of unwanted types like place names. So we filter again and only process desired types this way. To handle UID77 we just ignore missing transcriptions in square brackets. Our parser also needs to be flexible enough to deal with observation UID79 which means that it must expect the type(s) at two possible places (before and after the transcription in Latin characters). We can handle the last observation UID80 by using recursive function calls. We call the function that exports one entry with a modified parameter value within the function itself when there is more than one entry in the input line (noticeable by additional slashes).
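The following sketch illustrates how such an entry line could be parsed in Java. The regular expressions, class and method names are illustrative assumptions; the actual parser in our tool differs in details (for instance, it also handles type declarations placed before the Latin transcription and includes the unclassified type only when configured to do so):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EnamdictParser {

    // KANJI [TRANSCRIPTION] /LATIN (TYPE)/  -- the transcription part is optional
    private static final Pattern ENTRY =
            Pattern.compile("^(\\S+)(?:\\s+\\[(.+?)\\])?\\s+/(.+)/$");
    // valid person name types; commentary in round brackets is ignored
    private static final Pattern TYPE = Pattern.compile("\\((,|s|u|g|f|m)+\\)");

    public static void parseLine(String line) {
        Matcher m = ENTRY.matcher(line.trim());
        if (!m.matches()) return;
        String kanji = m.group(1);
        String kana = m.group(2);                     // may be null
        for (String part : m.group(3).split("/")) {   // several readings per line
            Matcher t = TYPE.matcher(part);
            if (t.find()) {
                String type = t.group().replaceAll("[()]", "");
                String latin = part.replace(t.group(), "").trim();
                System.out.println(kanji + " | " + kana + " | " + latin + " | " + type);
                // here an SQL INSERT into the name table would follow
            }
        }
    }
}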
Before parsing we need to change the original encoding of the ENAMDICT file from “EUC-JP” to “UTF-8” to make it compatible with our program.
During parsing a few inconsistencies in the syntax of the ENAMDICT file occurred:
there were four times no slash in the end of the entry:
甲子太郎 [かしたろう] /Kashitarou (m)
there was once an unnecessary closing bracket without an opening bracket:
近松秋江 [ちかまつしゅうこう] /Chikamatsu Shuukou) (h)/
there was once a backslash where a square bracket was supposed to be put:
キルギス共和国 [キルギスきょうわこく\ /(p) Kyrgyz Republic/Kirghiz Republic/
Instead of constructing a workaround for these problems we should rather correct these few inconsistencies manually.
A Data Structure for Japanese Names
We will construct a class which is responsible for handling Japanese names and representing them in a convenient way. Therefore, it must be able to save the name in kanji and in at least one Latin transcription. The transcription is necessary to compare found authors in IPSJ DL with authors in the DBLP. The kanji name can be stored as additional author metadata in the DBLP later. Our goal is a standardized representation of a Japanese person. So first we can construct a simple helper class for a single name containing given and family name as strings. This class can be applied to both kanji and Latin names. Our Japanese person usually has these two name representations.
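A minimal sketch of such a data structure could look like this; the class and field names are illustrative, not the exact ones used in our tool:

public class JapanesePerson {

    // simple helper class for a single name, usable for kanji names and Latin names
    public static class PersonName {
        public final String given;
        public final String family;

        public PersonName(String given, String family) {
            this.given = given;
            this.family = family;
        }
    }

    private PersonName latinName;          // e.g. ("Takeshi", "Nakamura")
    private PersonName kanjiName;          // e.g. ("武志", "中村")
    private String status = "undefined";   // evaluation of the name assignment, see the status list below

    // getters, setters and the actual matching logic are omitted in this sketch
}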
When getting an input name from the IPSJ DL we try to determine the separation point and categorize the tokens into given and family names. The separation point can mostly be identified by white space or a comma between the words. The categorization is done by including information from ENAMDICT. Thanks to ENAMDICT's classification into name types we can use this information to categorize our input name tokens into given and family names. However, we have to cover some unusual cases too because IPSJ DL has no standardized way to provide names. So we get names in various formats. For example, there are entries in which the family name follows the given name directly without any separation markers. Then we can try to take advantage of upper and lower case letters assuming that an uppercase letter means the beginning of a new name token. But we must also be aware of existing input names like “KenjiTODA”. If we get a longer sequence of uppercase letters, this sequence is probably a family name. We can filter these names with a regular expression like [A-Z][a-z]{1,}[A-Z]{3,} (first character is an uppercase letter, followed by at least one lowercase letter, followed by at least three uppercase letters). We also have to recognize abbreviated names and normalize Latin names.
Let us have a look at what we can observe about necessary transcription customizations. One peculiarity is that Japanese like to transcribe their names with an h instead of a double vowel. An example is “Hitoshi Gotoh”. The h symbolizes the lengthening of a vowel and is a substitute for o or u in this case. To enable our class to find names like this in ENAMDICT, we have to replace the h's lengthening a vowel by the vowel itself because ENAMDICT entries contain double vowels instead of h's with this semantic function.
Another observation is ENAMDICT's usage of the Hepburn transcription system throughout the entire dictionary. So we have to convert the name to match the Hepburn system and to check a name via ENAMDICT. The needed character replacements for a conversion into the Hepburn system are shown in table TABREF86 (see also figure FIGREF165 in the appendix).
In addition to the replacements from table TABREF86 , we must consider that names usually start with uppercase letters and replace “Tu”, “Ti”, “Sya” and so on by “Tsu”, “Chi”, “Sha”, etc. as well.
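Since table TABREF86 is not reproduced at this point, the following Java sketch assumes the standard kunrei-to-Hepburn replacements; applying longer keys before shorter ones is a safe ordering:

import java.util.LinkedHashMap;
import java.util.Map;

public class HepburnConverter {

    // assumed kunrei-shiki -> Hepburn replacements (lower case); longer keys first
    private static final Map<String, String> REPLACEMENTS = new LinkedHashMap<String, String>();
    static {
        REPLACEMENTS.put("sya", "sha"); REPLACEMENTS.put("syu", "shu"); REPLACEMENTS.put("syo", "sho");
        REPLACEMENTS.put("tya", "cha"); REPLACEMENTS.put("tyu", "chu"); REPLACEMENTS.put("tyo", "cho");
        REPLACEMENTS.put("zya", "ja");  REPLACEMENTS.put("zyu", "ju");  REPLACEMENTS.put("zyo", "jo");
        REPLACEMENTS.put("si", "shi");  REPLACEMENTS.put("ti", "chi");  REPLACEMENTS.put("tu", "tsu");
        REPLACEMENTS.put("hu", "fu");   REPLACEMENTS.put("zi", "ji");
    }

    // e.g. "Tiba" -> "Chiba", "Huzii" -> "Fujii"
    public static String toHepburn(String name) {
        String result = name.toLowerCase();
        for (Map.Entry<String, String> e : REPLACEMENTS.entrySet()) {
            result = result.replace(e.getKey(), e.getValue());
        }
        // restore the capitalization of the name
        return Character.toUpperCase(result.charAt(0)) + result.substring(1);
    }
}

Lowercasing the name first and restoring the capital letter at the end covers the uppercase variants (“Tu”, “Ti”, “Sya” and so on) mentioned above.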
The Japanese n is sometimes transcribed as m. If the n is followed by b or p, this n is likely to be transcribed as m. The reason is a correlative modification in the pronunciation of the n in these cases. For example, the family name Kanbe is often transcribed as Kambe in the IPSJ DL data set.
Double vowels are sometimes completely dropped in some IPSJ DL author elements. While this might be okay for aesthetic reasons when transcribing the own name, it becomes a problem when we try to find a matching name in a dictionary like ENAMDICT. So we also have to check additional modified names. If there is a single vowel in the name, we must also check the same name whose vowel has become a double vowel. If several single vowels occur in a name, the number of names to be checked rapidly increases too. We have to pay special attention to the doubling of the vowel o because oo and ou are possible doublings for the single o. Doubling the vowel e leads either to ee or ei. All other double vowels are intuitive: a becomes aa, i becomes ii, u becomes uu. Taking “Gotoh” as an example we remove the h first and check a list of names via ENAMDICT. The list of names consists of “Goto”, “Gooto”, “Gouto”, “Gotoo”, “Gotou”, “Gootoo”, “Goutoo”, “Gootou” and “Goutou”. We can remove “Goto”, “Gooto” and “Gouto” from the list if we know that the h (representing a double vowel) has been removed before.
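A sketch of this candidate generation in Java follows; the names are illustrative, and the additional exclusion of candidates that are impossible after an h has been removed is left out:

import java.util.ArrayList;
import java.util.List;

public class VowelVariants {

    // possible expansions of a single vowel to a double vowel
    private static String[] expansions(char v) {
        switch (v) {
            case 'o': return new String[] { "o", "oo", "ou" };
            case 'e': return new String[] { "e", "ee", "ei" };
            case 'a': return new String[] { "a", "aa" };
            case 'i': return new String[] { "i", "ii" };
            case 'u': return new String[] { "u", "uu" };
            default:  return null;   // not a vowel
        }
    }

    // candidates("Goto") -> Goto, Gooto, Gouto, Gotoo, Gotou, Gootoo, Gootou, Goutoo, Goutou
    public static List<String> candidates(String name) {
        List<String> result = new ArrayList<String>();
        result.add("");
        for (char c : name.toCharArray()) {
            String[] exp = expansions(Character.toLowerCase(c));
            List<String> next = new ArrayList<String>();
            for (String prefix : result) {
                if (exp == null) {
                    next.add(prefix + c);
                } else {
                    for (String e : exp) {
                        if (Character.isUpperCase(c)) {
                            next.add(prefix + Character.toUpperCase(e.charAt(0)) + e.substring(1));
                        } else {
                            next.add(prefix + e);
                        }
                    }
                }
            }
            result = next;
        }
        return result;
    }
}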
If the input metadata contains a Latin and kanji representation of the author's name, we will try to find a match for these. Names in kanji usually do not have any separation mark, so we must distinguish given and family name by taking advantage of the ENAMDICT dictionary and checking the possible name combinations. Processing author names without kanji representation is okay, but a missing Latin representation becomes a problem when it comes to actually integrating the publication into the DBLP data set because all DBLP data are supposed to have a Latin representation. The solution is a search for name candidates (we will discuss this in more detail in section UID121 ).
We cannot be sure that our name matching for Latin and kanji names always succeeds. Therefore, we add some status information to our Japanese name to get a chance to evaluate the outcome of the program. Possible status types are:
The status “ok” means that given and family name have successfully been found in the name dictionary and (if available) the kanji names have successfully been assigned to their corresponding name in Latin characters.
An undefined status usually means that the Latin name is missing. A missing Latin name leads to a never changed name status. In these cases, the name in kanji usually exists anyway.
This is the status type for an abbreviated name like “T. Nakamura”.
If this status occurs, the Latin name could not be found in the name dictionary.
If a kanji name has not been found in the name dictionary or could not be assigned to the Latin name, this status will occur.
As the name suggests, this status means that the data quality of the publication metadata source is most likely bad. Our tool can handle some of these cases well by normalizing the name.
We could have stumbled upon a name anomaly when we see this status type. During implementation this status was narrowed down to a possible name anomaly for abbreviated names.
This status indicates a critical name anomaly. This is the only case in which the tool cannot even give a recommendation for given and family name. The output is the full name of the input data for both given and family name.
In chapter SECREF5 we discussed synonyms and homonyms. With the strategies from above we can deal with synonyms pretty well. Yet, homonyms cannot be recognized this way and are not covered at all by our tool.
Import Into the DBLP Data Set
To be able to import the harvested data into the DBLP, we still need to make the existing publication data processable in an appropriate way for our program, construct a coauthor table for these data, compare publications from the Digital Library of the IPSJ with those available in the DBLP project and provide the new publication metadata for the DBLP adequately.
It is important to convert the DBLP file dblp.xml to a relational database to gain an easier and more efficient access to the data while running our program. We are mainly interested in the basic publication metadata. So we will skip some non-publication records of the DBLP like www elements. Our publication database table shall contain columns for an ID, the authors, title, publication year, journal title, journal pages and the volume. Whenever we come across the beginning of a publication type element ( article , inproceedings , proceedings , book , incollection , phdthesis , mastersthesis ) during parsing, we reinitialize the variables which store this metadata for the table columns. When we encounter the according XML end tag of the publication we add an SQL INSERT command to a batch of commands. This batch is regularly executed after processing a certain amount of publications. The regular execution of batches allows a better performance than sending single INSERT commands to the database server. There are some recommendations in the DBLP FAQ BIBREF20 for parsing the dblp.xml file. We use the Apache Xerces parser instead of the standard Java SAX parser and need to increase the allocatable heap space for our parser.
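A simplified SAX handler sketch for this conversion is given below. The table layout, column set and batch size are illustrative; the handling of several authors per publication and of nested markup inside titles is omitted:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class DblpHandler extends DefaultHandler {

    private static final Set<String> PUBLICATION_TYPES = new HashSet<String>(Arrays.asList(
            "article", "inproceedings", "proceedings", "book",
            "incollection", "phdthesis", "mastersthesis"));

    private final PreparedStatement insert;
    private final StringBuilder text = new StringBuilder();
    private String author, title, year;   // further columns omitted for brevity
    private int batched = 0;

    public DblpHandler(Connection con) throws SQLException {
        insert = con.prepareStatement("INSERT INTO dblp (author, title, year) VALUES (?, ?, ?)");
    }

    @Override
    public void startElement(String uri, String localName, String qName, Attributes atts) {
        if (PUBLICATION_TYPES.contains(qName)) {   // a new publication starts: reset the fields
            author = title = year = null;
        }
        text.setLength(0);
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        try {
            if ("author".equals(qName)) author = text.toString();
            else if ("title".equals(qName)) title = text.toString();
            else if ("year".equals(qName)) year = text.toString();
            else if (PUBLICATION_TYPES.contains(qName)) {
                insert.setString(1, author);
                insert.setString(2, title);
                insert.setString(3, year);
                insert.addBatch();
                if (++batched % 1000 == 0) insert.executeBatch();   // regular batch execution
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}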
While parsing the DBLP file we can construct a table with coauthor relationships along with the DBLP publication table. This coauthor table stores two author names and a publication ID. The ID shows which publication has been written together by the authors and matches the ID in the DBLP publication table. New coauthor relationships will only be inserted if there are at least two authors mentioned in the metadata. If the metadata mentions more than two authors, every possible pair of authors will be inserted into the database.
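A small sketch of this pair generation; the prepared statement is assumed to insert two author names and the publication ID into the coauthor table:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class CoauthorPairs {

    // every possible pair of authors of one publication becomes one coauthor row
    public static void addPairs(PreparedStatement insert, List<String> authors, int publicationId)
            throws SQLException {
        for (int i = 0; i < authors.size(); i++) {
            for (int j = i + 1; j < authors.size(); j++) {
                insert.setString(1, authors.get(i));
                insert.setString(2, authors.get(j));
                insert.setInt(3, publicationId);
                insert.addBatch();
            }
        }
    }
}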
As already explained in section SECREF39 , we access the OAI-PMH repository by the repository name and the metadata format prefix to get a list of publication metadata entries. The specification of OAI-PMH 2.0 BIBREF17 describes a possibility to retrieve a list of all metadata formats which a Data Provider has to offer. The HTTP request
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListMetadataFormats
informs us that there are two metadata formats called oai_dc and junii2. oai_dc is the standard Dublin Core format all Data Providers provide, also traceable in the protocol specification. The “Implementation Guidelines for the Open Archives Initiative Protocol for Metadata Harvesting” BIBREF37 classify the metadata format oai_dc as mandatory. The name junii2 suggests that it is a self-developed format of the National Institute of Informatics (in Tokyo). Comparing these two in IPSJ DL, we notice that junii2 provides a more accurate description of the data, for example regarding additional XML attributes telling us whether the element value is English or Japanese. This additional information is helpful when we process the data in a later step and is missing in the oai_dc representation of the IPSJ server's data. So we will take the metadata prefix junii2 as initial point for harvesting the server's metadata. Figure FIGREF102 shows an according metadata example (also compare figure FIGREF46 ).
The harvesting includes the following steps:
we load the DBLP publication, coauthor relationship and the ENAMDICT data into the RAM
we access the IPSJ server to get publication metadata (a request sketch follows after this list)
we parse the accessed XML metadata (concerning the thoughts from section SECREF85 ) and store the needed publication data temporarily in the RAM.
we add the parsed publication to an SQL command batch to insert the metadata into a relational database (the batch is regularly executed)
we create a BHT file for the parsed publication
at the end we go into all directories with BHT files and concatenate them to one bigger BHT file
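To illustrate step 2 of the list above, the following sketch fetches one ListRecords response with the junii2 metadata prefix from the IPSJ server. Error handling, the resumption token loop and the ID-based harvesting mode controlled by the minid/maxid configuration parameters are left out:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class IpsjHarvester {

    private static final String BASE =
            "http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh";

    // fetch one ListRecords page with junii2 metadata as a raw XML string
    public static String listRecords() throws Exception {
        URL url = new URL(BASE + "&verb=ListRecords&metadataPrefix=junii2");
        BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), "UTF-8"));
        StringBuilder xml = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            xml.append(line).append('\n');
        }
        in.close();
        return xml.toString();   // handed over to the XML parser afterwards
    }
}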
During the implementation and testing, some exceptional incidents occurred. We try to cover them besides the expected difficulties like Personal Name Matching and transcriptions. For example, we get “NobukazuYOSHIOKA” as a full input name. Japanese sometimes write their family names in upper case letters to distinguish given and family name. Algorithm UID99 (Categorizing names like “NobukazuYOSHIOKA”), shown here in a cleaned-up form, handles these unusual input data:

Input: the full input name
Output: the list of name representations of the Japanese person
if the full name matches the regular expression [A-Z][a-z]{1,}[A-Z]{3,} then
    given  := the part of the full name before the trailing upper case run (split at [A-Z]{3,})
    family := normalize(the trailing upper case run)
    status := BAD_DATA_QUALITY_IN_SOURCE
    add new PersonName(given, family) to the output list
end if

Here split(text, regex) splits the text at the given regular expression (the matched text itself is not part of the result) and normalize(name) normalizes a personal name, e.g. “YOSHIOKA” becomes “Yoshioka”.
Another observation during testing the program and checking the data is the following. Searching the Japanese given name “Shin'ichi” in the DBLP we notice that there is no uniform way to store certain names in the database. We find “Shin'ichi Aihara” but also “Shin-ichi Adachi” along with other results indicating the same phenomenon. So we see the apostrophe and the hyphen are used equally as syllable separators (we discussed the syllable separation in chapter SECREF14 ). Comparing the author “Shinichi Horiden” from the IPSJ data set and the one from the DBLP data set we can assume they are the same person because they have common coauthors (e.g. Kenji Taguchi and Kiyoshi Itoh) in both databases. The IPSJ data set tells us that the name written in kanji is 本位田真一. We are interested in the part 真一 (Shin'ichi) because we get to know that the separator symbol is sometimes missing. The kanji indicates the syllables INLINEFORM0 , especially focused on INLINEFORM1 and INLINEFORM2 instead of INLINEFORM3 . We would expect an additional separator symbol for a clear (nonambiguous) transcription; but obviously, it has been dropped in this case. A separator symbol can also be found when some double vowels occur. For example, we find “Toru Moto'oka” (元岡達) instead of “Toru Motooka”. This makes it easier to identify the reading of a single kanji (元 moto, 岡 oka, 達 toru). When a separator symbol is needed for a clear transcription, an apostrophe is used as separator symbol in ENAMDICT. While ENAMDICT always uses an apostrophe as separator symbol, DBLP and IPSJ DL use an apostrophe, a hyphen or the separator symbol is missing. We must consider these differences in the data sources for a successful import. For an easier name matching between names in the ENAMDICT and IPSJ DL data set we can add names containing an apostrophe once as they are and once without apostrophes to the relational database when we parse the ENAMDICT file to store person names in a relational database.
Our tool has a statistics class to get an overview over the parsed input data and the quality of the output data. We will have a look at these statistics created after the harvest. There are 81597 records with publication metadata and 8562 records which are marked as INLINEFORM0 in the parsed data. Figure FIGREF114 shows a visualization in pie chart form.
The publication types are declared as “Technical Report”, “Conference Paper”, “Journal Article”, “Departmental Bulletin Paper” or “Article” (compare the table TABREF115 and figure FIGREF116 ).
The statistics also reveal that 74971 publications are published in Japanese, only 4456 in English (compare the pie chart in figure FIGREF117 ).
Our tool detects 1325 publications which are already included in DBLP. A publication is considered found in both databases if the title is the same and at least one author is the same.
The most interesting statistics for our work are these about the evaluation of the quality of author name assignments (compare the bar chart in figure FIGREF119 ):
Fortunately, 180221 of 231162 author names could be matched successfully. There are many reasons for the remaining uncovered cases. 9073 Latin names could not be found in the name dictionary ENAMDICT and 14827 name matchings between the names' Latin and kanji representations did not succeed. These names might be missing at all in the dictionary, delivered in a very unusual format that the tool does not cover, or might not be Japanese or human names at all. Of course, Japanese computer scientists sometimes also cooperate with foreign colleagues but our tool expects Japanese names and is optimized for them. Both IPSJ DL and ENAMDICT provide katakana representations for some Western names. However, katakana representations for Western names are irrelevant for projects like DBLP. But for instance, Chinese names in Chinese characters are relevant. Understandably, our tool does not support any special Personal Name Matching for Chinese names yet because our work is focused on Japanese names. The tool does not take account of the unclassified names of ENAMDICT by default. We can increase the general success rate of the Name Matching process by enabling the inclusion of unclassified names in the configuration file but the quality of the Name Matching process will decrease because the correct differentiation between given and family name cannot be guaranteed anymore. An unclassified name may substitute a given or a family name.
There are 1203 entries that were qualified as “bad data quality in publication metadata source”. They might be handled alright but they are particularly marked to indicate that these cases should also be reviewed manually before any import action is performed.
The numbers of abbreviated names, possible name anomalies and name anomalies are very low. While processing author names which will be later qualified as “possible name anomaly”, the tool cannot decide whether the assignment has been correct or the name is an anomaly. “Name anomalies” are critical anomalies that could not be categorized into any other status.
There could be a few uncovered flaws, for example HTML or code in titles. We must be aware of those when we do the actual import into the DBLP data set.
We will discuss the creation of BHT files and important extensions for the BHT format that fit the requirements of Japanese papers well, based on our knowledge from section SECREF49 . As mentioned, the header dictates ISO-8859-1 (Latin-1) as encoding of the file dblp.xml . Ley's work BIBREF19 reveals that we can use XML/HTML entities to solve this problem. Authors have person records in the DBLP providing additional information. For example, we can find the following entry for Atsuyuki Morishima (森嶋厚行) in the XML file:
<www mdate="2008-02-20" key="homepages/m/AtsuyukiMorishima">
<author>Atsuyuki Morishima</author>
<title>Home Page</title>
<url>http://www.kc.tsukuba.ac.jp/~mori/index.html</url>
<note>森嶋厚行</note>
</www>
We must extend the BHT format to fulfill the requirements and add extra metadata for authors, title and relevant process information. The author talked to members of the DBLP team personally and got the permission to extend the original BHT format to enable us to adapt the format to Japanese papers. Our additions are well formed XML elements. We must substitute all non-ASCII characters by escape characters (XML entities) to ensure the compatibility for DBLP. The additional elements are:
Every author that has a kanji representation in its metadata gets an originalname element:
<originalname latin="Shinsuke Mori">森,信介
</originalname>
If available, the Latin representation is added as an attribute latin to avoid confusion on assigning the extra information to the right author later on. The element content has a fixed structure. The family name comes first, followed by a comma and the given name.
Every author gets a status information that evaluates the author name assignment. It is displayed by a status element:
<status name="Shinsuke Mori">ok</status>
The connected author is added as an attribute name .
If there is no Latin representation of the name of an author, we will add Latin name candidates to the BHT file:
<namecandidates kanji="菅谷正弘">Shougu Sugatani, Seihiro Sugatani, Tadahiro Sugatani, Masahiro Sugatani, Shougu Suganoya, Seihiro Suganoya, Tadahiro Suganoya, Masahiro Suganoya, Shougu Sugaya, Seihiro Sugaya, Tadahiro Sugaya, Masahiro Sugaya, Shougu Sugetani, Seihiro Sugetani, Tadahiro Sugetani, Masahiro Sugetani, Shougu Sugenoya, Seihiro Sugenoya, Tadahiro Sugenoya, Masahiro Sugenoya</namecandidates>
The connected kanji representation is added as an attribute kanji in the namecandidates element. We seek the kanji in ENAMDICT and output all possible name combinations in a comma separated list.
If the original language of the title is Japanese, we will add this title to the BHT file:
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
The XML element originaltitle has the attributes lang (for the paper language) and type (for the publication type).
The tool searches the authors in DBLP and tries to find additional common coauthors in DBLP. If at least two of the main authors of the paper also worked with a certain other person (that is retrieved from DBLP), this person is added to the comma separated list. The Personal Name Matching of author names uses a combination of Levenshtein Distance and Jaccard Similarity Coefficient here.
<commoncoauthors>Masato Mimura</commoncoauthors>
If the tool finds the paper in DBLP, we also add the DBLP key. Records, such as elements with publication metadata, have a unique key in DBLP.
<dblpkey>conf/iscas/HiratsukaGI06</dblpkey>
An example of a BHT file in SPF can be found in the appendix in section SECREF170 (also compare with the original BHT format in section SECREF168 ). After we have finished parsing all Japanese papers, we concatenate the BHT files in SPF that belong together to one bigger BHT file all.bht . Publications, respectively BHT files, that belong together are recognizable by the directory structure. If they belong together, they will be in the same directory. We must simply go through the BHT root directory recursively.
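A sketch of this concatenation step is shown below; the .bht file extension check and the fixed Latin-1 encoding are assumptions made for this example:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.file.Files;

public class BhtConcatenator {

    // walks the BHT root directory recursively and concatenates the BHT files
    // of every directory into one file all.bht within that directory
    public static void concatenate(File dir) throws IOException {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        Writer out = new OutputStreamWriter(
                new FileOutputStream(new File(dir, "all.bht")), "ISO-8859-1");
        try {
            for (File child : children) {
                if (child.isDirectory()) {
                    concatenate(child);   // recurse into subdirectories
                } else if (child.getName().endsWith(".bht") && !child.getName().equals("all.bht")) {
                    out.write(new String(Files.readAllBytes(child.toPath()), "ISO-8859-1"));
                    out.write("\n");
                }
            }
        } finally {
            out.close();
        }
    }
}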
Conclusion and Future Work
“Creativity is seeing what everyone else sees,
but then thinking a new thought that has never been
thought before and expressing it somehow.”
(Neil deGrasse Tyson)
The integration of Japanese papers into the DBLP data set has revealed some major problems. The unambiguous representation of Japanese names (and paper titles, etc.) is done by kanji while DBLP's standard encoding is Latin-1 and Japanese characters are only optionally added to the publications' metadata. This leads to the need of transcribing the Japanese names, which in turn evokes new problems because there is no single canonical transcription but rather a lot of transcription possibilities.
In addition to that, we must ensure a certain data quality even if one data source sometimes lacks this quality. Due to name matching with a name dictionary, format checking and conversions (if necessary), we can actually correct some flaws or at least assimilate the data into our project.
The problem of synonyms is dealt with by transcription manipulations; homonyms could not be addressed in this work. Reuther ( BIBREF26 , p. 159-164) describes an idea to handle homonyms. We could extend our tool by a Coauthor Index as in DBLP for the publications of the IPSJ DL. The idea is based on the assumption that scientists often publish their papers with the same people as coauthors. If the coauthors match a certain coauthor group, the author is considered the same. If the author's coauthors are not members of the expected coauthor groups, the author could be a different person than we expected and we might have a homonym here.
The developed tool is usable and provides, in addition to relational databases, customized Bibliography HyperText (BHT) files as output data. Customizations were necessary to optimize the BHT files for Japanese papers and additional important metadata information. Desired but missing metadata like contributors or a short description of the content of a paper can be added without much effort because the relational database already contains these data; only the source code of Kankoukanyuu (our tool) needs to be extended by a few lines.
Though having been created with care regarding correct and well-formed output data, it is not recommended to import the newly created BHT files unchecked. The DBLP team should check the files so as not to compromise the data quality of DBLP. There might still be undesired format anomalies in the BHT files. The DBLP team also needs to adapt their import system to the extended BHT format developed in this work for the actual import into DBLP.
Titles might be in uppercase letters. This could be improved but we have to pay attention because a primitive solution will not work well. For example, we have to be aware of the popular usage of acronyms in computer science. So some words in uppercase letters can be correct.
Our tool is optimized for the Digital Library of the IPSJ and their OAI-PMH metadata prefix junii2. It can easily be adapted to support the similar and commonly used metadata prefix oai_dc. So the tool would be able to handle other publication metadata sources that support OAI-PMH.
The algorithm for detecting common papers in DBLP and IPSJ DL may be modified to achieve an even better comparison between the databases and detect more common papers.
It would be useful to include a Chinese name dictionary in the future and extend the name search of our tool to cover Chinese names as well.
One improvement in the future could be storing the most common names (for example, the 100 most common given and family names) in a separate data structure in the RAM. This way we can improve the runtime by often skipping the search in the huge name data.
We can still increase the success rate of the Name Matching process too. One way is swapping kanji. A typical Japanese name has two kanji for the given name and two kanji for the family name. The family name shall precede the given name. However, this principle could be violated by the publication source. If the Name Matching is not successful, we may swap the first two for the last two characters and try to find a match again.
A second advancement is the additional support of a special Latin character set that is used by Japanese. For instance, we can find the name “Kai” written with full-width Latin characters instead of the ordinary “Kai” in the metadata of IPSJ DL. They look very similar and both represent simple Latin letters but their character codes are different. So programs handle them differently. A simple (but yet unimplemented) substitution function can cover these rare and unusual cases.
Another possibility to take advantage of this work is extracting the author names in kanji from the relational database. So the DBLP team can insert author metadata for already existing authors in DBLP.
We can also have a look at what phases of the Personal Name Matching process have been implemented in this work and to which degree. There are actually different types of Personal Name Matching included in our tool:
The “Standardization” is accomplished by a normalization of the Latin input names at the beginning of the process. Kanji input names get trimmed by removing all whitespace. We do not have a “Blocking” phase as it is proposed by Reuther BIBREF26 . When searching a match between transcribed Japanese names with their original kanji representation we even go a contrary way and increase the number of comparisons by adding reasonable other transcriptions to the matching process. Due to efficient data structures and a comparatively small amount of Japanese papers (less than 100000), our tool has an acceptable runtime (the retrieval of the publication metadata from the IPSJ server takes much longer than processing it). In addition, the search for common coauthors will only be done if the author exists in DBLP. The phases “Analysis” and “Decision Model” are entangled in our tool. If we find a match between a (normalized or modified) input name and a name in the name dictionary, we will immediately consider them a successful match and continue parsing the metadata. When we try to find coauthors in DBLP, we take advantage of the combined Jaccard Levenshtein Distance as explained in chapter SECREF5 .
Instead of checking the complete output data in the “Performance Measurement” phase, we could only take control samples while implementing, debugging, testing and improving our program. A broad manual check of approximately 90000 publications is not possible within the scope of a diploma thesis. The control samples had the expected and desired content but we cannot guarantee the correctness of the output. Under the assumption that ENAMDICT's entries are correct, the predicted Precision should be about INLINEFORM0 because the tool probably does not produce many false positives. But we cannot say anything about the Recall because ENAMDICT does not cover all names that occur in IPSJ DL. All exceptions resulting from the limits of a name dictionary and a bad data quality are supposed to be handled by the status for author name assignments (described in section UID99 ). This gives us the chance to manually handle the noted exceptions afterwards.
All in all, this work is a first approach for an integration of Japanese papers into the DBLP data set and provides a not yet perfect but usable tool for this task. Some major obstacles are overcome.
About the Tool
The developed tool that is also part of this project is named Kankoukanyuu (刊行加入). Kankou means publication, kanyuu means admission. The whole name indicates the ability to import publications. The tool also allows the assimilation of imported publications, of course. The usable functionalities are:
Parsing the DBLP file dblp.xml and converting it to a MySQL database
Converting an ENAMDICT name dictionary file to a MySQL database
Harvesting the IPSJ server, processing the publication metadata and storing it in a MySQL database
Making the harvested publications ready for an import into the DBLP data set by making BHT files
Usage
The tool has been developed and tested on a Linux system with Intel Core 2 Quad and 8 GB RAM in the local computer pool. It has to be executed by command line like this:
java -Xmx5400M -jar kankoukanyuu.jar
The parameter -Xmx5400M allows our program to allocate more than 5 GB RAM and store all necessary data in the RAM for an unproblematic execution.
Possible command line arguments are:
Parse dplb.xml and fill database tables
Convert ENAMDICT dictionary file to a relational database
Harvest the IPSJ server, fill OAI-PMH data into databases and create BHT files (in SPF) - requires DBLP and ENAMDICT database tables from steps above
Concatenate BHT files in Single Publication Format to one bigger file (file all.bht will be created in every folder with BHT files) - requires BHT files in SPF from step above
Do all of the above
Show help text about usage of the tool
The configuration file allows us to change the following parameters:
Database related parameters (in the [db] section): URL ( url ), database name ( db ), user name ( user ) and password ( password )
ENAMDICT related parameter (in the [enamdict] section): location of the ENAMDICT file ( file )
ENAMDICT database related parameters (in the [japnamesdb] section): database table name ( table ), decision whether to use unclassified names ( useunclassifiednames )
DBLP related parameter (in the [dblp] section): location of dblp.xml ( xmlfile )
DBLP database related parameters (in the [dblpdb] section): database table name for publications ( dblptable ), database table name for coauthor relationships ( authorscounttable )
OAI-PMH database (contains output after harvest and parsing process) related parameters (in the [oaidb] section): publication table ( publicationtable ), authors table ( authorstable ), titles table ( titlestable ), contributors table ( contributorstable ), descriptions table ( descriptionstable )
Harvester related parameters (in the [harvester] section): location for storing the harvest ( filespath ), start ID for harvester ( minid ), end ID for harvester ( maxid ), decision whether to use record lists ( uselistrecords )
BHT export related parameters (in the [bhtexport] section): location for BHT output files ( path ), decision whether to compute and show common coauthors ( showcommoncoauthors )
Log related parameter (in the [log] section): location of log files ( path )
A configuration example can be found in the appendix section SECREF172 .
The system must support the Japanese language (meaning Japanese characters) to ensure a successful run.
Kankoukanyuu does not use any Linux-only commands but has not been tested on Microsoft Windows yet.
Used Technologies
The tool itself has been written in Java, using the OpenJDK 6. The handling of databases is done by MySQL 5 and JDBC is used to provide MySQL functionalities within Java.
External libraries are the Apache Xerces parser and the MySQL Connector/J. The Fat Jar Eclipse Plug-In is used to deploy the complete project into one executable Java JAR file. The execution of Kankoukanyuu becomes more user-friendly this way because external libraries are already included and class paths for external libraries do not need to be specified anymore.
Runtime
Measurement indicates the following approximated runtimes of Kankoukanyuu:
We can make some observations. During the harvest, only ca. 30 minutes were spent on processing the harvested data, the rest is needed to retrieve the data from the Japanese server. Depending on whether the local file system or network file system was used, the runtime for the concatenation differs immensely.
BHT Example Proposed By DBLP
Computer Languages, Systems & Structures (journals/cl)
<h2>Volume 34, Numbers 2-3, July-October 2008</h2>
Best Papers 2006 International Smalltalk Conference
<ul>
<li>Wolfgang De Meuter:
Preface.
45
<ee>http://dx.doi.org/10.1016/j.cl.2007.07.001</ee>
<li>David Röthlisberger, Marcus Denker, Éric Tanter:
Unanticipated partial behavioral reflection: Adapting applications at runtime.
46-65
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.001</ee>
<li>Johan Brichau, Andy Kellens, Kris Gybels, Kim Mens, Robert Hirschfeld, Theo D'Hondt:
Application-specific models and pointcuts using a logic metalanguage.
66-82
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.004</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Oscar Nierstrasz, Roel Wuyts:
Stateful traits and their formalization.
83-108
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.003</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Colin Putney, Roel Wuyts:
Creating sophisticated development tools with OmniBrowser.
109-129
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.005</ee>
<li>Luc Fabresse, Christophe Dony, Marianne Huchard:
Foundations of a simple and unified component-oriented language.
130-149
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.002</ee>
</ul>
This is a BHT example proposed by the DBLP team in the DBLP FAQ BIBREF20 .
BHT Example File Created By Kankoukanyuu
<h2>Volume 52, Number 10, October 2011</h2>
<ul>
<li>Shinsuke Mori, Graham Neubig, Yuuta Tsuboi:
A Pointwise Approach to Automatic Word Segmentation.
2944-2952
<ee>http://id.nii.ac.jp/1001/00078161/</ee>
<originalname latin="Shinsuke Mori">森,信介</originalname>
<status name="Shinsuke Mori">ok</status>
<originalname latin="Graham Neubig">ニュービッググラム,</originalname>
<status name="Graham Neubig">no kanji matching found</status>
<originalname latin="Yuuta Tsuboi">坪井,祐太</originalname>
<status name="Yuuta Tsuboi">ok</status>
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
<commoncoauthors>Masato Mimura</commoncoauthors>
</ul>
This is an output example of a BHT file in Single Publication Format (before the concatenation step), created by our tool.
Excerpt From dblp.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
<article mdate="2002-01-03" key="persons/Codd71a">
<author>E. F. Codd</author>
<title>Further Normalization of the Data Base Relational Model.</title>
<journal>IBM Research Report, San Jose, California</journal>
<volume>RJ909</volume>
<month>August</month>
<year>1971</year>
<cdrom>ibmTR/rj909.pdf</cdrom>
<ee>db/labs/ibm/RJ909.html</ee>
</article>
<article mdate="2002-01-03" key="persons/Hall74">
<author>Patrick A. V. Hall</author>
<title>Common Subexpression Identification in General Algebraic Systems.</title>
<journal>Technical Rep. UKSC 0060, IBM United Kingdom Scientific Centre</journal>
<month>November</month>
<year>1974</year>
</article>
<article mdate="2002-01-03" key="persons/Tresch96">
<author>Markus Tresch</author>
<title>Principles of Distributed Object Database Languages.</title>
<journal>technical Report 248, ETH Zürich, Dept. of Computer Science</journal>
<month>July</month>
<year>1996</year>
</article>
...
Configuration File of Our Tool
[db]
url=myserver
db=mydbname
user=myusername
password=mypassword
[japnamesdb]
table=japnames
useunclassifiednames=false
[dblpdb]
authorscounttable=dblpauthors
dblptable=dblp
[oaidb]
publicationtable=oai_publications
authorstable=oai_authors
titlestable=oai_titles
contributorstable=oai_contributors
descriptionstable=oai_descriptions
[enamdict]
file=./enamdict
[harvester]
filespath=./files-harvester
minid=1
maxid=100000
uselistrecords=true
[dblp]
xmlfile=/dblp/dblp.xml
[bhtexport]
path=./bht
showcommoncoauthors=true
[log]
path=./log
180221 of 231162 author names could be matched successfully
18dab362ae4587408a291a55299f347f8870e9f1 | 18dab362ae4587408a291a55299f347f8870e9f1_0 | Q: Is their approach applicable to papers outside computer science?
Text: List of Acronyms
ACM - Association for Computing Machinery
API - Application Programming Interface
ASCII - American Standard Code for Information Interchange
BHT - Bibliography HyperText
DBLP - Digital Bibliography & Library Project (former meaning: DataBase systems and Logic Programming)
FAQ - Frequently Asked Questions
GB - GigaByte
HTML - HyperText Markup Language
HTTP - HyperText Transfer Protocol
ID - Identifier
IEEE - Institute of Electrical and Electronics Engineers
IFIP - International Federation for Information Processing
IPSJ - Information Processing Society of Japan
IPSJ DL - Digital Library of the Information Processing Society of Japan
ISO - International Organization for Standardization
JAR - Java ARchive
JDBC - Java DataBase Connectivity
JDK - Java Development Kit
OAI - Open Archives Initiative
OAI-PMH - Open Archives Initiative - Protocol for Metadata Harvesting
PDF - Portable Document Format
RAM - Random Access Memory
SAX - Simple API for XML
SPF - Single Publication Format
SQL - Structured Query Language
TOC - Tables Of Contents
URL - Uniform Resource Locator
XML - eXtensible Markup Language
About This Diploma Thesis
The idea for this work was born when the author was searching for a possibility to combine computer science with his minor subject Japan studies in his diploma thesis. After dismissing some ideas leaning towards Named Entity Recognition and computational linguistics, the author chose “Integration of Japanese Papers Into the DBLP Data Set” as his subject. The DBLP is a well-known and useful tool for finding papers published in the context of computer science. The challenge of dealing with such a huge database and the problems that occur when processing Japanese input data were the reasons why this subject was chosen. The hope is that, in the future, many Japanese papers can be added by the responsible people of the DBLP project.
Motivation
Computer scientists are likely to use the DBLP to find information about certain papers or authors. Therefore, the DBLP is supposed to provide information about as many papers as possible. For example, one could be interested in the paper “Analysis of an Entry Term Set of a Civil Engineering Dictionary and Its Application to Information Retrieval Systems” by Akiko Aizawa et al. (2005) but DBLP does not include it yet. Japanese scientists might look for the original (Japanese) title “土木関連用語辞典の見出し語の分析と検索システムにおける活用に関する考察” or use Aizawa's name in Japanese characters (相澤彰子) for a search in DBLP. The DBLP contains the author “Akiko Aizawa” but does not contain this specific paper or the author's original name in Japanese characters. Our work is to implement a tool which addresses these questions, supports the DBLP team in the integration of Japanese papers and reveals the difficulties of realizing the integration.
Composition of the Diploma Thesis
Dates are displayed in the ISO 8601 standard format YYYY-MM-DD, e.g. 2012-10-19.
Although scientific works about the Japanese language often display the Sino-Japanese reading of kanji (a Japanese character set) with uppercase letters to distinguish them from the other “pure” Japanese reading, we will not use uppercase letters to distinguish them in this work.
When a Japanese word is used in its plural form in this work, the word always stays unmodified. The reason is that in the Japanese language there is no differentiation between a singular and plural form.
We use a macron instead of a circumflex to display a long vowel of a Japanese word in Latin transcription (see section SECREF14 ).
Acknowledgement
First I would like to thank Prof. Dr. Bernd Walter and Prof. Dr. Peter Sturm for making this diploma thesis possible. Special thanks go to Florian Reitz for the great support and the useful answers to the questions I had while I was working on this diploma thesis. I also want to acknowledge the help of Peter Sommerhoff, Daniel Fett, David Christ and Kana Matsumoto for proofreading my work. I thank Dr. Michael Ley, Oliver Hoffmann, Peter Birke and the other members of the Chair of Database and Information Systems of the University of Trier. Last but not least I want to tell some personal words to my family in my and their native language German:
Ich möchte nun noch meinen Eltern und meinem Bruder Peter dafür danken, dass sie mich in meiner Diplomarbeitsphase, meinem Studium und auch schon davor immer unterstützt haben und immer für mich da waren, wenn ich sie brauchte. Ich weiß es zu schätzen. (I would also like to thank my parents and my brother Peter for always supporting me during my diploma thesis, my studies and even before that, and for always being there for me when I needed them. I appreciate it.)
Writing in Japanese
“My view is that if your philosophy is not unsettled daily
then you are blind to all the universe has to offer.”
(Neil deGrasse Tyson)
First we need to understand some aspects of the Japanese language and especially the different ways of writing Japanese because the peculiarities of the Japanese writing system are a crucial point of our work. It lays the foundation for all Japanese-related subjects such as the structure of Japanese names (discussed in section SECREF19 ), a dictionary for Japanese names (discussed in section SECREF36 ) or the publication metadata source for Japanese publications (discussed in section SECREF39 ).
Hadamitzky ( BIBREF0 , p. 8-57) gives an overview of the basics of Japanese writing. The Japanese writing system includes kanji, hiragana, katakana and the possibility to use Latin characters.
Kanji
Kanji is the Japanese script which consists of traditional Chinese characters. It came to Japan around the 4th century. Since the Japanese had not yet developed a writing system of their own, they began to use the Chinese characters. At the beginning, the characters were linked phonetically with a certain sound, so that they could write down all existing words by their sound. Applying this principle the man'yōgana were created. Every character had one defined way to pronounce it. In addition to this, a second principle was introduced to write Japanese. This time the people orientated themselves on the meaning of the Chinese characters to choose a writing for a word. Applying the second principle, the kanji were created. While the man'yōgana were simplified to hiragana and katakana (see following sections SECREF7 and SECREF11 ) the general usage of kanji did not change.
Due to an increase in number and possible readings of characters, the government began to try to simplify the Japanese writing system after the Meiji Restoration at the end of the 19th century. The last important reform took place after World War II. Along with some other changes and regulations, the permitted characters in official documents (tōyō kanji) were limited to 1850 in 1946 and increased to 1900 in a draft from 1977. In 1981 they were replaced by the “List of Characters for General Use” (jōyō kanji) containing 1945 characters. In 1951 the government published a list of additional 92 kanji permitted for personal names. The number of kanji permitted for personal names increased with time passing by. Eschbach-Szabo ( BIBREF2 , p. 175) says the last change permitted 983 kanji for personal names in 2004. The press tries to abide by the jōyō kanji. Japanese literature (science, fiction, etc.) uses about 4000 characters (comprehensive Sino-Japanese kanji dictionaries contain ca. 10000 characters). Japanese people know approximately 3000 kanji on average.
Due to their capability to give a word a meaning, kanji are used in substantives, verbs, adjectives and Japanese personal names.
An important aspect is reading a kanji because there are several possibilities to read one. Saitō and Silberstein ( BIBREF3 , p. 31-34) describe how to read a kanji. There is a Japanese reading kun and a Sino-Japanese reading on. Depending on the text and grammar context either the kun or on reading is required. For example the kanji 生 is read sei in 学生 (gakusei, meaning: student, on reading) but is read u in 生まれる (umareru, meaning: being born, kun reading). A single kanji can have several kun and several on readings.
For our work it is important to know that one character can have several readings in names too.
Hiragana
The syllabary hiragana evolved from the man'yōgana by simplifying the characters. Every syllable is phonetically assigned to one sound of the spoken language (with two exceptions which can have two sounds each). The gojūon table shown in figure FIGREF9 lists the 46 syllables used today in a certain way (it can be compared with the ABC for letters). Another but obsolete way to order the syllables is iroha which is a poem containing all syllables. Although the name implies 50 sounds (gojū means “50”, on means “sound”) there are only 46 syllables left in modern Japanese. Actually, only 45 syllables belong to the gojūon table. The syllabic n (ん) counts as an extra symbol (see gojūon tables in figures FIGREF9 and FIGREF12 ).
Other additional syllables are dakuon (e.g. だ/ da , recognizable by two little strokes), handakuon (e.g. ぱ/ pa , recognizable by a little circle) and yōon (e.g. しゃ/ sha , recognizable by a normally sized character that is followed by a smaller character).
You can write every Japanese word in hiragana but if possible, kanji are usually preferred to avoid problems with homonyms (we take a look at homonyms in chapter SECREF5 ). Hiragana is mainly used to write words not covered by kanji and as inflected endings. Kanji and hiragana are often combined within one word. For example 読む (yomu) is the basic form of the verb “to read”. The kanji 読 means reading by itself and in combination with the hiragana syllable む it becomes the verb “to read” in a special grammatical form specifying tense, politeness level and other properties.
Katakana
The syllabary katakana also evolved from the man'yōgana by simplifying the characters, consists of 46 characters nowadays (representing the same syllables as hiragana) and is usually ordered by the gojūon table. Figure FIGREF12 presents the katakana in a gojūon table. Besides optical differences with hiragana, katakana are used in other contexts. Japanese mostly use them to write foreign words including foreign personal names.
So foreigners often apply katakana for their names. For example, the author's name can be transcribed as パウル·ソマホフ. The dot · in the middle separates family and given name. Foreign names are often written with the given name preceding the family name.
Latin Characters/Transcription
Transcription systems which convert kanji, hiragana and katakana to Latin characters are usually called rōmaji. Japanese can be easily transcribed by 22 letters and two additional signs. Due to many words having the same pronunciation, the meaning of words is sometimes ambiguous if they are transcribed into Latin characters. In 1954 the government released recommendations for transcribing Japanese. It recommended the following two transcription systems:
The kunreishiki rōmaji assigns transcriptions according to the order in the gojūon table without regard to phonetic divergences of some consonants (we will discuss these divergences later). It has been introduced for official usage by the government only slightly different in 1937. It became the preferred transcription system in the standard ISO 3602 “Documentation - Romanization of Japanese (kana script)” BIBREF6 .
The hebonshiki rōmaji was developed by a council of Japanese and foreign erudites in 1885 and spread by the American missionary James C. Hepburn (Hebon in Japanese), especially thanks to his Japanese-English dictionary published one year later. This work also employs hebonshiki. Kunreishiki would lead to transcriptions like kunreisiki, hebonsiki and kanzi.
Although the kunreishiki became the preferred system of the government, the international community often prefers the Hepburn system because the written words suggest a more intuitive pronunciation than kunreishiki. There are also language-related transcription systems that are rarely used. Kaneko and Stickel ( BIBREF7 , p. 53-55) mention them:
The important aspect for us is the differences between the systems because we need to know where they occur when we deal with Personal Name Matching problems later. Figure FIGREF165 in the appendix reveals the differences between the transcription systems. It summarizes 18 differences in all syllables including INLINEFORM0 , INLINEFORM1 and INLINEFORM2 . Unfortunately, there can be even more transcription differences. ISO 3602 highlights some more special cases when it comes to transcribing Japanese. One is the question whether to put an apostrophe after an n . To avoid misunderstandings, one should put an apostrophe behind an n in certain cases. Otherwise, people could misinterpret the syllable n followed by a syllable composed of a vowel or “y” and a vowel as the syllables na, ni, nu, ne, no, nya, nyu or nyo. We will outline a practical example of this case in section UID99 .
A second irregularity occurs when the same vowel appears right after another. If there is a morpheme boundary between the vowels, they should be transcribed as “aa”, “ii”, etc. but should be transcribed by an additional circumflex otherwise.
Koop and Inada BIBREF4 write about another difficulty called nigori.
“The nigori (濁, literally, `turbidity', `impurity') ... [means] modifying the pronunciation of the consonant in certain of the kana sounds. It may be either (1) inherent, as in suge (`sedge'), suzu (`grelot'), go (`five'), or (2) applied incidentally to the initial consonant of a word or name-element following another in composition, e.g., Shimabara from shima and hara, nenjū from nen and chū, Harada from hara and ta.” ( BIBREF4 , p. 34)
So, if we want to derive a transcription from the family name 中田, we cannot tell whether to take Nakata or Nakada as the rightful transcription.
Japanese Personal Names
七転び、八起き。 Nana korobi, ya oki.
(Fall seven times, get up eight times.)
Japanese saying
One of the central problems in this work is to deal with Japanese personal names. We need to get a picture of Japanese personal names in general to deal with multiple data sources (like the introduced publication metadata sources in chapter SECREF4 ) which may represent the same name with different scripts or transcription methods. The dictionary ENAMDICT will be very helpful when it comes to extracting and verifying name information.
Structure of Japanese Names
Having the urge to name things is part of the human nature. Names make it easy to refer to things, people or any other object in this world. When it comes to name giving, history shows a development in the Japanese society.
Japanese names are divided into family and given name, similar to the system in the Western culture. When Japanese write their name in kanji they put the family name first, followed by the given name (usually without leaving spaces between them), for example 中村武志 (Takeshi Nakamura). While introducing themselves, they often tell their family name and skip the given name. When Japanese refer to others, they have many name particles they put after a name to express the relationship to the other person. There is the neutral san, chan for children, kun particular for boys or sensei for teachers and doctors. ( BIBREF5 , p. 18-19)
Kagami ( BIBREF8 , p. 913) writes about Japanese personal names. Only the samurai and nobility were allowed to carry family names before the Meiji Restoration in 1868. Merchants carried shop names instead (recognizable by the suffix -ya), for example Kinokuniya (shop name) Bunzaemon (given name). Then everybody had to pick a family name after the Meiji Restoration. Approximately 135000 family names are recognized now. The most common family names are Suzuki, Satō, Tanaka, Yamamoto, Watanabe, Takahashi, Kobayashi, Nakamura, Itō, Saitō and others.
“In the feudal age, first and second given names were used as male names. The first name was Kemyoo which was the order of brothers, and the second name was the formal name given at the coming of age ceremony (genpuku), e.g. the name of a famous general in 12c.: Minamoto (family name) no (of) Kuroo (kemyoo) Yoshitune (formal given name), and before the genpuku ceremony, he was called by Yoomyoo (child name) Ushiwakamaru.” ( BIBREF8 , p. 913)
While there was no visible restriction on the number of personal names until the Meiji Restoration, modernization brought the rule that Japanese people may carry only one given and one family name. ( BIBREF2 , p. 167-169)
Some indicators for assigning the gender to a name also exist. The suffixes -ko (e.g. Hanako), -mi (Natsumi) and -yo (Yachiyo) indicate a female name. Male names are harder to identify because they have no fixed pattern. The suffix -o (Kazuo) mostly belongs to a male name though.
Family names often consist of two kanji characters, rarely of one or three characters. ( BIBREF8 , p. 913)
Eschbach-Szabo ( BIBREF2 , p. 157-309) dedicates an elaborate chapter to Japanese personal names. Compared to the Chinese system, the Japanese naming system shows more tolerance. Several readings coexist, and formal rules are not always applied in practice. Japanese apprehend names mainly visually by the characters, secondarily by the reading and sound. This is why several readings for a written name are still acceptable in the modern Japanese world. In the feudal system, names were needed to determine the position and roles of a person in the family and the society rather than to characterize him or her as an individual. Japan has an open naming system which allows adding new names. This is a difference from the exclusive name lists in Germany or France. ( BIBREF2 , p. 157-166)
Even the apparently simple kanji 正 has a lot of possible readings: Akira, Kami, Sada, Taka, Tadashi, Tsura, Nao, Nobu, Masa. We can see the same phenomenon in recently approved kanji too. When we see 昴 we cannot be sure whether it is read Kō or Subaru. ( BIBREF9 )
“Conversely, it often happens that one does not know to write a name of given pronunciation. For example, Ogawa can be written 尾川 or 小川. In Japan, when two people meet for the first time, they exchange business cards. This custom often baffles foreigners, but for Japanese it is a ritual with practical purpose: Japanese do not feel at ease until they see how a name is spelled out in kanji.” ( BIBREF9 )
Figure FIGREF22 illustrates the problem. The cashier tries to read the customer's name and cannot determine the right name. According to the customer's reaction, his first two trials Hiroko and Yūko seem to be wrong. Ogawa considers the name polygraphy as a reason why the creation of new name characters is still allowed.
Some characteristics of the Japanese naming system are:
only little renaming of people
semantic variance (names indicate different meanings/attributes)
admission of foreign elements (foreign names get assimilated)
possibility of polygraphic writing
diversity of writing (many scripts usable, weak orthographic normalization)
number of personal names for one person
In academic circles a Sino-Japanese reading led to a more reputable name. So the famous linguist 上田万年 from the Meiji era became known as Kazutoshi Ueda AND Mannen Ueda (Mannen is the Sino-Japanese on reading, Kazutoshi is the Japanese kun reading). Modern guidebooks underline that maybe one has to take a loan word from another language to find the corresponding reading for a name in kanji. For example, 宇宙 could be read as Kosumo (from the Greek word for cosmos) instead of Uchū. Also ノイ (Noi), derived from the German word “neu” (new), became a Japanese given name. Another imaginable name is “Sky” written as 空海 (meanings: 空 Sky, 海 sea) and transcribed as Sukai (actually kūkai). This would finally show the impact of globalization also on the Japanese naming system. If one has lived in Japan for a while and wants to adapt or register his or her Western name, one can choose corresponding kanji either by meaning or reading of the original name. Another possibility is transcribing the name with katakana. ( BIBREF2 , p. 170-171, 305-309)
The name Anna exists in many cultures. The girls in figure FIGREF29 are both called Anna. Both turn around when they hear their name and respond in their mother tongue (“Yes!” and “Hai!”, respectively).
One principle of Japanese name giving is ateji. Ateji (当て字) means “appropriate characters”. It says Japanese try to find characters with good, positive meanings for their children's name. Examples are 愛子 (愛: ai, love; 子: ko, child), 夏美 (夏: natsu, summer; 美: mi, beauty) or 正 (Tadashi, correct, honest). There is also a list with characters that are allowed but should be avoided because of bad associations. Characters like 蟻 (ari, ant), 苺 (ichigo, strawberry), 陰 (kage, shadow), 悪 (aku, bad/evil) belong to this list. ( BIBREF2 , p. 172-176)
A particular case drew public attention from June 1993 to February 1994 when Shigeru Satō wanted to call his son Akuma, written as 悪魔 (devil/demon). The civil registry office declined the registration after some discussion because they were worried about other children teasing him. The father went to court but the judges also declined the wish. Although the father wanted to give his son a unique, memorable name, the judges saw a possible problem for his individual identification process and feared he might get teased (ijime) by other children in school someday. Then Satō tried to choose other characters while keeping the reading Akuma. But even changing the name partly into man'yōgana (亜久魔) did not change the rejection, because the phonological equality implies the same negative associations. Thereupon the father picked the character 神 (god) and its unusual reading Jin. Even though Shintoistic gods can be good or evil, the civil registry office accepted the name. Satō announced his intention to keep calling his son Akuma anyway. So a new (yet unofficial) reading for a character might be established. ( BIBREF2 , p. 271-278)
An article of “Japan Today” from December 2012 shows that there is still a debate about this subject.
“[...]Shinzo Abe, the leader of the Liberal Democratic Party made a stand against kirakira names last week when he stated that giving a child a name like Pikachu, which could be written something like 光宙 (`light' and `space'), is tantamount to child abuse, saying: `Children are not pets; we have to provide guidance for parents who would name their child in such a way.' ”( BIBREF11 )
Despite regulations, the discussion about the culture of name giving does not seem to have ended yet. Japanese comics like the one in figure FIGREF34 suggest a happy-go-lucky life if one has a common everyday name like Keiko.
Today's registration of names allows 2983 kanji for given names, 4000 kanji for family names, 700 man'yōgana, 46 hiragana and 46 katakana. There are still people whose names are written with the obsolete kana syllabary hentaigana, which was prohibited in 1948 ( BIBREF2 , p. 176-177; BIBREF12 ). Regarding this variety of characters (and readings), it is not surprising that even well educated Japanese have problems reading certain names, or rather cannot be sure that the chosen reading is the correct one in the current situation. The usage of geometrical and punctuation signs is forbidden; the sign ◯ (maru) is an example of such a forbidden sign. The usage of Latin characters (rōmaji) at the registration of a name is forbidden as well. Rōmaji can be used privately, though. ( BIBREF2 , p. 176-177)
Names can be changed by marriage, adoption or getting a pseudonym or special posthumous name. Titles can be acquired too. ( BIBREF2 , p. 251)
After the patriarchal ie system, in which a man (for example the husband) was the dominating householder of a family, had been disestablished, the family name no longer expresses the affiliation to an extended family but refers to the couple living a joint life. ( BIBREF2 , p. 253-255)
Writing a Japanese name can be ambiguous. While the name written in kanji is definite, displaying it in Latin characters leads to several possibilities. Japanese themselves usually write their name using kanji. To find matching authors in the DBLP, it will be crucial for us to have names in Latin characters later on (in chapter SECREF6 ) because the standard encoding format of the file containing the main data of the DBLP project is ISO 8859-1 (Latin-1).
We sometimes talk about “kanji names” or “names in kanji representation” in this work. Although the expression does not suggest it, they shall include all names in Japanese characters, ergo names in kanji, hiragana and katakana.
ENAMDICT
To automatically detect where a Japanese family name in kanji notation ends and the given name begins, we should factor a name dictionary into our work. It is important that this dictionary includes the names written in kanji and a clear transcription for them in Latin characters. A useful dictionary for our purposes is ENAMDICT.
ENAMDICT BIBREF13 is a free dictionary for Japanese proper names, maintained by the Monash University in Victoria (Australia). The Electronic Dictionary Research and Development Group owns the copyright. In 1995, ENAMDICT became an independent project by dividing the universal dictionary EDICT into two projects. ENAMDICT contains person names and non-person names like places and companies as well. Table TABREF38 shows the online statistics about the content of the ENAMDICT file. We will call the categories “name types” in subsequent chapters.
“A proper name is a word or group of words which is recognized as having identification as its specific purpose, and which achieves, or tends to achieve that purpose by means of its distinctive sound alone, without regard to any meaning possessed by that sound from the start, or acquired by it through association with the object thereby identified.” ( BIBREF14 , p. 73)
These internal abbreviations occur again when we construct a database for Japanese names in chapter SECREF74 .
Publication Metadata Sources
百語より一笑 Hyaku go yori isshō
(A smile is more worth than a hundred words.)
Japanese saying
This chapter gives an overview of the publication metadata sources that we will need later. We take a look at these sources because we will discuss a way to extract metadata information from one source containing Japanese papers and import them into another source in chapter SECREF6 .
Digital Library of the IPSJ
The IPSJ is a Japanese society in the area of information processing and computer science. It was founded in April 1960 and, by its own account, helps computer science and technology evolve and contributes new ideas in the digital age. It regularly publishes the magazine “Information Processing” (jōhō shori) and a journal, holds symposiums and seminars, and its Special Interest Groups issue technical reports and hold conferences. It is also the Japan representative member of the IFIP and has established partnerships with the IEEE, ACM and other organizations. IPSJ develops drafts of international standards and Japanese industrial standards as well. Eight regional research sections are spread over Japan. IPSJ had over 17000 members in March 2011. ( BIBREF15 ; BIBREF16 )
The IPSJ provides a Digital Library (referenced as IPSJ DL in this work) where everybody can search Japanese papers in the field of computer science. The search page can be displayed in Japanese and English, most papers are written in Japanese. Free papers are accessible in PDF format, non-free can be bought. A tree view provides the order structure of the papers and there is a keyword search available. We are especially interested in the metadata export functions, though. The online application offers following export formats:
OAI-PMH
BibTeX
OWL SWRC
WEKO Export
For our purposes the OAI-PMH is the most suitable solution because we can send simple HTTP requests to the server and get publication metadata as a result. It “provides an application-independent interoperability framework based on metadata harvesting” ( BIBREF17 ) and consists of two groups of participants. Data Providers can be servers hosting and supplying the metadata. Service Providers take the harvester role and process the received metadata from the Data Provider. The application-independent interoperability is achieved by using XML as the basic exchange format. Arbitrary programs can parse XML input data very easily, and so can we.
While accessing the server, the data can be extracted in several ways. We can either access an OAI-PMH repository by the repository name, the metadata format prefix of the record and a unique identifier or get a list of records with only one request.
A request for a list of records looks like this:
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListRecords&metadataPrefix=oai_dc
It may also contain a start date and an end date or a resumption token. The headers of records include a corresponding time stamp. The server's response to a request offers only 100 publications. We need this resumption token because it determines the point where we resume the harvest.
In the beginning and for debugging, it was more convenient to increment a counter that acts as the unique identifier and to send requests for single entries with the respective ID multiple times. Fortunately, the entries can be addressed by such an integer ID (plus some constant name):
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:ipsj.ixsq.nii.ac.jp:27130
The last entry containing real publication metadata has the suffix integer 87045 in its ID. After that, some entries with status INLINEFORM0 follow. If we continue requesting even higher IDs, we soon only get replies with the error code INLINEFORM1 , implying that there are no publications with higher IDs. We will discuss the implementation of an OAI-PMH harvester for the IPSJ DL in section UID99 .
DBLP Project
The DBLP is a worldwide known database for publication metadata in the field of computer science. Ley BIBREF19 gives a brief explanation of the DBLP, additional information is extracted from the online DBLP FAQ BIBREF20 . It was started in 1993 as a test server for web technologies and named “Database systems and Logic Programming” in the beginning. But it grew and became a popular web application for computer scientists. The Computer Science department of the University of Trier founded the project, since summer 2011 it is a joint project of Schloss Dagstuhl - Leibniz Center for Informatics and the University of Trier.
“For computer science researchers the DBLP web site is a popular tool to trace the work of colleagues and to retrieve bibliographic details when composing the lists of references for new papers. Ranking and profiling of persons, institutions, journals, or conferences is another sometimes controversial usage of DBLP.” ( BIBREF19 )
The publication metadata is stored in the XML file INLINEFORM0 containing more than 2 million publications and exceeding a size of 1 GB (state of October 2012). An excerpt of the beginning of INLINEFORM1 can be found in the appendix section SECREF171 . The header dictates ISO-8859-1 (Latin-1) as encoding of the file. Considering that we want to import Japanese names in kanji (which are not included in Latin-1) we must handle that issue somehow. We will discuss the solution in section UID121 .
The web front end of the DBLP provides an overview of coauthor relationships by a Coauthor Index (see figure FIGREF53 ). The Coauthor Index can be found at the author's page after the list of the author's publications itself. It shows all coauthors, common papers and categorizes the coauthors into groups that worked together by giving the author names corresponding background colors.
In his diploma thesis Vollmer BIBREF23 gives useful hints in terms of converting the INLINEFORM0 file to a relational database. He also compares the performance of several relational database management systems for this conversion.
The DBLP team developed a special format for the integration of new publications. It is called Bibliography Hypertext (BHT), is based on HTML and similar to the HTML code of the tables of contents (TOCs) at the DBLP website. An example of a publication list in BHT format can be found in the appendix in section SECREF168 . A BHT file has the following structure. The header (text between h2 tags) contains the volume, the number/issue and the date of issue. A list of corresponding publications follows next. The list is surrounded by a beginning and a closing INLINEFORM0 tag, single publication entries start with a INLINEFORM1 tag. A comma is used for the separation of authors while there should be a colon after the last author name. Then comes the title which has to end with a period, question mark or exclamation point. The next line provides the start and end page in the volume/issue. At last, an optional URL can be added by an INLINEFORM2 element to specify an “electronic edition” for a paper. Some guidelines need to be considered, too:
there is no closing INLINEFORM0 tag
initials should be avoided (full name is preferred)
titles with only upper case letters should be avoided
“0-” is the default page number value if the page information is missing
The BHT file may contain additional information. For example, conference proceedings may have more headers to achieve a better clarity. But it should be as close to the proposed format as possible to guarantee an easy import without unnecessary burdens. ( BIBREF24 ; BIBREF20 , “What is the preferred format to enter publications into DBLP?”)
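For illustration, a minimal BHT fragment that follows this description could look like the sketch below; the author names, title, pages and URL are invented placeholders and do not refer to a real publication.
<h2>Volume 20, Number 1, January 2012</h2>
<ul>
<li>Taro Yamada, Hanako Suzuki:
An Invented Example Title.
1-10
<ee>http://example.org/example-paper</ee>
</ul>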
We will extend the original format in section UID121 to satisfy our needs in the context of Japanese papers.
Personal Name Matching
“The important thing is not to stop questioning;
curiosity has its own reason for existing.”
(Albert Einstein)
After looking at transcription systems, Japanese personal names and publication metadata sources, we will now have to look at Personal Name Matching to enable us to deal with the Japanese names extracted from the metadata sources. First we will discuss Personal Name Matching in general and then problems of Personal Name Matching for Japanese names in particular.
The expression Personal Name Matching comes from the work by Borgman and Siegfried BIBREF25 and is used here as in the extended definition from Reuther's work ( BIBREF26 , p. 48-51). Borgman and Siegfried only talk about synonyms. Synonyms are different names that refer to the same person. Reuther extended the definition by also including homonyms. A name is a homonym if it can belong to several persons. Personal Name Matching is known by other titles in the literature, too. Niu et al. BIBREF27 discuss Cross Document Name Disambiguation:
“Cross document name disambiguation is required for various tasks of knowledge discovery from textual documents, such as entity tracking, link discovery, information fusion and event tracking. This task is part of the co-reference task: if two mentions of the same name refer to same (different) entities, by definition, they should (should not) be co-referenced. As far as names are concerned, co-reference consists of two sub-tasks:
On et al. BIBREF28 formally express their Name Disambiguation problem as follows:
“Given two long lists of author names, INLINEFORM0 and INLINEFORM1 , for each author name INLINEFORM2 , find a set of author names, INLINEFORM3 such that both INLINEFORM4 and INLINEFORM5 are name variants of the same author.” ( BIBREF28 )
In contrast to the previous definitions Han et al. BIBREF29 define Name Disambiguation like this:
“Name disambiguation can have several causes. Because of name variations, identical names, name misspellings or pseudonyms, two types of name ambiguities in research papers and bibliographies (citations) can be observed. The first type is that an author has multiple name labels. For example, the author `David S. Johnson' may appear in multiple publications under different name abbreviations such as `David Johnson', `D. Johnson', or `D. S. Johnson', or a misspelled name such as `Davad Johnson'. The second type is that multiple authors may share the same name label. For example, 'D. Johnson' may refer to `David B. Johnson' from Rice University, `David S. Johnson' from AT&T research lab, or `David E. Johnson' from Utah University (assuming the authors still have these affiliations).”( BIBREF29 )
The citations above show that there are many expressions for Personal Name Matching (or its sub-categories) which are not used consistently by different authors. Niu et al. and On et al. restrict Name Disambiguation to finding synonyms, Han et al. include homonyms in their definition. Even more related expressions can be found in the literature. As mentioned, we will use Personal Name Matching in this work as Reuther uses it.
The main aspect of Personal Name Matching is handling synonyms and homonyms. Trying to express the problems formally leads to the following description: Let INLINEFORM0 be a set of persons, especially characterized by their names, in a certain data set and INLINEFORM1 a set of all existing persons. We are also given a function INLINEFORM2 and a relation INLINEFORM3 . The actual problems can be described as
with INLINEFORM0 ; INLINEFORM1 ; INLINEFORM2 .
Case UID60 checks for each person INLINEFORM0 from the person set INLINEFORM1 whether another person INLINEFORM2 from INLINEFORM3 exists, so that their name labels are different ( INLINEFORM4 ) but the person is the same ( INLINEFORM5 ). So this case covers the synonym problem because the same person has several names here.
Case UID61 checks for each person INLINEFORM0 from the person set INLINEFORM1 whether another person INLINEFORM2 exists in INLINEFORM3 , so that their name labels are equal ( INLINEFORM4 ) but the persons behind the names differ ( INLINEFORM5 ). So this case covers the homonym problem because the same name is taken by several people.
The problem of Personal Name Matching arises because such a relation INLINEFORM0 usually does not exist and needs to be approximated as well as possible: INLINEFORM1
Thanks to appropriate similarity measurements and a matching threshold INLINEFORM0 , we can find such a relation INLINEFORM1 which is approximately equivalent to the original relation INLINEFORM2 . The main task in Personal Name Matching is finding a good similarity measure for the described problem. ( BIBREF26 , p. 52)
Let us have a look at a vivid example.
The birth name of the famous actor Michael Keaton is Michael John Douglas. Keaton took a pseudonym because he could have been confused with the more famous actor Michael Douglas. Synonyms for Keaton are “Michael Keaton”, “Michael Douglas”, “Michael John Douglas”, “Michael J. Douglas”, “M. Keaton” or “M. J. Douglas”.
On the other hand, when we hear the name “Michael Douglas” we cannot be sure which famous actor is referred to, because Michael Douglas is a valid name for both of them. Figure FIGREF62 illustrates this Personal Name Matching problem with Michael Keaton.
The process of Personal Name Matching can be divided into the following steps ( BIBREF26 , p. 56-87): Standardization, Blocking, Analysis, Decision Model and Performance Measurement.
Criteria for the evaluation of such a process are Precision and Recall ( BIBREF35 , p. 75-81; BIBREF26 , p. 83-85). Let INLINEFORM0 be a set of items, INLINEFORM1 be the set of relevant items (e.g. synonyms) with INLINEFORM2 and INLINEFORM3 be the answer of a request. In our scenario, the request is usually the question “Is the item INLINEFORM4 a synonym, or accordingly INLINEFORM5 ?”. Then we can define: INLINEFORM6 INLINEFORM7
Precision testifies whether the reported synonyms during the Name Matching process are really synonyms, Recall allows us to say whether there are synonyms which have not been found.
We use a combination of the Jaccard Similarity Coefficient and the Levenshtein Distance in our tool. Bilenko et al. BIBREF36 explain these string matching methods in isolation. Given two word sets INLINEFORM0 and INLINEFORM1 , the simple Jaccard Similarity Coefficient is: INLINEFORM2
The Levenshtein Distance uses the operations replacement, insertion and deletion of a character and is defined by a matrix. Let INLINEFORM0 and INLINEFORM1 be words, INLINEFORM2 and INLINEFORM3 their lengths. Then we can define: DISPLAYFORM0
We modify the Jaccard Similarity Coefficient in a way that it classifies two set items as intersected if their Levenshtein Distance is lower than a certain threshold.
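The following Java sketch illustrates this combined measure; it assumes that names are tokenized at whitespace, and the class name, method names and threshold value are our own choices for illustration rather than the tool's actual source code.
import java.util.*;

public class NameSimilarity {

    // Levenshtein Distance computed with the usual dynamic programming matrix.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1), d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Jaccard Similarity Coefficient over word sets; two tokens count as "intersecting"
    // if their Levenshtein Distance is below the given threshold.
    static double similarity(String name1, String name2, int threshold) {
        Set<String> s = new HashSet<>(Arrays.asList(name1.toLowerCase().split("\\s+")));
        Set<String> t = new HashSet<>(Arrays.asList(name2.toLowerCase().split("\\s+")));
        int intersection = 0;
        for (String x : s) {
            for (String y : t) {
                if (levenshtein(x, y) < threshold) { intersection++; break; }
            }
        }
        int union = s.size() + t.size() - intersection;
        return union == 0 ? 0.0 : (double) intersection / union;
    }

    public static void main(String[] args) {
        // two name variants of the same author should yield a high similarity
        System.out.println(similarity("Shin'ichi Horiden", "Shinichi Horiden", 2));
    }
}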
In addition to the general Personal Name Matching, we must take the characteristics of Japanese names into account. Particularly the usage of kanji and several possibilities to transcribe a name make it hard to compare Japanese names. For example, we cannot compare kanji names from the IPSJ DL with the author names in DBLP. Even though kanji are suited best for name comparison it does not work here because the standard encoding of names in DBLP is “Latin-1” which does not support kanji natively.
A big problem for our work is revealed by looking at the given name Akiko with its kanji representation 章子. As we can see in table TABREF71 章子 has several possible readings besides Akiko (left column) and Akiko written in Latin characters does not determine a nonambiguous match in kanji (right column).
The same problem applies to Japanese family names. Table TABREF72 presents the problem with Kojima as a family name example.
Preparation of Japanese Papers for the Import Into the DBLP Data Set
大事の前の小事 Daiji no mae no shōji
(Who wants to achieve big things must do the little things first.)
Japanese saying
This chapter explains the approach to process and combine the various data sources so that we can import Japanese publications in the end. We will proceed step by step to make the ideas behind the solution as comprehensible as possible.
General Approach
First we will construct a table in a relational database containing information about Japanese names and their transcriptions by converting the ENAMDICT name dictionary. Then we set up a data structure for Japanese names that handles the problem of assigning a given and a family name to a newly instantiated author during parsing the publications of IPSJ DL. At last, we will discuss the actual and titular integration of Japanese papers into the DBLP data set including an explanation that shows how to create a harvester for the OAI-PMH protocol.
Converting an ENAMDICT File to a Relational Database
The first step towards being able to handle Japanese names is distinguishing given and family name in the input text. A relational database containing information about Japanese names and their transcriptions is useful for this task. The database should contain names in kanji, their transcriptions in hiragana and Latin characters and the name type to have a good match with the data source ENAMDICT and to provide all necessary name information we need.
To fill the empty database, the ENAMDICT file needs to be analyzed and its data needs to be extracted. The entries usually have the form
KANJI [TRANSCRIPTION] /LATIN (TYPE)/.
We can take the following line as an example of an existing entry:
森田 [もりだ] /Morida (s)/
A parser should export the single entries. First it saves the text between the slashes and searches for the type of the entry. It must be ensured that all person name types, and no undesired or alleged types, will be stored. Types can consist of the characters "s" (surname), "g" (given name), "f" (female name), "m" (male name), "u" (unclassified name), "p" (place name), "h" (full name of a particular person), "pr" (product name), "co" (company name) or "st" (station name). But only the types "s", "g", "f" and "m" are important in this case because the parser should only store person names in the database. One exception is the unclassified names ("u"); they need to be stored too because they can also contain person names. Using unclassified names carelessly leads to problems, though. On the one hand it is useful if you find a match for the given name but not for the assumed family name. Then it helps to find an unclassified name matching the assumed family name. On the other hand some unclassified names in the ENAMDICT file decrease the data quality of the database. The entry
スターウォーズ /(u) Star Wars (film)/
shows that there are undesired names like film titles in the category "unclassified". The example also reveals that there is no overall standard for an entry format. Analyzing the file leads to the following observations:
text in round brackets might be a type or an additional commentary (see the entry example above)
when only hiragana or katakana are used instead of kanji to display the Japanese name, the transcription part is missing because it is not required (see the entry example above)
the type information in brackets might actually consist of several type declarations, separated by commas
the type information might be placed before or after the transcription in Latin characters
one entry line might contain several possibilities to interpret the name, the example
イブ /(f) Eve/(u) Ib/Ibu (f)/(m) Yves/
clarifies this aspect
We must consider these observations when we implement the parser.
To handle the problems in UID76 and UID78 we can filter the contents in round brackets. One possibility is using a regular expression like (,|s|u|g|f|m|p|h|pr|co|st) INLINEFORM0 to filter all valid types. Regular expressions are powerful and popular tools for pattern matching. In our case we are looking for valid type expressions including commas to get rid of commentaries. After eliminating commentaries we also want to get rid of unwanted types like place names. So we filter again and only process desired types this way. To handle UID77 we just ignore missing transcriptions in square brackets. Our parser also needs to be flexible enough to deal with observation UID79 which means that it must expect the type(s) at two possible places (before and after the transcription in Latin characters). We can handle the last observation UID80 by using recursive function calls. We call the function that exports one entry with a modified parameter value within the function itself when there is more than one entry in the input line (noticeable by additional slashes).
Before parsing we need to change the original encoding of the ENAMDICT file from “EUC-JP” to “UTF-8” to make it compatible with our program.
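A minimal Java sketch of such an entry parser is given below. The file name, the simplified single filtering step and the plain console output in place of the SQL INSERT commands are assumptions for illustration; commentaries in round brackets other than the type are not removed in this sketch.
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.util.regex.*;

public class EnamdictParser {
    // person name types we want to keep (including the unclassified "u")
    private static final Pattern TYPE = Pattern.compile("\\(((?:,|s|u|g|f|m)+)\\)");
    // KANJI [TRANSCRIPTION] /entry/entry/.../  -- the transcription part may be missing
    private static final Pattern LINE = Pattern.compile("^(\\S+)(?:\\s+\\[(\\S+)\\])?\\s+/(.+)/$");

    public static void main(String[] args) throws IOException {
        // the file is assumed to have been converted from EUC-JP to UTF-8 beforehand
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                new FileInputStream("enamdict.utf8"), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                Matcher entryLine = LINE.matcher(line);
                if (!entryLine.matches()) continue;
                String kanji = entryLine.group(1);
                String kana = entryLine.group(2);                    // null for pure kana entries
                for (String entry : entryLine.group(3).split("/")) { // a line may hold several readings
                    Matcher type = TYPE.matcher(entry);
                    if (!type.find()) continue;                      // no person name type -> skip
                    String latin = entry.replace(type.group(0), "").trim();
                    // in the real tool an SQL INSERT into the name table follows here
                    System.out.println(kanji + " | " + kana + " | " + latin + " | " + type.group(1));
                }
            }
        }
    }
}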
During parsing a few inconsistencies in the syntax of the ENAMDICT file occurred:
in four entries the slash at the end was missing:
甲子太郎 [かしたろう] /Kashitarou (m)
there was once an unnecessary closing bracket without an opening bracket:
近松秋江 [ちかまつしゅうこう] /Chikamatsu Shuukou) (h)/
there was once a backslash where a square bracket was supposed to be put:
キルギス共和国 [キルギスきょうわこく\ /(p) Kyrgyz Republic/Kirghiz Republic/
Instead of constructing a workaround for these problems, we rather correct these few inconsistencies manually.
A Data Structure for Japanese Names
We will construct a class which is responsible for handling Japanese names and representing them in a convenient way. Therefore, it must be able to save the name in kanji and in at least one Latin transcription. The transcription is necessary to compare found authors in IPSJ DL with authors in the DBLP. The kanji name can be stored as additional author metadata in the DBLP later. Our goal is a standardized representation of a Japanese person. So first we can construct a simple helper class for a single name containing given and family name as strings. This class can be applied to both kanji and Latin names. Our Japanese person usually has these two name representations.
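A minimal sketch of these classes might look as follows; the class and field names are illustrative and do not necessarily match the tool's source code.
// Simple helper class for a single name; usable for both the Latin and the kanji representation.
class SingleName {
    String familyName;
    String givenName;

    SingleName(String familyName, String givenName) {
        this.familyName = familyName;
        this.givenName = givenName;
    }
}

// A Japanese person usually carries both representations plus a status
// that evaluates the outcome of the name assignment.
class JapanesePerson {
    SingleName latinName;   // e.g. family name "Nakamura", given name "Takeshi"
    SingleName kanjiName;   // e.g. family name 中村, given name 武志
    String status = "undefined";
}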
When getting an input name from the IPSJ DL we try to determine the separation point and categorize the tokens into given and family names. The separation point can mostly be identified by white space or a comma between the words. The categorization is done by including information from ENAMDICT. Thanks to ENAMDICT's classification into name types we can use this information to categorize our input name tokens into given and family names. However, we have to cover some unusual cases too because IPSJ DL has no standardized way to provide names. So we get names in various formats. For example, there are entries in which the family name follows the given name directly without any separation markers. Then we can try to take advantage of upper and lower case letters assuming that an uppercase letter means the beginning of a new name token. But we must also be aware of existing input names like “KenjiTODA”. If we get a longer sequence of uppercase letters, this sequence is probably a family name. We can filter these names with a regular expression like [A-Z][a-z]{1,}[A-Z]{3,} (first character is an uppercase letter, followed by at least one lowercase letter, followed by at least three uppercase letters). We also have to recognize abbreviated names and normalize Latin names.
Let us have a look at what we can observe about necessary transcription customizations. One peculiarity is that Japanese like to transcribe their names with an INLINEFORM0 instead of a double vowel. An example is “Hitoshi Gotoh”. The INLINEFORM1 symbolizes the lengthening of a vowel and is a substitute for INLINEFORM2 or INLINEFORM3 in this case. To enable our class to find names like this in ENAMDICT, we have to replace the INLINEFORM4 's lengthening a vowel by the vowel itself because ENAMDICT entries contain double vowels instead of INLINEFORM5 's with this semantic function.
Another observation is ENAMDICT's usage of the Hepburn transcription system throughout the entire dictionary. So we have to convert the name to match the Hepburn system and to check a name via ENAMDICT. The needed character replacements for a conversion into the Hepburn system are shown in table TABREF86 (see also figure FIGREF165 in the appendix).
In addition to the replacements from table TABREF86 , we must consider that names usually start with uppercase letters and replace “Tu”, “Ti”, “Sya” and so on by “Tsu”, “Chi”, “Sha”, etc. as well.
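A straightforward way to implement this conversion is a sequence of string replacements. The following Java sketch covers common kunrei-shiki to Hepburn substitutions and their capitalized variants; it does not reproduce the complete table TABREF86 , so the chosen mappings are only an approximation of it.
import java.util.LinkedHashMap;
import java.util.Map;

public class HepburnNormalizer {
    // Common kunrei-shiki -> Hepburn replacements; multi-letter syllables come first.
    private static final Map<String, String> REPLACEMENTS = new LinkedHashMap<>();
    static {
        String[][] pairs = {
            {"sya", "sha"}, {"syu", "shu"}, {"syo", "sho"},
            {"tya", "cha"}, {"tyu", "chu"}, {"tyo", "cho"},
            {"zya", "ja"},  {"zyu", "ju"},  {"zyo", "jo"},
            {"si", "shi"},  {"ti", "chi"},  {"tu", "tsu"},
            {"hu", "fu"},   {"zi", "ji"}
        };
        for (String[] p : pairs) {
            REPLACEMENTS.put(p[0], p[1]);
            // names start with an uppercase letter: "Tu" -> "Tsu", "Sya" -> "Sha", ...
            REPLACEMENTS.put(Character.toUpperCase(p[0].charAt(0)) + p[0].substring(1),
                             Character.toUpperCase(p[1].charAt(0)) + p[1].substring(1));
        }
    }

    static String toHepburn(String name) {
        for (Map.Entry<String, String> e : REPLACEMENTS.entrySet()) {
            name = name.replace(e.getKey(), e.getValue());
        }
        return name;
    }

    public static void main(String[] args) {
        System.out.println(toHepburn("Tiba"));    // "Chiba"
        System.out.println(toHepburn("Mituko"));  // "Mitsuko"
    }
}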
The Japanese INLINEFORM0 is sometimes transcribed as INLINEFORM1 . If INLINEFORM2 is followed by INLINEFORM3 or INLINEFORM4 , this INLINEFORM5 is likely to be transcribed as INLINEFORM6 . The reason is a correlative modification in the pronunciation of INLINEFORM7 in these cases. For example, the family name Kanbe is often transcribed as Kambe in the IPSJ DL data set.
Double vowels are sometimes completely dropped in some IPSJ DL author elements. While this might be okay for aesthetic reasons when transcribing the own name, it becomes a problem when we try to find a matching name in a dictionary like ENAMDICT. So we also have to check additional modified names. If there is a single vowel in the name, we must also check the same name whose vowel has become a double vowel. If several single vowels occur in a name, the number of names to be checked rapidly increases too. We have to pay special attention to the doubling of the vowel INLINEFORM0 because INLINEFORM1 AND INLINEFORM2 are possible doublings for the single INLINEFORM3 . Doubling the vowel INLINEFORM4 leads either to INLINEFORM5 or INLINEFORM6 . All other double vowels are intuitive: INLINEFORM7 becomes INLINEFORM8 , INLINEFORM9 becomes INLINEFORM10 , INLINEFORM11 becomes INLINEFORM12 . Taking “Gotoh” as an example we remove the INLINEFORM13 first and check a list of names via ENAMDICT. The list of names consists of “Goto”, “Gooto”, “Gouto”, “Gotoo”, “Gotou”, “Gootoo”, “Goutoo”, “Gootou” and “Goutou”. We can remove “Goto”, “Gooto” and “Gouto” from the list if we know that the INLINEFORM14 (representing a double vowel) has been removed before.
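The following Java sketch generates such candidate readings by expanding every single vowel into its possible lengthened forms. It is a simplified illustration: each vowel occurrence is expanded independently, o is expanded to oo or ou as in the Gotoh example above, and the second vowel with two possible doublings mentioned above is assumed here to be e (expanded to ee or ei).
import java.util.*;

public class VowelVariants {

    // Possible expansions of a single vowel: keep it or lengthen it.
    // "o" can be lengthened to "oo" or "ou"; "e" is assumed to lengthen to "ee" or "ei".
    private static List<String> expansions(char c) {
        switch (c) {
            case 'o': return Arrays.asList("o", "oo", "ou");
            case 'e': return Arrays.asList("e", "ee", "ei");
            case 'a': return Arrays.asList("a", "aa");
            case 'i': return Arrays.asList("i", "ii");
            case 'u': return Arrays.asList("u", "uu");
            default:  return Arrays.asList(String.valueOf(c));
        }
    }

    // Builds all candidate names, e.g. "goto" (the h of "Gotoh" removed beforehand) yields
    // goto, gotoo, gotou, gooto, gootoo, gootou, gouto, goutoo, goutou.
    static List<String> variants(String name) {
        List<String> results = new ArrayList<>(Collections.singletonList(""));
        for (char c : name.toLowerCase().toCharArray()) {
            List<String> next = new ArrayList<>();
            for (String prefix : results)
                for (String expansion : expansions(c))
                    next.add(prefix + expansion);
            results = next;
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(variants("goto"));
    }
}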
If the input metadata contains a Latin and kanji representation of the author's name, we will try to find a match for these. Names in kanji usually do not have any separation mark, so we must distinguish given and family name by taking advantage of the ENAMDICT dictionary and checking the possible name combinations. Processing author names without kanji representation is okay but a missing Latin representation becomes a problem when it comes to actually integrating the publication into the DBLP data set because all DBLP data are supposed to have a Latin representation. The solution is a search for name candidates (we will discuss it more detailed in section UID121 ).
We cannot be sure that our name matching for Latin and kanji names always succeeds. Therefore, we add some status information to our Japanese name to be able to evaluate the outcome of the program. The possible status types are described below; a sketch of a corresponding enumeration follows the list.
The status “ok” means that given and family name have successfully been found in the name dictionary and (if available) the kanji names have successfully been assigned to their corresponding name in Latin characters.
An undefined status usually means that the Latin name is missing. A missing Latin name leads to a name status that is never changed. In these cases, the name in kanji usually exists anyway.
This is the status type for an abbreviated name like “T. Nakamura”.
If this status occurs, the Latin name could not be found in the name dictionary.
If a kanji name has not been found in the name dictionary or could not be assigned to the Latin name, this status will occur.
As the name suggests, this status means that the data quality of the publication metadata source is most likely bad. Our tool can handle some of these cases well by normalizing the name.
We could have stumbled upon a name anomaly when we see this status type. During implementation this status was narrowed down to a possible name anomaly for abbreviated names.
This status indicates a critical name anomaly. This is the only case in which the tool cannot even give a recommendation for given and family name. The output is the full name of the input data for both given and family name.
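In the implementation, these status types can be represented by a simple enumeration like the following sketch; the constant names are chosen for illustration (only BAD_DATA_QUALITY_IN_SOURCE appears literally later in this chapter) and do not necessarily match the tool's source code.
// Illustrative enumeration of the name assignment status types described above.
enum NameStatus {
    OK,                          // given and family name found, kanji successfully assigned
    UNDEFINED,                   // never changed, usually because the Latin name is missing
    ABBREVIATED_NAME,            // e.g. "T. Nakamura"
    LATIN_NAME_NOT_FOUND,        // Latin name not found in the name dictionary
    KANJI_NOT_ASSIGNED,          // kanji name not found or not assignable to the Latin name
    BAD_DATA_QUALITY_IN_SOURCE,  // handled, but marked for manual review
    POSSIBLE_NAME_ANOMALY,       // possible anomaly, narrowed down to abbreviated names
    NAME_ANOMALY                 // critical anomaly: no recommendation possible
}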
In chapter SECREF5 we discussed synonyms and homonyms. With the strategies from above we can deal with synonyms pretty well. Yet, homonyms cannot be recognized this way and are not covered at all by our tool.
Import Into the DBLP Data Set
To be able to import the harvested data into the DBLP, we still need to make the existing publication data processable in an appropriate way for our program, construct a coauthor table for these data, compare publications from the Digital Library of the IPSJ with those available in the DBLP project and provide the new publication metadata for the DBLP adequately.
It is important to convert the DBLP file INLINEFORM0 to a relational database to gain an easier and more efficient access to the data while running our program. We are mainly interested in the basic publication metadata. So we will skip some non-publication records of the DBLP like INLINEFORM1 elements. Our publication database table shall contain columns for an ID, the authors, title, publication year, journal title, journal pages and the volume. Whenever we come across the beginning of a publication type element ( INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 , INLINEFORM7 , INLINEFORM8 , INLINEFORM9 ) during parsing, we reinitialize the variables which store this metadata for the table columns. When we encounter the according XML end tag of the publication we add an SQL INSERT command to a batch of commands. This batch is regularly executed after processing a certain amount of publications. The regular execution of batches allows a better performance than sending single INSERT commands to the database server. There are some recommendations in the DBLP FAQ BIBREF20 for parsing the INLINEFORM10 file. We use the Apache Xerces parser instead of the standard Java SAX parser and need to increase the allocatable heap space for our parser.
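A much simplified Java sketch of this parsing loop is shown below. It uses the standard SAX API (which can be backed by Apache Xerces), plain JDBC and an invented table layout; the connection data, batch size and column handling are assumptions, and details like the DTD handling are omitted.
import java.sql.*;
import java.util.*;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class DblpToDatabase {
    // publication type elements of dblp.xml
    static final Set<String> PUB_TYPES = new HashSet<>(Arrays.asList("article", "inproceedings",
            "proceedings", "book", "incollection", "phdthesis", "mastersthesis"));

    public static void main(String[] args) throws Exception {
        Connection con = DriverManager.getConnection("jdbc:mysql://localhost/dblp", "user", "password");
        PreparedStatement insert = con.prepareStatement(
                "INSERT INTO publication(authors, title, year, journal, pages, volume) VALUES (?,?,?,?,?,?)");

        DefaultHandler handler = new DefaultHandler() {
            StringBuilder text = new StringBuilder();
            List<String> authors;                       // null while outside a publication element
            String title, year, journal, pages, volume;
            int count = 0;

            public void startElement(String uri, String local, String qName, Attributes atts) {
                text.setLength(0);
                if (PUB_TYPES.contains(qName)) {        // reinitialize the variables for a new publication
                    authors = new ArrayList<>();
                    title = year = journal = pages = volume = null;
                }
            }
            public void characters(char[] ch, int start, int length) { text.append(ch, start, length); }
            public void endElement(String uri, String local, String qName) {
                if (authors == null) return;            // skip non-publication records like www elements
                try {
                    if ("author".equals(qName)) authors.add(text.toString());
                    else if ("title".equals(qName)) title = text.toString();
                    else if ("year".equals(qName)) year = text.toString();
                    else if ("journal".equals(qName)) journal = text.toString();
                    else if ("pages".equals(qName)) pages = text.toString();
                    else if ("volume".equals(qName)) volume = text.toString();
                    else if (PUB_TYPES.contains(qName)) {   // publication finished: add it to the batch
                        insert.setString(1, String.join("; ", authors));
                        insert.setString(2, title);
                        insert.setString(3, year);
                        insert.setString(4, journal);
                        insert.setString(5, pages);
                        insert.setString(6, volume);
                        insert.addBatch();
                        if (++count % 1000 == 0) insert.executeBatch();  // regular batch execution
                        authors = null;
                    }
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            }
        };

        SAXParserFactory.newInstance().newSAXParser().parse(new java.io.File("dblp.xml"), handler);
        insert.executeBatch();                           // execute the remaining batch
        con.close();
    }
}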
While parsing the DBLP file we can construct a table with coauthor relationships along with the DBLP publication table. This coauthor table stores two author names and a publication ID. The ID shows which publication has been written together by the authors and matches the ID in the DBLP publication table. New coauthor relationships will only be inserted if there are at least two authors mentioned in the metadata. If the metadata mentions more than two authors, every possible pair of authors will be inserted into the database.
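The pair construction itself is straightforward; a small illustrative helper (the author names and publication ID below are merely example values) could look like this:
import java.util.*;

public class CoauthorPairs {
    // Builds one row per unordered pair of authors; the rows are later inserted
    // into the coauthor table together with the publication ID.
    static List<String[]> coauthorRows(List<String> authors, int publicationId) {
        List<String[]> rows = new ArrayList<>();
        for (int i = 0; i < authors.size(); i++)
            for (int j = i + 1; j < authors.size(); j++)
                rows.add(new String[] { authors.get(i), authors.get(j), String.valueOf(publicationId) });
        return rows;   // empty if the publication has fewer than two authors
    }

    public static void main(String[] args) {
        List<String[]> rows = coauthorRows(
                Arrays.asList("Kenji Taguchi", "Kiyoshi Itoh", "Shinichi Horiden"), 42);
        System.out.println(rows.size() + " coauthor pairs");   // 3 coauthor pairs
    }
}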
As already explained in section SECREF39 , we access the OAI-PMH repository by the repository name and the metadata format prefix to get a list of publication metadata entries. The specification of OAI-PMH 2.0 BIBREF17 describes a possibility to retrieve a list of all metadata formats which a Data Provider has to offer. The HTTP request
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListMetadataFormats
informs us that there are two metadata formats called oai_dc and junii2. oai_dc is the standard Dublin Core format all Data Providers provide, also traceable in the protocol specification. The “Implementation Guidelines for the Open Archives Initiative Protocol for Metadata Harvesting” BIBREF37 classify the metadata format oai_dc as mandatory. The name junii2 suggests that it is a self-developed format of the National Institute of Informatics (in Tokyo). Comparing these two in IPSJ DL, we notice that junii2 provides a more accurate description of the data, for example regarding additional XML attributes telling us whether the element value is English or Japanese. This additional information is helpful when we process the data in a later step and is missing in the oai_dc representation of the IPSJ server's data. So we will take the metadata prefix junii2 as initial point for harvesting the server's metadata. Figure FIGREF102 shows an according metadata example (also compare figure FIGREF46 ).
The harvesting includes the following steps; a sketch of the resulting harvest loop follows the list:
we load the DBLP publication, coauthor relationship and the ENAMDICT data into the RAM
we access the IPSJ server to get publication metadata
we parse the accessed XML metadata (concerning the thoughts from section SECREF85 ) and store the needed publication data temporarily in the RAM.
we add the parsed publication to an SQL command batch to insert the metadata into a relational database (the batch is regularly executed)
we create a BHT file for the parsed publication
at the end we go into all directories with BHT files and concatenate them to one bigger BHT file
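The following Java sketch condenses these steps into a harvest loop; loading the DBLP and ENAMDICT data, the XML parsing and the BHT creation are only indicated by a comment, and extracting the resumptionToken with a regular expression is a simplification.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OaiPmhHarvester {
    static final String BASE = "http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListRecords";
    static final Pattern TOKEN = Pattern.compile("<resumptionToken[^>]*>([^<]+)</resumptionToken>");

    public static void main(String[] args) throws IOException {
        String url = BASE + "&metadataPrefix=junii2";     // first request
        int page = 0;
        while (url != null) {
            String xml = fetch(url);
            // ... parse the XML, store the publications in the database and write the BHT files ...
            Matcher m = TOKEN.matcher(xml);               // each response contains at most 100 records
            url = m.find()
                    ? BASE + "&resumptionToken=" + URLEncoder.encode(m.group(1), "UTF-8")
                    : null;                               // no token: the harvest is complete
            System.out.println("harvested list " + (++page));
        }
    }

    static String fetch(String url) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) sb.append(line).append('\n');
        }
        return sb.toString();
    }
}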
During the implementation and testing, some exceptional incidents occurred. We try to cover them besides the expected difficulties like Personal Name Matching and transcriptions. For example, we get "NobukazuYOSHIOKA" as a full input name. Japanese sometimes write their family names in upper case letters to distinguish given and family name. Algorithm UID99 ("Categorizing names like 'NobukazuYOSHIOKA'") shows a way to handle these unusual input data:
Input: the full input name
Output: a list of name representations for a Japanese person
function split(text): searches for a regular expression and splits the text; the split text does not contain the text that matches the regular expression
function normalize(name): normalizes a personal name
Result: a new name for the person is found and added (given and family name separated)
if the full input name matches the regular expression [A-Z][a-z]{1,}[A-Z]{3,} then
    given name ← split(full input name) (the part before the upper case sequence)
    family name ← split(full input name) (the upper case sequence)
    family name ← normalize(family name)
    name status ← BAD_DATA_QUALITY_IN_SOURCE
    add new PersonName(family name, given name) to the list of name representations
end if
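A runnable Java counterpart of this algorithm might look like the following sketch; the class names are illustrative and the status handling is only indicated by a comment.
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class UppercaseFamilyName {
    // Minimal stand-in for the tool's name class (illustrative only).
    static class PersonName {
        final String family, given;
        PersonName(String family, String given) { this.family = family; this.given = given; }
        public String toString() { return family + ", " + given; }
    }

    // A given name directly followed by the family name in capital letters, e.g. "NobukazuYOSHIOKA".
    private static final Pattern GLUED = Pattern.compile("[A-Z][a-z]{1,}[A-Z]{3,}");

    static List<PersonName> categorize(String fullName) {
        List<PersonName> names = new ArrayList<>();
        if (GLUED.matcher(fullName).matches()) {
            int i = 1;
            while (Character.isLowerCase(fullName.charAt(i))) i++;   // start of the capitalized family name
            String given = fullName.substring(0, i);
            String family = normalize(fullName.substring(i));
            // the name status would be set to BAD_DATA_QUALITY_IN_SOURCE here
            names.add(new PersonName(family, given));
        }
        return names;
    }

    // Normalizes an all-caps family name, e.g. "YOSHIOKA" -> "Yoshioka".
    static String normalize(String name) {
        return name.charAt(0) + name.substring(1).toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(categorize("NobukazuYOSHIOKA"));   // [Yoshioka, Nobukazu]
    }
}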
Another observation during testing the program and checking the data is the following. Searching the Japanese given name “Shin'ichi” in the DBLP we notice that there is no uniform way to store certain names in the database. We find “Shin'ichi Aihara” but also “Shin-ichi Adachi” along with other results indicating the same phenomenon. So we see the apostrophe and the hyphen are used equally as syllable separators (we discussed the syllable separation in chapter SECREF14 ). Comparing the author “Shinichi Horiden” from the IPSJ data set and the one from the DBLP data set we can assume they are the same person because they have common coauthors (e.g. Kenji Taguchi and Kiyoshi Itoh) in both databases. The IPSJ data set tells us that the name written in kanji is 本位田真一. We are interested in the part 真一 (Shin'ichi) because we get to know that the separator symbol is sometimes missing. The kanji indicates the syllables INLINEFORM0 , especially focused on INLINEFORM1 and INLINEFORM2 instead of INLINEFORM3 . We would expect an additional separator symbol for a clear (nonambiguous) transcription; but obviously, it has been dropped in this case. A separator symbol can also be found when some double vowels occur. For example, we find “Toru Moto'oka” (元岡達) instead of “Toru Motooka”. This makes it easier to identify the reading of a single kanji (元 moto, 岡 oka, 達 toru). When a separator symbol is needed for a clear transcription, an apostrophe is used as separator symbol in ENAMDICT. While ENAMDICT always uses an apostrophe as separator symbol, DBLP and IPSJ DL use an apostrophe, a hyphen or the separator symbol is missing. We must consider these differences in the data sources for a successful import. For an easier name matching between names in the ENAMDICT and IPSJ DL data set we can add names containing an apostrophe once as they are and once without apostrophes to the relational database when we parse the ENAMDICT file to store person names in a relational database.
Our tool has a statistics class to get an overview over the parsed input data and the quality of the output data. We will have a look at these statistics created after the harvest. There are 81597 records with publication metadata and 8562 records which are marked as INLINEFORM0 in the parsed data. Figure FIGREF114 shows a visualization in pie chart form.
The publication types are declared as “Technical Report”, “Conference Paper”, “Journal Article”, “Departmental Bulletin Paper” or “Article” (compare the table TABREF115 and figure FIGREF116 ).
The statistics also reveal that 74971 publications are published in Japanese, only 4456 in English (compare the pie chart in figure FIGREF117 ).
Our tool detects 1325 publications which are already included in DBLP. A publication is considered found in both databases if the title is the same and at least one author is the same.
The most interesting statistics for our work are those about the evaluation of the quality of author name assignments (compare the bar chart in figure FIGREF119 ):
Fortunately, 180221 of 231162 author names could be matched successfully. There are many reasons for the remaining uncovered cases. 9073 Latin names could not be found in the name dictionary ENAMDICT and 14827 name matchings between the names' Latin and kanji representations did not succeed. These names might be missing from the dictionary altogether, delivered in a very unusual format that the tool does not cover, or might not be Japanese or human names at all. Of course, Japanese computer scientists sometimes also cooperate with foreign colleagues, but our tool expects Japanese names and is optimized for them. Both IPSJ DL and ENAMDICT provide katakana representations for some Western names. However, katakana representations for Western names are irrelevant for projects like DBLP. But for instance, Chinese names in Chinese characters are relevant. Understandably, our tool does not support any special Personal Name Matching for Chinese names yet because our work is focused on Japanese names. The tool does not take account of the unclassified names of ENAMDICT by default. We can increase the general success rate of the Name Matching process by enabling the inclusion of unclassified names in the configuration file, but the quality of the Name Matching process will decrease because the correct differentiation between given and family name cannot be guaranteed anymore. An unclassified name may substitute a given or a family name.
There are 1203 entries that were qualified as “bad data quality in publication metadata source”. They might be handled alright but they are particularly marked to indicate that these cases should also be reviewed manually before any import action is performed.
The numbers of abbreviated names, possible name anomalies and name anomalies are very low. While processing author names which will be later qualified as “possible name anomaly”, the tool cannot decide whether the assignment has been correct or the name is an anomaly. “Name anomalies” are critical anomalies that could not be categorized into any other status.
There could be a few uncovered flaws, for example HTML or code in titles. We must be aware of those when we do the actual import into the DBLP data set.
We will discuss the creation of BHT files and important extensions for the BHT format that fit the requirements of Japanese papers well, based on our knowledge from section SECREF49 . As mentioned, the header dictates ISO-8859-1 (Latin-1) as encoding of the file INLINEFORM0 . Ley's work BIBREF19 reveals that we can use XML/HTML entities to solve this problem. Authors have person records in the DBLP providing additional information. For example, we can find the following entry for Atsuyuki Morishima (森嶋厚行) in the XML file:
<www mdate="2008-02-20" key="homepages/m/AtsuyukiMorishima">
<author>Atsuyuki Morishima</author>
<title>Home Page</title>
<url>http://www.kc.tsukuba.ac.jp/~mori/index.html</url>
<note>森嶋厚行</note>
</www>
We must extend the BHT format to fulfill the requirements and add extra metadata for authors, title and relevant process information. The author talked to members of the DBLP team personally and got the permission to extend the original BHT format to enable us to adapt the format to Japanese papers. Our additions are well formed XML elements. We must substitute all non-ASCII characters by escape characters (XML entities) to ensure the compatibility for DBLP. The additional elements are:
Every author that has a kanji representation in its metadata gets an originalname element:
<originalname latin="Shinsuke Mori">森,信介</originalname>
If available, the Latin representation is added as an attribute INLINEFORM0 to avoid confusion on assigning the extra information to the right author later on. The element content has a fixed structure. The family name comes first, followed by a comma and the given name.
Every author gets a status information that evaluates the author name assignment. It is displayed by a status element:
<status name="Shinsuke Mori">ok</status>
The connected author is added as an attribute INLINEFORM0 .
If there is no Latin representation of the name of an author, we will add Latin name candidates to the BHT file:
<namecandidates kanji="菅谷正弘">Shougu Sugatani, Seihiro Sugatani, Tadahiro Sugatani, Masahiro Sugatani, Shougu Suganoya, Seihiro Suganoya, Tadahiro Suganoya, Masahiro Suganoya, Shougu Sugaya, Seihiro Sugaya, Tadahiro Sugaya, Masahiro Sugaya, Shougu Sugetani, Seihiro Sugetani, Tadahiro Sugetani, Masahiro Sugetani, Shougu Sugenoya, Seihiro Sugenoya, Tadahiro Sugenoya, Masahiro Sugenoya</namecandidates>
The connected kanji representation is added as an attribute kanji in the namecandidates element. We seek the kanji in ENAMDICT and output all possible name combinations in a comma separated list.
If the original language of the title is Japanese, we will add this title to the BHT file:
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
The XML element originaltitle has the attributes lang (for the paper language) and type (for the publication type).
The tool searches the authors in DBLP and tries to find additional common coauthors in DBLP. If at least two of the main authors of the paper also worked with a certain other person (that is retrieved from DBLP), this person is added to the comma separated list. The Personal Name Matching of author names uses a combination of Levenshtein Distance and Jaccard Similarity Coefficient here.
<commoncoauthors>Masato Mimura</commoncoauthors>
If the tool finds the paper in DBLP, we also add the DBLP key. Records, such as elements with publication metadata, have a unique key in DBLP.
<dblpkey>conf/iscas/HiratsukaGI06</dblpkey>
An example of a BHT file in SPF can be found in the appendix in section SECREF170 (also compare with the original BHT format in section SECREF168 ). After we have finished parsing all Japanese papers, we concatenate the BHT files in SPF that belong together to one bigger BHT file INLINEFORM0 . Publications, or rather BHT files, that belong together are recognizable by the directory structure: if they belong together, they will be in the same directory. We simply have to go through the BHT root directory recursively.
Conclusion and Future Work
“Creativity is seeing what everyone else sees,
but then thinking a new thought that has never been
thought before and expressing it somehow.”
(Neil deGrasse Tyson)
The integration of Japanese papers into the DBLP data set has revealed some major problems. The unambiguous representation of Japanese names (and paper titles, etc.) is done by kanji, while DBLP's standard encoding is Latin-1 and Japanese characters are only optionally added to the publications' metadata. This leads to the need of transcribing the Japanese names, which in turn evokes new problems because there is no single standard transcription but rather many transcription possibilities.
In addition to that, we must ensure a certain data quality even if one data source sometimes lacks this quality. Due to name matching with a name dictionary, format checking and conversions (if necessary), we can actually correct some flaws or at least assimilate the data into our project.
The problem of synonyms is dealt with by transcription manipulations; homonyms could not be addressed in this work. Reuther ( BIBREF26 , p. 159-164) describes an idea to handle homonyms. We could extend our tool by a Coauthor Index as in DBLP for the publications of the IPSJ DL. The idea is based on the assumption that scientists often publish their papers with the same people as coauthors. If the coauthors match a certain coauthor group, the author is considered the same. If the author's coauthors are not members of the expected coauthor groups, the author could be a different person than we expected and we might have a homonym here.
The developed tool is usable and provides, besides relational databases, customized Bibliography Hypertext (BHT) files as output data. Customizations were necessary to optimize the BHT files for Japanese papers and additional important metadata information. Desired but missing metadata like contributors or a short description of the content of a paper can be added without much effort because the relational database already contains these data; only the source code of Kankoukanyuu (our tool) needs to be extended by a few lines.
Though the BHT files have been created with care regarding correct and well-formed output data, it is not recommended to import the newly created files unchecked. The DBLP team should check the files so as not to compromise the data quality of DBLP. There might still be undesired format anomalies in the BHT files. The DBLP team also needs to adapt their import system to the extended BHT format developed in this work for the actual import into DBLP.
Titles might be in uppercase letters. This could be improved but we have to pay attention because a primitive solution will not work well. For example, we have to be aware of the popular usage of acronyms in computer science. So some words in uppercase letters can be correct.
Our tool is optimized for the Digital Library of the IPSJ and their OAI-PMH metadata prefix junii2. It can easily be adapted to support the similar and commonly used metadata prefix oai_dc. So the tool would be able to handle other publication metadata sources that support OAI-PMH.
The algorithm for detecting common papers in DBLP and IPSJ DL may be modified to achieve an even better comparison between the databases and detect more common papers.
It would be useful to include a Chinese name dictionary in the future and extend the name search of our tool to cover Chinese names as well.
One improvement in the future could be storing the most common names (for example, the 100 most common given and family names) in a separate data structure in the RAM. This way we can improve the runtime by often skipping the search in the huge name data.
We can still increase the success rate of the Name Matching process too. One way is swapping kanji. A typical Japanese name has two kanji for the given name and two kanji for the family name. The family name shall precede the given name. However, this principle could be violated by the publication source. If the Name Matching is not successful, we may swap the first two for the last two characters and try to find a match again.
A second advancement is the additional support of the fullwidth Latin character set that is used in Japanese texts. For instance, we can find the name “Kai” (in fullwidth characters) instead of “Kai” (in ASCII characters) in the metadata of IPSJ DL. The two variants look very similar and both represent simple Latin letters, but their character codes are different, so programs handle them differently. A simple (but yet unimplemented) substitution function can cover these rare and unusual cases.
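Such a substitution function could, for example, map the fullwidth Latin block (U+FF01 to U+FF5E) back to ASCII; this is only a sketch of the idea, not the implementation in Kankoukanyuu:

class FullwidthNormalizer {
    // Maps fullwidth Latin characters (e.g. "Kai") to their ASCII equivalents ("Kai").
    static String normalize(String input) {
        StringBuilder result = new StringBuilder(input.length());
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (c >= '\uFF01' && c <= '\uFF5E') {
                // Fullwidth forms are offset by 0xFEE0 from their ASCII counterparts.
                result.append((char) (c - 0xFEE0));
            } else if (c == '\u3000') {
                result.append(' '); // ideographic space
            } else {
                result.append(c);
            }
        }
        return result.toString();
    }
}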
Another possibility to take advantage of this work is extracting the author names in kanji from the relational database. So the DBLP team can insert author metadata for already existing authors in DBLP.
We can also have a look at what phases of the Personal Name Matching process have been implemented in this work and to which degree. There are actually different types of Personal Name Matching included in our tool:
The “Standardization” is accomplished by a normalization of the Latin input names at the beginning of the process. Kanji input names get trimmed by removing all whitespace. We do not have a “Blocking” phase as it is proposed by Reuther BIBREF26 . When searching for a match between transcribed Japanese names and their original kanji representation, we even go a contrary way and increase the number of comparisons by adding reasonable other transcriptions to the matching process. Due to efficient data structures and a comparatively small amount of Japanese papers (less than 100000), our tool has an acceptable runtime (the retrieval of the publication metadata from the IPSJ server takes much longer than processing it). In addition, the search for common coauthors will only be done if the author exists in DBLP. The phases “Analysis” and “Decision Model” are entangled in our tool. If we find a match between a (normalized or modified) input name and a name in the name dictionary, we will immediately consider them a successful match and continue parsing the metadata. When we try to find coauthors in DBLP, we take advantage of the combined Jaccard Levenshtein Distance as explained in chapter SECREF5 .
Instead of checking the complete output data in the “Performance Measurement” phase, we only took control samples while implementing, debugging, testing and improving our program. A broad manual check of approximately 90000 publications is not possible within the scope of a diploma thesis. The control samples had the expected and desired content, but we cannot guarantee the correctness of the output. Under the assumption that ENAMDICT's entries are correct, the predicted Precision should be about INLINEFORM0 because the tool probably does not produce many false positives. But we cannot say anything about the Recall because ENAMDICT does not cover all names that occur in IPSJ DL. All exceptions resulting from the limits of a name dictionary and a bad data quality are supposed to be handled by the status for author name assignments (described in section UID99 ). This gives us the chance to handle the noted exceptions manually afterwards.
All in all, this work is a first approach for an integration of Japanese papers into the DBLP data set and provides a not yet perfect but usable tool for this task. Some major obstacles are overcome.
About the Tool
The developed tool that is also part of this project is named Kankoukanyuu (刊行加入). Kankou means publication, kanyuu means admission. The whole name indicates the ability to import publications. The tool also allows the assimilation of imported publications, of course. The usable functionalities are:
Parsing the DBLP file INLINEFORM0 and converting it to a MySQL database
Converting an ENAMDICT name dictionary file to a MySQL database
Harvesting the IPSJ server, processing the publication metadata and storing it in a MySQL database
Making the harvested publications ready for an import into the DBLP data set by making BHT files
Usage
The tool has been developed and tested on a Linux system with Intel Core 2 Quad and 8 GB RAM in the local computer pool. It has to be executed by command line like this:
java -Xmx5400M -jar kankoukanyuu.jar
The parameter -Xmx5400M allows our program to allocate more than 5 GB RAM and store all necessary data in the RAM for an unproblematic execution.
Possible command line arguments are:
Parse dblp.xml and fill database tables
Convert ENAMDICT dictionary file to a relational database
Harvest the IPSJ server, fill OAI-PMH data into databases and create BHT files (in SPF) - requires DBLP and ENAMDICT database tables from steps above
Concatenate BHT files in Single Publication Format to one bigger file (file all.bht will be created in every folder with BHT files) - requires BHT files in SPF from step above
Do all of the above
Show help text about usage of the tool
The configuration file of the tool allows us to change the following parameters:
Database related parameters (in the [db] section): URL (url), database name (db), user name (user) and password (password)
ENAMDICT related parameter (in the [enamdict] section): location of the ENAMDICT file (file)
ENAMDICT database related parameters (in the [japnamesdb] section): database table name (table), decision whether to use unclassified names (useunclassifiednames)
DBLP related parameter (in the [dblp] section): location of dblp.xml (xmlfile)
DBLP database related parameters (in the [dblpdb] section): database table name for publications (dblptable), database table name for coauthor relationships (authorscounttable)
OAI-PMH database (contains output after harvest and parsing process) related parameters (in the [oaidb] section): publication table (publicationtable), authors table (authorstable), titles table (titlestable), contributors table (contributorstable), descriptions table (descriptionstable)
Harvester related parameters (in the [harvester] section): location for storing the harvest (filespath), start ID for the harvester (minid), end ID for the harvester (maxid), decision whether to use record lists (uselistrecords)
BHT export related parameters (in the [bhtexport] section): location for BHT output files (path), decision whether to compute and show common coauthors (showcommoncoauthors)
Log related parameter (in the [log] section): location of log files (path)
A configuration example can be found in the appendix section SECREF172 .
The system must support the Japanese language (meaning Japanese characters) to ensure a successful run.
Kankoukanyuu does not use any Linux-only commands but has not been tested on Microsoft Windows yet.
Used Technologies
The tool itself has been written in Java, using the OpenJDK 6. The handling of databases is done by MySQL 5 and JDBC is used to provide MySQL functionalities within Java.
External libraries are the Apache Xerces parser and the MySQL Connector/J. The Fat Jar Eclipse Plug-In is used to deploy the complete project into one executable Java JAR file. The execution of Kankoukanyuu becomes more user-friendly this way because external libraries are already included and class paths for external libraries do not need to be specified anymore.
Runtime
Measurements indicate the following approximate runtimes of Kankoukanyuu:
We can make some observations. During the harvest, only ca. 30 minutes were spent on processing the harvested data; the rest was needed to retrieve the data from the Japanese server. Depending on whether the local file system or the network file system was used, the runtime for the concatenation differs immensely.
BHT Example Proposed By DBLP
Computer Languages, Systems & Structures (journals/cl)
<h2>Volume 34, Numbers 2-3, July-October 2008</h2>
Best Papers 2006 International Smalltalk Conference
<ul>
<li>Wolfgang De Meuter:
Preface.
45
<ee>http://dx.doi.org/10.1016/j.cl.2007.07.001</ee>
<li>David Röthlisberger, Marcus Denker, Éric Tanter:
Unanticipated partial behavioral reflection: Adapting applications at runtime.
46-65
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.001</ee>
<li>Johan Brichau, Andy Kellens, Kris Gybels, Kim Mens, Robert Hirschfeld, Theo D'Hondt:
Application-specific models and pointcuts using a logic metalanguage.
66-82
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.004</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Oscar Nierstrasz, Roel Wuyts:
Stateful traits and their formalization.
83-108
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.003</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Colin Putney, Roel Wuyts:
Creating sophisticated development tools with OmniBrowser.
109-129
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.005</ee>
<li>Luc Fabresse, Christophe Dony, Marianne Huchard:
Foundations of a simple and unified component-oriented language.
130-149
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.002</ee>
</ul>
This is a BHT example proposed by the DBLP team in the DBLP FAQ BIBREF20 .
BHT Example File Created By Kankoukanyuu
<h2>Volume 52, Number 10, October 2011</h2>
<ul>
<li>Shinsuke Mori, Graham Neubig, Yuuta Tsuboi:
A Pointwise Approach to Automatic Word Segmentation.
2944-2952
<ee>http://id.nii.ac.jp/1001/00078161/</ee>
<originalname latin="Shinsuke Mori">森,信介</originalname>
<status name="Shinsuke Mori">ok</status>
<originalname latin="Graham Neubig">ニュービッググラム,</originalname>
<status name="Graham Neubig">no kanji matching found</status>
<originalname latin="Yuuta Tsuboi">坪井,祐太</originalname>
<status name="Yuuta Tsuboi">ok</status>
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
<commoncoauthors>Masato Mimura</commoncoauthors>
</ul>
This is an output example of a BHT file in Single Publication Format (before the concatenation step), created by our tool.
Excerpt From dblp.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
<article mdate="2002-01-03" key="persons/Codd71a">
<author>E. F. Codd</author>
<title>Further Normalization of the Data Base Relational Model.</title>
<journal>IBM Research Report, San Jose, California</journal>
<volume>RJ909</volume>
<month>August</month>
<year>1971</year>
<cdrom>ibmTR/rj909.pdf</cdrom>
<ee>db/labs/ibm/RJ909.html</ee>
</article>
<article mdate="2002-01-03" key="persons/Hall74">
<author>Patrick A. V. Hall</author>
<title>Common Subexpression Identification in General Algebraic Systems.</title>
<journal>Technical Rep. UKSC 0060, IBM United Kingdom Scientific Centre</journal>
<month>November</month>
<year>1974</year>
</article>
<article mdate="2002-01-03" key="persons/Tresch96">
<author>Markus Tresch</author>
<title>Principles of Distributed Object Database Languages.</title>
<journal>technical Report 248, ETH Zürich, Dept. of Computer Science</journal>
<month>July</month>
<year>1996</year>
</article>
...
Configuration File of Our Tool
[db]
url=myserver
db=mydbname
user=myusername
password=mypassword
[japnamesdb]
table=japnames
useunclassifiednames=false
[dblpdb]
authorscounttable=dblpauthors
dblptable=dblp
[oaidb]
publicationtable=oai_publications
authorstable=oai_authors
titlestable=oai_titles
contributorstable=oai_contributors
descriptionstable=oai_descriptions
[enamdict]
file=./enamdict
[harvester]
filespath=./files-harvester
minid=1
maxid=100000
uselistrecords=true
[dblp]
xmlfile=/dblp/dblp.xml
[bhtexport]
path=./bht
showcommoncoauthors=true
[log]
path=./log
9c2de35d07f0d536bfdefe4828d66dd450de2b61 | 9c2de35d07f0d536bfdefe4828d66dd450de2b61_0 | Q: Do they translate metadata from Japanese papers to English?
Text: List of Acronyms
ACM: Association for Computing Machinery
ASCII: American Standard Code for Information Interchange
API: Application Programming Interface
BHT: Bibliography HyperText
DBLP: Digital Bibliography & Library Project (former meaning: DataBase systems and Logic Programming)
FAQ: Frequently Asked Questions
GB: GigaByte
HTML: HyperText Markup Language
HTTP: HyperText Transfer Protocol
ID: Identifier
IEEE: Institute of Electrical and Electronics Engineers
IFIP: International Federation for Information Processing
IPSJ: Information Processing Society of Japan
IPSJ DL: Digital Library of the Information Processing Society of Japan
ISO: International Organization for Standardization
JAR: Java ARchive
JDBC: Java DataBase Connectivity
JDK: Java Development Kit
OAI: Open Archives Initiative
OAI-PMH: Open Archives Initiative - Protocol for Metadata Harvesting
PDF: Portable Document Format
RAM: Random Access Memory
SAX: Simple API for XML
SQL: Structured Query Language
SPF: Single Publication Format
TOC: Tables Of Contents
URL: Uniform Resource Locator
XML: eXtensible Markup Language
About This Diploma Thesis
The idea for this work was born when the author was searching for a possibility to combine computer science with his minor subject Japan studies in his diploma thesis. After dismissing some ideas leaning towards Named Entity Recognition and computer linguistics, the author chose “Integration of Japanese Papers Into the DBLP Data Set” as his subject. The DBLP is a well-known and useful tool for finding papers published in the context of computer science. The challenge of dealing with such a huge database and the problems that occur when processing Japanese input data were the reasons why this idea was chosen. The hope is that, in the future, many Japanese papers can be added by the responsible people of the DBLP project.
Motivation
Computer scientists are likely to use the DBLP to find information about certain papers or authors. Therefore, the DBLP is supposed to provide information about as many papers as possible. For example, one could be interested in the paper “Analysis of an Entry Term Set of a Civil Engineering Dictionary and Its Application to Information Retrieval Systems” by Akiko Aizawa et al. (2005) but DBLP does not include it yet. Japanese scientists might look for the original (Japanese) title “土木関連用語辞典の見出し語の分析と検索システムにおける活用に関する考察” or use Aizawa's name in Japanese characters (相澤彰子) for a search in DBLP. The DBLP contains the author “Akiko Aizawa” but does not contain this specific paper or the author's original name in Japanese characters. Our work is to implement a tool which addresses these questions, support the DBLP team in the integration of Japanese papers and reveal the difficulties of realizing the integration.
Composition of the Diploma Thesis
Dates are displayed in the ISO 8601 standard format YYYY-MM-DD, e.g. 2012-10-19.
Although scientific works about the Japanese language often display the Sino-Japanese reading of kanji (a Japanese character set) with uppercase letters to distinguish them from the other “pure” Japanese reading, we will not use uppercase letters to distinguish them in this work.
When a Japanese word is used in its plural form in this work, the word always stays unmodified. The reason is that in the Japanese language there is no differentiation between a singular and plural form.
We use a macron instead of a circumflex to display a long vowel of a Japanese word in Latin transcription (see section SECREF14 ).
Acknowledgement
First I would like to thank Prof. Dr. Bernd Walter and Prof. Dr. Peter Sturm for making this diploma thesis possible. Special thanks go to Florian Reitz for the great support and the useful answers for the questions I had while I have been working on this diploma thesis. I also want to acknowledge the help of Peter Sommerhoff, Daniel Fett, David Christ and Kana Matsumoto for proofreading my work. I thank Dr. Michael Ley, Oliver Hoffmann, Peter Birke and the other members of the Chair of Database and Information Systems of the University of Trier. Last but not least I want to tell some personal words to my family in my and their native language German:
Ich möchte nun noch meinen Eltern und meinem Bruder Peter dafür danken, dass sie mich in meiner Diplomarbeitsphase, meinem Studium und auch schon davor immer unterstützt haben und immer für mich da waren, wenn ich sie brauchte. Ich weiß es zu schätzen. (I would now like to thank my parents and my brother Peter for always supporting me during my diploma thesis, my studies and even before that, and for always being there for me when I needed them. I appreciate it.)
Writing in Japanese
“My view is that if your philosophy is not unsettled daily
then you are blind to all the universe has to offer.”
(Neil deGrasse Tyson)
First we need to understand some aspects of the Japanese language and especially the different ways of writing Japanese because the peculiarities of the Japanese writing system are a crucial point of our work. It lays the foundation for all Japanese-related subjects such as the structure of Japanese names (discussed in section SECREF19 ), a dictionary for Japanese names (discussed in section SECREF36 ) or the publication metadata source for Japanese publications (discussed in section SECREF39 ).
Hadamitzky ( BIBREF0 , p. 8-57) gives an overview about the basics of Japanese writing. The Japanese writing system includes kanji, hiragana, katakana and the possibility to use Latin characters.
Kanji
Kanji is the Japanese script which consists of traditional Chinese characters. It came to Japan around the 4th century. Since the Japanese had not developed their own writing system yet, they began to use the Chinese characters. At the beginning, the characters were linked phonetically with a certain sound, so that all existing words could be written down by their sound. Applying this principle, the man'yōgana were created. Every character had one defined way to pronounce it. In addition to this, a second principle was introduced to write Japanese. This time people orientated themselves on the meaning of the Chinese characters to choose a writing for a word. Applying the second principle, the kanji were created. While the man'yōgana were simplified to hiragana and katakana (see following sections SECREF7 and SECREF11 ), the general usage of kanji did not change.
Due to an increase in number and possible readings of characters, the government began to try to simplify the Japanese writing system after the Meiji Restoration at the end of the 19th century. The last important reform took place after World War II. Along with some other changes and regulations, the permitted characters in official documents (tōyō kanji) were limited to 1850 in 1946 and increased to 1900 in a draft from 1977. In 1981 they were replaced by the “List of Characters for General Use” (jōyō kanji) containing 1945 characters. In 1951 the government published a list of additional 92 kanji permitted for personal names. The number of kanji permitted for personal names increased with time passing by. Eschbach-Szabo ( BIBREF2 , p. 175) says the last change permitted 983 kanji for personal names in 2004. The press tries to abide by the jōyō kanji. Japanese literature (science, fiction, etc.) uses about 4000 characters (comprehensive Sino-Japanese kanji dictionaries contain ca. 10000 characters). Japanese people know approximately 3000 kanji on average.
Due to their capability to give a word a meaning, kanji are used in substantives, verbs, adjectives and Japanese personal names.
An important aspect is reading a kanji because there are several possibilities to read one. Saitō and Silberstein ( BIBREF3 , p. 31-34) describe how to read a kanji. There is a Japanese reading kun and a Sino-Japanese reading on. Depending on the text and grammar context, either the kun or on reading is required. For example, the kanji 生 is read sei in 学生 (gakusei, meaning: student, on reading) but is read u in 生まれる (umareru, meaning: being born, kun reading). A single kanji can have several kun and several on readings.
For our work it is important to know that one character can have several readings in names too.
Hiragana
The syllabary hiragana evolved from the man'yōgana by simplifying the characters. Every syllable is phonetically assigned to one sound of the spoken language (with two exceptions which can have two sounds each). The gojūon table shown in figure FIGREF9 lists the 46 syllables used today in a certain order (it can be compared with the ABC for letters). Another but obsolete way to order the syllables is iroha, which is a poem containing all syllables. Although the name implies 50 sounds (gojū means “50”, on means “sound”), there are only 46 syllables left in modern Japanese. Actually, only 45 syllables belong to the gojūon table; the n (ん) counts as an extra symbol (see gojūon tables in figures FIGREF9 and FIGREF12 ).
Other additional syllables are dakuon (e.g. だ/da, recognizable by two little strokes), handakuon (e.g. ぱ/pa, recognizable by a little circle) and yōon (e.g. しゃ/sha, recognizable by a normally sized character that is followed by a smaller character).
You can write every Japanese word in hiragana but if possible, kanji are usually preferred to avoid problems with homonyms (we take a look at homonyms in chapter SECREF5 ). Hiragana is mainly used to write words not covered by kanji and as inflected endings. Kanji and hiragana are often combined within one word. For example 読む (yomu) is the basic form of the verb “to read”. The kanji 読 means reading by itself and in combination with the hiragana syllable む it becomes the verb “to read” in a special grammatical form specifying tense, politeness level and other properties.
Katakana
The syllabary katakana also evolved from the man'yōgana by simplifying the characters, consists of 46 characters nowadays (representing the same syllables as hiragana) and is usually ordered by the gojūon table. Figure FIGREF12 presents the katakana in a gojūon table. Besides optical differences with hiragana, katakana are used in other contexts. Japanese mostly use them to write foreign words including foreign personal names.
So foreigners often apply katakana for their names. For example, the author's name can be transcribed as パウル·ソマホフ. The dot · in the middle separates family and given name. Foreign names are often written with the given name preceding the family name.
Latin Characters/Transcription
Transcription systems which convert kanji, hiragana and katakana to Latin characters are usually called rōmaji. Japanese can be easily transcribed by 22 letters and two additional signs. Since many words have the same pronunciation, the meaning of words is sometimes ambiguous if they are transcribed into Latin characters. In 1954 the government released recommendations for transcribing Japanese. It recommended the following two transcription systems:
The kunreishiki rōmaji assigns transcriptions according to the order in the gojūon table without regard to phonetic divergences of some consonants (we will discuss these divergences later). It has been introduced for official usage by the government only slightly different in 1937. It became the preferred transcription system in the standard ISO 3602 “Documentation - Romanization of Japanese (kana script)” BIBREF6 .
The hebonshiki rōmaji was developed by a council of Japanese and foreign erudites in 1885 and spread by the American missionary James C. Hepburn (Hebon in Japanese), especially thanks to his Japanese-English dictionary published one year later. This work also employs hebonshiki. Kunreishiki would lead to transcriptions like kunreisiki, hebonsiki and kanzi.
Although the kunreishiki became the preferred system of the government, the international community often prefers the Hepburn system because the written words suggest a more intuitive pronunciation than kunreishiki. There are also language-related transcription systems that are rarely used; Kaneko and Stickel ( BIBREF7 , p. 53-55) mention them.
The important aspect is the system differences because we need to know where they occur when we deal with Personal Name Matching problems later. Figure FIGREF165 in the appendix reveals the differences between the transcription systems; it summarizes 18 differences among the affected syllables. Unfortunately, there can be even more transcription differences. ISO 3602 highlights some more special cases when it comes to transcribing Japanese. One is the question whether to put an apostrophe after an n. To avoid misunderstandings, one should put an apostrophe behind an n in certain cases. Otherwise, people could misinterpret the syllable n followed by a syllable composed of a vowel or “y” and a vowel as the syllables na, ni, nu, ne, no, nya, nyu or nyo. We will outline a practical example of this case in section UID99 .
A second irregularity occurs when the same vowel appears right after another. If there is a morpheme boundary between the vowels, they should be transcribed as “aa”, “ii”, etc. but should be transcribed by an additional circumflex otherwise.
Koop and Inada BIBREF4 write about another difficulty called nigori.
“The nigori (濁, literally, `turbidity', `impurity') ... [means] modifying the pronunciation of the consonant in certain of the kana sounds. It may be either (1) inherent, as in suge (`sedge'), suzu (`grelot'), go (`five'), or (2) applied incidentally to the initial consonant of a word or name-element following another in composition, e.g., Shimabara from shima and hara, nenjū from nen and chū, Harada from hara and ta.” ( BIBREF4 , p. 34)
So, if we want to derive a transcription from the family name 中田, we cannot tell whether to take Nakata or Nakada as the rightful transcription.
Japanese Personal Names
七転び、八起き。 Nana korobi, ya oki.
(Fall seven times, get up eight times.)
Japanese saying
One of the central problems in this work is to deal with Japanese personal names. We need to get a picture of Japanese personal names in general to deal with multiple data sources (like the introduced publication metadata sources in chapter SECREF4 ) which may represent the same name with different scripts or transcription methods. The dictionary ENAMDICT will be very helpful when it comes to extracting and verifying name information.
Structure of Japanese Names
Having the urge to name things is part of the human nature. Names make it easy to refer to things, people or any other object in this world. When it comes to name giving, history shows a development in the Japanese society.
Japanese names are divided into family and given name, similar to the system in the Western culture. When Japanese write their name in kanji they put the family name first, followed by the given name (usually without leaving spaces between them), for example 中村武志 (Takeshi Nakamura). While introducing themselves, they often tell their family name and skip the given name. When Japanese refer to others, they have many name particles they put after a name to express the relationship to the other person. There is the neutral san, chan for children, kun particular for boys or sensei for teachers and doctors. ( BIBREF5 , p. 18-19)
Kagami ( BIBREF8 , p. 913) writes about Japanese personal names. Only the samurai and nobility were allowed to carry family names before the Meiji Restoration in 1868. Merchants carried shop names instead (recognizable by the suffix -ya), for example Kinokuniya (shop name) Bunzaemon (given name). Then everybody had to pick a family name after the Meiji Restoration. Approximately 135000 family names are recognized now. The most common family names are Suzuki, Satō, Tanaka, Yamamoto, Watanabe, Takahashi, Kobayashi, Nakamura, Itō, Saitō and others.
“In the feudal age, first and second given names were used as male names. The first name was Kemyoo which was the order of brothers, and the second name was the formal name given at the coming of age ceremony (genpuku), e.g. the name of a famous general in 12c.: Minamoto (family name) no (of) Kuroo (kemyoo) Yoshitune (formal given name), and before the genpuku ceremony, he was called by Yoomyoo (child name) Ushiwakamaru.” ( BIBREF8 , p. 913)
While no restrictions on the number of personal names were visible until the Meiji Restoration, modernization brought the restriction that Japanese people may carry only one given name and one family name. ( BIBREF2 , p. 167-169)
Some indicators for assigning the gender to a name also exist. The suffixes -ko (e.g. Hanako), -mi (Natsumi) and -yo (Yachiyo) indicate a female name. Male names are harder to identify because they have no fixed pattern. The suffix -o (Kazuo) mostly belongs to a male name though.
Family names often consist of two kanji characters, rarely of one or three characters. ( BIBREF8 , p. 913)
Eschbach-Szabo ( BIBREF2 , p. 157-309) dedicates an elaborate chapter to Japanese personal names. Compared to the Chinese system, the Japanese naming system shows more tolerance. Several readings coexist, and formal rules are not always applied in practice. Japanese apprehend names mainly visually by the characters and only secondarily by the reading and sound. This is why several readings for a written name are still acceptable in the modern Japanese world. In the feudal system, names were needed to determine the position and roles of a person in the family and the society rather than characterizing him or her as an individual. Japan has an open naming system which allows adding new names. This is a difference to the exclusive name lists in Germany or France. ( BIBREF2 , p. 157-166)
Even the apparently simple kanji 正 has a lot of possible readings: Akira, Kami, Sada, Taka, Tadashi, Tsura, Nao, Nobu, Masa. We can see the same phenomenon in recently approved kanji too. When we see 昴 we cannot be sure whether it is read Kō or Subaru. ( BIBREF9 )
“Conversely, it often happens that one does not know to write a name of given pronunciation. For example, Ogawa can be written 尾川 or 小川. In Japan, when two people meet for the first time, they exchange business cards. This custom often baffles foreigners, but for Japanese it is a ritual with practical purpose: Japanese do not feel at ease until they see how a name is spelled out in kanji.” ( BIBREF9 )
Figure FIGREF22 illustrates the problem. The cashier tries to read the customer's name and cannot determine the right name. According to the customer's reaction, his first two trials Hiroko and Yūko seem to be wrong. Ogawa considers the name polygraphy as a reason why the creation of new name characters is still allowed.
Some characteristics of the Japanese naming system are:
only little renaming of people
semantic variance (names indicate different meanings/attributes)
admission of foreign elements (foreign names get assimilated)
possibility of polygraphic writing
diversity of writing (many scripts usable, weak orthographic normalization)
number of personal names for one person
In academic circles a Sino-Japanese reading led to a more reputable name. So the famous linguist 上田万年 from the Meiji era became known as Kazutoshi Ueda AND Mannen Ueda (Mannen is the Sino-Japanese on reading, Kazutoshi is the Japanese kun reading). Modern guidebooks underline that maybe one has to take a loan word from another language to find the corresponding reading for a name in kanji. For example, 宇宙 could be read as Kosumo (from the Greek word for cosmos) instead of Uchū. Also ノイ (Noi), derived from the German word “neu” (new), became a Japanese given name. Another imaginable name is “Sky” written as 空海 (meanings: 空 Sky, 海 sea) and transcribed as Sukai (actually kūkai). This would finally show the impact of globalization also on the Japanese naming system. If one has lived in Japan for a while and wants to adapt or register his or her Western name, one can choose corresponding kanji either by meaning or reading of the original name. Another possibility is transcribing the name with katakana. ( BIBREF2 , p. 170-171, 305-309)
The name Anna exists in many cultures. The girls in figure FIGREF29 are both called Anna. Both turn around when they hear their name and respond in their mother tongue (“Yes!” and “Hai!”, respectively).
One principle of Japanese name giving is ateji. Ateji (当て字) means “appropriate characters”. It says Japanese try to find characters with good, positive meanings for their children's name. Examples are 愛子 (愛: ai, love; 子: ko, child), 夏美 (夏: natsu, summer; 美: mi, beauty) or 正 (Tadashi, correct, honest). There is also a list with characters that are allowed but should be avoided because of bad associations. Characters like 蟻 (ari, ant), 苺 (ichigo, strawberry), 陰 (kage, shadow), 悪 (aku, bad/evil) belong to this list. ( BIBREF2 , p. 172-176)
A particular case drew public attention from June 1993 to February 1994 when Shigeru Satō wanted to call his son Akuma, written as 悪魔 (devil/demon). The civil registry office declined the registration after some discussion because they were worried about other children teasing him. The father went to court but the judges also declined the wish. Although the father wanted to give his son a unique, rememberable name, the judges saw a possible problem in his individual identification process and also getting teased (ijime) by other children in school someday. Then Satō tried to choose other characters while keeping the reading Akuma. But also changing the name partly into man'yōgana (亜久魔) did not change anything about the declination because of the phonological equality implying the same negative associations. Thereupon the father picked the character 神 (god) and its unusual reading Jin. Even though Shintoistic gods can be good or evil, the civil registry office accepted the name. Satō announced his intention to keep calling his son Akuma anyway. So a new (yet unofficial) reading for a character might be established. ( BIBREF2 , p. 271-278)
An article of “Japan Today” from December 2012 shows that there is still a debate about this subject.
“[...]Shinzo Abe, the leader of the Liberal Democratic Party made a stand against kirakira names last week when he stated that giving a child a name like Pikachu, which could be written something like 光宙 (`light' and `space'), is tantamount to child abuse, saying: `Children are not pets; we have to provide guidance for parents who would name their child in such a way.' ”( BIBREF11 )
Despite regulations, the discussion about the culture of name giving does not seem to have ended yet. Japanese comics like the one in figure FIGREF34 suggest a happy-go-lucky life if one has a common everyday name like Keiko.
Today's registration of names allows 2983 kanji for given names, 4000 kanji for family names, 700 man'yōgana, 46 hiragana and 46 katakana. There are still people whose names are written with the obsolete kana syllabary hentaigana, which was prohibited in 1948 ( BIBREF2 , p. 176-177; BIBREF12 ). Regarding this variety of characters (and readings), it is not surprising that even well educated Japanese have problems reading certain names, or rather cannot be sure that the chosen reading is the correct one in the current situation. Forbidden is the usage of geometrical and punctuation signs. The sign ◯ (maru) is an example of such a forbidden one. Also forbidden is the usage of Latin characters (rōmaji) at the registration of a name. Rōmaji can be used privately, though. ( BIBREF2 , p. 176-177)
Names can be changed by marriage, adoption or getting a pseudonym or special posthumous name. Titles can be acquired too. ( BIBREF2 , p. 251)
After disestablishing the patriarchal ie system, in which a man (for example the husband) is the dominating householder of a family, the family name no longer focuses on the affiliation to a family but on the couple living their lives together. ( BIBREF2 , p. 253-255)
Writing a Japanese name can be ambiguous. While the name written in kanji is definite, displaying it in Latin characters leads to several possibilities. Japanese themselves usually write their name using kanji. To find matching authors in the DBLP, it will be crucial for us to have names in Latin characters later on (in chapter SECREF6 ) because the standard encoding format of the file containing the main data of the DBLP project is ISO 8859-1 (Latin-1).
We sometimes talk about “kanji names” or “names in kanji representation” in this work. Although the expression does not suggest it, they shall include all names in Japanese characters, ergo names in kanji, hiragana and katakana.
ENAMDICT
To automatically detect where a Japanese family name in kanji notation ends and the given name begins, we should factor a name dictionary into our work. It is important that this dictionary includes the names written in kanji and a clear transcription for them in Latin characters. A useful dictionary for our purposes is ENAMDICT.
ENAMDICT BIBREF13 is a free dictionary for Japanese proper names, maintained by the Monash University in Victoria (Australia). The Electronic Dictionary Research and Development Group owns the copyright. In 1995, ENAMDICT became an independent project by dividing the universal dictionary EDICT into two projects. ENAMDICT contains person names and non-person names like places and companies as well. Table TABREF38 shows the online statistics about the content of the ENAMDICT file. We will call the categories “name types” in subsequent chapters.
“A proper name is a word or group of words which is recognized as having identification as its specific purpose, and which achieves, or tends to achieve that purpose by means of its distinctive sound alone, without regard to any meaning possessed by that sound from the start, or aquired by it through association with the object thereby identified.” ( BIBREF14 , p. 73)
These internal abbreviations occur again when we construct a database for Japanese names in chapter SECREF74 .
Publication Metadata Sources
百語より一笑 Hyaku go yori isshō
(A smile is more worth than a hundred words.)
Japanese saying
This chapter gives an overview of the publication metadata sources that we will need later. We take a look at these sources because we will discuss a way to extract metadata information from one source containing Japanese papers and import them into another source in chapter SECREF6 .
Digital Library of the IPSJ
The IPSJ is a Japanese society in the area of information processing and computer science. It was founded in April 1960 and, by its own account, helps evolving computer science and technology and contributes new ideas in the digital age. It regularly publishes the magazine “Information Processing” (jōhō shori) and a journal, holds symposiums and seminars, and its Special Interest Groups issue technical reports and hold conferences. It is also the Japan representative member of the IFIP and has established partnerships with the IEEE, ACM and other organizations. IPSJ develops drafts of international standards and Japanese industrial standards as well. Eight regional research sections are widespread over Japan. IPSJ had over 17000 members in March 2011. ( BIBREF15 ; BIBREF16 )
The IPSJ provides a Digital Library (referenced as IPSJ DL in this work) where everybody can search Japanese papers in the field of computer science. The search page can be displayed in Japanese and English; most papers are written in Japanese. Free papers are accessible in PDF format, non-free ones can be bought. A tree view provides the order structure of the papers and a keyword search is available. We are especially interested in the metadata export functions, though. The online application offers the following export formats:
OAI-PMH
BibTeX
OWL SWRC
WEKO Export
For our purposes the OAI-PMH is the most suitable solution because we can send simple HTTP requests to the server and get publication metadata as a result. It “provides an application-independent interoperability framework based on metadata harvesting” ( BIBREF17 ) and consists of two groups of participants. Data Providers can be servers hosting and supplying the metadata. Service Providers take the harvester role and process the recieved metadata from the Data Provider. The application-independent interoperability is achieved by using XML as basic exchange format. Arbitrary programs can parse XML input data very easily, so can we.
While accessing the server, the data can be extracted in several ways. We can either access an OAI-PMH repository by the repository name, the metadata format prefix of the record and a unique identifier or get a list of records with only one request.
A request for a list of records looks like this:

http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListRecords&metadataPrefix=oai_dc

It may also contain a start date and an end date or a resumption token. The headers of records include a corresponding time stamp. The server's response to a request offers only 100 publications. We need this resumption token because it determines the point where we resume the harvest.
In the beginning and for debugging, it was more comfortable to increment a counter that acts as the unique identifier and send requests for single entries with the respective ID multiple times. Fortunately, the entries can be addressed by such an integer ID (plus some constant name):
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=GetRecord&metadataPrefix=oai_dc&identifier=oai:ipsj.ixsq.nii.ac.jp:27130
The last entry containing real publication metadata has the suffix integer 87045 in its ID. After that, some entries with the status “deleted” follow. If we continue requesting even higher IDs, we soon only get a reply with the OAI-PMH error code “idDoesNotExist”, implying there are no publications with higher IDs. We will discuss the implementation of an OAI-PMH harvester for the IPSJ DL in section UID99 .
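A minimal sketch of fetching a single record by such an integer ID (request URL as above; error handling and the actual XML parsing are omitted):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

class IpsjFetcher {
    // Downloads the OAI-PMH GetRecord response for one IPSJ DL identifier (sketch).
    static String fetchRecord(int id) throws Exception {
        String url = "http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh"
                + "&verb=GetRecord&metadataPrefix=oai_dc"
                + "&identifier=oai:ipsj.ixsq.nii.ac.jp:" + id;
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(url).openStream(), "UTF-8"));
        StringBuilder xml = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            xml.append(line).append('\n');
        }
        reader.close();
        return xml.toString(); // the XML is then parsed, e.g. with the Xerces parser
    }
}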
DBLP Project
The DBLP is a widely known database for publication metadata in the field of computer science. Ley BIBREF19 gives a brief explanation of the DBLP; additional information is extracted from the online DBLP FAQ BIBREF20 . It was started in 1993 as a test server for web technologies and was named “Database systems and Logic Programming” in the beginning. But it grew and became a popular web application for computer scientists. The Computer Science department of the University of Trier founded the project; since summer 2011 it is a joint project of Schloss Dagstuhl - Leibniz Center for Informatics and the University of Trier.
“For computer science researchers the DBLP web site is a popular tool to trace the work of colleagues and to retrieve bibliographic details when composing the lists of references for new papers. Ranking and profiling of persons, institutions, journals, or conferences is another sometimes controversial usage of DBLP.” ( BIBREF19 )
The publication metadata is stored in the XML file dblp.xml containing more than 2 million publications and exceeding a size of 1 GB (state of October 2012). An excerpt of the beginning of dblp.xml can be found in the appendix section SECREF171 . The header dictates ISO-8859-1 (Latin-1) as encoding of the file. Considering that we want to import Japanese names in kanji (which are not included in Latin-1), we must handle that issue somehow. We will discuss the solution in section UID121 .
The web front end of the DBLP provides an overview of coauthor relationships by a Coauthor Index (see figure FIGREF53 ). The Coauthor Index can be found at the author's page after the list of the author's publications itself. It shows all coauthors, common papers and categorizes the coauthors into groups that worked together by giving the author names corresponding background colors.
In his diploma thesis, Vollmer BIBREF23 gives useful hints in terms of converting the dblp.xml file to a relational database. He also compares the performance of several relational database management systems for this conversion.
The DBLP team developed a special format for the integration of new publications. It is called Bibliography Hypertext (BHT), is based on HTML and is similar to the HTML code of the tables of contents (TOCs) at the DBLP website. An example of a publication list in BHT format can be found in the appendix in section SECREF168 . A BHT file has the following structure. The header (text between h2 tags) contains the volume, the number/issue and the date of issue. A list of corresponding publications follows next. The list is surrounded by a beginning and a closing ul tag, single publication entries start with a li tag. A comma is used for the separation of authors while there should be a colon after the last author name. Then comes the title which has to end with a period, question mark or exclamation point. The next line provides the start and end page in the volume/issue. At last, an optional URL can be added by an ee element to specify an “electronic edition” for a paper. Some guidelines need to be considered, too:
there is no closing li tag
initials should be avoided (full name is preferred)
titles with only upper case letters should be avoided
“0-” is the default page number value if the page information is missing
The BHT file may contain additional information. For example, conference proceedings may have more headers to achieve a better clarity. But it should be as close to the proposed format as possible to guarantee an easy import without unnecessary burdens. ( BIBREF24 ; BIBREF20 , “What is the preferred format to enter publications into DBLP?”)
We will extend the original format in section UID121 to satisfy our needs in the context of Japanese papers.
Personal Name Matching
“The important thing is not to stop questioning;
curiosity has its own reason for existing.”
(Albert Einstein)
After looking at transcription systems, Japanese personal names and publication metadata sources, we will now have to look at Personal Name Matching to enable us to deal with the Japanese names extracted from the metadata sources. First we will discuss Personal Name Matching in general and then problems of Personal Name Matching for Japanese names in particular.
The expression Personal Name Matching comes from the work by Borgman and Siegfried BIBREF25 and is used here as in the extended definition from Reuther's work ( BIBREF26 , p. 48-51). Borgman and Siegfried only talk about synonyms. Synonyms are possible names for the same person. Reuther extended the definition by also including homonyms. A name is a homonym if it can belong to several persons. Personal Name Matching is known by other titles in literature, too. Niu et al. BIBREF27 discuss Cross Document Name Disambiguation:
“Cross document name disambiguation is required for various tasks of knowledge discovery from textual documents, such as entity tracking, link discovery, information fusion and event tracking. This task is part of the co-reference task: if two mentions of the same name refer to same (different) entities, by definition, they should (should not) be co-referenced. As far as names are concerned, co-reference consists of two sub-tasks:
On et al. BIBREF28 formally express their Name Disambiguation problem as follows:
“Given two long lists of author names, X and Y, for each author name x ∈ X, find a set of author names, {y1, y2, ...} ⊆ Y, such that both x and yi are name variants of the same author.” ( BIBREF28 )
In contrast to the previous definitions Han et al. BIBREF29 define Name Disambiguation like this:
“Name disambiguation can have several causes. Because of name variations, identical names, name misspellings or pseudonyms, two types of name ambiguities in research papers and bibliographies (citations) can be observed. The first type is that an author has multiple name labels. For example, the author `David S. Johnson' may appear in multiple publications under different name abbreviations such as `David Johnson', `D. Johnson', or `D. S. Johnson', or a misspelled name such as `Davad Johnson'. The second type is that multiple authors may share the same name label. For example, 'D. Johnson' may refer to `David B. Johnson' from Rice University, `David S. Johnson' from AT&T research lab, or `David E. Johnson' from Utah University (assuming the authors still have these affiliations).”( BIBREF29 )
The citations above show that there are many expressions for Personal Name Matching (or sub-categories) which are not equally used by different authors. Niu et al. and On et al. restrict Name Disambiguation to finding synonyms, Han et al. include homonyms in their definition. Even more related expressions can be found in literature. As mentioned, we will use Personal Name Matching in this work as Reuther uses it.
The main aspect of Personal Name Matching is handling synonyms and homonyms. Trying to express the problems formally leads to the following description: Let P be a set of persons, especially characterized by their names, in a certain data set and R a set of all existing persons. We are also given a function name, which maps a person to his or her name label, and a relation ≡, which holds between two entries if they refer to the same real person. The actual problems can be described as

(1) name(p) ≠ name(q) ∧ p ≡ q (synonym case)

(2) name(p) = name(q) ∧ p ≢ q (homonym case)

with p, q ∈ P and p ≠ q.

Case (1) checks for each person p from the person set P whether another person q from P exists, so that their name labels are different (name(p) ≠ name(q)) but the person is the same (p ≡ q). So this case covers the synonym problem because the same person has several names here.

Case (2) checks for each person p from the person set P whether another person q exists in P, so that their name labels are equal (name(p) = name(q)) but the persons behind the names differ (p ≢ q). So this case covers the homonym problem because the same name is taken by several people.

The problem of Personal Name Matching arises because such a relation ≡ usually does not exist and needs to be approximated as well as possible by a relation ≈.

Thanks to appropriate similarity measures and a matching threshold θ, we can find such a relation ≈ which is approximately equivalent to the original relation ≡. The main task in Personal Name Matching is finding a good similarity measure for the described problem. ( BIBREF26 , p. 52)
Let us have a look at a vivid example.
The birth name of the famous actor Michael Keaton is Michael John Douglas. Keaton took a pseudonym because he could have been confused with the more famous actor Michael Douglas. Synonyms for Keaton are “Michael Keaton”, “Michael Douglas”, “Michael John Douglas”, “Michael J. Douglas”, “M. Keaton” or “M. J. Douglas”.
On the other hand, when we hear the name “Michael Douglas” we cannot be sure which famous actor is referred to, because Michael Douglas is a valid name for both of them. Figure FIGREF62 illustrates this Personal Name Matching problem with Michael Keaton.
The process of Personal Name Matching can be divided into the following steps ( BIBREF26 , p. 56-87): Standardization, Blocking, Analysis, Decision Model and Performance Measurement.
Criteria for the evaluation of such a process are Precision and Recall ( BIBREF35 , p. 75-81; BIBREF26 , p. 83-85). Let I be a set of items, R ⊆ I be the set of relevant items (e.g. synonyms) and A ⊆ I be the answer of a request. In our scenario, the request is usually the question “Is the item i a synonym, or accordingly i ∈ R?”. Then we can define:

Precision = |R ∩ A| / |A|

Recall = |R ∩ A| / |R|
Precision testifies whether the reported synonyms during the Name Matching process are really synonyms; Recall allows us to say whether there are synonyms which have not been found.
We use a combination of the Jaccard Similarity Coefficient and the Levenshtein Distance in our tool. Bilenko et al. BIBREF36 explain these string matching methods in isolation. Given two word sets S and T, the simple Jaccard Similarity Coefficient is:

Jaccard(S, T) = |S ∩ T| / |S ∪ T|
The Levenshtein Distance uses the operations replacement, insertion and deletion of a character and is defined by a matrix. Let s and t be words, m and n their lengths. Then we can define:

D(i, 0) = i for 0 ≤ i ≤ m
D(0, j) = j for 0 ≤ j ≤ n
D(i, j) = min( D(i-1, j) + 1, D(i, j-1) + 1, D(i-1, j-1) + cost(i, j) ) for 1 ≤ i ≤ m, 1 ≤ j ≤ n

where cost(i, j) = 0 if the i-th character of s equals the j-th character of t and cost(i, j) = 1 otherwise. The Levenshtein Distance of s and t is D(m, n).
We modify the Jaccard Similarity Coefficient in a way that it classifies two set items as intersected if their Levenshtein Distance is lower than a certain threshold.
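The following sketch illustrates this modified coefficient; the threshold parameter is illustrative and not necessarily the value used in Kankoukanyuu:

import java.util.Set;

class NameSimilarity {
    // Jaccard coefficient over word sets where two tokens count as intersecting
    // if their Levenshtein Distance is below the given threshold (sketch).
    static double jaccardWithLevenshtein(Set<String> a, Set<String> b, int threshold) {
        int intersection = 0;
        for (String x : a) {
            for (String y : b) {
                if (levenshtein(x, y) < threshold) {
                    intersection++;
                    break; // count every token of a at most once
                }
            }
        }
        int union = a.size() + b.size() - intersection;
        return union == 0 ? 1.0 : (double) intersection / union;
    }

    // Standard dynamic programming implementation of the Levenshtein Distance.
    static int levenshtein(String s, String t) {
        int[][] d = new int[s.length() + 1][t.length() + 1];
        for (int i = 0; i <= s.length(); i++) d[i][0] = i;
        for (int j = 0; j <= t.length(); j++) d[0][j] = j;
        for (int i = 1; i <= s.length(); i++) {
            for (int j = 1; j <= t.length(); j++) {
                int cost = (s.charAt(i - 1) == t.charAt(j - 1)) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[s.length()][t.length()];
    }
}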
In addition to the general Personal Name Matching, we must take the characteristics of Japanese names into account. Particularly the usage of kanji and several possibilities to transcribe a name make it hard to compare Japanese names. For example, we cannot compare kanji names from the IPSJ DL with the author names in DBLP. Even though kanji are suited best for name comparison it does not work here because the standard encoding of names in DBLP is “Latin-1” which does not support kanji natively.
A big problem for our work is revealed by looking at the given name Akiko with its kanji representation 章子. As we can see in table TABREF71 , 章子 has several possible readings besides Akiko (left column) and Akiko written in Latin characters does not determine an unambiguous match in kanji (right column).
The same problem applies to Japanese family names. Table TABREF72 presents the problem with Kojima as a family name example.
Preparation of Japanese Papers for the Import Into the DBLP Data Set
大事の前の小事 Daiji no mae no shōji
(Who wants to achieve big things must do the little things first.)
Japanese saying
This chapter explains the approach to process and combine the various data sources so that we can import Japanese publications in the end. We will proceed step by step to make the ideas behind the solution as comprehensible as possible.
General Approach
First we will construct a table in a relational database containing information about Japanese names and their transcriptions by converting the ENAMDICT name dictionary. Then we set up a data structure for Japanese names that handles the problem of assigning a given and a family name to a newly instantiated author during parsing the publications of IPSJ DL. At last, we will discuss the actual and titular integration of Japanese papers into the DBLP data set including an explanation that shows how to create a harvester for the OAI-PMH protocol.
Converting an ENAMDICT File to a Relational Database
The first step towards being able to handle Japanese names is distinguishing given and family name in the input text. A relational database containing information about Japanese names and their transcriptions is useful for this task. The database should contain names in kanji, their transcriptions in hiragana and Latin characters and the name type to have a good match with the data source ENAMDICT and to provide all necessary name information we need.
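One possible table layout, created here via JDBC; the column names are illustrative and only loosely follow the table name japnames from the configuration example in the appendix:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

class NameTableSetup {
    // Creates an illustrative MySQL table for Japanese names (sketch; the actual
    // schema used by Kankoukanyuu may differ).
    static void createNameTable(String url, String user, String password) throws Exception {
        Connection connection = DriverManager.getConnection(url, user, password);
        Statement statement = connection.createStatement();
        statement.executeUpdate(
            "CREATE TABLE IF NOT EXISTS japnames ("
            + " kanji VARCHAR(64),"   // name in kanji or kana
            + " kana VARCHAR(64),"    // transcription in hiragana
            + " latin VARCHAR(64),"   // transcription in Latin characters
            + " type VARCHAR(2)"      // ENAMDICT name type: s, g, f, m or u
            + ") DEFAULT CHARSET=utf8");
        statement.close();
        connection.close();
    }
}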
To fill the empty database, the ENAMDICT file needs to be analyzed and its data needs to be extracted. The entries usually have the form
KANJI [TRANSCRIPTION] /LATIN (TYPE)/.
We can take the following line as an example of an existing entry:
森田 [もりだ] /Morida (s)/
A parser should export the single entries. First it saves the text between the slashes and searches for the type of the entry. It must be assured that all person name types and no undesired or alleged types will be stored. Types can consist of the characters “s” (surname), “g” (given name), “f” (female name), “m” (male name), “u” (unclassified name), “p” (place name), “h” (full name of a particular person), “pr” (product name), “co” (company name) or “st” (station name). But only the types “s”, “g”, “f” and “m” are important in this case because the parser should only store person names in the database. One exception are the unclassified names and they need to be stored too because they can also contain person names. Using unclassified names carelessly leads to problems, though. On the one hand it is useful if you find a match for the given name but not for the assumed family name. Then it helps to find an unclassified name matching the assumed family name. On the other hand some unclassified names in the ENAMDICT file decrease the data quality of the database. The entry
スターウォーズ /(u) Star Wars (film)/
shows that there are undesired names like film titles in the category “unclassified”. The example also reveals that there is no overall standard for an entry format. Analyzing the file leads to following observations:
text in round brackets might be type or additional commentary (see entry example above)
when only hiragana or katakana are used instead of kanji to display the Japanese name the transcription part is missing because it is not required (see entry example above)
the type information in brackets might actually consist of several type declarations, separated by commas
the type information might be placed before or after the transcription in Latin characters
one entry line might contain several possibilities to interpret the name, the example
イブ /(f) Eve/(u) Ib/Ibu (f)/(m) Yves/
clarifies this aspect
We must consider these observations when we implement the parser.
To handle the problems in UID76 and UID78 we can filter the contents in round brackets. One possibility is using a regular expression like (,|s|u|g|f|m|p|h|pr|co|st)+ to filter all valid types. Regular expressions are powerful and popular tools for pattern matching. In our case we are looking for valid type expressions including commas to get rid of commentaries. After eliminating commentaries we also want to get rid of unwanted types like place names. So we filter again and only process desired types this way. To handle UID77 we just ignore missing transcriptions in square brackets. Our parser also needs to be flexible enough to deal with observation UID79 which means that it must expect the type(s) at two possible places (before and after the transcription in Latin characters). We can handle the last observation UID80 by using recursive function calls. We call the function that exports one entry with a modified parameter value within the function itself when there is more than one entry in the input line (noticeable by additional slashes).
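A simplified sketch of such a parser for one entry line; it only extracts the kanji part, the optional reading in square brackets and the slash-separated name/type parts, and it keeps just the person name types (the real parser handles more cases):

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class EnamdictLineParser {
    // Parses one line of the form: KANJI [READING] /Latin (type)/.../ (sketch).
    static List<String[]> parse(String line) {
        List<String[]> results = new ArrayList<String[]>();
        Matcher head = Pattern.compile("^(\\S+)(?:\\s+\\[(\\S+)\\])?\\s+/(.*)/\\s*$").matcher(line);
        if (!head.matches()) {
            return results; // malformed line, to be corrected manually
        }
        String kanji = head.group(1);
        String reading = head.group(2); // may be null for kana-only entries
        for (String part : head.group(3).split("/")) {
            // Collect bracketed annotations and keep only valid person name types.
            Matcher types = Pattern.compile("\\(([^)]*)\\)").matcher(part);
            StringBuilder typeList = new StringBuilder();
            while (types.find()) {
                if (types.group(1).matches("(,|s|u|g|f|m)+")) {
                    typeList.append(types.group(1));
                }
            }
            String latin = part.replaceAll("\\([^)]*\\)", "").trim();
            if (typeList.length() > 0 && latin.length() > 0) {
                results.add(new String[] { kanji, reading, latin, typeList.toString() });
            }
        }
        return results;
    }
}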
Before parsing we need to change the original encoding of the ENAMDICT file from “EUC-JP” to “UTF-8” to make it compatible with our program.
During parsing a few inconsistencies in the syntax of the ENAMDICT file occurred:
in four entries the closing slash at the end was missing:
甲子太郎 [かしたろう] /Kashitarou (m)
there was once an unnecessary closing bracket without an opening bracket:
近松秋江 [ちかまつしゅうこう] /Chikamatsu Shuukou) (h)/
there was once a backslash where a square bracket was supposed to be put:
キルギス共和国 [キルギスきょうわこく\ /(p) Kyrgyz Republic/Kirghiz Republic/
Instead of constructing a workaround for these problems we should rather correct these few inconsistencies manually.
A Data Structure for Japanese Names
We will construct a class which is responsible for handling Japanese names and representing them in a convenient way. Therefore, it must be able to save the name in kanji and in at least one Latin transcription. The transcription is necessary to compare found authors in IPSJ DL with authors in the DBLP. The kanji name can be stored as additional author metadata in the DBLP later. Our goal is a standardized representation of a Japanese person. So first we can construct a simple helper class for a single name containing given and family name as strings. This class can be applied to both kanji and Latin names. Our Japanese person usually has these two name representations.
When getting an input name from the IPSJ DL we try to determine the separation point and categorize the tokens into given and family names. The separation point can mostly be identified by white space or a comma between the words. The categorization is done by including information from ENAMDICT. Thanks to ENAMDICT's classification into name types we can use this information to categorize our input name tokens into given and family names. However, we have to cover some unusual cases too because IPSJ DL has no standardized way to provide names. So we get names in various formats. For example, there are entries in which the family name follows the given name directly without any separation markers. Then we can try to take advantage of upper and lower case letters assuming that an uppercase letter means the beginning of a new name token. But we must also be aware of existing input names like “KenjiTODA”. If we get a longer sequence of uppercase letters, this sequence is probably a family name. We can filter these names with a regular expression like [A-Z][a-z]{1,}[A-Z]{3,} (first character is an uppercase letter, followed by at least one lowercase letter, followed by at least three uppercase letters). We also have to recognize abbreviated names and normalize Latin names.
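As a small illustration of this heuristic, the sketch below applies the regular expression from the text to a fused input name; it is only a sketch of the idea, not the tool's actual Java code.

import re

def split_fused_name(name):
    """Split inputs like 'KenjiTODA' where an upper-case run marks the family name."""
    match = re.match(r"([A-Z][a-z]{1,})([A-Z]{3,})$", name)
    if match:
        given, family = match.group(1), match.group(2)
        return given, family.capitalize()
    return None

print(split_fused_name("KenjiTODA"))   # ('Kenji', 'Toda')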
Let us have a look at what we can observe about necessary transcription customizations. One peculiarity is that Japanese like to transcribe their names with an h instead of a double vowel. An example is “Hitoshi Gotoh”. The h symbolizes the lengthening of a vowel and is a substitute for o or u in this case. To enable our class to find names like this in ENAMDICT, we have to replace the h's lengthening a vowel by the vowel itself because ENAMDICT entries contain double vowels instead of h's with this semantic function.
Another observation is ENAMDICT's usage of the Hepburn transcription system throughout the entire dictionary. So we have to convert the name to match the Hepburn system and to check a name via ENAMDICT. The needed character replacements for a conversion into the Hepburn system are shown in table TABREF86 (see also figure FIGREF165 in the appendix).
In addition to the replacements from table TABREF86 , we must consider that names usually start with uppercase letters and replace “Tu”, “Ti”, “Sya” and so on by “Tsu”, “Chi”, “Sha”, etc. as well.
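A sketch of such a conversion step is shown below. Since table TABREF86 is not reproduced in this excerpt, the replacement pairs are only assumed standard Kunrei-to-Hepburn substitutions and may differ from the table's exact contents.

# Assumed Kunrei-style -> Hepburn replacements; not a copy of table TABREF86.
HEPBURN = {
    "sya": "sha", "syu": "shu", "syo": "sho",
    "tya": "cha", "tyu": "chu", "tyo": "cho",
    "zya": "ja", "zyu": "ju", "zyo": "jo",
    "si": "shi", "ti": "chi", "tu": "tsu", "hu": "fu", "zi": "ji",
}

def to_hepburn(name):
    out = name
    for src, dst in sorted(HEPBURN.items(), key=lambda kv: -len(kv[0])):
        out = out.replace(src, dst)
        out = out.replace(src.capitalize(), dst.capitalize())   # names start with an uppercase letter
    return out

print(to_hepburn("Tuyosi"))   # 'Tsuyoshi'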
The Japanese n is sometimes transcribed as m. If n is followed by b or p, this n is likely to be transcribed as m. The reason is a corresponding modification in the pronunciation of n in these cases. For example, the family name Kanbe is often transcribed as Kambe in the IPSJ DL data set.
Double vowels are sometimes completely dropped in some IPSJ DL author elements. While this might be okay for aesthetic reasons when transcribing one's own name, it becomes a problem when we try to find a matching name in a dictionary like ENAMDICT. So we also have to check additional modified names. If there is a single vowel in the name, we must also check the same name whose vowel has become a double vowel. If several single vowels occur in a name, the number of names to be checked rapidly increases too. We have to pay special attention to the doubling of the vowel o because oo and ou are possible doublings for the single o. Doubling the vowel e leads either to ee or ei. All other double vowels are intuitive: a becomes aa, i becomes ii, u becomes uu. Taking “Gotoh” as an example we remove the h first and check a list of names via ENAMDICT. The list of names consists of “Goto”, “Gooto”, “Gouto”, “Gotoo”, “Gotou”, “Gootoo”, “Goutoo”, “Gootou” and “Goutou”. We can remove “Goto”, “Gooto” and “Gouto” from the list if we know that the h (representing a double vowel) has been removed before.
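The candidate generation just described can be sketched as follows; this is an illustration only, with our own function name, and it reproduces the nine candidates listed above for “Gotoh” once the h has been removed.

from itertools import product

DOUBLINGS = {"a": ["aa"], "i": ["ii"], "u": ["uu"], "e": ["ee", "ei"], "o": ["oo", "ou"]}

def lengthening_candidates(name):
    """Names to look up when single vowels may stand for dropped double vowels."""
    options = []
    for ch in name.lower():
        options.append([ch] + DOUBLINGS.get(ch, []))   # keep the single vowel as one option
    return sorted({"".join(parts).capitalize() for parts in product(*options)})

print(lengthening_candidates("goto"))   # 'Gotoh' with the lengthening h removed
# ['Gooto', 'Gootoo', 'Gootou', 'Goto', 'Gotoo', 'Gotou', 'Gouto', 'Goutoo', 'Goutou']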
If the input metadata contains a Latin and kanji representation of the author's name, we will try to find a match for these. Names in kanji usually do not have any separation mark, so we must distinguish given and family name by taking advantage of the ENAMDICT dictionary and checking the possible name combinations. Processing author names without kanji representation is okay but a missing Latin representation becomes a problem when it comes to actually integrating the publication into the DBLP data set because all DBLP data are supposed to have a Latin representation. The solution is a search for name candidates (we will discuss this in more detail in section UID121 ).
We cannot be sure that our name matching for Latin and kanji names always succeeds. Therefore, we add some status information to our Japanese name to get a chance to evaluate the outcome of the program. Possible status types are:
The status “ok” means that given and family name have successfully been found in the name dictionary and (if available) the kanji names have successfully been assigned to their corresponding name in Latin characters.
An undefined status usually means that the Latin name is missing; a missing Latin name leads to a name status that is never changed. In these cases, the name in kanji usually exists anyway.
This is the status type for an abbreviated name like “T. Nakamura”.
If this status occurs, the Latin name could not be found in the name dictionary.
If a kanji name has not been found in the name dictionary or could not be assigned to the Latin name, this status will occur.
As the name suggests, this status means that the data quality of the publication metadata source is most likely bad. Our tool can handle some of these cases well by normalizing the name.
We could have stumbled upon a name anomaly when we see this status type. During implementation this status was narrowed down to a possible name anomaly for abbreviated names.
This status indicates a critical name anomaly. This is the only case in which the tool cannot even give a recommendation for given and family name. The output is the full name of the input data for both given and family name.
In chapter SECREF5 we discussed synonyms and homonyms. With the strategies from above we can deal with synonyms pretty well. Yet, homonyms cannot be recognized this way and are not covered at all by our tool.
Import Into the DBLP Data Set
To be able to import the harvested data into the DBLP, we still need to make the existing publication data processable in an appropriate way for our program, construct a coauthor table for these data, compare publications from the Digital Library of the IPSJ with those available in the DBLP project and provide the new publication metadata for the DBLP adequately.
It is important to convert the DBLP file dblp.xml to a relational database to gain easier and more efficient access to the data while running our program. We are mainly interested in the basic publication metadata. So we will skip some non-publication records of the DBLP like www elements. Our publication database table shall contain columns for an ID, the authors, title, publication year, journal title, journal pages and the volume. Whenever we come across the beginning of a publication type element (article, inproceedings, proceedings, book, incollection, phdthesis, mastersthesis) during parsing, we reinitialize the variables which store this metadata for the table columns. When we encounter the corresponding XML end tag of the publication we add an SQL INSERT command to a batch of commands. This batch is regularly executed after processing a certain amount of publications. The regular execution of batches allows a better performance than sending single INSERT commands to the database server. There are some recommendations in the DBLP FAQ BIBREF20 for parsing the dblp.xml file. We use the Apache Xerces parser instead of the standard Java SAX parser and need to increase the allocatable heap space for our parser.
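A compact sketch of this conversion is given below. It is not the tool itself (which uses Java, Xerces and MySQL); instead it uses Python's built-in SAX parser and SQLite to illustrate the reinitialize-on-start, insert-on-end and batching logic. The table layout, batch size and file names are our own choices, and the element set follows the standard DBLP record types as listed above.

import sqlite3
import xml.sax

PUBLICATION_TAGS = {"article", "inproceedings", "proceedings", "book",
                    "incollection", "phdthesis", "mastersthesis"}

class DblpHandler(xml.sax.ContentHandler):
    def __init__(self, cursor, batch_size=500):
        super().__init__()
        self.cur, self.batch, self.batch_size = cursor, [], batch_size
        self.fields, self.text = {"authors": []}, ""

    def startElement(self, name, attrs):
        if name in PUBLICATION_TAGS:
            self.fields = {"authors": []}          # reinitialize the column variables
        self.text = ""

    def characters(self, content):
        self.text += content

    def endElement(self, name):
        if name == "author":
            self.fields["authors"].append(self.text.strip())
        elif name in {"title", "year", "journal", "pages", "volume"}:
            self.fields[name] = self.text.strip()
        elif name in PUBLICATION_TAGS:
            row = ("; ".join(self.fields["authors"]), self.fields.get("title", ""),
                   self.fields.get("year", ""), self.fields.get("journal", ""),
                   self.fields.get("pages", ""), self.fields.get("volume", ""))
            self.batch.append(row)
            if len(self.batch) >= self.batch_size:   # regular batch execution for performance
                self.cur.executemany("INSERT INTO dblp VALUES (NULL,?,?,?,?,?,?)", self.batch)
                self.batch.clear()

conn = sqlite3.connect("dblp.sqlite")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS dblp (id INTEGER PRIMARY KEY, authors TEXT, "
            "title TEXT, year TEXT, journal TEXT, pages TEXT, volume TEXT)")
handler = DblpHandler(cur)
xml.sax.parse("dblp.xml", handler)   # the real file needs dblp.dtd for its character entities
cur.executemany("INSERT INTO dblp VALUES (NULL,?,?,?,?,?,?)", handler.batch)
conn.commit()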
While parsing the DBLP file we can construct a table with coauthor relationships along with the DBLP publication table. This coauthor table stores two author names and a publication ID. The ID shows which publication has been written together by the authors and matches the ID in the DBLP publication table. New coauthor relationships will only be inserted if there are at least two authors mentioned in the metadata. If the metadata mentions more than two authors, every possible pair of authors will be inserted into the database.
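The pairing logic can be written in a few lines; the sketch below (with made-up author names) simply enumerates every unordered pair of authors of a publication.

from itertools import combinations

def coauthor_rows(publication_id, authors):
    """One row per unordered author pair; nothing is produced for single-author papers."""
    return [(a, b, publication_id) for a, b in combinations(authors, 2)] if len(authors) >= 2 else []

print(coauthor_rows(42, ["Author A", "Author B", "Author C"]))
# [('Author A', 'Author B', 42), ('Author A', 'Author C', 42), ('Author B', 'Author C', 42)]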
As already explained in section SECREF39 , we access the OAI-PMH repository by the repository name and the metadata format prefix to get a list of publication metadata entries. The specification of OAI-PMH 2.0 BIBREF17 describes a possibility to retrieve a list of all metadata formats which a Data Provider has to offer. The HTTP request
http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh&verb=ListMetadataFormats
informs us that there are two metadata formats called oai_dc and junii2. oai_dc is the standard Dublin Core format all Data Providers provide, also traceable in the protocol specification. The “Implementation Guidelines for the Open Archives Initiative Protocol for Metadata Harvesting” BIBREF37 classify the metadata format oai_dc as mandatory. The name junii2 suggests that it is a self-developed format of the National Institute of Informatics (in Tokyo). Comparing these two in IPSJ DL, we notice that junii2 provides a more accurate description of the data, for example regarding additional XML attributes telling us whether the element value is English or Japanese. This additional information is helpful when we process the data in a later step and is missing in the oai_dc representation of the IPSJ server's data. So we will take the metadata prefix junii2 as the starting point for harvesting the server's metadata. Figure FIGREF102 shows a corresponding metadata example (also compare figure FIGREF46 ).
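For completeness, a small Python sketch of the harvesting requests is shown below; it follows the OAI-PMH 2.0 protocol (ListRecords plus resumption tokens) against the base URL quoted above, but it is not the tool's actual harvester.

import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://ipsj.ixsq.nii.ac.jp/ej/?action=repository_oaipmh"
OAI = "{http://www.openarchives.org/OAI/2.0/}"

def list_records(metadata_prefix="junii2"):
    """Iterate over all records of the Data Provider, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = BASE + "&" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        for record in tree.iter(OAI + "record"):
            yield record
        token = tree.find(".//" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# for record in list_records():
#     process(record)   # e.g. extract the junii2 metadata as described below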
The harvesting includes the following steps:
we load the DBLP publication, coauthor relationship and the ENAMDICT data into the RAM
we access the IPSJ server to get publication metadata
we parse the accessed XML metadata (following the considerations from section SECREF85 ) and store the needed publication data temporarily in the RAM.
we add the parsed publication to an SQL command batch to insert the metadata into a relational database (the batch is regularly executed)
we create a BHT file for the parsed publication
at the end we go into all directories with BHT files and concatenate them to one bigger BHT file
During the implementation and testing, some exceptional incidents occurred. We try to cover them besides the expected difficulties like Personal Name Matching and transcriptions. For example, we get “NobukazuYOSHIOKA” as a full input name. Algorithm UID99 shows a way to handle these unusual input data. Japanese sometimes write their family names in upper case letters to distinguish given and family name.

Input: the full input name; Output: the list of name representations for a Japanese person
function split(text, pattern): splits the text at the regular expression pattern (the matched text is not part of the result); function normalize(name): normalizes a personal name

if the input name matches the regular expression [A-Z][a-z]{1,}[A-Z]{3,} then
    split off the trailing upper-case run as the family name and keep the remainder as the given name
    normalize the personal name
    set the name status to BAD_DATA_QUALITY_IN_SOURCE
    add a new PersonName with the separated given and family name (a new name for the person is found and added)
end if

Algorithm: Categorizing names like “NobukazuYOSHIOKA”
Another observation during testing the program and checking the data is the following. Searching the Japanese given name “Shin'ichi” in the DBLP we notice that there is no uniform way to store certain names in the database. We find “Shin'ichi Aihara” but also “Shin-ichi Adachi” along with other results indicating the same phenomenon. So we see the apostrophe and the hyphen are used equally as syllable separators (we discussed the syllable separation in chapter SECREF14 ). Comparing the author “Shinichi Horiden” from the IPSJ data set and the one from the DBLP data set we can assume they are the same person because they have common coauthors (e.g. Kenji Taguchi and Kiyoshi Itoh) in both databases. The IPSJ data set tells us that the name written in kanji is 本位田真一. We are interested in the part 真一 (Shin'ichi) because we get to know that the separator symbol is sometimes missing. The kanji indicates the syllables shi-n-i-chi, in particular n followed by i rather than the single syllable ni. We would expect an additional separator symbol for a clear (nonambiguous) transcription; but obviously, it has been dropped in this case. A separator symbol can also be found when some double vowels occur. For example, we find “Toru Moto'oka” (元岡達) instead of “Toru Motooka”. This makes it easier to identify the reading of a single kanji (元 moto, 岡 oka, 達 toru). When a separator symbol is needed for a clear transcription, an apostrophe is used as separator symbol in ENAMDICT. While ENAMDICT always uses an apostrophe as separator symbol, DBLP and IPSJ DL use an apostrophe, a hyphen, or no separator symbol at all. We must consider these differences in the data sources for a successful import. For an easier name matching between names in the ENAMDICT and IPSJ DL data sets we can add names containing an apostrophe twice to the relational database when we parse the ENAMDICT file: once as they are and once without apostrophes.
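A small sketch of this normalization is given below; the function name is ours, and the variants simply cover apostrophe, hyphen and no separator at all.

def separator_variants(latin_name):
    """Spellings under which a transcribed name should be stored or looked up."""
    base = latin_name.replace("-", "'")           # treat hyphen and apostrophe alike
    return sorted({base, base.replace("'", "")})  # with and without the separator symbol

print(separator_variants("Shin-ichi"))   # ["Shin'ichi", 'Shinichi']
print(separator_variants("Shinichi"))    # ['Shinichi']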
Our tool has a statistics class to get an overview over the parsed input data and the quality of the output data. We will have a look at these statistics created after the harvest. There are 81597 records with publication metadata and 8562 records which are marked as INLINEFORM0 in the parsed data. Figure FIGREF114 shows a visualization in pie chart form.
The publication types are declared as “Technical Report”, “Conference Paper”, “Journal Article”, “Departmental Bulletin Paper” or “Article” (compare the table TABREF115 and figure FIGREF116 ).
The statistics also reveal that 74971 publications are published in Japanese, only 4456 in English (compare the pie chart in figure FIGREF117 ).
Our tool detects 1325 publications which are already included in DBLP. A publication is considered found in both databases if the title is the same and at least one author is the same.
The most interesting statistics for our work are these about the evaluation of the quality of author name assignments (compare the bar chart in figure FIGREF119 ):
Fortunately, 180221 of 231162 author names could be matched successfully. There are many reasons for the remaining uncovered cases. 9073 Latin names could not be found in the name dictionary ENAMDICT and 14827 name matchings between the names' Latin and kanji representations did not succeed. These names might be missing at all in the dictionary, delivered in a very unusual format that the tool does not cover, or might not be Japanese or human names at all. Of course, Japanese computer scientists sometimes also cooperate with foreign colleagues but our tool expects Japanese names and is optimized for them. Both IPSJ DL and ENAMDICT provide katakana representations for some Western names. However, katakana representations for Western names are irrelevant for projects like DBLP. But for instance, Chinese names in Chinese characters are relevant. Understandably, our tool does not support any special Personal Name Matching for Chinese names yet because our work is focused on Japanese names. The tool does not take account of the unclassified names of ENAMDICT by default. We can increase the general success rate of the Name Matching process by enabling the inclusion of unclassified names in the configuration file but the quality of the Name Matching process will decrease because the correct differentiation between given and family name cannot be guaranteed anymore. An unclassified name may substitute a given or a family name.
There are 1203 entries that were qualified as “bad data quality in publication metadata source”. They might be handled alright but they are particularly marked to indicate that these cases should also be reviewed manually before any import action is performed.
The numbers of abbreviated names, possible name anomalies and name anomalies are very low. While processing author names which will be later qualified as “possible name anomaly”, the tool cannot decide whether the assignment has been correct or the name is an anomaly. “Name anomalies” are critical anomalies that could not be categorized into any other status.
There could be a few uncovered flaws, for example HTML or code in titles. We must be aware of those when we do the actual import into the DBLP data set.
We will discuss the creation of BHT files and important extensions for the BHT format that fit the requirements of Japanese papers well, based on our knowledge from section SECREF49 . As mentioned, the header dictates ISO-8859-1 (Latin-1) as encoding of the file dblp.xml. Ley's work BIBREF19 reveals that we can use XML/HTML entities to solve this problem. Authors have person records in the DBLP providing additional information. For example, we can find the following entry for Atsuyuki Morishima (森嶋厚行) in the XML file:
<www mdate="2008-02-20" key="homepages/m/AtsuyukiMorishima">
<author>Atsuyuki Morishima</author>
<title>Home Page</title>
<url>http://www.kc.tsukuba.ac.jp/~mori/index.html</url>
<note>森嶋厚行</note>
</www>
We must extend the BHT format to fulfill the requirements and add extra metadata for authors, title and relevant process information. The author talked to members of the DBLP team personally and got the permission to extend the original BHT format to enable us to adapt the format to Japanese papers. Our additions are well formed XML elements. We must substitute all non-ASCII characters by escape characters (XML entities) to ensure the compatibility for DBLP. The additional elements are:
Every author that has a kanji representation in its metadata gets an originalname element:
<originalname latin="Shinsuke Mori">森,信介
</originalname>
If available, the Latin representation is added as an attribute latin to avoid confusion on assigning the extra information to the right author later on. The element content has a fixed structure. The family name comes first, followed by a comma and the given name.
Every author gets a status information that evaluates the author name assignment. It is displayed by a status element:
<status name="Shinsuke Mori">ok</status>
The connected author is added as an attribute name.
If there is no Latin representation of the name of an author, we will add Latin name candidates to the BHT file:
<namecandidates kanji="菅谷正弘">Shougu Sugatani, Seihiro Sugatani, Tadahiro Sugatani, Masahiro Sugatani, Shougu Suganoya, Seihiro Suganoya, Tadahiro Suganoya, Masahiro Suganoya, Shougu Sugaya, Seihiro Sugaya, Tadahiro Sugaya, Masahiro Sugaya, Shougu Sugetani, Seihiro Sugetani, Tadahiro Sugetani, Masahiro Sugetani, Shougu Sugenoya, Seihiro Sugenoya, Tadahiro Sugenoya, Masahiro Sugenoya</namecandidates>
The connected kanji representation is added as an attribute kanji in the namecandidates element. We seek the kanji in ENAMDICT and output all possible name combinations in a comma separated list.
If the original language of the title is Japanese, we will add this title to the BHT file:
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
The XML element originaltitle has the attributes lang (for the paper language) and type (for the publication type).
The tool searches the authors in DBLP and tries to find additional common coauthors in DBLP. If at least two of the main authors of the paper also worked with a certain other person (that is retrieved from DBLP), this person is added to the comma separated list. The Personal Name Matching of author names uses a combination of Levenshtein Distance and Jaccard Similarity Coefficient here.
<commoncoauthors>Masato Mimura</commoncoauthors>
If the tool finds the paper in DBLP, we also add the DBLP key. Records, such as elements with publication metadata, have a unique key in DBLP.
<dblpkey>conf/iscas/HiratsukaGI06</dblpkey>
An example of a BHT file in SPF can be found in the appendix in section SECREF170 (also compare with the original BHT format in section SECREF168 ). After we have finished parsing all Japanese papers, we concatenate the BHT files in SPF that belong together to one bigger BHT file all.bht. Publications, respectively BHT files, that belong together are recognizable by the directory structure. If they belong together, they will be in the same directory. We must simply go through the BHT root directory recursively.
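The concatenation step can be sketched in a few lines of Python (the all.bht file name is the one mentioned in the tool's usage description later on):

import os

def concatenate_bht(root):
    """Write the concatenation of all .bht files of each directory below root to all.bht."""
    for dirpath, _dirnames, filenames in os.walk(root):
        parts = sorted(f for f in filenames if f.endswith(".bht") and f != "all.bht")
        if not parts:
            continue
        with open(os.path.join(dirpath, "all.bht"), "w", encoding="utf-8") as out:
            for name in parts:
                with open(os.path.join(dirpath, name), encoding="utf-8") as src:
                    out.write(src.read())
                    out.write("\n")

# concatenate_bht("./bht")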
Conclusion and Future Work
“Creativity is seeing what everyone else sees,
but then thinking a new thought that has never been
thought before and expressing it somehow.”
(Neil deGrasse Tyson)
The integration of Japanese papers into the DBLP data set has revealed some major problems. The nonambiguous representation of Japanese names (and paper titles, etc.) is done by kanji while DBLP's standard encoding is Latin-1 and Japanese characters are only optionally added to the publications' metadata. This leads to the need of transcribing the Japanese names which in turn also evokes new problems because there is not the transcription but rather a lot of transcription possibilities.
In addition to that, we must ensure a certain data quality even if one data source sometimes lacks this quality. Due to name matching with a name dictionary, format checking and conversions (if necessary), we can actually correct some flaws or at least assimilate the data into our project.
The problem of synonyms is dealt with by transcription manipulations, homonyms could not be addressed in this work. Reuther ( BIBREF26 , p. 159-164) describes an idea to handle homonyms. We could extend our tool by a Coauthor Index as in DBLP for the publications of the IPSJ DL. The idea is based on the assumption that scientists often publish their papers with the same people as coauthors. If the coauthors match a certain coauthor group, the author is considered the same. -1 If the author's coauthors are not members of the expected coauthor groups, the author could be a different person than we expected and we might have a homonym here.
The problem of synonyms is dealt with by transcription manipulations; homonyms could not be addressed in this work. Reuther ( BIBREF26 , p. 159-164) describes an idea to handle homonyms. We could extend our tool by a Coauthor Index as in DBLP for the publications of the IPSJ DL. The idea is based on the assumption that scientists often publish their papers with the same people as coauthors. If the coauthors match a certain coauthor group, the author is considered the same. If the author's coauthors are not members of the expected coauthor groups, the author could be a different person than we expected and we might have a homonym here.
The developed tool is usable and provides, besides relational databases, customized Bibliography Hypertext (BHT) files as output data. Customizations were necessary to optimize the BHT files for Japanese papers and to include additional important metadata. Desired but missing metadata like contributors or a short description of the content of a paper can be added without much effort because the relational database already contains these data; only the source code of Kankoukanyuu (our tool) needs to be extended by a few lines.
Though the output data have been created with care regarding correctness and well-formedness, it is not recommended to import the newly created BHT files unchecked. The DBLP team should check the files so as not to compromise the data quality of DBLP. There might still be undesired format anomalies in the BHT files. The DBLP team also needs to adapt their import system to the extended BHT format developed in this work for the actual import into DBLP.
Titles might be in uppercase letters. This could be improved but we have to pay attention because a primitive solution will not work well. For example, we have to be aware of the popular usage of acronyms in computer science. So some words in uppercase letters can be correct.
Our tool is optimized for the Digital Library of the IPSJ and their OAI-PMH metadata prefix junii2. It can easily be adapted to support the similar and commonly used metadata prefix oai_dc. So the tool would be able to handle other publication metadata sources that support OAI-PMH.
The algorithm for detecting common papers in DBLP and IPSJ DL may be modified to achieve an even better comparison between the databases and detect more common papers.
It would be useful to include a Chinese name dictionary in the future and extend the name search of our tool to cover Chinese names as well.
One improvement in the future could be storing the most common names (for example, the 100 most common given and family names) in a separate data structure in the RAM. This way we can improve the runtime by often skipping the search in the huge name data.
We can still increase the success rate of the Name Matching process too. One way is swapping kanji. A typical Japanese name has two kanji for the given name and two kanji for the family name. The family name shall precede the given name. However, this principle could be violated by the publication source. If the Name Matching is not successful, we may swap the first two for the last two characters and try to find a match again.
A second advancement is the additional support of a special Latin character set that is used by Japanese: the full-width Latin letters. For instance, we can find the name “Kai” written with full-width characters instead of the ordinary “Kai” in the metadata of IPSJ DL. The two spellings look very similar and both represent simple Latin letters, but their character codes are different, so programs handle them differently. A simple (but yet unimplemented) substitution function can cover these rare and unusual cases.
Another possibility to take advantage of this work is extracting the author names in kanji from the relational database. So the DBLP team can insert author metadata for already existing authors in DBLP.
We can also have a look at what phases of the Personal Name Matching process have been implemented in this work and to which degree. There are actually different types of Personal Name Matching included in our tool:
The “Standardization” is accomplished by a normalization of the Latin input names at the beginning of the process. Kanji input names get trimmed by removing all whitespace. We do not have a “Blocking” phase as it is proposed by Reuther BIBREF26 . When searching a match between transcribed Japanese names with their original kanji representation we even go a contrary way and increase the number of comparisons by adding reasonable other transcriptions to the matching process. Due to efficient data structures and a comparatively small amount of Japanese papers (less than 100000), our tool has an acceptable runtime (the retrieval of the publication metadata from the IPSJ server takes much longer than processing it). In addition, the search for common coauthors will only be done if the author exists in DBLP. The phases “Analysis” and “Decision Model” are entangled in our tool. If we find a match between a (normalized or modified) input name and a name in the name dictionary, we will immediately consider them a successful match and continue parsing the metadata. When we try to find coauthors in DBLP, we take advantage of the combined Jaccard Levenshtein Distance as explained in chapter SECREF5 .
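How exactly the two measures are combined is not specified in this excerpt; the sketch below is therefore only one plausible combination, using a token-level Jaccard coefficient and difflib's character ratio as a stand-in for a normalized Levenshtein distance, with an arbitrary equal weighting.

from difflib import SequenceMatcher

def jaccard(tokens_a, tokens_b):
    a, b = set(tokens_a), set(tokens_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def name_similarity(name_a, name_b):
    """Combined similarity of two author names in [0, 1]."""
    token_sim = jaccard(name_a.lower().split(), name_b.lower().split())
    char_sim = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    return 0.5 * token_sim + 0.5 * char_sim   # equal weighting is an arbitrary choice

print(round(name_similarity("Hitoshi Gotoh", "Hitoshi Gotou"), 2))   # similar names score high
print(round(name_similarity("Hitoshi Gotoh", "Masato Mimura"), 2))   # unrelated names score low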
Instead of checking the complete output data in the “Performance Measurement” phase, we could only take control samples while implementing, debugging, testing and improving our program. A broad manual check of approximately 90000 publications is not possible within the scope of a diploma thesis. The control samples had the expected and desired content but we cannot guarantee the correctness of the output. Under the assumption that ENAMDICT's entries are correct, the predicted Precision should be about INLINEFORM0 because the tool probably does not produce many false positives. But we cannot say anything about the Recall because ENAMDICT does not cover all names that occur in IPSJ DL. All exceptions resulting from the limits of a name dictionary and a bad data quality are supposed to be handled by the status for author name assignments (described in section UID99 ). This gives us the chance to manually handle the noted exceptions afterwards.
All in all, this work is a first approach for an integration of Japanese papers into the DBLP data set and provides a not yet perfect but usable tool for this task. Some major obstacles are overcome.
About the Tool
The developed tool that is also part of this project is named Kankoukanyuu (刊行加入). Kankou means publication, kanyuu means admission. The whole name indicates the ability to import publications. The tool also allows the assimilation of imported publications, of course. The usable functionalities are:
Parsing the DBLP file INLINEFORM0 and converting it to a MySQL database
Converting an ENAMDICT name dictionary file to a MySQL database
Harvesting the IPSJ server, processing the publication metadata and storing it in a MySQL database
Making the harvested publications ready for an import into the DBLP data set by making BHT files
Usage
The tool has been developed and tested on a Linux system with Intel Core 2 Quad and 8 GB RAM in the local computer pool. It has to be executed by command line like this:
java -Xmx5400M -jar kankoukanyuu.jar
The parameter -Xmx5400M allows our program to allocate more than 5 GB RAM and store all necessary data in the RAM for an unproblematic execution.
Possible command line arguments are:
Parse dblp.xml and fill database tables
Convert ENAMDICT dictionary file to a relational database
Harvest the IPSJ server, fill OAI-PMH data into databases and create BHT files (in SPF) - requires DBLP and ENAMDICT database tables from steps above
Concatenate BHT files in Single Publication Format to one bigger file (file all.bht will be created in every folder with BHT files) - requires BHT files in SPF from step above
Do all of the above
Show help text about usage of the tool
The configuration file allows us to change the following parameters:
Database related parameters (in the [db] section): URL (url), database name (db), user name (user) and password (password)
ENAMDICT related parameter (in the [enamdict] section): location of the ENAMDICT file (file)
ENAMDICT database related parameters (in the [japnamesdb] section): database table name (table), decision whether to use unclassified names (useunclassifiednames)
DBLP related parameter (in the [dblp] section): location of dblp.xml (xmlfile)
DBLP database related parameters (in the [dblpdb] section): database table name for publications (dblptable), database table name for coauthor relationships (authorscounttable)
OAI-PMH database (contains output after harvest and parsing process) related parameters (in the [oaidb] section): publication table (publicationtable), authors table (authorstable), titles table (titlestable), contributors table (contributorstable), descriptions table (descriptionstable)
Harvester related parameters (in the [harvester] section): location for storing the harvest (filespath), start ID for the harvester (minid), end ID for the harvester (maxid), decision whether to use record lists (uselistrecords)
BHT export related parameters (in the [bhtexport] section): location for BHT output files (path), decision whether to compute and show common coauthors (showcommoncoauthors)
Log related parameter (in the [log] section): location of log files (path)
A configuration example can be found in the appendix section SECREF172 .
The system must support the Japanese language (meaning Japanese characters) to ensure a successful run.
Kankoukanyuu does not use any Linux-only commands but has not been tested on Microsoft Windows yet.
Used Technologies
The tool itself has been written in Java, using the OpenJDK 6. The handling of databases is done by MySQL 5 and JDBC is used to provide MySQL functionalities within Java.
External libraries are the Apache Xerces parser and the MySQL Connector/J. The Fat Jar Eclipse Plug-In is used to deploy the complete project into one executable Java JAR file. The execution of Kankoukanyuu becomes more user-friendly this way because external libraries are already included and class paths for external libraries do not need to be specified anymore.
Runtime
Measurements indicate the following approximate runtimes of Kankoukanyuu:
We can make some observations. During the harvest, only ca. 30 minutes were spent on processing the harvested data, the rest is needed to retrieve the data from the Japanese server. Depending on whether the local file system or network file system was used, the runtime for the concatenation differs immensely.
BHT Example Proposed By DBLP
Computer Languages, Systems & Structures (journals/cl)
<h2>Volume 34, Numbers 2-3, July-October 2008</h2>
Best Papers 2006 International Smalltalk Conference
<ul>
<li>Wolfgang De Meuter:
Preface.
45
<ee>http://dx.doi.org/10.1016/j.cl.2007.07.001</ee>
<li>David Röthlisberger, Marcus Denker, Éric Tanter:
Unanticipated partial behavioral reflection: Adapting applications at runtime.
46-65
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.001</ee>
<li>Johan Brichau, Andy Kellens, Kris Gybels, Kim Mens, Robert Hirschfeld, Theo D'Hondt:
Application-specific models and pointcuts using a logic metalanguage.
66-82
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.004</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Oscar Nierstrasz, Roel Wuyts:
Stateful traits and their formalization.
83-108
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.003</ee>
<li>Alexandre Bergel, Stéphane Ducasse, Colin Putney, Roel Wuyts:
Creating sophisticated development tools with OmniBrowser.
109-129
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.005</ee>
<li>Luc Fabresse, Christophe Dony, Marianne Huchard:
Foundations of a simple and unified component-oriented language.
130-149
<ee>http://dx.doi.org/10.1016/j.cl.2007.05.002</ee>
</ul>
This is a BHT example proposed by the DBLP team in the DBLP FAQ BIBREF20 .
BHT Example File Created By Kankoukanyuu
<h2>Volume 52, Number 10, October 2011</h2>
<ul>
<li>Shinsuke Mori, Graham Neubig, Yuuta Tsuboi:
A Pointwise Approach to Automatic Word Segmentation.
2944-2952
<ee>http://id.nii.ac.jp/1001/00078161/</ee>
<originalname latin="Shinsuke Mori">森,信介</originalname>
<status name="Shinsuke Mori">ok</status>
<originalname latin="Graham Neubig">ニュービッググラム,</originalname>
<status name="Graham Neubig">no kanji matching found</status>
<originalname latin="Yuuta Tsuboi">坪井,祐太</originalname>
<status name="Yuuta Tsuboi">ok</status>
<originaltitle lang="ja" type="Journal Article">点予測による自動単語分割</originaltitle>
<commoncoauthors>Masato Mimura</commoncoauthors>
</ul>
This is an output example of a BHT file in Single Publication Format (before the concatenation step), created by our tool.
Excerpt From dblp.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
<article mdate="2002-01-03" key="persons/Codd71a">
<author>E. F. Codd</author>
<title>Further Normalization of the Data Base Relational Model.</title>
<journal>IBM Research Report, San Jose, California</journal>
<volume>RJ909</volume>
<month>August</month>
<year>1971</year>
<cdrom>ibmTR/rj909.pdf</cdrom>
<ee>db/labs/ibm/RJ909.html</ee>
</article>
<article mdate="2002-01-03" key="persons/Hall74">
<author>Patrick A. V. Hall</author>
<title>Common Subexpression Identification in General Algebraic Systems.</title>
<journal>Technical Rep. UKSC 0060, IBM United Kingdom Scientific Centre</journal>
<month>November</month>
<year>1974</year>
</article>
<article mdate="2002-01-03" key="persons/Tresch96">
<author>Markus Tresch</author>
<title>Principles of Distributed Object Database Languages.</title>
<journal>technical Report 248, ETH Zürich, Dept. of Computer Science</journal>
<month>July</month>
<year>1996</year>
</article>
...
Configuration File of Our Tool
[db]
url=myserver
db=mydbname
user=myusername
password=mypassword
[japnamesdb]
table=japnames
useunclassifiednames=false
[dblpdb]
authorscounttable=dblpauthors
dblptable=dblp
[oaidb]
publicationtable=oai_publications
authorstable=oai_authors
titlestable=oai_titles
contributorstable=oai_contributors
descriptionstable=oai_descriptions
[enamdict]
file=./enamdict
[harvester]
filespath=./files-harvester
minid=1
maxid=100000
uselistrecords=true
[dblp]
xmlfile=/dblp/dblp.xml
[bhtexport]
path=./bht
showcommoncoauthors=true
[log]
path=./log | No |
8d793bda51a53a4605c1c33e7fd20ba35581a518 | 8d793bda51a53a4605c1c33e7fd20ba35581a518_0 | Q: what bottlenecks were identified?
Text: Introduction
There are several commercial menu based ASR systems available around the world for a significant number of languages, and speech solutions based on these ASR systems are being used with good success in the Western part of the globe BIBREF0 , BIBREF1 , BIBREF2 , BIBREF3 . Typically, a menu based ASR system restricts the user to speak from a pre-defined closed set of words for enabling a transaction. Before commercial deployment of a speech solution it is imperative to have a quantitative measure of the performance of the speech solution, which is primarily based on the speech recognition accuracy of the speech engine used. Generally, the recognition performance of any speech recognition based solution is quantitatively evaluated by putting it to actual use by the people who are the intended users and then analyzing the logs to identify successful and unsuccessful transactions. This evaluation is then used to identify any further improvements in the speech recognition based solution to better the overall transaction completion rates. This process of evaluation is both time consuming and expensive. For evaluation one needs to identify a set of users, identify the set of actual usage situations, and perform the test. It is also important that the set of users are able to use the system with ease, meaning that even in the test conditions the performance of the system should be good. Since this cannot usually be guaranteed, the need to keep the user experience good makes it necessary to employ a Wizard of Oz (WoZ) approach. Typically this requires a human agent in the loop during the actual speech transaction: the human agent corrects any mis-recognition by actually listening to the conversation between the human user and the machine, without the user knowing that there is a human agent in the loop. The use of WoZ is another expense in testing a speech solution. All this makes testing a speech solution an expensive and time consuming procedure.
In this paper, we describe a method to evaluate the performance of a speech solution without actual people using the system as is usually done. We then show how this method was adopted to evaluate a speech recognition based solution as a case study. This is the main contribution of the paper. The rest of the paper is organized as follows. The method for evaluation without testing is described in Section SECREF2 . In Section SECREF3 we present a case study and conclude in Section SECREF4 .
Evaluation without Testing
Fig. FIGREF1 shows the schematic of a typical menu based speech solution having 3 nodes. At each node there is a set of words that the user is expected to speak and the system is supposed to recognize. In this particular schematic, at the entry node the user can speak any of $n$ words, namely $w_1$ or $w_2$ or ... or $w_n$; $n$ is usually called the perplexity of the node in the speech literature. The larger the $n$, the higher the perplexity and the confusion, and hence the lower the recognition accuracy. In most commercial speech solutions the perplexity is kept very low, typically a couple of words. Once the word at the entry node has been recognized (say word $w_k$ has been recognized), the system moves on to the second node where the active list of words to be recognized could be one of $w^{\prime }_1$, $w^{\prime }_2$, ..., $w^{\prime }_m$ if the perplexity at the second node is $m$. This is carried on to the third node. A transaction is termed successful if and only if the recognition at each of the three nodes is correct. For example, typically in a banking speech solution the entry node could expect someone to speak among /credit card/, /savings account/, /current account/, /loan product/, /demat/, and /mutual fund transfer/, which has a perplexity of 6. Once a person speaks, say, /savings account/ and is recognized correctly by the system, at the second node it could be /account balance/ or /cheque/ or /last 5 transactions/ (perplexity 3) and at the third node (say, on recognition of /cheque/) it could be /new cheque book request/, /cheque status/, and /stop cheque request/ (perplexity 3). Though we will not dwell on this, it is important to note that an error in recognition at the entry node is more expensive than a recognition error at a lower node.
Based on the call flow and the domain, the system can have several nodes for completion of a transaction. Typical menu based speech solutions strive for 3 - 5 levels of nodes to remain usable. In any speech based solution (see Fig. FIGREF3 ) first the spoken utterance is hypothesized into a sequence of phonemes using the acoustic models. Since the phoneme recognition accuracy is low, instead of choosing one phoneme it identifies the l-best matching phonemes. This phone lattice is then matched with all the expected words (language model) at that node to find the best match. For a node with perplexity $N$ the constructed phoneme lattice of the spoken utterance is compared with the phoneme sequence representation of all the $N$ words (through the lexicon, which is one of the key components of a speech recognition system). The hypothesized phone lattice is declared one of the $N$ words depending on the closeness of the phoneme lattice to the phoneme representation of the $N$ words.
We hypothesize that we can estimate the performance of a menu based speech system by identifying the possible confusion among all the words that are active at a given node. If active words at a given node are phonetically similar, it becomes difficult for the speech recognition system to distinguish them, which in turn leads to recognition errors. We used the Levenshtein distance BIBREF4 , BIBREF5 , a well known measure, to analyze and identify the confusion among the active words at a given node. This analysis gives a list of all sets of words that have a high degree of confusability among them; this understanding can then be used to (a) restructure the set of active words at that node and/or (b) train the words that can be confused using a larger corpus of speech data. This equips the speech recognition engine to distinguish the confusable words better. Actual use of this analysis was carried out for a speech solution developed for an Indian Railway Inquiry System to identify bottlenecks in the system before its actual launch.
Case Study
A schematic of a speech based Railway Information system, developed for the Hindi language, is shown in Fig. FIGREF4 . The system enables the user to get information on five different services, namely, (a) Arrival of a given train at a given station, (b) Departure of a given train at a given station, (c) Ticket availability on a given date in a given train between two stations and class, (d) Fare in a given class in a given train between two stations, and (e) PNR status. At the first recognition node (node-1), there are one or more active words corresponding to each of these services. For example, for selecting the service Fare, the user can speak among /kiraya jankari/, /kiraya/, /fare/. Similarly, for selecting the service Ticket availability, the user can speak /upalabdhata jankari/ or /ticket availability/ or /upalabdhata/. Generally the perplexity at a node is greater than or equal to the number of words that need to be recognized at that node. In this manner each of the services could have multiple words or phrases that can mean the same thing and the speaker could utter any of these words to refer to that service. The number of possible different ways in which service $i$ can be called ($p_i$), summed over all the 5 services, gives the perplexity ($P$) at that node, namely, $P = \sum _{i=1}^{5} p_i$.
The speech recognition engine matches the phoneme lattice of the spoken utterance with all the $P$ words which are active. The active word (one among the $P$ words) with the highest likelihood score is the recognized word. In order to avoid low likelihood recognitions a threshold is set so that even the best likelihood word is returned only if the likelihood score is greater than the predefined threshold. Completion of a service requires recognitions at several nodes with a different perplexity at each node. Clearly, depending on the type of service that the user wants to use, the user has to go through a different number of recognition nodes. For example, to complete the Arrival service it is required to pass through 3 recognition nodes, namely (a) selection of a service, (b) selection of a train name and (c) selection of the railway station. While the perplexity (the words that are active) at the service selection node is fixed, the perplexity at the station selection node could depend on the selection of the train name at an earlier node. For example, if the selected train stops at 23 stations, then the perplexity at the station selection node will be 23.
For the confusability analysis at each node, we have used the Levenshtein distance BIBREF5 , or edit distance, as it is well known in the computer science literature. We found that the utterances /Sahi/ and /Galat/ have 100% recognition. The word Sahi is represented in the lexicon by the string of phonemes S AA HH I and the word Galat is represented by the phoneme sequence G L AX tT. We computed the edit distance between these two words Sahi and Galat and used that distance as the threshold that is able to differentiate any two words. So if the distance between any two active words at a given recognition node is lower than this threshold, then there is a greater chance that those two active words could get confused (one word could be recognized as the other that is within the threshold distance). There are ways in which such possible misrecognitions could be avoided. The easiest way is to make sure that these two words are not active together at a given recognition node.
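The check can be sketched as follows. The phoneme entries for Sahi and Galat are the ones quoted above; how the actual system tokenizes and compares its lexicon may differ, so this is only an illustration of the idea.

def levenshtein(a, b):
    """Edit distance between two token sequences (here: phoneme sequences)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def confusable_pairs(active_words, lexicon, threshold):
    """All pairs of active words whose phoneme edit distance falls below the threshold."""
    words = list(active_words)
    return [(words[i], words[j])
            for i in range(len(words)) for j in range(i + 1, len(words))
            if levenshtein(lexicon[words[i]], lexicon[words[j]]) < threshold]

lexicon = {"Sahi": ["S", "AA", "HH", "I"], "Galat": ["G", "L", "AX", "tT"]}
threshold = levenshtein(lexicon["Sahi"], lexicon["Galat"])   # 4 for these two entries
# confusable_pairs(words_active_at_a_node, full_lexicon, threshold) then flags risky word pairs.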
Table TABREF6 shows the list of active words at node 1 when the speech application was initially designed and Table TABREF7 shows the edit distance between all the active words at the service node given in Fig. FIGREF4 . The distance between the words Sahi and Galat was taken as the threshold. This threshold value was used to identify confusable active words. Clearly, as seen in the table, the distance between the word pairs (fare, pnr) and (pnr, prasthan) is very close to the threshold value. This causes a high possibility that /fare/ may get recognized as /pnr/ and vice-versa.
One can derive from the analysis of the active words that fare and pnr can not coexist as active words at the same node. The result of the analysis was to remove the active words fare and pnr at that node.
When the speech system was actually tested by giving speech samples, in 17 out of 20 instances /pnr/ was recognized as /fare/ and vice-versa. Similarly, in 19 out of 20 instances /pnr/ was misrecognized as /prasthan/ and vice versa. This confusion is expected, as can be seen from the edit distance analysis of the active words in Table TABREF7 . The modified active word list (removal of fare and pnr) increased the recognition accuracy at the service node (Fig. FIGREF4 ) by as much as 90%.
A similar analysis was carried out at other recognition nodes and the active word list was suitably modified to avoid possible confusion between active word pairs. This analysis and modification of the list of active words at a node resulted in a significant improvement in the transaction completion rate. We will present more experimental results in the final paper.
Conclusion
In this paper we proposed a methodology to identify words that could lead to confusion at any given node of a speech recognition based system. We used edit distance as the metric to identify the possible confusion between the active words. We showed that this metric can be used effectively to enhance the performance of a speech solution without actually putting it to a people test. There is a significant saving in terms of being able to identify recognition bottlenecks in a menu based speech solution through this analysis because it does not require actual people testing the system. This methodology was adopted to restructure the set of active words at each node for better speech recognition in an actual menu based speech recognition system that caters to the masses. | Confusion in recognizing the words that are active at a given node by a speech recognition solution developed for Indian Railway Inquiry System. |
8f838ec579f2609b01227da3d8c77860ac1b39d2 | 8f838ec579f2609b01227da3d8c77860ac1b39d2_0 | Q: What is grounded language understanding?
Text: Introduction
In recent years, neural network based models have become the workhorse of natural language understanding and generation. They empower industrial systems in machine translation BIBREF0 and text generation BIBREF1 , also showing state-of-the-art performance on numerous benchmarks including Recognizing Textual Entailment (RTE) BIBREF2 , Visual Question Answering (VQA) BIBREF3 , and Reading Comprehension BIBREF4 . Despite these successes, a growing body of literature suggests that these approaches do not generalize outside of the specific distributions on which they are trained, something that is necessary for a language understanding system to be widely deployed in the real world. Investigations on the three aforementioned tasks have shown that neural models easily latch onto statistical regularities which are omnipresent in existing datasets BIBREF5 , BIBREF6 , BIBREF7 and extremely hard to avoid in large scale data collection. Having learned such dataset-specific solutions, neural networks fail to make correct predictions for examples that are even slightly out of domain, yet are trivial for humans. These findings have been corroborated by a recent investigation on a synthetic instruction-following task BIBREF8 , in which seq2seq models BIBREF9 , BIBREF10 have shown little systematicity BIBREF11 in how they generalize, that is they do not learn general rules on how to compose words and fail spectacularly when for example asked to interpret “jump twice” after training on “jump”, “run twice” and “walk twice”.
An appealing direction to improve the generalization capabilities of neural models is to add modularity and structure to their design to make them structurally resemble the kind of rules they are supposed to learn BIBREF12 , BIBREF13 . For example, in the Neural Module Network paradigm (NMN, BIBREF12 ), a neural network is assembled from several neural modules, where each module is meant to perform a particular subtask of the input processing, much like a computer program composed of functions. The NMN approach is intuitively appealing but its widespread adoption has been hindered by the large amount of domain knowledge that is required to decide BIBREF12 or predict BIBREF14 , BIBREF15 how the modules should be created (parametrization) and how they should be connected (layout) based on a natural language utterance. Besides, their performance has often been matched by more traditional neural models, such as FiLM BIBREF16 , Relations Networks BIBREF17 , and MAC networks BIBREF18 . Lastly, generalization properties of NMNs, to the best of our knowledge, have not been rigorously studied prior to this work.
Here, we investigate the impact of explicit modularity and structure on systematic generalization of NMNs and contrast their generalization abilities to those of generic models. For this case study, we focus on the task of visual question answering (VQA), in particular its simplest binary form, when the answer is either “yes” or “no”. Such a binary VQA task can be seen as a fundamental task of language understanding, as it requires one to evaluate the truth value of the utterance with respect to the state of the world. Among many systematic generalization requirements that are desirable for a VQA model, we choose the following basic one: a good model should be able to reason about all possible object combinations despite being trained on a very small subset of them. We believe that this is a key prerequisite to using VQA models in the real world, because they should be robust at handling unlikely combinations of objects. We implement our generalization demands in the form of a new synthetic dataset, called Spatial Queries On Object Pairs ( $\operatorname{SQOOP}$ ), in which a model has to perform basic spatial relational reasoning about pairs of randomly scattered letters and digits in the image (e.g. answering the question “Is there a letter A left of a letter B?”). The main challenge in $\operatorname{SQOOP}$ is that models are evaluated on all possible object pairs, but trained on only a subset of them.
Our first finding is that NMNs do generalize better than other neural models when layout and parametrization are chosen appropriately. We then investigate which factors contribute to improved generalization performance and find that using a layout that matches the task (i.e. a tree layout, as opposed to a chain layout), is crucial for solving the hardest version of our dataset. Lastly, and perhaps most importantly, we experiment with existing methods for making NMNs more end-to-end by inducing the module layout BIBREF14 or learning module parametrization through soft-attention over the question BIBREF15 . Our experiments show that such end-to-end approaches often fail by not converging to tree layouts or by learning a blurred parameterization for modules, which results in poor generalization on the hardest version of our dataset. We believe that our findings challenge the intuition of researchers in the field and provide a foundation for improving systematic generalization of neural approaches to language understanding.
The $\operatorname{SQOOP}$ Dataset For Testing Systematic Generalization
We perform all experiments of this study on the $\operatorname{SQOOP}$ dataset. $\operatorname{SQOOP}$ is a minimalistic VQA task that is designed to test the model's ability to interpret unseen combinations of known relation and object words. Clearly, given known objects $\operatorname{X}$ , $\operatorname{Y}$ and a known relation $\operatorname{R}$ , a human can easily verify whether or not the objects $\operatorname{X}$ and $\operatorname{Y}$ are in relation $\operatorname{R}$ . Some instances of such queries are common in daily life (is there a cup on the table), some are extremely rare (is there a violin under the car), and some are unlikely but have similar, more likely counter-parts (is there grass on the frisbee vs is there a frisbee on the grass). Still, a person can easily answer these questions by understanding them as just the composition of the three separate concepts. Such compositional reasoning skills are clearly necessary for language understanding models, and $\operatorname{SQOOP}$ is explicitly designed to test for them.
Concretely speaking, $\operatorname{SQOOP}$ requires observing a 64 $\times $ 64 RGB image $\operatorname{x}$ and answering a yes-no question $q = \operatorname{X}\operatorname{R}\operatorname{Y}$ about whether objects $\operatorname{X}$ and $\operatorname{Y}$ are in a spatial relation $\operatorname{R}$. The questions are represented in a redundancy-free $\operatorname{X}$ $\operatorname{R}$ $\operatorname{Y}$ form; we did not aim to make the questions look like natural language. Each image contains 5 randomly chosen and randomly positioned objects. There are 36 objects: the Latin letters A-Z and digits 0-9, and there are 4 relations: left_of, right_of, above, and below. This results in $36 \cdot 35 \cdot 4 = 5040$ possible unique questions (we do not allow questions about identical objects). To make negative examples challenging, we ensure that both $\operatorname{X}$ and $\operatorname{Y}$ of a question are always present in the associated image and that there are always distractor objects $\operatorname{Y}^{\prime } \ne \operatorname{Y}$ and $\operatorname{X}^{\prime } \ne \operatorname{X}$ such that $\operatorname{X} \operatorname{R} \operatorname{Y}^{\prime }$ and $\operatorname{X}^{\prime } \operatorname{R} \operatorname{Y}$ are both true for the image. These extra precautions guarantee that answering a question requires the model to locate all possible $\operatorname{X}$ and $\operatorname{Y}$ objects and then check whether any pair of them are in the relation $\operatorname{R}$. Two $\operatorname{SQOOP}$ examples are shown in Figure 1 .
Our goal is to discover which models can correctly answer questions about all $36 \cdot 35$ possible object pairs in $\operatorname{SQOOP}$ after having been trained on only a subset. For this purpose we build training sets containing $36 \cdot 4 \cdot k$ unique questions by sampling $k$ different right-hand-side (RHS) objects $\operatorname{Y}_1, \operatorname{Y}_2, \ldots , \operatorname{Y}_k$ for each left-hand-side (LHS) object $\operatorname{X}$. We use this procedure instead of just uniformly sampling object pairs in order to ensure that each object appears in at least one training question, thereby keeping all versions of the dataset solvable. We will refer to $k$ as the #rhs/lhs parameter of the dataset. Our test set is composed of the remaining $36 \cdot 4 \cdot (35-k)$ questions. We generate training and test sets for #rhs/lhs values of 1, 2, 4, 8 and 18, as well as a control version of the dataset, #rhs/lhs=35, in which both the training and the test set contain all the questions (with different images). Note that lower #rhs/lhs versions are harder for generalization due to the presence of spurious correlations between LHS and RHS objects to which the models may adapt. In the extreme case of #rhs/lhs=1, a model may learn to predict the RHS object from the LHS one. In order to exclude a possible compounding factor of overfitting on the training images, all our training sets contain 1 million examples, so for a dataset with #rhs/lhs $=k$ we generate approximately $10^6 / (36 \cdot 4 \cdot k)$ different images per question. Pseudocode for generating $\operatorname{SQOOP}$ can be found in Appendix "$\operatorname{SQOOP}$ Pseudocode".
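As an illustration of this split construction, the following minimal Python sketch (ours, not from the paper; names and the seed are arbitrary) builds the train and test question sets for a given #rhs/lhs value $k$:

```python
import random

OBJECTS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + [str(d) for d in range(10)]  # 36 objects
RELATIONS = ["left_of", "right_of", "above", "below"]

def make_split(k, seed=0):
    """Sample k right-hand-side objects per left-hand-side object; the rest become test questions."""
    rng = random.Random(seed)
    train, test = [], []
    for x in OBJECTS:
        rhs = set(rng.sample([o for o in OBJECTS if o != x], k))
        for y in (o for o in OBJECTS if o != x):
            for r in RELATIONS:
                (train if y in rhs else test).append((x, r, y))
    return train, test

train_qs, test_qs = make_split(k=1)
print(len(train_qs), len(test_qs))  # 144 train and 4896 test questions for k=1
```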
Models
A great variety of VQA models have been recently proposed in the literature, among which we can distinguish two trends. Some of the recently proposed models, such as FiLM BIBREF16 and Relation Networks (RelNet, BIBREF17 ) are highly generic and do not require any task-specific knowledge to be applied on a new dataset. On the opposite end of the spectrum are modular and structured models, typically flavours of Neural Module Networks BIBREF12 , that require some knowledge about the task at hand to be instantiated. Here, we evaluate systematic generalization of several state-of-the-art models in both families. In all models, the image $\operatorname{x}$ is first fed through a CNN based network, that we refer to as the stem, to produce a feature-level 3D tensor $h_{\operatorname{x}}$ . This is passed through a model-specific computation conditioned on the question $q$ , to produce a joint representation $h_{q\operatorname{x}}$ . Lastly, this representation is fed into a fully-connected classifier network to produce logits for prediction. Therefore, the main difference between the models we consider is how the computation $h_{q\operatorname{x}} = model(h_{\operatorname{x}}, q)$ is performed.
Generic Models
We consider four generic models in this paper: CNN+LSTM, FiLM, Relation Network (RelNet), and Memory-Attention-Control (MAC) network. For CNN+LSTM, FiLM, and RelNet models, the question $q$ is first encoded into a fixed-size representation $h_{q}$ using a unidirectional LSTM network.
CNN+LSTM flattens the 3D tensor $h_{\operatorname{x}}$ to a vector and concatenates it with $h_{q}$ to produce $h_{q\operatorname{x}}$ .
$$h_{q\operatorname{x}} = [vec(h_{\operatorname{x}}) ; h_{q}]$$ (Eq. 3)
RelNet BIBREF17 uses a network $g$ which is applied to all pairs of feature columns of $h_\operatorname{x}$ concatenated with the question representation $h_q$, all of which is then pooled to obtain $h_{q\operatorname{x}}$:

$$h_{q\operatorname{x}} = \sum _{i,j} g(h_{\operatorname{x}}(i), h_{\operatorname{x}}(j), h_q),$$

where $h_{\operatorname{x}}(i)$ is the $i$-th feature column of $h_{\operatorname{x}}$. FiLM networks BIBREF16 use $N$ convolutional FiLM blocks applied to $h_\operatorname{x}$. A FiLM block is a residual block BIBREF19 in which a feature-wise affine transformation (FiLM layer) is inserted after the 2nd convolutional layer. The FiLM layer is conditioned on the question at hand via prediction of the scaling and shifting parameters $\gamma _n$ and $\beta _n$:

$$[\gamma _n; \beta _n] = W^{qn} h_q + b^{qn},$$

$$\tilde{h}^n_{q\operatorname{x}} = BN(W^n_2 * ReLU(W^n_1 * h^{n-1}_{q\operatorname{x}} + b^n)),$$

$$h^n_{q\operatorname{x}} = h^{n-1}_{q\operatorname{x}} + ReLU(\gamma _n \odot \tilde{h}^n_{q\operatorname{x}} + \beta _n),$$

where $BN$ stands for batch normalization, $*$ stands for convolution and $\odot $ stands for element-wise multiplication. $h^n_{q\operatorname{x}}$ is the output of the $n$-th FiLM block and $h^0_{q\operatorname{x}} = h_{\operatorname{x}}$. The output of the last FiLM block $h_{q\operatorname{x}}^N$ undergoes an extra 1 $\times $ 1 convolution and max-pooling to produce $h_{q\operatorname{x}}$. The MAC network of BIBREF18 produces $h_{q\operatorname{x}}$ by repeatedly applying a Memory-Attention-Composition (MAC) cell that is conditioned on the question through an attention mechanism. The MAC model is quite complex and we refer the reader to the original paper for details.
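To make the FiLM block computation concrete, here is a minimal PyTorch-style sketch (our own illustration, not the authors' code; layer sizes, kernel sizes and names are assumptions):

```python
import torch.nn as nn
import torch.nn.functional as F

class FiLMBlock(nn.Module):
    """Residual block whose second convolution is modulated by question-predicted (gamma, beta)."""
    def __init__(self, dim=64, q_dim=128):
        super().__init__()
        self.conv1 = nn.Conv2d(dim, dim, 1)
        self.conv2 = nn.Conv2d(dim, dim, 3, padding=1)
        self.bn = nn.BatchNorm2d(dim, affine=False)  # affine parameters come from the question instead
        self.film = nn.Linear(q_dim, 2 * dim)        # predicts [gamma_n; beta_n] from h_q

    def forward(self, h_prev, h_q):
        gamma, beta = self.film(h_q).chunk(2, dim=1)
        gamma, beta = gamma[:, :, None, None], beta[:, :, None, None]
        h = self.bn(self.conv2(F.relu(self.conv1(h_prev))))
        return h_prev + F.relu(gamma * h + beta)     # residual connection around the modulated features
```

The question only enters through the predicted scaling and shifting parameters, which is what distinguishes a FiLM block from a plain residual block.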
Neural Module Networks
Neural Module Networks (NMN) BIBREF12 are an elegant solution that constructs a question-specific network by composing together trainable neural modules, drawing inspiration from symbolic approaches to question answering BIBREF20 . To answer a question with an NMN, one first constructs the computation graph by making the following decisions: (a) how many modules and of which types will be used, (b) how will the modules be connected to each other, and (c) how are these modules parametrized based on the question. We refer to the aspects (a) and (b) of the computation graph as the layout and the aspect (c) as the parametrization. In the original NMN and in many follow-up works, different module types are used to perform very different computations, e.g. the Find module from BIBREF15 performs trainable convolutions on the input attention map, whereas the And module from the same paper computes an element-wise maximum for two input attention maps. In this work, we follow the trend of using more homogeneous modules started by BIBREF14 , who use only two types of modules: unary and binary, both performing similar computations. We go one step further and retain a single binary module type, using a zero tensor for the second input when only one input is available. Additionally, we choose to use exactly three modules, which simplifies the layout decision to just determining how the modules are connected. Our preliminary experiments have shown that, even after these simplifications, NMNs are far ahead of other models in terms of generalization.
In the original NMN, the layout and parametrization were set in an ad-hoc manner for each question by analyzing a dependency parse. In the follow-up works BIBREF14 , BIBREF15 , these aspects of the computation are predicted by learnable mechanisms with the goal of reducing the amount of background knowledge required to apply the NMN approach to a new task. We experiment with the End-to-end NMN (N2NMN) BIBREF15 paradigm from this family, which predicts the layout with a seq2seq model BIBREF9 and computes the parametrization of the modules using a soft attention mechanism. Since all the questions in $\operatorname{SQOOP}$ have the same structure, we do not employ a seq2seq model but instead have a trainable layout variable and trainable attention variables for each module.
Formally, our NMN is constructed by repeatedly applying a generic neural module $f(\theta , \gamma , h_{l}, h_{r})$, which takes as inputs the shared parameters $\theta $, the question-specific parametrization $\gamma $ and the left-hand side and right-hand side inputs $h_{l}$ and $h_{r}$. $M$ such modules are connected and conditioned on a question $q=(q_1, q_2, q_3)$ as follows:

$$\gamma ^{k} = \sum _{i=1}^{3} \alpha ^{k, i} e(q_i),$$

$$h_k = f\Big (\theta , \gamma ^{k}, \sum _{j=-1}^{k-1} \tau ^{k, j}_{0} h_j, \sum _{j=-1}^{k-1} \tau ^{k, j}_{1} h_j\Big ),$$

$$h_{q\operatorname{x}} = h_M.$$
In the equations above, $h_{-1} = 0$ is the zero tensor input, $h_0 = h_x$ are the image features output by the stem, and $e$ is the embedding table for the question words. We refer to $A=(\alpha ^{k,i}) $ and $T=(\tau ^{k,i}_{0}, \tau ^{k,i}_{1})$ as the parametrization attention matrix and the layout tensor respectively.
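For concreteness, a minimal sketch (ours, with assumed tensor shapes and helper names) of how the shared module is composed according to the attention matrix $A$ and layout tensor $T$:

```python
import torch

def run_nmn(module, theta, h_x, word_emb, A, T):
    """Compose M copies of a generic module; A[k, i] weights question words,
    T[k, j, 0/1] weights previous outputs h_{-1}, ..., h_{k-1} as left/right inputs."""
    hs = [torch.zeros_like(h_x), h_x]        # h_{-1} = 0 and h_0 = stem features
    M = A.shape[0]
    for k in range(M):
        gamma_k = sum(A[k, i] * word_emb[i] for i in range(word_emb.shape[0]))
        h_l = sum(T[k, j, 0] * hs[j] for j in range(len(hs)))
        h_r = sum(T[k, j, 1] * hs[j] for j in range(len(hs)))
        hs.append(module(theta, gamma_k, h_l, h_r))
    return hs[-1]                            # h_qx = h_M
```

Hard-coding one-hot rows of A and 0/1 entries of T recovers the chain and tree variants described below, while leaving them trainable gives the end-to-end variants.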
We experiment with two choices for the NMN's generic neural module: the $\operatorname{Find}$ module from BIBREF15 and the $\operatorname{Residual}$ module from BIBREF14 with very minor modifications — we use 64 dimensional CNNs in our $\operatorname{Residual}$ blocks since our dataset consists of 64 $\times $ 64 images. The equations for the $\operatorname{Residual}$ module are as follows:

$$\theta = \emptyset ,$$

$$\gamma = [W_1; b_1; W_2; b_2; W_3; b_3],$$

$$\tilde{h} = ReLU(W_3 * [h_l; h_r] + b_3),$$

$$f_{Residual}(\gamma , h_l, h_r) = ReLU(\tilde{h} + W_1 * ReLU(W_2 * \tilde{h} + b_2) + b_1),$$

and for the $\operatorname{Find}$ module as follows:

$$\theta = [W_1; b_1; W_2; b_2],$$

$$f_{Find}(\gamma , h_l, h_r) = ReLU(\gamma \odot W_1 * ReLU(W_2 * [h_l; h_r] + b_2) + b_1).$$

In the formulas above $W_1, W_2, W_3$ are convolution weights, and $b_1$, $b_2$, $b_3$ are biases. The main difference between $\operatorname{Residual}$ and $\operatorname{Find}$ is that in $\operatorname{Residual}$ all parameters depend on the question words, whereas in $\operatorname{Find}$ the convolutional weights are the same for all questions, and only the element-wise multipliers $\gamma $ vary based on the question. We note that the specific $\operatorname{Find}$ module we use in this work is slightly different from the one used in BIBREF15 in that it outputs a feature tensor, not just an attention map. This change was required in order to connect multiple $\operatorname{Find}$ modules in the same way as we connect multiple residual ones.
Based on the generic NMN model described above, we experiment with several specific architectures as shown in Figure 1 . Each of the models uses $M=3$ modules, which are connected and parametrized differently. In NMN-Chain the modules form a sequential chain. Modules 1, 2 and 3 are parametrized based on the first object word, the second object word and the relation word respectively, which is achieved by setting the attention vectors $\alpha _1$ , $\alpha _2$ , $\alpha _3$ to the corresponding one-hot vectors. We also experiment with giving the image features $h_{x}$ as the right-hand side input to all 3 modules and call the resulting model NMN-Chain-Shortcut. NMN-Tree is similar to NMN-Chain in that the attention vectors are similarly hard-coded, but we change the connectivity between the modules to be tree-like. Stochastic N2NMN follows the N2NMN approach by BIBREF15 for inducing the layout. We treat the layout $T$ as a stochastic latent variable. $T$ is allowed to take two values: $T_{tree}$ as in NMN-Tree, and $T_{chain}$ as in NMN-Chain. We calculate the output probabilities by marginalizing out the layout, i.e. the probability of the answer being “yes” is computed as $p(\textrm {yes}|x,q) = \sum _{T \in \left\lbrace T_{tree}, T_{chain}\right\rbrace } p(\textrm {yes}|T,x,q)p(T)$ . Lastly, Attention N2NMN uses the N2NMN method for learning the parametrization BIBREF15 . It is structured just like NMN-Tree but has each $\alpha _k$ computed as $softmax(\tilde{\alpha }_k)$ , where $\tilde{\alpha }_k$ is a trainable vector. We use Attention N2NMN only with the $\operatorname{Find}$ module because using it with the $\operatorname{Residual}$ module would involve a highly non-standard interpolation between convolutional weights.
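As an illustration of the Stochastic N2NMN marginalization, a small sketch (ours; it reuses the hypothetical run_nmn helper from the earlier sketch and assumes a single trainable logit for $p(tree)$):

```python
import torch

tree_logit = torch.zeros(1, requires_grad=True)   # sigmoid gives p_0(tree) = 0.5; shift it for 0.1 or 0.9

def prob_yes(module, theta, h_x, word_emb, A, T_tree, T_chain, classifier):
    """p(yes | x, q) = sum over T in {tree, chain} of p(yes | T, x, q) p(T)."""
    p_tree = torch.sigmoid(tree_logit)
    yes_tree = classifier(run_nmn(module, theta, h_x, word_emb, A, T_tree))
    yes_chain = classifier(run_nmn(module, theta, h_x, word_emb, A, T_chain))
    return p_tree * yes_tree + (1 - p_tree) * yes_chain
```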
Experiments
In our experiments we aimed to: (a) understand which models are capable of exhibiting systematic generalization as required by $\operatorname{SQOOP}$ , and (b) understand whether it is possible to induce, in an end-to-end way, the successful architectural decisions that lead to systematic generalization.
All models share the same stem architecture which consists of 6 layers of convolution (8 for Relation Networks), batch normalization and max pooling. The input to the stem is a 64 $\times $ 64 $\times $ 3 image, and the feature dimension used throughout the stem is 64. Further details can be found in Appendix "Experiment Details" . The code for all experiments will be released in the near future.
Which Models Generalize Better?
We report the performance for all models on datasets of varying difficulty in Figure 2 . Our first observation is that the modular and tree-structured NMN-Tree model exhibits strong systematic generalization. Both versions of this model, with $\operatorname{Residual}$ and $\operatorname{Find}$ modules, robustly solve all versions of our dataset, including the most challenging #rhs/lhs=1 split.
The results of NMN-Tree should be contrasted with those of generic models. 2 out of 4 models (CNN+LSTM and RelNet) are not able to learn to answer all $\operatorname{SQOOP}$ questions, no matter how easy the split was (for high #rhs/lhs CNN+LSTM overfitted and RelNet did not train). The results of the other two models, MAC and FiLM, are similar. Both models are clearly able to solve the $\operatorname{SQOOP}$ task, as suggested by their almost perfect ( $< 1\%$ ) error rate on the control #rhs/lhs=35 split, yet they struggle to generalize on splits with lower #rhs/lhs. In particular, we observe $13.67 \pm 9.97\%$ errors for MAC and $34.73 \pm 4.61\%$ errors for FiLM on the hardest #rhs/lhs=1 split. For the splits of intermediate difficulty we saw the error rates of both models decreasing as we increased the #rhs/lhs ratio from 2 to 18. Interestingly, even with 18 #rhs/lhs some MAC and FiLM runs result in a test error rate of $\sim 2\%$ . Given the simplicity and minimalism of $\operatorname{SQOOP}$ questions, we believe that these results should be considered a failure to pass the $\operatorname{SQOOP}$ test for both MAC and FiLM. That said, we note a difference in how exactly FiLM and MAC fail on #rhs/lhs=1: in several runs (3 out of 15) MAC exhibits a strong generalization performance ( $\sim 0.5\%$ error rate), whereas in all runs of FiLM the error rate stays high. We examine the successful MAC models and find that they have converged to a successful setting of the control attention weights, that is the weights with which MAC units attend to question words. In particular, MAC models that generalize strongly would, for each question, have a unit focusing strongly on $\operatorname{X}$ and a unit focusing strongly on $\operatorname{Y}$ (see Appendix "Additional Results for MAC Model" for more details). As MAC was the strongest competitor of NMN-Tree among the generic models, we have performed an ablation study for this model, in which we varied the number of modules and hidden units, as well as experimented with weight decay. These modifications have not resulted in any significant reduction of the gap between MAC and NMN-Tree. Interestingly, we found that using the default high number of MAC units, namely 12, was helpful, possibly because it made it more likely that some units are initialized to focus on the $\operatorname{X}$ and $\operatorname{Y}$ words (see Appendix "Additional Results for MAC Model" for details).
What is Essential to Strong Generalization of NMN?
The superior generalization performance of NMN-Tree raises the following question: what is the key architectural difference between NMN-Tree and generic models that explains the performance gap between them? We consider two candidate explanations. First, the NMN-Tree model differs from the generic models in that it does not use a language encoder and is instead built from modules that are parametrized by question words directly. Second, NMN-Tree is structured in a particular way, with the idea that modules 1 and 2 may learn to locate objects and module 3 can learn to reason about object locations independently of their identities. To understand which of the two differences is responsible for the superior generalization, we compare the performance of the NMN-Tree, NMN-Chain and NMN-Chain-Shortcut models (see Figure 1 ). These 3 versions of NMN are similar in that none of them use a language encoder, but they differ in how the modules are connected. The results in Figure 2 show that for both $\operatorname{Find}$ and $\operatorname{Residual}$ module architectures, using a tree layout is absolutely crucial (and sufficient) for generalization, meaning that the generalization gap between NMN-Tree and generic models cannot be explained merely by the language encoding step in the latter. In particular, NMN-Chain models perform barely above random chance, doing even worse than generic models on the #rhs/lhs=1 version of the dataset and dramatically failing even on the easiest #rhs/lhs=18 split. This is in stark contrast with NMN-Tree models, which exhibit nearly perfect performance on the hardest #rhs/lhs=1 split. As a sanity check we trained NMN-Chain models on the vanilla #rhs/lhs=35 split. We found that the NMN-Chain model has little difficulty learning to answer $\operatorname{SQOOP}$ questions when it sees all of them at training time, even though it shows very poor generalization in our other experiments. Interestingly, NMN-Chain-Shortcut performed much better than NMN-Chain and quite similarly to the generic models. We find it remarkable that such a slight change in the model layout as adding shortcut connections from the image features $h_x$ to the chain modules results in a drastic change in generalization performance. In an attempt to understand why NMN-Chain generalizes so poorly we compared the test set responses of the 5 NMN-Chain models trained on the #rhs/lhs=1 split. Notably, there was very little agreement between the predictions of these 5 runs (Fleiss $\kappa = 0.05$ ), suggesting that NMN-Chain performs rather randomly outside of the training set.
Can the Right Kind of NMN Be Induced?
The strong generalization of the NMN-Tree model is impressive, but a significant amount of prior knowledge about the task was required to come up with the successful layout and parametrization used in this model. We therefore investigate whether the amount of such prior knowledge can be reduced by fixing one of these structural aspects and inducing another.
In our layout induction experiments, we use the Stochastic N2NMN model which treats the layout as a stochastic latent variable with two values ( $T_{tree}$ and $T_{chain}$ , see Section "Experiments" for details). We experiment with N2NMNs using both $\operatorname{Find}$ and $\operatorname{Residual}$ modules and report results with different initial conditions, $p_0(tree) \in \lbrace 0.1, 0.5, 0.9\rbrace $ . We believe that the initial probability $p_0(tree)=0.1$ should not be considered small, as in more challenging datasets the space of layouts would be exponentially large, and sampling the right layout in 10% of all cases could be considered a very lucky initialization. We repeat all experiments on the #rhs/lhs=1 and #rhs/lhs=18 splits, the former to study generalization, and the latter to control whether the failures on #rhs/lhs=1 are caused specifically by the difficulty of this split. The results (see Table 1 ) show that the success of layout induction (i.e. converging to a $p(tree)$ close to $0.9$ ) depends in a complex way on all the factors that we considered in our experiments. The initialization has the most influence: models initialized with $p_0(tree)=0.1$ typically do not converge to a tree (the exception being experiments with the $\operatorname{Residual}$ module on #rhs/lhs=18, in which 3 out of 5 runs converged to a solution with a high $p(tree)$ ). Likewise, models initialized with $p_0(tree)=0.9$ always stay in a regime with a high $p(tree)$ . In the intermediate setting of $p_0(tree)=0.5$ we observe differences in behavior between the $\operatorname{Residual}$ and $\operatorname{Find}$ modules. In particular, N2NMN based on $\operatorname{Residual}$ modules stays spurious with $p(tree) \approx 0.5$ when #rhs/lhs=1, whereas N2NMN based on $\operatorname{Find}$ modules always converges to a tree.
One counterintuitive result in Table 1 is that Stochastic N2NMNs with $\operatorname{Residual}$ modules that were trained with $p_0(tree)=0.5$ and #rhs/lhs=1 make just $1.64 \pm 1.79\%$ errors on the generalization set despite being spurious mixtures between a tree and a chain. Our explanation for this phenomenon is as follows: when connected in a tree, modules of such spurious models generalize well, and when connected as a chain they generalize poorly. The output distribution of the whole model is thus a mixture of the mostly correct $p(y|T=T_{tree},x,q)$ and the mostly random $p(y|T=T_{chain},x,q)$ . We verified our reasoning by explicitly evaluating test accuracies for $p(y|T=T_{tree},x,q)$ and $p(y|T=T_{chain},x,q)$ , and we found them to be around $99\%$ and $60\%$ respectively, confirming our hypothesis. As a result, the predictions of the spurious models with $p(tree) \approx 0.5$ have lower confidence than those of sharp tree models, as indicated by their high log loss. We visualize the progress of structure induction for the $\operatorname{Residual}$ module with $p_0(tree)=0.5$ in Figure 3 , which shows how $p(tree)$ saturates to 1.0 for #rhs/lhs=18 and remains around 0.5 when #rhs/lhs=1.
Next, we experiment with the Attention N2NMN model (see Section "Experiments" ) in which the parametrization is learned for each module as an attention-weighted average of word embeddings. In these experiments, we fix the layout to be tree-like and sample the pre-softmax attention weights $\tilde{\alpha }$ from a uniform distribution $U[0;1]$ . As in the layout induction investigations, we experiment with several $\operatorname{SQOOP}$ splits, namely we try #rhs/lhs $\in \lbrace 1,2,18\rbrace $ . The results (reported in Table 2 ) show that Attention N2NMN fails dramatically on #rhs/lhs=1 but quickly catches up as soon as #rhs/lhs is increased to 2. Notably, 9 out of 10 runs on #rhs/lhs=2 resulted in almost perfect performance, and 1 run completely failed to generalize (26% error rate), resulting in a high $8.18\%$ variance of the mean error rate. All 10 runs on the split with 18 rhs/lhs generalized flawlessly. We furthermore inspected the learned attention weights and found that for typical successful runs, module 3 focuses on the relation word, whereas modules 1 and 2 focus on different object words while still attending partly to the relation word (see Figure 3 ). To better understand the relationship between successful parametrization induction and generalization, we define an attention quality metric $\kappa =\min _{w \in \lbrace X, Y\rbrace } \max _{k \in \lbrace 1, 2\rbrace } \alpha _{k, w} / (1 - \alpha _{k, R})$ . Intuitively, $\kappa $ is large when for each word $w \in \lbrace X, Y\rbrace $ there is a module $k$ that focuses mostly on this word. The renormalization by $1/(1 - \alpha _{k,R})$ is necessary to factor out the amount of attention that modules 1 and 2 assign to the relation word. For the ground-truth parametrization that we use for NMN-Tree, $\kappa $ takes a value of 1, and if both modules 1 and 2 focus on X, completely ignoring Y, $\kappa $ equals 0. The scatterplot of the test error rate versus $\kappa $ (Figure 3 ) shows that for #rhs/lhs=1 strong generalization is strongly associated with higher $\kappa $ , meaning that it is indeed necessary to have different modules strongly focusing on different object words in order to generalize in this most challenging setting. Interestingly, for #rhs/lhs=2 we see many cases where N2NMN generalizes well despite the attention being rather spurious (low $\kappa $ ).
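A small sketch (ours) of the attention quality metric $\kappa $ for a three-word question $\operatorname{X}\operatorname{R}\operatorname{Y}$ and the two object-locating modules:

```python
import numpy as np

def attention_quality(alpha):
    """kappa = min over w in {X, Y} of max over modules k in {1, 2} of
    alpha[k, w] / (1 - alpha[k, R]); rows of alpha are attention over (X, R, Y)."""
    X, R, Y = 0, 1, 2
    return min(max(alpha[k, w] / (1.0 - alpha[k, R]) for k in (0, 1)) for w in (X, Y))

# ground-truth NMN-Tree parametrization: module 1 -> X, module 2 -> Y, module 3 -> R
alpha = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
print(attention_quality(alpha))  # 1.0
```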
In order to put the Attention N2NMN results in context we compare them to those of MAC (see Table 2 ). Such a comparison can be of interest because both models perform attention over the question. For 1 rhs/lhs MAC seems to be better on average, but as we increase #rhs/lhs to 2 we note that Attention N2NMN succeeds in 9 out of 10 cases on the #rhs/lhs=2 split, much more often than the 1 success out of 10 observed for MAC. This result suggests that Attention N2NMN retains some of the strong generalization potential of NMNs with hard-coded parametrization.
Related Work
The notion of systematicity was originally introduced by BIBREF11 as the property of human cognition whereby “the ability to entertain a given thought implies the ability to entertain thoughts with semantically related contents”. They illustrate this with an example that no English speaker can understand the phrase “John loves the girl” without being also able to understand the phrase “the girl loves John”. The question of whether or not connectionist models of cognition can account for the systematicity phenomenon has been a subject of a long debate in cognitive science BIBREF11 , BIBREF21 , BIBREF22 , BIBREF23 , BIBREF24 . Most recently BIBREF8 and BIBREF25 have shown that lack of systematicity in the generalization is still a concern for the modern seq2seq models. Our findings about the weak systematic generalization of generic VQA models corroborate the aforementioned seq2seq results. We also go beyond merely stating negative generalization results and showcase the high systematicity potential of adding explicit modularity and structure to modern deep learning models.
Besides the theoretical appeal of systematicity, our study is inspired by highly related prior evidence that when trained on downstream language understanding tasks, neural networks often generalize poorly and latch on to dataset-specific regularities. BIBREF5 report how neural models exploit biases in a VQA dataset, e.g. responding “snow” to the question “what covers the ground” regardless of the image because “snow” is the most common answer to this question. BIBREF6 report that many successes in natural language entailment are actually due to exploiting statistical biases as opposed to solving entailment, and that state-of-the-art systems are much less performant when tested on unbiased data. BIBREF7 demonstrate that seemingly state-of-the-art reading comprehension system can be misled by simply appending an unrelated sentence that resembles the question to the document.
Using synthetic VQA datasets to study grounded language understanding is a recent trend started by the CLEVR dataset BIBREF26 . CLEVR images are 3D-rendered and CLEVR questions are longer and more complex than ours, yet the color-shape generalization split that CLEVR includes arguably lacks a clear motivation. More closely related to our work is the ShapeWorld family of datasets by BIBREF27 , which involves a number of VQA generalization tests. ShapeWorld only contains 10 different objects, making it insufficient for our study. Most closely related to our work is the recent study of generalization to long-tail questions about rare objects done by BIBREF28 . They do not, however, consider as many models as we do and do not study the question of whether the best-performing models can be made end-to-end.
The key paradigm that we test in our experiments is Neural Module Networks (NMN). BIBREF12 introduced NMNs as a modular, structured VQA model where a fixed number of hand-crafted neural modules (such as Find or Compare) are chosen and composed together in a layout determined by the dependency parse of the question. BIBREF15 and BIBREF14 followed up by making NMNs end-to-end, removing the non-differentiable parser. The former chose to keep the hand-crafted modules and used reinforcement learning to learn the layout and modules end-to-end. The latter used a ground truth module layout learned separately, and changed the hand-crafted modules to a generic ResNet block structure BIBREF19 for every module. Both BIBREF15 and BIBREF14 reported that several thousands of ground-truth layouts are required to pretrain the layout predictor in order for their approaches to work. In a recent work, BIBREF29 attempt to soften the layout decisions, but training their models end-to-end from scratch performed substantially below the best models on the CLEVR task.
Conclusion and Discussion
We have conducted a rigorous investigation of an important form of systematic generalization required for grounded language understanding: the ability to reason about all possible pairs of objects despite being trained on a small subset. Our results allow one to draw two important conclusions. For one, the intuitive appeal of modularity and structure in designing neural architectures for language understanding is now supported by our results, which show how a modular model consisting of general purpose residual blocks generalizes much better than a number of baselines, including architectures such as MAC, FiLM and RelNet that were designed specifically for visual reasoning. While this may seem unsurprising, to the best of our knowledge, the literature has lacked such a clear empirical evidence in favor of modular and structured networks before this work. Importantly, we have also shown how sensitive the high performance of the modular models is to the layout of modules, and how a tree-like structure generalizes much stronger than a typical chain of layers.
Our second key conclusion is that coming up with an end-to-end and/or soft version of modular models may be not sufficient for strong generalization. In the very setting where strong generalization is required, end-to-end methods often converge to a different, less compositional solution (e.g. a chain layout or blurred attention). This can be observed especially clearly in our NMN layout and parametrization induction experiments on the #rhs/lhs=1 version of $\operatorname{SQOOP}$ , but notably, strong initialization sensitivity of layout induction remains an issue even on the #rhs/lhs=18 split. This conclusion is relevant in the view of recent work in the direction of making NMNs more end-to-end BIBREF30 , BIBREF29 , BIBREF18 , BIBREF31 . Our findings suggest that merely replacing hard-coded components with learnable counterparts can be insufficient, and that research on regularizers or priors that steer the learning towards more systematic solutions can be required. That said, our parametrization induction results on the #rhs/lhs=2 split are encouraging, as they show that compared to generic models, a weaker nudge (in the form of a richer training signal or a prior) towards systematicity may suffice for end-to-end NMNs.
While our investigation has been performed on a synthetic dataset, we believe that it is the real-world language understanding where our findings may be most relevant. It is possible to construct a synthetic dataset that is bias-free and that can only be solved if the model has understood the entirety of the dataset's language. It is, on the contrary, much harder to collect real-world datasets that do not permit highly dataset-specific solutions, as numerous dataset analysis papers of recent years have shown (see Section "Related Work" for a review). We believe that approaches that can generalize strongly from imperfect and biased data will likely be required, and our experiments can be seen as a simulation of such a scenario. We hope, therefore, that our findings will inform researchers working on language understanding and provide them with a useful intuition about what facilitates strong generalization and what is likely to inhibit it.
Acknowledgements
We thank Maxime Chevalier-Boisvert and Yoshua Bengio for useful discussions. This research was enabled in part by support provided by Compute Canada (www.computecanada.ca), NSERC and Canada Research Chairs. We also thank Nvidia for donating NVIDIA DGX-1 used for this research.
Experiment Details
We trained all models by minimizing the cross entropy loss $\log p(y|x, q)$ on the training set, where $y \in \lbrace \textrm {yes}, \textrm {no}\rbrace $ is the correct answer, $x$ is the image, $q$ is the question. In all our experiments we used the Adam optimizer BIBREF32 with hyperparameters $\alpha =0.0001$ , $\beta _1=0.9$ , $\beta _2=0.999$ , $\epsilon =10^{-10}$ . We continuously monitored validation set performance of all models during training, selected the best one and reported its performance on the test set. The number of training iterations for each model was selected in preliminary investigations based on our observations of how long it takes for different models to converge. This information, as well as other training details, can be found in Table 3 .
Additional Results for MAC Model
We performed an ablation study in which we varied the number of MAC units, the model dimensionality and the level of weight decay for the MAC model. The results can be found in Table 4 .
We also perform qualitative investigations to understand the high variance in MAC's performance. In particular, we focus on the control attention weights ( $c$ ) for each run and aim to understand whether runs that generalize show clear differences compared to runs that failed. Interestingly, we observe that in successful runs each word $w \in \lbrace \operatorname{X}, \operatorname{Y}\rbrace $ has a unit that is strongly focused on it. To present our observations in quantitative terms, we plot the attention quality $\kappa =\min _{w \in \lbrace X, Y\rbrace } \max _{k \in [1; 12]} \alpha _{k, w} / (1 - \alpha _{k, R})$ , where $\alpha $ are the control scores, versus accuracy in Figure 4 for each run (see Section UID12 for an explanation of $\kappa $ ). We can clearly see a strong positive correlation between $\kappa $ and accuracy.
Next, we experiment with a hard-coded variation of MAC. In this model, we use hard-coded control scores such that given a $\operatorname{SQOOP}$ question $\operatorname{X}\operatorname{R}\operatorname{Y}$ , the first half of all modules focuses on $\operatorname{X}$ while the second half focuses on $\operatorname{Y}$ . The relationship between MAC and hardcoded MAC is similar to that between NMN-Tree and end-to-end NMN with parameterization induction. However, this model has not performed as well as the successful runs of MAC. We hypothesize that this could be due to the interactions between the control scores and the visual attention part of the model.
$\operatorname{SQOOP}$ Pseudocode
Pseudocode for creating $\operatorname{SQOOP}$ :

  $S \leftarrow $ {A, B, C, ..., Z, 0, 1, 2, ..., 9}    (the 36 objects)
  $Rel \leftarrow $ {left_of, right_of, above, below}    (the 4 relations)

  procedure CreateSQOOP(k):
    $TrainQuestions \leftarrow []$ ; $AllQuestions \leftarrow []$
    for each $X$ in $S$ :
      $AllRhs \leftarrow $ RandomSample( $S\setminus X$ , k)    (sample k RHS objects without replacement)
      $AllQuestions \leftarrow [X] \times Rel \times (S\setminus X) \cup AllQuestions$
      for each $R, Y$ in $Rel \times AllRhs$ :
        $TrainQuestions \leftarrow (X,R,Y) \cup TrainQuestions$
    $TestQuestions \leftarrow AllQuestions\setminus TrainQuestions$
    $Train \leftarrow $ sample $\frac{10^6}{|TrainQuestions|}$ examples from GenerateExample( $X,R,Y$ ) for each $(X,R,Y) \in TrainQuestions$
    $Test \leftarrow $ sample 10 examples from GenerateExample( $X,R,Y$ ) for each $(X,R,Y) \in TestQuestions$

  procedure GenerateExample(X, R, Y):
    sample the answer $a$ from {Yes, No}
    if $a$ = Yes:
      place the $X$ and $Y$ objects so that $X R Y$ holds and create the image $I$
      sample 3 more objects from $S$ and add them to $I$
    else:
      sample distractors $Y^{\prime } \ne Y$ and $X^{\prime } \ne X$ from $S$
      place the $X$ and $Y^{\prime }$ objects so that $X R Y^{\prime }$ holds and create the image $I$
      add the $X^{\prime }$ and $Y$ objects to $I$ so that $X^{\prime } R Y$ holds
      sample 1 more object from $S$ and add it to $I$
      ensure that $X$ and $Y$ are not in relation $R$ in $I$
    return $I$ , $(X,R,Y)$ , $a$ | Unanswerable |
1835f65694698a9153857e33cd9b86a96772fff5 | 1835f65694698a9153857e33cd9b86a96772fff5_0 | Q: Does the paper report the performance on the task of a Neural Machine Translation model?
Text: Introduction
A hashtag is a form of metadata labeling used in various social networks to help the users to navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source of features for subsequent opinion mining and social network analysis. Basically, hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles another problem, namely word segmentation.
The problem of word segmentation is widely studied in languages like Chinese, since it lacks whitespaces to separate words, or in German to split compound words. In languages like English or Russian, where compounds are not as frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags.
Formally the problem is stated as follows: given a string of $n$ characters $s = s_1 \ldots s_n$ we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc). The main challenge of this problem is that the segmentation might be ambiguous. For example, a string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity more processing is required, such as POS-tagging, estimation of the frequencies of all hashtag constituents or of their co-occurrence frequency. The frequencies can be estimated on a large corpus, such as the BNC, COCA, or Wikipedia. However, when working with noisy user generated data, such as texts or hashtags from social networks, the problem of unknown words (or out of vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Kneser-Ney smoothing, or by additional heuristics that extend the dictionary with word-like sequences of characters. Unlike in language modelling, in hashtag segmentation frequency estimation is not the only source of evidence for defining word boundaries: candidate substrings can also be evaluated according to their length BIBREF0.
Several research groups have shown that introducing the character level into models helps to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5, or machine translation BIBREF6, BIBREF7. A character level model either treats the text as a sequence of characters without any tokenization or incorporates character level information into word level representations. Character level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to infer the POS tag or NE class of an unknown word.
Following this intuition, we use a character level model for hashtag segmentation. Our main motivation is the following: if a character level model is able to capture word ending patterns, it should also be able to capture word boundary patterns. We apply a character level model, specifically a recurrent neural network, referred to further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained and tested on synthetic data, which was generated from texts collected from social networks in English and Russian independently. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually. Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights learned by the char-RNN. We address the following research questions and claim our respective contributions:
We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0.
What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns.
As getting sufficient amount of annotated training collection is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags.
A potentially unlimited volume of our synthetic training dataset raises yet another question of whether an informative training subset could be selected. To this extent, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset, necessary to obtain a high performance.
Neural Model for Hashtag Segmentation ::: Sequence Labeling Approach
We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $ : 1 for the end of a word, and 0 otherwise (Table TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*, \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$
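For illustration, a minimal sketch (ours) of how a segmented hashtag is converted into per-character labels; here every word-final character, including the last one, is labeled 1:

```python
def segmentation_labels(words):
    """Label each character with 1 if a word ends at that position, else 0."""
    labels = []
    for word in words:
        labels += [0] * (len(word) - 1) + [1]
    return "".join(words), labels

hashtag, y = segmentation_labels(["something", "unclear"])
print(hashtag)  # somethingunclear
print(y)        # [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1]
```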
The neural model for hashtag segmentation consists of three layers.
The embedding layer is used to compute the distributed representation of input characters. Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the number of unique characters.
The feature layer is used to process the input. We use a bi-directional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are default keras LSTM units as introduced by Hochreiter.
The inference layer is used to predict the labels of each character. We use a single dense layer for inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $.
Each character is assigned the most probable label.
The parameters of the char-RNN are the following:
Embedding layer = 50 input dimensions;
Feature layer = 64 bidirectional LSTM units;
Inference layer = a dense layer with 2 output neurons and a softmax activation function, applied to the BiLSTM output at each character position.
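A minimal Keras sketch of this architecture (ours, not the authors' code; the vocabulary size, masking and compilation settings are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_char_rnn(vocab_size, emb_dim=50, lstm_units=64):
    """Embedding -> bidirectional LSTM -> per-character softmax over {0, 1}."""
    inputs = keras.Input(shape=(None,), dtype="int32")  # character ids
    x = layers.Embedding(vocab_size, emb_dim, mask_zero=True)(inputs)
    x = layers.Bidirectional(layers.LSTM(lstm_units, return_sequences=True))(x)
    outputs = layers.TimeDistributed(layers.Dense(2, activation="softmax"))(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```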
Dataset
In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN.
Dataset ::: Russian dataset
To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network) and segmented manually.
We followed the same strategy to create an English language dataset.
Dataset ::: Russian dataset ::: Training Dataset Generation
We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example:
remove all whitespace: wordwordworddigits
Examples: ЁлкаВЗазеркалье, нескольколетназад
replace all whitespace with an underscore: word_word_digits
Examples: увд_юга_столицы
remove some whitespace and replace other spaces with an underscore: word_worddigits.
Examples: ищусвоегогероя_уфпс
A word here might be in lower case, upper case or capitalized, or it might be an abbreviation. There might be up to four digits.
In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as the complex ones. Here are a couple of examples:
The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter.
Examples: word_2017, NASA_2017, word_word
Two or three words, which are separated by an underscore.
Examples: Word_Word, word_word_word
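A simplified sketch (ours) of how frequent $n$-grams can be merged into synthetic hashtags following rules like the ones above; the actual generator uses 11 hashtag types, while only three merging rules are shown here:

```python
import random

def make_hashtag(ngram, rng=None):
    """Merge an n-gram into a synthetic hashtag using one of three merging rules."""
    rng = rng or random.Random(0)
    words = ngram.split()
    rule = rng.choice(["concat", "underscore", "mixed"])
    if rule == "concat":        # wordwordword
        return "".join(words)
    if rule == "underscore":    # word_word_word
        return "_".join(words)
    # merge the first two words, keep underscores for the rest
    return "_".join(["".join(words[:2])] + words[2:])

print(make_hashtag("ищу своего героя уфпс"))
# one of: ищусвоегогерояуфпс, ищу_своего_героя_уфпс, ищусвоего_героя_уфпс
```

Since the original whitespace positions are known, the gold per-character labels for training come for free from the same procedure.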
Dataset ::: Russian dataset ::: Test Dataset Annotation
We manually segmented the 2K most frequent hashtags extracted from the same collection of scraped texts.
The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing.
Dataset ::: English dataset
We used the dataset, released by BIBREF0. This dataset consists of:
a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian;
a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations.
The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing.
Active Learning
We followed the strategy for active learning as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags randomly selected from the training dataset. Next we test the model on the remainder of the training dataset and select 1k hashtags according to the current model's uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtags is already known. In BIBREF9 three uncertainty measures are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note that here we do not check if the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details.
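A minimal sketch (ours) of this uncertainty-sampling loop; model.fit and model.predict_proba are assumed interfaces, and the MNLP-style score is computed from per-character label probabilities:

```python
import random
import numpy as np

def mnlp_score(char_probs):
    """Length-normalized log-probability of the most likely label sequence;
    for per-character softmax outputs this is the mean of the max log-probabilities."""
    return float(np.mean(np.log(np.max(char_probs, axis=-1))))

def active_learning(model, pool, batch=1000):
    """Train on a random initial batch, then repeatedly add the least confident hashtags."""
    random.shuffle(pool)
    train, pool = pool[:batch], pool[batch:]
    while pool:
        model.fit(train)                                             # assumed interface
        pool.sort(key=lambda h: mnlp_score(model.predict_proba(h)))  # least confident first
        train, pool = train + pool[:batch], pool[batch:]
    return model
```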
Experiments ::: Baseline
As a baseline algorithm, we consider the BIBREF0 system architecture as a state-of-the-art algorithm. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires a twofold input: a hashtag and a corresponding tweet or a text from some other social medium, which is absent in our task setting due to the synthetic nature of the training dataset.
For this reason, as a baseline algorithm for the English dataset we refer to the results from BIBREF0, and for the Russian dataset we used the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word, given the word's context, i.e. the preceding word:

$$P(w_1, \ldots , w_n) = \prod _{i} P(w_i | w_{i-1}),$$

where

$$P(w_i | w_{i-1}) = \frac{f(w_{i-1} w_i)}{f(w_{i-1})}.$$

In case there is no such pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of the word $w_i$ is obtained as if it were a unigram model with additive smoothing:

$$P(w_i) = \frac{f(w_i) + \alpha }{\sum _{w \in V} f(w) + \alpha |V|},$$

where $V$ is the vocabulary, $f(w_{i})$ is the frequency of word $w_{i}$, and $\alpha = 1$.
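A small sketch (ours) of this baseline bigram model with the smoothed unigram fallback; the counts would be estimated from the scraped corpus:

```python
import math
from collections import Counter

class BigramLM:
    def __init__(self, corpus_tokens, alpha=1.0):
        self.uni = Counter(corpus_tokens)
        self.bi = Counter(zip(corpus_tokens, corpus_tokens[1:]))
        self.alpha = alpha
        self.total = sum(self.uni.values())

    def log_prob(self, words):
        """Sum of log P(w_i | w_{i-1}), falling back to the smoothed unigram estimate."""
        lp = 0.0
        for prev, cur in zip([None] + words, words):
            if prev is not None and (prev, cur) in self.bi:
                lp += math.log(self.bi[(prev, cur)] / self.uni[prev])
            else:
                lp += math.log((self.uni[cur] + self.alpha) / (self.total + self.alpha * len(self.uni)))
        return lp
```

The segmentation with the highest probability under this model is then chosen among all candidate splits of the hashtag.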
In Table TABREF30 we present three baseline results: LM BIBREF8 for Russian and English datasets; context-based LM BIBREF0 for English dataset only. We treat a segmentation as correct if prediction and target sequences are the same.
Experiments ::: Neural Model
In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we considered three datasets with different numbers of hashtags. In the case of Russian, the more data we use during training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32). Due to its lower morphological diversity and complexity, the model starts to overfit on larger training sets. The training showed that the model mostly makes wrong segmentation predictions on hashtags of complex types, such as “wordword_worddigits”.
Our results outperform all chosen baselines for both the Russian and English datasets. Note that we have two baselines for the English dataset: one is purely frequency-based, the other is cited from BIBREF0, where external resources are heavily used. We show that, using a significantly smaller amount of training data, we achieve a boost in quality by switching from statistical word language models to the char-RNN. As expected, the results on the Russian dataset are higher than for the English dataset due to the higher inflection degree in Russian as opposed to English.
Experiments ::: Active Learning
In order to evaluate the efficiency of deep learning with active learning when used in combination, we run the experiments for both languages. As for the datasets, we took the ones on which the highest accuracy was obtained (15k for Russian and 10k for English).
The learning process consists of multiple rounds which are repeated until the pool of remaining hashtags is exhausted. At the beginning we train the model on 1k randomly selected hashtags and predict the probability of segmentation for the remaining hashtags. Then we sort the remaining hashtags in ascending order according to the probability assigned by the model and pick the 1k hashtags which the model is least confident about. Finally, we add these hashtags with the least probable sequence of tags to the training data and continue training the model. This pipeline is repeated until there are no samples left.
In comparison to our initial experiments, the application of active learning demonstrates impressive results: the amount of labeled training data can be drastically reduced. To be more specific, in both cases the size of the training set can be halved without any decline in accuracy (see Figures 2 and 3).
Active learning selects a more informative set of examples, in contrast to supervised learning, which is trained on a set of randomly chosen examples. We decided to analyze the updated version of the training data and see whether the number of morphologically complex hashtags is higher than that of simple ones. We were able to divide hashtags into complex and simple ones because the model is trained on synthetic data and there is a finite number of templates from which each hashtag can be generated.
To better understand the contribution of the uncertainty sampling approach, we plot the distribution of different types of hashtags in the new training datasets for both languages, Russian and English (see Figures 4 and 5). It can be seen from the plots that in both cases the algorithm added more morphologically complex hashtags to the training data – types 3, 6 and 7. These types mostly consist of hashtags with two or three words in lower case without an underscore.
Examples of featured types:
wordword_2017
wordword, word2017word
wordwordword, wordword2017word
Experiments ::: Visualization
In order to see whether embeddings of characters that behave similarly with respect to segmentation appear near each other in the resulting 50-dimensional embedding space, we applied SVD as a dimensionality reduction technique to the character embeddings and plotted them in 2D space. For both languages meaningful and interpretable clusters emerge: capital letters, lower-case letters, digits and the underscore, as shown below.
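A minimal sketch (ours) of the SVD projection used for this visualization:

```python
import numpy as np

def project_embeddings_2d(E):
    """Project a (vocab_size x 50) character embedding matrix to 2D with truncated SVD."""
    E_centered = E - E.mean(axis=0)
    U, S, _ = np.linalg.svd(E_centered, full_matrices=False)
    return U[:, :2] * S[:2]  # 2D coordinates for plotting each character
```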
Related Work
The problem of word segmentation has received much attention in Chinese and German NLP, for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. The recent trend of deep learning as a major approach for any NLP task in general, and sequence labeling in particular, has resulted in various RNN-based and CNN-based models for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14.
Since BIBREF10, Chinese word segmentation has been addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for a character in the Beginning, Middle or End of a word, or a Single character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to a sequence modeling task, where linear-chain conditional random fields were used to achieve state-of-the-art results. Neural approaches to Chinese segmentation mainly use various architectures of character level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep convolutional networks BIBREF19. The same architectures are used for dialectal Arabic segmentation BIBREF20.
The evolution of German compound splitters is more or less similar to that of Chinese word segmentation systems. The studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated by neural distributional semantic models. However, German compound splitting is rarely seen as a sequence modeling task.
The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As shown by BIBREF12, hashtag segmentation for the TREC microblog track 2011 BIBREF21 improves the quality of information retrieval, while BIBREF0 show that hashtag segmentation improves linking of entities extracted from tweets to a knowledge base. Both BIBREF12 and BIBREF0 use a Viterbi-like algorithm for hashtag segmentation: all possible segmentations of a hashtag are scored using a scoring function:

$$Score(w_1, \ldots , w_k) = \prod _{i=1}^{k} P_{Unigram}(w_i),$$

where $P_{Unigram}$ are probabilities computed according to a unigram model based on a large enough corpus or any N-gram service.
Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words.
BIBREF22 attempt to split camel-cased hashtags using a rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications.
To our knowledge there has been little work done for word or hashtag segmentation in Russian.
Related Work ::: Active Learning in NLP
Active learning is a machine learning technique which allows efficient use of the available training data. First, an initial model is trained on a very small amount of data and then applied to a large unlabeled set. The model then chooses a few of the most difficult examples and asks an external knowledge source for the desired labels. Upon receiving these labels, the model is updated and retrained on the new training set. There might be a few rounds of label querying and model updating. To use an active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability.
Active learning is widely used in NLP applications when there is little annotated data while the amount of unlabeled data is abundant. Having been used extensively for text classification with traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less known to be used with deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9:

$$\max _{y_1, \ldots , y_n} \frac{1}{n} \sum _{i=1}^{n} \log p(y_i | y_1, \ldots , y_{i-1}, s).$$
Related Work ::: Training on synthetic data
The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation.
Conclusions
In this paper we approach the problem of hashtag segmentation using char-RNNs. We treat hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach, in English and in Russian, without any language-specific settings. We compare the char-RNN to traditional probabilistic algorithms. To interpret the results we use a few visualization techniques and an active learning strategy to evaluate the complexity of the training data, since we use synthetically generated hashtags for training.
The results show that:
When approached at the character level, the hashtag segmentation problem can be solved using a relatively small and simple recurrent neural network model without the use of any external corpora or vocabularies. Such a char-RNN not only significantly outperforms traditional frequency-based language models, but can also be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation and preprocessing.
In languages with high inflection (such as Russian) the char-RNN achieves higher results than in languages with little inflection (such as English), due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (e.g. the adjective endings “ый”, “ая”, “ое” or the verb endings “ать”, “еть” in Russian).
The amount of generated synthetic training data can be limited by using active learning techniques, which allow selecting a sufficient training subset without any loss of quality.
Acknowledgements
The paper was prepared within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project '5-100'. | No |
a61732774faf30bab15bf944b2360ec4710870c1 | a61732774faf30bab15bf944b2360ec4710870c1_0 | Q: What are the predefined morpho-syntactic patterns used to filter the training data?
Text: Introduction
A hashtag is a form of metadata labeling used in various social networks to help users navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source of features for downstream opinion mining and social network analysis, since they essentially serve as keyphrases for a post in social media. By segmenting hashtags into separate words we may apply regular techniques to process them. The problem of hashtag segmentation resembles another problem, namely word segmentation.
The problem of word segmentation is widely studied in languages like Chinese, which lacks whitespace to separate words, or in German, to split compound words. In languages like English or Russian, where compounds are not as frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags.
Formally the problem is stated as follows: given a string of $n$ characters $s = s_1 \ldots s_n$, we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc.). The main challenge of this problem is that the segmentation might be ambiguous. For example, the string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity, more processing is required, such as POS-tagging or estimation of the frequencies of all hashtag constituents or of their co-occurrence frequency. The frequencies can be estimated on a large corpus, such as the BNC, COCA, or Wikipedia. However, when working with noisy user-generated data, such as texts or hashtags from social networks, the problem of unknown (out-of-vocabulary) words arises. In language modeling this problem is addressed with smoothing, such as Laplacian smoothing or Kneser-Ney smoothing, or with additional heuristics that extend the dictionary with word-like sequences of characters. Unlike in language modelling, in hashtag segmentation frequency estimation is not the only source for defining word boundaries: candidate substrings can also be evaluated according to their length BIBREF0.
Several research groups have shown that introducing character-level information into models helps to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5, and machine translation BIBREF6, BIBREF7. A character-level model either treats the text as a sequence of characters without any tokenization or incorporates character-level information into word-level information. Character-level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model can infer the POS tag or NE class of an unknown word.
Following this intuition, we use a character-level model for hashtag segmentation. Our main motivation is the following: if a character-level model is able to capture word ending patterns, it should also be able to capture word boundary patterns. We apply a character-level model, specifically a recurrent neural network, referred to further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained on synthetic data, which was generated independently from texts collected from social networks in English and in Russian; we generate the synthetic training data by extracting frequent $N$-grams and removing whitespace, while the test data is annotated manually. Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights learned by the char-RNN. We address the following research questions and claim our respective contributions:
We show that our char-RNN model outperforms traditional unigram and bigram language models that make extensive use of external sources BIBREF8, BIBREF0.
What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns.
As collecting a sufficient amount of annotated training data is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high-quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags.
A potentially unlimited volume of our synthetic training dataset raises yet another question: could an informative training subset be selected? To this end, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset that is sufficient to obtain high performance.
Neural Model for Hashtag Segmentation ::: Sequence Labeling Approach
We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $: (1) for the end of a word, and (0) otherwise (Tables TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*, \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$
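As an illustration of this labeling scheme, the gold labels for a hashtag can be derived from its segmented form as sketched below. This is our own illustrative code; in particular, labeling the final character of the hashtag with 1 (it ends a word) is a convention we assume here.

```python
def char_labels(words):
    """Label every character of ''.join(words): 1 if it ends a word, 0 otherwise."""
    labels = []
    for word in words:
        labels.extend([0] * (len(word) - 1) + [1])
    return labels

# "photo of the day" -> #photooftheday
print(char_labels(["photo", "of", "the", "day"]))
# [0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1]
```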
The neural model for hashtag segmentation consists of three layers.
The embedding layer is used to compute the distributed representation of input characters. Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the lookup table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the set of unique characters.
The feature layer is used to process the input. We use a bidirectional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are the default Keras LSTM units, as introduced by Hochreiter and Schmidhuber.
The inference layer is used to predict the labels of each character. We use a single dense layer for inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $.
Each character is assigned the most probable label.
The parameters of the char-RNN are the following:
Embedding layer = 50 input dimensions;
Feature layer = 64 bidirectional LSTM units;
Inference layer = 2 output neurons with a softmax activation function, applied to each of the 64 LSTM outputs.
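A minimal Keras sketch of the architecture with the parameters listed above is given below. This is our own reconstruction, not the authors' code: the vocabulary size, the padded sequence length, the optimizer, and the batch size are assumptions that are not specified in the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 120   # assumption: number of unique characters plus padding
MAX_LEN = 50       # assumption: hashtags padded/truncated to a fixed length

model = keras.Sequential([
    keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 50, mask_zero=True,
                     name="char_embedding"),                       # embedding layer
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # feature layer
    layers.TimeDistributed(layers.Dense(2, activation="softmax")), # inference layer
])
model.compile(optimizer="adam",  # assumption: the optimizer is not given in the text
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training for the 5 epochs reported in the experiments (batch size is an assumption):
# model.fit(X_train, y_train, epochs=5, batch_size=32)
```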
Dataset
In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN.
Dataset ::: Russian dataset
To our knowledge there is no available dataset for hashtag segmentation in Russian, so we needed to create our own dataset. Our approach to dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns, while the test dataset consists of real hashtags collected from vk.com (a Russian social network) and segmented manually.
We followed the same strategy to create an English language dataset.
Dataset ::: Russian dataset ::: Training Dataset Generation
We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example:
remove all whitespace: wordwordworddigits
Examples: ЁлкаВЗазеркалье, нескольколетназад
replace all whitespace with an underscore: word_word_digits
Examples: увд_юга_столицы
remove some whitespace and replace other spaces with an underscore: word_worddigits.
Examples: ищусвоегогероя_уфпс
A word here might be in lower case, upper case or capitalized, or it might be an abbreviation. There might be up to four digits.
In general, we introduced 11 types of hashtags, covering both simply constructed hashtags and more complex ones. Here are a couple of examples (a code sketch of the generation procedure follows the examples below):
The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter.
Examples: word_2017, NASA_2017, word_word
Two or three words, which are separated by an underscore.
Examples: Word_Word, word_word_word
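A sketch of how such synthetic hashtags can be generated from frequent $n$-grams according to the merging rules above is given below. This is our own illustration: the choice of rule per $n$-gram is randomized here, and the input $n$-grams are placeholders; it also assumes $n$-grams of at least two tokens.

```python
import random

def merge_no_space(tokens):
    return "".join(tokens)            # wordwordworddigits

def merge_underscore(tokens):
    return "_".join(tokens)           # word_word_digits

def merge_mixed(tokens):
    # remove some whitespace and replace the rest with an underscore
    cut = random.randint(1, len(tokens) - 1)
    return "".join(tokens[:cut]) + "_" + "".join(tokens[cut:])

RULES = [merge_no_space, merge_underscore, merge_mixed]

def make_hashtag(ngram):
    """Return a (synthetic hashtag, gold segmentation) pair for a frequent n-gram."""
    tokens = ngram.split()
    rule = random.choice(RULES)
    return rule(tokens), " ".join(tokens)

# e.g. make_hashtag("ищу своего героя уфпс") may return
# ("ищусвоегогероя_уфпс", "ищу своего героя уфпс")
```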
Dataset ::: Russian dataset ::: Test Dataset Annotation
We manually segmented the 2K most frequent hashtags, extracted from the same collection of scraped texts.
The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing.
Dataset ::: English dataset
We used the dataset released by BIBREF0. This dataset consists of:
a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian;
a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations.
The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing.
Active Learning
We followed the strategy for active learning described in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags randomly selected from the training dataset. Next we test the model on the remainder of the training dataset and select 1k hashtags according to the current model’s uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtags is already known. In BIBREF9 three uncertainty measures are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note that here we do not check if the predictions of the model are correct: we are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details.
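A schematic version of one selection round is sketched below. This is our own illustration, not the authors' implementation: `model`, `X_pool` and `y_pool` are placeholders, the MNLP score uses the greedy per-character maximum as an approximation of the most likely tag sequence, and padding positions are included in the average for simplicity.

```python
import numpy as np

def mnlp_scores(model, X_pool):
    """Maximum normalized log-probability of the (greedy) best tag sequence per hashtag."""
    probs = model.predict(X_pool)              # shape: (n_samples, max_len, 2)
    best = probs.max(axis=-1)                  # probability of the chosen label per character
    return np.log(best + 1e-12).mean(axis=-1)  # normalize by sequence length

def active_learning_round(model, X_train, y_train, X_pool, y_pool, k=1000):
    scores = mnlp_scores(model, X_pool)
    hardest = np.argsort(scores)[:k]           # least confident examples
    X_train = np.concatenate([X_train, X_pool[hardest]])
    y_train = np.concatenate([y_train, y_pool[hardest]])
    keep = np.setdiff1d(np.arange(len(X_pool)), hardest)
    model.fit(X_train, y_train, epochs=5, verbose=0)  # retrain on the enlarged training set
    return model, X_train, y_train, X_pool[keep], y_pool[keep]
```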
Experiments ::: Baseline
As a baseline, we consider the BIBREF0 system architecture, a state-of-the-art approach. Unfortunately, this approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires a twofold input: a hashtag and a corresponding tweet or a text from any other social medium, which is absent in our task setting due to the synthetic nature of the training dataset.
For this reason, as a baseline algorithm for the English dataset we refer to the results from BIBREF0, and for the Russian dataset we used the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word given the word’s context, i.e. the preceding word, as in the following equation:
$P(w_1, \ldots , w_n) = \prod _{i=1}^{n} P(w_i \mid w_{i-1}),$
where the conditional probability is estimated from bigram and unigram counts as $P(w_i \mid w_{i-1}) = \frac{f(w_{i-1}, w_i)}{f(w_{i-1})}$.
In case there is no such pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of the word $w_i$ is obtained from a smoothed unigram model:
$P(w_i) = \frac{f(w_{i}) + \alpha }{\sum _{w \in V} f(w) + \alpha |V|},$
where $V$ is the vocabulary, $f(w_{i})$ is the frequency of the word $w_{i}$, and $\alpha = 1$.
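The back-off computation described by these formulas can be written down directly, as in the sketch below. This is our own reconstruction, not the code of BIBREF8: the count tables `bigram_freq` and `unigram_freq`, the vocabulary set `vocab`, and the start-of-sequence token are assumptions, and the exact estimator used in BIBREF8 may differ in details.

```python
def word_probability(prev_word, word, bigram_freq, unigram_freq, vocab, alpha=1.0):
    """P(word | prev_word) with backoff to a Laplace-smoothed unigram estimate."""
    if (prev_word, word) in bigram_freq and unigram_freq.get(prev_word, 0) > 0:
        return bigram_freq[(prev_word, word)] / unigram_freq[prev_word]
    total = sum(unigram_freq.values())
    return (unigram_freq.get(word, 0) + alpha) / (total + alpha * len(vocab))

def sequence_probability(words, bigram_freq, unigram_freq, vocab):
    prob = 1.0
    # "<s>" is an assumed start token for the first word's context
    for prev_word, word in zip(["<s>"] + words[:-1], words):
        prob *= word_probability(prev_word, word, bigram_freq, unigram_freq, vocab)
    return prob
```

The candidate segmentation with the highest sequence probability is then chosen, using the same kind of search over split points as sketched earlier for the unigram model.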
In Table TABREF30 we present three baseline results: the LM of BIBREF8 for the Russian and English datasets, and the context-based LM of BIBREF0 for the English dataset only. We treat a segmentation as correct if the predicted and target sequences are the same.
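This exact-match criterion is straightforward to implement; the following small helper is our own illustration of it:

```python
def exact_match_accuracy(predicted, gold):
    """Share of hashtags whose predicted label sequence equals the gold one."""
    correct = sum(1 for p, g in zip(predicted, gold) if list(p) == list(g))
    return correct / len(gold)
```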
Experiments ::: Neural Model
In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we examined three training sets with different numbers of hashtags. In the case of Russian, the more data we use during training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32): due to English's lower morphological diversity and complexity, the model starts to overfit on larger training sets. The training showed that the model mostly makes wrong segmentation predictions on hashtags of complex types, such as “wordword_worddigits”.
Our results outperform all chosen baselines for both the Russian and English datasets. Note that we have two baselines for the English dataset: one is purely frequency-based, while the other is cited from BIBREF0, where external resources are heavily used. We show that, using a significantly smaller amount of training data, we achieve a boost in quality by switching from statistical word-level language models to a char-RNN. As expected, the results on the Russian dataset are higher than on the English dataset due to the higher degree of inflection in Russian as opposed to English.
Experiments ::: Active Learning
In order to evaluate the efficiency of deep learning combined with active learning, we ran the experiments for both languages. As for the datasets, we took the ones on which the highest accuracy was obtained (15k for Russian and 10k for English).
The learning process consists of multiple rounds, which are repeated until the training pool is exhausted. At the beginning we train the model on 1k randomly selected hashtags and predict the probability of segmentation for the remaining hashtags. Then we sort the remaining hashtags in ascending order according to the probability assigned by the model and pick the 1k hashtags which the model is least confident about. Finally, we add these hashtags with the least probable sequences of tags to the training data and continue training the model. This pipeline is repeated till there are no samples left.
In comparison to our initial experiments, the application of active learning demonstrates impressive results: the amount of labeled training data can be drastically reduced. To be more specific, in both cases the size of the training set can be reduced by half without any decline in accuracy (see Figures 2 and 3).
Active learning selects a more informative set of examples, in contrast to supervised learning, which is trained on a set of randomly chosen examples. We decided to analyze the updated version of the training data and see whether the number of morphologically complex types of hashtags is higher than that of the simple ones. We were able to divide hashtags into complex and simple ones because the model is trained on synthetic data and there is a finite number of templates by which each hashtag can be generated.
To better understand the contribution of the uncertainty sampling approach, we plot the distribution of different types of hashtags in the new training datasets for both languages, Russian and English (see Figures 4 and 5). According to the types of hashtags identified in real data, it can be seen from the plots that in both cases the algorithm added more morphologically complex hashtags to the training data – types 3, 6 and 7. These types mostly consist of hashtags with two or three words in lower case without an underscore.
Examples of featured types:
wordword_2017
wordword, word2017word
wordwordword, wordword2017word
Experiments ::: Visualization
In order to see whether embeddings of similar characters, in terms of string segmentation, appear near each other in the resulting 50-dimensional embedding space, we applied a dimensionality reduction technique, SVD, to the character embeddings in order to plot them in 2D space. For both languages meaningful and interpretable clusters can be extracted: capital letters, letters in lower case, digits and the underscore, as shown below.
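A sketch of this visualization step is given below. This is our own code, not the authors': it assumes the Keras model sketched earlier (with the embedding layer named "char_embedding"), and `index_to_char` is an assumed index-to-character mapping built during preprocessing.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD

embeddings = model.get_layer("char_embedding").get_weights()[0]  # (vocab_size, 50)
coords = TruncatedSVD(n_components=2).fit_transform(embeddings)

# index_to_char: assumed dict mapping embedding row index -> character
chars = [index_to_char.get(i, "") for i in range(len(embeddings))]
plt.figure(figsize=(8, 8))
plt.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), c in zip(coords, chars):
    plt.annotate(c, (x, y), fontsize=8)
plt.title("Character embeddings projected to 2D with SVD")
plt.show()
```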
Related Work
The problem of word segmentation has received much attention in Chinese and German NLP, for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. The recent trend of deep learning as a major approach to NLP tasks in general and sequence labeling in particular has resulted in various RNN-based and CNN-based models for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14.
| Unanswerable |
994ac7aa662d16ea64b86510fcf9efa13d17b478 | 994ac7aa662d16ea64b86510fcf9efa13d17b478_0 | Q: Is the RNN model evaluated against any baseline?
| Yes |
9282cf80265a914a13053ab23b77d1a8ed71db1b | 9282cf80265a914a13053ab23b77d1a8ed71db1b_0 | Q: Which languages are used in the paper?
Text: Introduction
A hashtag is a form of metadata labeling used in various social networks to help the users to navigate through the content. For example, one of the most popular hashtags on Instagram is "#photooftheday" [photo of the day]. Hashtags are written without any delimiters, although some users use an underscore or camel-casing to separate words. Hashtags themselves may be a great source for features for following opinion mining and social network analysis. Basically hashtags serve as keyphrases for a post in social media. By segmenting the hashtags into separate words we may use regular techniques to process them. The problem of hashtag segmentation resembles of another problem, namely word segmentation.
The problem of word segmentation is widely studied in languages like Chinese, since it lacks whitespaces to separate words, or in German to split compound words. In languages like English or Russian, where compounds are not that frequent as in German and where whitespace delimiters are regularly used, the problem of word segmentation arises mainly when working with hashtags.
Formally the problem is stated as follows: given a string of $n$ character $s = s_1 \ldots s_n$ we need to define the boundaries of the substrings $s_{i:j}, i < j$, so that each substring is meaningful (i.e. is a regular word, named entity, abbreviation, number, etc). The main challenge of this problem is that the segmentation might be ambiguous. For example, a string “somethingsunclear” might be segmented as “something sun clear” or “somethings unclear”. To deal with the ambiguity more processing is required, such as POS-tagging, estimation of frequencies of all hashtag constituencies or their co-occurence frequency. The frequencies can be estimated on a large corpus, such as BNC , COCA , Wikipedia. However when working with noisy user generated data, such as texts or hashtags from social networks, the problem of unknown words (or out of vocabulary words) arises. In language modeling this problem is solved by using smoothing, such as Laplacian smoothing or Knesser-Ney smoothing. Otherwise additional heuristics can be used to extend the dictionary with word-like sequences of characters. Unlike language modelling, in hashtag segmentation frequency estimation is not only source for defining word boundaries. Otherwise candidate substrings can be evaluated according to length BIBREF0.
Several research groups have shown that introducing character level into models help to deal with unknown words in various NLP tasks, such as text classification BIBREF1, named entity recognition BIBREF2, POS-tagging BIBREF3, dependency parsing BIBREF4, word tokenization and sentence segmentation BIBREF5 or machine translation BIBREF6, BIBREF7. The character level model is a model which either treats the text as a sequence of characters without any tokenization or incorporates character level information into word level information. Character level models are able to capture morphological patterns, such as prefixes and suffixes, so that the model is able to define the POS tag or NE class of an unknown word.
Following this intuition, we use a character level model for hashtag segmentation. Our main motivation is the following: if the character level model is able to capture word ending patterns, it should also be able to capture the word boundary patterns. We apply a character level model, specifically, a recurrent neural network, referred further as char-RNN, to the task of hashtag segmentation. The char-RNN is trained and tested on the synthetic data, which was generated from texts, collected from social networks in English and Russian, independently. We generate synthetic data for training by extracting frequent $N$-grams and removing whitespaces. The test data is annotated manually . Since the problem statement is very basic, we use additional techniques, such as active learning, character embeddings and RNN hidden state visualization, to interpret the weights, learned by char-RNN. We address the following research questions and claim our respective contributions:
We show that our char-RNN model outperforms the traditional unigram or bigram language models with extensive use of external sources BIBREF8, BIBREF0.
What is the impact of high inflection in languages such as Russian on the performance of character-level modelling as opposed to languages with little inflection such as English? We claim that character-level models offer benefits for processing highly inflected languages by capturing the rich variety of word boundary patterns.
As getting sufficient amount of annotated training collection is labor-intensive and error-prone, a natural question would be: can we avoid annotating real-world data altogether and still obtain high quality hashtag segmentations? We approach this problem by using morpho-syntactic patterns to generate synthetic hashtags.
A potentially unlimited volume of our synthetic training dataset raises yet another question of whether an informative training subset could be selected. To this extent, we apply an active learning-based strategy to subset selection and identify a small portion of the original synthetic training dataset, necessary to obtain a high performance.
Neural Model for Hashtag Segmentation ::: Sequence Labeling Approach
We treat hashtag segmentation as a sequence labeling task. Each character is labeled with one of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $, (1) for the end of a word, and (0) otherwise (Table TABREF9 and TABREF9). Given a string $s = {s_1, \ldots , s_n}$ of characters, the task is to find the labels $Y^* = {y_1^*. \ldots , y_n^*}$, such that $ Y^* = \arg \max _{Y \in \mathcal {L} ^n} p(Y | s).$
The neural model for hashtag segmentation consists of three layers.
The embedding layer is used to compute the distributed representation of input characters. Each character $c_i$ is represented with an embedding vector $e_i \in \mathbb {R}^{d_e}$, where $d_e$ is the size of the character embedding. $E$ is the look up table of size $|V| \times d_e$, where $V$ is the vocabulary, i.e. the number of unique characters.
The feature layer is used to process the input. We use a bi-directional recurrent layer with LSTM units to process the input in forward and backward directions. The LSTM units we use are default keras LSTM units as introduced by Hochreiter.
The inference layer is used to predict the labels of each character. We use a single dense layer as f or inference and $softmax$ to predict the probabilities of the labels $\mathcal {L} = \lbrace 0, 1\rbrace $.
Each character is assigned with the most probable label.
The parameters of the char-RNN are the following:
Embedding layer = 50 input dimensions;
Feature layer = 64 bidirectional LSTM units;
Inference layer = 2 output neurons with softmax activation function mapped to each of 64 outputs.
Dataset
In this section we describe the datasets we used for hashtag segmentation. We experimented with Russian and English datasets to compare the performance of the char-RNN.
Dataset ::: Russian dataset
To our knowledge there is no available dataset for hashtag segmentation in Russian, so we faced the need to create our own dataset. Our approach to the dataset creation was twofold: the training data was created from social network texts by selecting frequent $n$-grams and generating hashtags following some hashtag patterns. The test dataset consists of real hashtags collected from vk.com (a Russian social network) and were segmented manually.
We followed the same strategy to create an English language dataset.
Dataset ::: Russian dataset ::: Training Dataset Generation
We scraped texts from several pages about civil services from vk.com. Next we extracted frequent $n$-grams that do not contain stopwords and consist of words and digits in various combinations (such as word + 4 digits + word or word + word + 8 digits). We used several rules to merge these $n$-grams so that they resemble real hashtags, for example:
remove all whitespace: wordwordworddigits
Examples: ЁлкаВЗазеркалье, нескольколетназад
replace all whitespace with an underscore: word_word_digits
Examples: увд_юга_столицы
remove some whitespace and replace other spaces with an underscore: word_worddigits.
Examples: ищусвоегогероя_уфпс
A word here might be a word in lower case, upper case or capitalized or an abbreviation. There might be up to four digits.
In general, we introduced 11 types of hashtags, which contain simply constructed hashtags as well as the complex ones. Here are a couple of examples:
The hashtag consists of two parts: the word/abbreviation in the first part and the number or word in the second. The underscore is a delimiter.
Examples: word_2017, NASA_2017, word_word
Two or three words, which are separated by an underscore.
Examples: Word_Word, word_word_word
Dataset ::: Russian dataset ::: Test Dataset Annotation
We segmented manually 2K the most frequent hashtags, extracted from the same collection of the scraped texts.
The resulting size of the Russian dataset is 15k hashtags for training and 2k hashtags for testing.
Dataset ::: English dataset
We used the dataset released by BIBREF0. This dataset consists of:
a collection of tweets, which we used to generate the synthetic training hashtags according to the same rules as for Russian;
a collection of annotated and separated hashtags, which we used as a testing set. From this test set we excluded ambiguous hashtags, annotated with several possible segmentations.
The resulting size of the English dataset is 15k hashtags for training and 1k hashtags for testing.
Active Learning
We followed the strategy for active learning as in BIBREF9. The training procedure consists of multiple rounds of training and testing of the model. We start by training the model on 1k hashtags randomly selected from the training dataset. Next we test the model on the remainder of the training dataset and select 1k hashtags according to the current model's uncertainty in its prediction of the segmentation. These hashtags are not manually relabelled, since a) they belong to the synthetically generated training dataset and b) the correct labeling for these hashtags is already known. In BIBREF9 three uncertainty measures are presented, from which we selected the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags. The model is then retrained on the hashtags it is uncertain about. Note that here we do not check whether the predictions of the model are correct. We are more interested in training the model on hard examples than in evaluating the quality of intermediate results. We refer the reader to BIBREF9 for more technical details.
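A sketch of one selection round with MNLP-based uncertainty. The helper `predict_log_probs` is hypothetical; it stands in for a model call returning the per-character log-probabilities of the most likely label sequence for a hashtag.

```python
# Sketch of MNLP-based uncertainty sampling (helper `predict_log_probs` is a hypothetical stand-in).
import numpy as np

def mnlp(log_probs):
    """Maximum normalized log-probability: length-normalized score of the best label sequence."""
    return float(np.sum(log_probs)) / max(len(log_probs), 1)

def select_uncertain(hashtags, predict_log_probs, k=1000):
    scores = [mnlp(predict_log_probs(h)) for h in hashtags]
    order = np.argsort(scores)              # lowest normalized log-probability = most uncertain
    return [hashtags[i] for i in order[:k]]
```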
Experiments ::: Baseline
As a baseline, we consider the state-of-the-art system architecture of BIBREF0. Unfortunately, their approach is not straightforwardly applicable to our synthetic Russian dataset, because it requires a twofold input: a hashtag and a corresponding tweet (or a text from any other social medium), which is absent in our task setting due to the synthetic nature of the training dataset.
For this reason, as a baseline for the English dataset we refer to the results from BIBREF0, while for the Russian dataset we used the probabilistic language model described in BIBREF8. The probability of a sequence of words is the product of the probabilities of each word given the word's context, i.e. the preceding word. In case there is no such pair of words $(w_{i-1}, w_i)$ in the set of bigrams, the probability of word $w_i$ is obtained from a unigram model smoothed with a constant $\alpha$, where $V$ is the vocabulary, $f(w_{i})$ is the frequency of word $w_{i}$, and $\alpha = 1$.
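A minimal sketch of such a bigram language model with a smoothed unigram fallback. The exact smoothing formula and the way segmentation candidates are scored are assumptions, not taken from BIBREF8.

```python
# Sketch of a word-level bigram LM with an additively smoothed unigram fallback (assumed form).
from collections import Counter

class BigramLM:
    def __init__(self, sentences, alpha=1.0):
        self.alpha = alpha
        self.unigrams = Counter(w for s in sentences for w in s)
        self.bigrams = Counter((a, b) for s in sentences for a, b in zip(s, s[1:]))
        self.total = sum(self.unigrams.values())

    def prob(self, prev, word):
        if (prev, word) in self.bigrams:
            return self.bigrams[(prev, word)] / self.unigrams[prev]
        # fallback: unigram probability with additive smoothing (alpha = 1)
        return (self.unigrams[word] + self.alpha) / (self.total + self.alpha * len(self.unigrams))

    def score(self, words):
        """Score a candidate segmentation as a product of bigram probabilities."""
        p = 1.0
        for prev, word in zip(words, words[1:]):
            p *= self.prob(prev, word)
        return p

lm = BigramLM([["увд", "юга", "столицы"], ["несколько", "лет", "назад"]])
print(lm.score(["несколько", "лет", "назад"]) > lm.score(["нес", "колько", "лет", "назад"]))
```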
In Table TABREF30 we present three baseline results: the LM of BIBREF8 for the Russian and English datasets, and the context-based LM of BIBREF0 for the English dataset only. We treat a segmentation as correct if the predicted and target sequences are identical.
Experiments ::: Neural Model
In our experiments we used 5 epochs to train the char-RNN with LSTM units. For each language we considered three datasets with different numbers of hashtags. In the case of Russian, the more data we use during training, the higher the accuracy. As for English, the highest accuracy score was achieved on a set of 10k hashtags (Table TABREF32): due to English's lower morphological diversity and complexity, the model starts to overfit on larger training sets. Training showed that the model mostly makes wrong segmentation predictions on hashtags of complex types, such as “wordword_worddigits”.
Our results outperform all chosen baselines on both the Russian and English datasets. Note that we have two baselines for the English dataset: one is purely frequency-based, the other is cited from BIBREF0, where external resources are heavily used. We show that, using a significantly smaller amount of training data, we achieve a boost in quality by switching from statistical word language models to a char-RNN. As expected, the results on the Russian dataset are higher than on the English dataset due to the higher degree of inflection in Russian as opposed to English.
Experiments ::: Active Learning
In order to evaluate the efficiency of combining deep learning with active learning, we ran the experiments for both languages. As for the datasets, we took the ones on which the highest accuracy was obtained (15k for Russian and 10k for English).
The learning process consists of multiple rounds, which are repeated until the training data is exhausted. At the beginning we train the model on 1k randomly selected hashtags and predict the probability of segmentation for the remaining hashtags. Then we sort the remaining hashtags in ascending order according to the probability assigned by the model and pick the 1k hashtags the model is least confident about. Finally, we add these hashtags with the least probable sequences of tags to the training data and continue training the model. This pipeline is repeated until there are no samples left.
Compared to our initial experiments, the application of active learning demonstrates impressive results: the amount of labeled training data can be drastically reduced. More specifically, in both cases the size of the training set can be halved without any decline in accuracy (see Figures 2 and 3).
Active learning selects a more informative set of examples, in contrast to supervised learning, which is trained on a set of randomly chosen examples. We decided to analyze the updated version of the training data and see whether the number of morphologically complex hashtag types is higher than that of the simple ones. We were able to divide hashtags into complex and simple ones because the model is trained on synthetic data and there is a finite number of templates by which each hashtag can be generated.
To better understand the contribution of the uncertainty sampling approach, we plot the distribution of different types of hashtags in the new training datasets for both languages, Russian and English (see Figures 4 and 5). According to the types of hashtags identified in real data, it can be seen from the plots that in both cases the algorithm added more morphologically complex hashtags to the training data – types 3, 6 and 7. These types mostly consist of hashtags with two or three words in lower case without an underscore.
Examples of featured types:
wordword_2017
wordword, word2017word
wordwordword, wordword2017word
Experiments ::: Visualization
In order to see whether embeddings of similar characters, in terms of string segmentation, appear near each other in the resulting 50-dimensional embedding space, we applied a dimensionality reduction technique, SVD, to the character embeddings and plotted them in 2D space. For both languages meaningful and interpretable clusters can be extracted: capital letters, lower-case letters, digits and the underscore, as shown below.
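A sketch of this projection step. The random `emb` matrix stands in for the trained 50-dimensional character embeddings (e.g. the weights of the embedding layer above), and the character set is an assumption.

```python
# Sketch of projecting character embeddings to 2D with truncated SVD and plotting them.
import string
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import TruncatedSVD

chars = list(string.ascii_letters + string.digits + "_")   # assumed character inventory
emb = np.random.randn(len(chars), 50)                      # stand-in for the trained embedding matrix

coords = TruncatedSVD(n_components=2).fit_transform(emb)
plt.figure(figsize=(8, 8))
for (x, y), ch in zip(coords, chars):
    plt.annotate(ch, (x, y))
plt.xlim(coords[:, 0].min() - 1, coords[:, 0].max() + 1)
plt.ylim(coords[:, 1].min() - 1, coords[:, 1].max() + 1)
plt.title("Character embeddings projected to 2D with SVD")
plt.show()
```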
Related Work
The problem of word segmentation has received much attention in Chinese and German NLP for word segmentation and compound splitting BIBREF10, respectively. The major techniques for word segmentation exploit string matching algorithms BIBREF11, language models BIBREF12, BIBREF0 and sequence labeling methods BIBREF10. The recent trend of deep learning as a major approach to any NLP task in general and to sequence labeling in particular has resulted in various RNN-based and CNN-based models for Chinese word segmentation BIBREF10, BIBREF13, BIBREF14.
Since BIBREF10, Chinese word segmentation has been addressed as a character labeling task: each character of the input sequence is labeled with one of the four labels $\mathcal {L} = \lbrace B, M, E, S\rbrace $, which stand for a character at the Beginning, in the Middle, or at the End of a word, or a Single-character word. BIBREF10 uses a maximum entropy tagger to tag each character independently. This approach was extended in BIBREF15 to a sequence modeling task, in which linear-chain conditional random fields were used to achieve state-of-the-art results. Neural approaches to Chinese segmentation mainly use various architectures of character-level recurrent neural networks BIBREF16, BIBREF17, BIBREF18 and very deep convolutional networks BIBREF19. The same architectures are used for dialectal Arabic segmentation BIBREF20.
The evolution of German compound splitters is more or less similar to that of Chinese word segmentation systems. Studies of German compound splitting started with corpus- and frequency-based approaches BIBREF13, BIBREF14 and are now dominated by neural distributional semantic models. However, German compound splitting is rarely treated as a sequence modeling task.
The problem of hashtag segmentation, analysis and usage in English has been approached by several research groups. As shown by BIBREF12, hashtag segmentation for the TREC microblog track 2011 BIBREF21 improves the quality of information retrieval, while BIBREF0 shows that hashtag segmentation improves the linking of entities extracted from tweets to a knowledge base. Both BIBREF12 and BIBREF0 use a Viterbi-like algorithm for hashtag segmentation: all possible segmentations of a hashtag are scored using a scoring function:
where $P_{Unigram}$ are probabilities, computed according to the unigram model based on a large enough corpus or any N-gram service.
Following the idea of scoring segmentation candidates, BIBREF11 introduces other scoring functions, which include a bigram model (2GM) and a Maximum Unknown Matching (MUM), which is adjustable to unseen words.
BIBREF22 attempt to split camel-cased hashtags using a rule-based approach and POS-tagging for further semantic classification. WordSegment has been used for sentiment analysis BIBREF23, BIBREF24 and other applications.
To our knowledge there has been little work done for word or hashtag segmentation in Russian.
Related Work ::: Active Learning in NLP
Active learning is a machine learning technique which allows efficient use of the available training data. It presumes that an initial model is first trained on a very small amount of data and then tested on a large unlabeled set. The model then chooses a few of the most difficult examples and asks an external knowledge source for the desired labels. Upon receiving these labels, the model is updated and retrained on the new training set. There might be a few rounds of label querying and model updating. To use an active learning strategy, we need a definition of what a difficult example is and how to score its difficulty. One of the most common scoring approaches is entropy-based uncertainty sampling, which selects the examples with the lowest prediction probability.
Active learning is widely used in NLP applications when there is little annotated data while the amount of unlabeled data is abundant. While mostly used for text classification with traditional machine learning classifiers BIBREF25, BIBREF26, active learning is less commonly used with deep learning sequence classifiers. Recent works report on scoring word embeddings that are likely to be updated with the greatest magnitude BIBREF27 and on using the maximum normalized log-probability (MNLP) assigned by the model to the most likely sequence of tags BIBREF9.
Related Work ::: Training on synthetic data
The lack of training data is an issue for many NLP applications. There have been attempts to generate and use synthetic data for training question answering systems BIBREF28 and SQL2text systems BIBREF29. In BIBREF0 synthetic hashtags are generated by removing whitespace characters from frequent n-grams, while in BIBREF30 German compounds are synthesized for further machine translation.
Conclusions
In this paper we approach the problem of hashtag segmentation by using char-RNNs. We treat the problem of hashtag segmentation as a sequence labeling task, so that each symbol of a given string is labeled with 1 (there should be a whitespace after this symbol) or 0 (otherwise). We use two datasets to test this approach in English and in Russian without any language-specific settings. We compare char-RNN to traditional probabilistic algorithms. To interpret the results we use a few visualization techniques and the strategy of active learning to evaluate the complexity of training data, since we use synthetically generated hashtags for training.
The results show that:
When approached at the character level, the hashtag segmentation problem can be solved using a relatively small and simple recurrent neural network model without the use of any external corpora or vocabularies. Such a char-RNN not only significantly outperforms traditional frequency-based language models, but can also be trained on synthetic data generated according to morpho-syntactic patterns, without any manual annotation or preprocessing.
In languages with high inflection (such as Russian) the char-RNN achieves higher results than in languages with little inflection (such as English), due to the ability of the char-RNN to capture and memorize word boundary patterns, especially word ending patterns (i.e. adjective endings “ый”, “ая”, “ое” or verbal endings “ать”, “еть” in Russian).
The amount of generated synthetic training data can be limited by using active learning techniques, which allow selecting a sufficient training subset without any loss of quality.
Acknowledgements
The paper was prepared within the framework of the HSE University Basic Research Program and funded by the Russian Academic Excellence Project '5-100'. | English, Russian |
41bff17f7d7e899c03b051e20ef01f0ebc5c8bb1 | 41bff17f7d7e899c03b051e20ef01f0ebc5c8bb1_0 | Q: What metrics are used for evaluation?
Text: A Switching Dynamical System for Narrative Generation
In this section, we give a brief overview of Switching Dynamical systems and how they can be used to capture both a scaffold of the narrative as well as the narrative dynamics. We then describe in detail the components of our model and its relation to existing models.
A Switching Dynamical System for Narrative Generation ::: Narrative Dynamics in a Dynamical System
The specifics of the narrative (characters, setting, etc.) will differ between stories, but as BIBREF0 notes, the way they transition to the next point in the narrative (what we refer to as “narrative dynamics") is often shared. Let's say that, as is often done, we represent the `narrative specifics' at time step $i$ with a latent vector $Z_i$. A natural way to explicitly model how this state evolves over time, and which fits with the above observation, is as a Linear Dynamical System:
Where $A$ is a matrix, shared across all narratives, and $\Sigma $ is a noise term that takes into consideration idiosyncrasies different narratives will have. The fact that the shared transition matrix $A$ is linear means that narratives will have linearly analogous trajectories through time, despite having different details (comparable to stories with different settings but matching structures such as Ran/King Lear, Ulysses/Odyssey, etc). Of course, the fatal flaw of the model is that it assumes there exists only one transition matrix, and thus only one possible way to transition through a narrative!
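A numpy sketch of sampling a latent trajectory from a single linear dynamical system of this form; the latent dimension, transition matrix, and noise covariance are illustrative assumptions.

```python
# Sketch of sampling Z_1, ..., Z_5 from one shared set of linear dynamics (all numbers illustrative).
import numpy as np

d = 4                                   # latent dimension (assumption)
A = 0.9 * np.eye(d)                     # shared transition matrix (illustrative)
Sigma = 0.1 * np.eye(d)                 # noise covariance (illustrative)

Z = [np.zeros(d)]
for _ in range(4):                      # one latent state per sentence of a 5-sentence story
    noise = np.random.multivariate_normal(np.zeros(d), Sigma)
    Z.append(A @ Z[-1] + noise)
```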
A Switching Dynamical System for Narrative Generation ::: Narrative Scaffolds as Switching Variables
A more fitting model would thus be a Switching Linear Dynamical System BIBREF1, BIBREF2, BIBREF3. In an SLDS, we assume there exists a set of $K$ different sets of dynamics, $\lbrace (A_1, \Sigma _1),...(A_K,\Sigma _K)\rbrace $. At time step $i+1$, one of these sets of dynamics is used. The one used depends on the value of a discrete variable at time step $i+1$ called the switching variable, $S_{i+1} \in \lbrace 1,...K\rbrace $:
There is a switching variable $S_i$ associated with each time step. The switching variable value itself evolves over time by a prior Markov process, $P(S_{i+1} | S_{i})$. This top level chain of switching variables thus forms our narrative scaffold, indicating what transitions we must go through in the narrative, with the dynamics matrices indicating how they transition.
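The sketch below extends the previous one to the switching case: a Markov chain over the switching variables (the scaffold) picks which set of dynamics is used at each step. The number of states, the dynamics matrices, and the uniform transition prior are illustrative assumptions.

```python
# Sketch of sampling from a switching LDS: the scaffold S_1..S_N selects the dynamics at each step.
import numpy as np

d, K = 4, 3                                             # latent dim, number of switch states (e.g. sentiment tags)
A = [np.eye(d) * 0.9 + 0.1 * np.random.randn(d, d) for _ in range(K)]
Sigma = [0.1 * np.eye(d) for _ in range(K)]
P_switch = np.full((K, K), 1.0 / K)                     # Markov prior P(S_{i+1} | S_i), uniform here

rng = np.random.default_rng(0)
S, Z = [int(rng.integers(K))], [np.zeros(d)]
for _ in range(4):
    s = int(rng.choice(K, p=P_switch[S[-1]]))           # scaffold: draw the next switching variable
    noise = rng.multivariate_normal(np.zeros(d), Sigma[s])
    Z.append(A[s] @ Z[-1] + noise)                      # dynamics chosen by the switch
    S.append(s)
```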
A Switching Dynamical System for Narrative Generation ::: Narrative Scaffold - Emotional Trajectory
What the switching variables actually represent can be chosen by the user. Straightforward narrative scaffolds include event sequences BIBREF6, keywords BIBREF7, or latent template ids BIBREF8. More complex but potentially more informative scaffolds may be created using concepts such as story grammar non-terminals BIBREF9, BIBREF10, or character action taken throughout a story BIBREF11.
In our work, we use the sentiment trajectory of the narrative as the scaffold. That is, each $S_i$ for a sentence indicates the overall coarse sentiment of the sentence (Positive, Negative, or Neutral). Though simple, the overall sentiment trajectory of a narrative is important in defining the high level `shape' of a narrative often shared among different narratives BIBREF12, BIBREF13. Furthermore, sentiment trajectory has been shown to be fairly useful in story understanding tasks BIBREF14, BIBREF15. We discuss in the conclusion future directions for using different types of scaffolds.
A Switching Dynamical System for Narrative Generation ::: The Full Model
The final component of the model is a conditional language model that generates sentence $i$ conditioned on the current $Z_i$, and all previous sentences, $X_{:i}$. Generation continues until an <eos> is reached. This conditional language model may be parameterized as desired, but in this work, we parameterize it as an RNN neural network language model.
The graphical model for our SLDS is pictured in Figure FIGREF8. The model consists of three sets of variables: (1) Switching variables $S_1,...,S_N$, (2) Latent state variables $Z_1,...,Z_N$ capturing the details of the narrative at sentence $i$, (3) The sentences themselves $X_1,...X_N$, where each sentence $X_i$ has $n_i$ words, $x^i_1,...x^i_{n_i}$. The joint over all variables factorizes as below into the following components ($X_{:i}$ stands for all sentence before $X_i$):
❶ Narrative Scaffold Planner: The factor $P(S_i | S_{i-1})$ is a transition matrix, which we calculate via count-based statistics from the training data (a count-based estimation sketch follows the component list). It is fed in as prior knowledge and fixed.
❷ Narrative Dynamics Network: The factor $P(Z_i | Z_{i-1}, S_i)$ is determined as in a switching linear dynamical system, $Z_i = A_{S_i} Z_{i-1} + B_{S_i}\epsilon$ with $\epsilon \sim \mathcal {N}(0, I)$, which is equivalent to drawing $Z_i$ from a Normal distribution with mean $A_{S_i}Z_{i-1}$ and variance $B_{S_i}B_{S_i}^T$.
❸ Conditional Language model: The factor $P(X_i | Z_i, X_{:i})$ is parameterized by an RNN language model conditioned on the latent $Z_i$.
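As a concrete illustration of ❶, the sketch below estimates the scaffold transition matrix from training sentiment-tag sequences by counting; the tag names, the example sequence, and the additive smoothing are assumptions.

```python
# Sketch of count-based estimation of the scaffold transition matrix P(S_i | S_{i-1}).
import numpy as np

TAGS = ["negative", "neutral", "positive"]
idx = {t: i for i, t in enumerate(TAGS)}

def estimate_transitions(tag_sequences, smoothing=1.0):
    counts = np.full((len(TAGS), len(TAGS)), smoothing)        # additive smoothing (assumption)
    for seq in tag_sequences:
        for prev, curr in zip(seq, seq[1:]):
            counts[idx[prev], idx[curr]] += 1
    return counts / counts.sum(axis=1, keepdims=True)          # row k holds P(S_i | S_{i-1} = k)

print(estimate_transitions([["neutral", "negative", "negative", "positive", "positive"]]))
```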
Learning and Posterior Inference
Because the conditionals are parameterized by neural networks, we use amortized variational inference in a manner similar to Variational AutoEncoders BIBREF16, both to learn an approximate posterior $q(S, Z | X)$ and to learn the generative model parameters by maximizing a lower bound on the data likelihood (ELBO). We assume that the approximate posterior factorizes as $q(S, Z | X) = \prod _i q(S_i | \mathbf {X})\, q(Z_i | Z_{i-1}, S_i, X_{:i}, X_{i})$.
Like in VAEs, computing these individual factors is done through a parameterized function called the inference or recognition network whose parameters are trained jointly with the generative model. In our case there are two forms for the factors in our posterior: (1) The first form, $q(S_i | \textbf {X}) = q_{S_i}$ is parameterized by a classifier that takes in the set of sentences $\mathbf {X}$ and outputs a categorical distribution over the switching variables. (2) The second form, $q(Z_i| Z_{i-1}, S_i, X_{:i}, X_{i}) = q_{Z_i}$ is realized by functions $f_{\mu }(Z_{i-1}, S_i, X_{:i}, X_{i})$ and $f_\sigma (Z_{i-1}, S_i, X_{:i}, X_{i})$ that output the mean and variance, respectively, of a Gaussian over $Z_i$.
Borrowing terminology from VAEs, the approximate posterior (the factors given above) act as an `encoder', while the generative model from the previous section can be seen as the `decoder'. This type of training has been previously used in BIBREF17, BIBREF18, BIBREF19, BIBREF20, BIBREF21.
Learning and Posterior Inference ::: Lower bound formula & exact training algorithm
As mentioned previously, we optimize all parameters (including the variational factor functions) by optimizing a lower bound on the data likelihood. The model may be trained either with supervision labels for the switching states (in our case, sentiment labels) or without supervised labels.
If one is training without the sentiment labels, then the lower bound on the marginal likelihood (and thus our optimization objective) may be written as follows:
The derivation for this objective is identical to that found in BIBREF18, BIBREF19, and simply relies on using properties of iterated expectations. All expectations are estimated with Monte Carlo samples.
If training with the sentiment labels $S_1,...,S_N$, then the objective is similar (but without the sampling of the switching states), and is augmented with an additional supervision objective as done in BIBREF22:
The final training procedure for a single narrative is as follows (a sketch of the two sampling tricks follows the list):
For each sentence (starting from the first), sample the switching state $S_i$ from $q(S_i | \textbf {X})$.
For each sentence (starting from the first), sample the latent $Z_i$ from $q(Z_i | S_i, Z_{i-1}, X)$.
Evaluate the data likelihood and KL term(s) with these samples.
Take the gradients of the objective function w.r.t. all parameters, using the reparameterization trick for $q_{Z_i}$ BIBREF16 or the Gumbel-Softmax trick for $q_{S_i}$ BIBREF23, and optimize.
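A PyTorch sketch of the two sampling tricks used in the last step. The logits, mean, and log-variance tensors are stand-ins for the outputs of the inference networks, and the latent size and loss are placeholders, not the paper's objective.

```python
# Sketch of Gumbel-Softmax sampling for q(S_i | X) and reparameterized sampling for q(Z_i | ...).
import torch
import torch.nn.functional as F

switch_logits = torch.randn(1, 3, requires_grad=True)            # stand-in for q(S_i | X) logits over 3 sentiment tags
s_sample = F.gumbel_softmax(switch_logits, tau=1.0, hard=False)   # differentiable "soft" switch sample

mu = torch.randn(1, 16, requires_grad=True)                       # stand-in for f_mu(...) (assumed latent size 16)
log_var = torch.randn(1, 16, requires_grad=True)                  # stand-in for f_sigma(...) as a log-variance
z_sample = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterized draw from q(Z_i | ...)

loss = z_sample.pow(2).mean() + s_sample.pow(2).mean()            # placeholder for the ELBO terms
loss.backward()                                                   # gradients flow through both samples
```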
Interpolations via Gibbs Sampling
One of the benefits of probabilistic formulation is the possibility (if an inference procedure can be found) of generating narratives with specific constraints, where the constraints may be specified as clamped variables in the model. In this section, we show how narratives may be generated conditioned on arbitrary bits and pieces of the narrative already filled in, using approximate Gibbs sampling. This allows one to, for example, interpolate a narrative given the first and the last sentence (similar to how earlier story generation systems were able to generate with a given end goal in mind). Some examples of these interpolations generated by our system can be found in Table TABREF37. We give the equations and summarize the algorithm in the next sections.
Interpolations via Gibbs Sampling ::: Conditionals for Gibbs Sampling
For our Gibbs sampling algorithm we are given the narrative scaffold (switching variables) $S_1,...,S_T \in \mathbf {S}$ and a set of observed sentences $\mathbf {X^+}$ as inputs to the system. This may be any set of sentences (the first and last, just the second sentence, etc.). We wish to find values for the unobserved sentences in the set $\mathbf {X^-}$ by sampling from the distribution $P(\mathbf {X^-}, Z_1,...,Z_T | \mathbf {S},\mathbf {X^+})$. We perform this sampling via Gibbs sampling. Two different forms of conditionals need to be derived to do Gibbs sampling: one over some $Z_i$ conditioned on everything else, and one over some $X_i$ conditioned on everything else.
By using the d-separation properties of the graph, and substituting the true posterior over $Z_{i}$ with our approximate posterior $q$, we can show the first distribution is approximately proportional to
The last line is the product of two Gaussian densities, over $Z_{i+1}$ and $Z_{i}$ respectively. With some algebraic manipulation, one can show the last line is proportional to a single Gaussian PDF over $Z_i$:
To find the second conditional, one can use the d-separation properties of the graph to find that it is proportional to:
These two distributions are simply factors of our conditional language model, and both terms can thus be evaluated easily. In theory, one could use this fact to sample the original conditional via Metropolis-Hastings. Unfortunately, we found this approach to be much too slow for practical purposes. We observed that the simple heuristic of deterministically assigning $X_i$ to be the greedy decoded output of the conditional language model $P(X_{i} | X_{:i}, Z_{i})$ works well, as evidenced by the empirical results. We leave it for future work to research different conditional language model parameterizations that allow easy sampling from this conditional.
Interpolations via Gibbs Sampling ::: Gibbs Sampling Interpolation Overview
The variables in the Gibbs sampler are first initialized using some heuristics (see Supplemental Materials for details). After initialization, performing the interpolations with Gibbs sampling follows the two-step process below (a schematic sketch follows the list):
For each $Z_i$, sample a value $Z^\prime $ from equation $(1)$ and set $Z_i$ to $Z^\prime $.
For each $X_i$ in $\mathbf {X}^-$, find a new value for $X_i$ by running greedy decoding using the conditional language model.
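A schematic sketch of this loop. The callables `sample_z_conditional` and `greedy_decode` are hypothetical stand-ins for the model-specific steps (the Gaussian conditional of equation (1) and greedy decoding from the conditional language model, respectively).

```python
# Schematic sketch of the Gibbs interpolation loop (helper callables are hypothetical stand-ins).
def gibbs_interpolate(sentences, observed_mask, Z, sample_z_conditional, greedy_decode, n_iters=50):
    samples = []
    for _ in range(n_iters):
        # Step 1: resample every latent Z_i from its Gaussian conditional.
        for i in range(len(Z)):
            Z[i] = sample_z_conditional(i, Z, sentences)
        # Step 2: regenerate only the unobserved sentences with greedy decoding.
        for i, observed in enumerate(observed_mask):
            if not observed:
                sentences[i] = greedy_decode(i, Z[i], sentences[:i])
        samples.append(list(sentences))
    return samples
```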
Training Details ::: Dataset and Preprocessing
We use the ROCStories corpus introduced in BIBREF27. It contains 98,159 short commonsense stories in English for training, and 1,570 stories each for validation and test. Each story in the dataset has five sentences and captures causal and temporal commonsense relations. We limit our vocabulary size to 16,983 based on a per-word frequency cutoff set to 5. For sentiment tags, we automatically tag the entirety of the corpus with the rule-based sentiment tagger Vader BIBREF28, and bucket Vader's polarity scores into three tags: neutral, negative, and positive. These tags form the label set of the $S$ variables in our SLDS model. We tokenize the stories with the spaCy tokenizer. Each sentence in the input narrative has an <eos> tag, except for the S2S model discussed below.
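A sketch of this tagging step with NLTK's Vader implementation. The compound-score thresholds used for bucketing are an assumption, not stated in the text.

```python
# Sketch of sentence-level sentiment tagging with Vader (requires nltk.download("vader_lexicon")).
from nltk.sentiment.vader import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()

def sentiment_tag(sentence, threshold=0.05):   # threshold is an assumption
    compound = sia.polarity_scores(sentence)["compound"]
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"

print([sentiment_tag(s) for s in ["I loved the party.", "The car broke down.", "He went home."]])
```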
Training Details ::: Switching Linear Dynamical System (SLDS)
The SLDS has RNN encoder and decoder networks with single-layer GRU cells of hidden size 1024. The model uses an embedding size of 300. We train the model using the Adam optimizer with the PyTorch defaults. We stop training the models when the validation loss does not decrease for 3 consecutive epochs. Training details remain the same as above unless otherwise mentioned.
Training Details ::: Baselines
Language Model (LM): We train a two layer recurrent neural language model with GRU cells of hidden size 512.
Sequence-to-Sequence Attention Model (S2S): We train a two-layer neural sequence-to-sequence model equipped with a bi-linear attention function, with GRU cells of hidden size 512. Sentiment tags for a narrative (1 for each sentence) are given as input to the model and the corresponding sentences are concatenated together as the output, with only one <eos> tag at the end. This model is trained with 0.1 dropout. This model is comparable to the static model of BIBREF7 and to other recent works incorporating a notion of scaffolding into neural generation (albeit adapted to our setting).
Linear Dynamical System (LDS): We also train a linear dynamical system as discussed in Section SECREF1 as one of our baselines for fair comparisons. Apart from having just a single transition matrix this model has the same architectural details as SLDS.
Semi-Supervised SLDS (SLDS-X%): To gauge the usability of semi-supervision, we also train semi-supervised SLDS models with varying amounts of labelled sentiment tags, unlike the original model, which uses 100% tagged data. We refer to these as SLDS-X%, where X is the percentage of labelled data used for training: 1%, 10%, 25%, and 50%.
Evaluations
As described above, our model is able to perform narrative interpolations via an approximate Gibbs sampling procedure. At the core of our evaluations is thus a fill-in-the-sentences task: we provide 1 or 2 sentences, and require the model to generate the rest of the narrative. We evaluate this via automatic evaluations as well as with crowd-sourced human evaluations. We also report perplexity to evaluate the models' ability to fit the data. Lastly, we look at whether the transitions learned by the SLDS models capture what they are intended to capture: does using the transition matrix associated with a sentiment tag (positive/negative/neutral) lead to a generated sentence with that sentiment?
Evaluations ::: Generating the Interpolations
For the SLDS models, the interpolations are generated via the Gibbs sampling algorithm described earlier. In all experiments for the SLDS models we draw 50 samples (including burn-in samples) and output the interpolation that maximizes the probability of the given sentence(s). Since the baselines do not have the means for doing interpolations, we simulate `interpolations' for the baselines: we draw 1000 samples using top-k (with k=15) truncated sampling (conditioned on the given initial sentences, if available). We then output the sample that maximizes the probability of the clamped sentences around which we are interpolating the others. We allow the S2S model access to the gold sentiment tags. To give a lower bound on the performance of the SLDS model, we do not provide it with gold tags; we instead provide the SLDS model with the semi-noisy tags that are output from $q(S_i | X)$.
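A sketch of top-k truncated sampling for one decoding step. The `logits` tensor stands in for the model's next-token distribution over the vocabulary; the vocabulary size and temperature are assumptions.

```python
# Sketch of top-k truncated sampling (k = 15) from a next-token distribution.
import torch

def sample_top_k(logits, k=15, temperature=1.0):
    values, indices = torch.topk(logits / temperature, k)     # keep only the k most likely tokens
    probs = torch.softmax(values, dim=-1)                     # renormalize over the top k
    choice = torch.multinomial(probs, num_samples=1)
    return indices[choice]

next_token = sample_top_k(torch.randn(10000))                 # illustrative vocabulary of 10k tokens
```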
Evaluations ::: Automatic Evaluation of Interpolations
We automatically evaluate on four different types of interpolations (where different combinations of sentences are removed and the model is forced to regenerate them). We evaluate the generations with the ROUGE BIBREF29 and METEOR BIBREF30 metrics, using the true sentences as targets. Table TABREF33 shows the automatic evaluation results from interpolations using our proposed models and baselines. The #Sent(s) column indicates which sentence(s) were removed and then regenerated by the model. We gave the baselines a slight edge over SLDS because they pick the best out of 1000 samples while SLDS picks from only 50. The SLDS models see their largest gain over the baseline models when at least the first sentence is given as an input. The baseline models do better when the first and second sentences need to be imputed. This is likely due to the fact that having access to the earlier sentences allows a better initialization for the Gibbs sampler. Surprisingly, the semi-supervised variants of the SLDS models achieve higher scores. The reasons for this are discussed below in the Perplexity section.
Evaluations ::: Human Evaluation of Interpolations ::: Annotation Scheme
As automatic evaluation metrics are not sufficient to assess the quality of any creative task such as narrative generation, we measure the quality of the generations through human evaluation of 200 stories on the Amazon Mechanical Turk platform. We provided Turkers with two generated narratives from two different models, each with five sentences. The first and last sentences were fed to each model as input, and the middle three sentences were generated. Each pair of narratives is graded by 3 users, each with two tasks: (1) to rate each of the sentences except the first one on a scale of 0-3 on the basis of its coherence with the previous sentence(s), and (2) to compare and rank the two narratives based on their overall coherence, i.e. how well the story connects the starting and ending sentences.
Evaluations ::: Human Evaluation of Interpolations ::: Human Evaluation Results
Table TABREF41 reports the results of human evaluations of SLDS and baseline generations. We can observe that people preferred narratives generated by SLDS over the ones generated by the baseline models (LM and S2S), as they found the former more coherent, which is an important criterion for narrative generation. 51.3% of the time SLDS generates better narratives than the LM model, while the LM in turn does so only 35.0% of the time; 13.7% of the generations end in a tie. The mean sentence-level coherence score for SLDS is around 12.5% larger than that of the LM, with a slightly lower standard deviation. We see similar results when compared against the S2S model.
Evaluations ::: Language Modeling Perplexity Score
As our models are essentially language models, we evaluated their per-sentence negative log-likelihood and per-word perplexity scores, which can be viewed as an indirect measure of how well a system works as a generative model of narrative text. For the SLDS and LDS models these scores are approximations, an upper bound (the negative of the ELBO) on the actual values. For the other two models the scores are exact. A good model should assign low perplexity scores to its test set. In Table TABREF44 SLDS achieves the lowest scores, implying that it is able to model the data distribution well. In Table TABREF45 we also calculate the perplexity scores for the semi-supervised SLDS models to assess the effectiveness of semi-supervised training. Surprisingly, the models with less supervision scored better in terms of perplexity. One possibility for this might be the use of the soft Gumbel-Softmax in the semi-supervised models. The soft Gumbel-Softmax variant does not commit to using a single transition matrix at each time step (instead linearly combining them, weighted by the Softmax weights). This fact may permit the model greater flexibility in fitting the training data. While this leads to better scores in metrics such as perplexity or BLEU, it does lead to transitions that are worse at capturing the properties they should capture, as we shall see in the next section.
Evaluations ::: Evaluation of Transition Dynamics
One matter of interest is whether or not the transitions are capturing what they are supposed to capture: the appropriate sentiment. Since we used the sentiment tagger Vader for the training tags, we again utilize it to evaluate whether using transitions of a certain sentiment actually leads the model to produce outputs with the given sentiment. To perform this evaluation, we give as input to our models (and the S2S baseline) the sentiment tags for a sentence and allow them to generate a sentence conditioned on these sentiment tags. We then tag the generated sentences with Vader and see if the sentiment tags match the originals. We calculate the F1 score across all sentiment tags and report the macro average. In Table TABREF47 we see that having labels is incredibly important for meaningful transitions. There is a large drop in F1 as the amount of labels given to the model is decreased. The SLDS model that is trained with 100% of the labels performs a little better than even S2S, despite not having direct access to the sentiment labels (SLDS only uses the sentiment labels to decide which transition to use, while the S2S model uses attention directly on the sentiment labels).
Related Work
Story/narrative generation has a rich history in the field of AI. Many early systems were based on structured formalisms for describing common narrative structures BIBREF9, BIBREF10, BIBREF31, many being inspired by the initial work of BIBREF0. There has been a swath of recent work that has looked to add some semblance of a `narrative scaffold' back into generation methods BIBREF32, BIBREF6, BIBREF7, BIBREF33. Many of these methods work as conditional LMs (conditioned directly on the scaffold). This line of work may be combined with our formalization as well, by conditioning the generation on the switching state, as done in the model of BIBREF4. Recent work by BIBREF34 has similar goals to ours in permitting more controllability in generation systems, developing an RL-based system that allows users to specify an end goal for a story (by specifying the event class that is desired to appear at the end). Their work differs from ours in that it does not deal with text directly, modeling only the sequences of events in the narrative. It may be possible to utilize this model as the scaffolding component in our model (utilizing their RL policy for the scaffold planner, rather than the simple Markovian distribution used here).
Conclusion and Future Work
In this paper, we formulated the problem of narrative generation as a switching dynamical system. We showed how this formulation captures notions important in narrative generation, such as narrative dynamics and scaffolds. We developed an approximate Gibbs sampling algorithm for the model that permits the system to generate interpolations conditioned on arbitrary parts of the narrative, and evaluated these interpolations using both human and automatic evaluations. Though in this work we used sentiment tags for our scaffolds/switching variables, future work may look at utilizing different kinds of information to guide the generation of narratives. Utilizing the main predicate of a sentence as a scaffold would be a logical next step, and may prove more informative then the sentiment trajectory. A scaffold such as this can take on many more possible values then a sentiment tag, and as such, it may prove difficult to assign a set of dynamics to each value. Another avenue for future work would deal with this possible problem. One potential solution could be to associate each switching variable value with a (learned) vector in a probability simplex, and use this vector to combine a small set of “primitive" dynamics matrices in order to get that value's associated set of dynamics. | ROUGE BIBREF29 and METEOR BIBREF30 |